Science.gov

Sample records for accident prediction model

  1. An alternative accident prediction model for highway-rail interfaces.

    PubMed

    Austin, Ross D; Carson, Jodi L

    2002-01-01

    Safety levels at highway/rail interfaces continue to be of major concern despite an ever-increasing focus on improved design and appurtenance application practices. Despite the encouraging trend towards improved safety, accident frequencies remain high, and many accidents result in fatalities. More than half of these accidents occur at public crossings where active warning devices (i.e. gates, lights, bells, etc.) are in place and functioning properly. This phenomenon speaks directly to the need to re-examine both safety evaluation (i.e. accident prediction) methods and design practices at highway-rail crossings. Among earlier accident prediction methods, the Peabody Dimmick Formula, the New Hampshire Index and the National Cooperative Highway Research Program (NCHRP) Hazard Index all lack descriptive capability due to their limited number of explanatory variables; further, each has unique limitations that are detailed in this paper. The US Department of Transportation's (USDOT) Accident Prediction Formula, which is the most widely used, also has limitations related to the complexity of its three-stage formula and its decline in prediction accuracy over time. This investigation resulted in the development of an alternate highway-rail crossing accident prediction model, using negative binomial regression, that shows great promise. The benefits to be gained through the application of this alternate model are (1) a greatly simplified, one-step estimation process; (2) comparable supporting data requirements; and (3) interpretation of both the magnitude and direction of the effect of the factors found to significantly influence highway-rail crossing accident frequencies.
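The one-step negative binomial estimator described above can be sketched as a log-link model. The predictor names and coefficient values below are illustrative assumptions, not the paper's fitted results:

```python
import math

# Hypothetical negative-binomial crossing model with a log link:
# expected annual accidents mu = exp(b0 + b1*ln(AADT) + b2*trains + b3*gated)
# All coefficients are illustrative placeholders.
COEFS = {"intercept": -5.2, "ln_aadt": 0.45, "trains_per_day": 0.02, "gated": -0.6}

def predicted_accidents(aadt, trains_per_day, gated):
    """One-step estimate of expected annual crossing accidents."""
    eta = (COEFS["intercept"]
           + COEFS["ln_aadt"] * math.log(aadt)
           + COEFS["trains_per_day"] * trains_per_day
           + COEFS["gated"] * (1 if gated else 0))
    return math.exp(eta)
```

Both the magnitude and direction of each effect are read directly off the coefficients: exp(b) is the multiplicative change in expected accidents per unit increase of that factor.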

  2. Estimating vehicle roadside encroachment frequency using accident prediction models

    SciTech Connect

    Miaou, S.-P.

    1996-07-01

    The existing data to support the development of roadside encroachment-based accident models are extremely limited and largely outdated. Under the sponsorship of the Federal Highway Administration and the Transportation Research Board, several roadside safety projects have attempted to address this issue by providing rather comprehensive data collection plans and conducting pilot data collection efforts. It is clear from the results of these studies that the required field data collection efforts will be expensive. Furthermore, the validity of any field-collected encroachment data may be questionable because of the technical difficulty of distinguishing intentional from unintentional encroachments. This paper proposes an alternative method for estimating the basic roadside encroachment data without actually collecting them in the field. The method is developed by exploring the probabilistic relationships between a roadside encroachment event and a run-off-the-road event. With some mild assumptions, the method is capable of providing a wide range of basic encroachment data from conventional accident prediction models. To illustrate the concept and use of such a method, some basic encroachment data are estimated for rural two-lane undivided roads. In addition, the estimated encroachment data are compared with the existing collected data. The illustration shows that the method described in this paper can be a viable approach to estimating basic encroachment data without actually collecting them, which can be very costly.
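The core probabilistic relationship can be sketched as inverting P(accident) = P(encroachment) × P(crash | encroachment); the function name and conditional probability below are hypothetical, not values from the paper:

```python
def encroachment_frequency(accidents_per_mile_year, p_crash_given_encroachment):
    """Back out the encroachment rate from a conventional accident
    prediction model output, by inverting
    P(accident) = P(encroachment) * P(crash | encroachment)."""
    if not 0 < p_crash_given_encroachment <= 1:
        raise ValueError("conditional probability must lie in (0, 1]")
    return accidents_per_mile_year / p_crash_given_encroachment
```

For example, 0.5 predicted run-off-the-road accidents per mile-year with an assumed 10% chance that an encroachment ends in a crash implies roughly 5 encroachments per mile-year.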

  3. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

    Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing-related vehicle crashes.

  4. Accident prediction model for public highway-rail grade crossings.

    PubMed

    Lu, Pan; Tolliver, Denver

    2016-05-01

    Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic with serious consequences. The Poisson regression model has been employed as a starting point for analyzing vehicle accident frequency for many years. The most commonly applied variations of Poisson include the negative binomial and zero-inflated Poisson models. These models are used to deal with common crash data issues such as over-dispersion (sample variance larger than the sample mean) and a preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance smaller than the sample mean), and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternate highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) the application of probability models to deal with under-dispersion issues and (2) insights regarding vehicle crashes at public highway-rail grade crossings.
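The over/under-dispersion diagnostic that motivates the model choice above is a simple variance-to-mean comparison. A minimal sketch, with made-up count data:

```python
from statistics import mean, variance

def dispersion_ratio(counts):
    """Sample variance over sample mean: <1 suggests under-dispersion,
    >1 over-dispersion, ~1 is consistent with a Poisson count process."""
    return variance(counts) / mean(counts)

# Illustrative crossing-accident counts (not data from the paper):
under = [2, 2, 3, 2, 3, 2, 3, 3, 2, 2]   # tightly clustered counts
over  = [0, 0, 0, 1, 0, 9, 0, 0, 7, 0]   # many zeros plus a few large counts
```

When the ratio falls below one, a distribution that can represent under-dispersion (such as the gamma count model) is a better candidate than Poisson or negative binomial.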

  5. Predicting System Accidents with Model Analysis During Hybrid Simulation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land D.; Throop, David R.

    2002-01-01

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in systems of flow. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

  6. Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents

    NASA Technical Reports Server (NTRS)

    Williamson, G. G.; Lewellen, W. S.; Teske, M. E.

    1977-01-01

    The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.

  7. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways exhibits substantial internal variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of such variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, driveway density, and horizontal and vertical curvatures are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
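The distinction between the two approaches can be sketched in a few lines: a fixed-parameter model uses one coefficient for every site, while a random-parameter model lets the coefficient vary across locations, here drawn from a normal distribution with illustrative mean and spread:

```python
import random

random.seed(7)  # reproducible illustration

# Illustrative distribution of the traffic-volume coefficient across sites;
# these are not the paper's estimated values.
BETA_MEAN, BETA_SD = 0.40, 0.10

def site_coefficients(n_sites):
    """One traffic-volume coefficient per highway segment: the random
    parameter captures location-to-location variability that a single
    fixed coefficient cannot."""
    return [random.gauss(BETA_MEAN, BETA_SD) for _ in range(n_sites)]
```

In estimation, the mean and standard deviation of each random coefficient are fitted jointly; a statistically significant standard deviation is the evidence that the effect truly varies across locations.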

  8. Predictive model for motorcycle accidents at three-legged priority junctions.

    PubMed

    Harnen, S; Umar, R S Radin; Wong, S V; Wan Hashim, W I

    2003-12-01

    In conjunction with a nationwide motorcycle safety program, the provision of exclusive motorcycle lanes has been implemented to overcome link-motorcycle accidents along trunk roads in Malaysia. However, not much work has been done to address accidents at junctions involving motorcycles. This article presents the development of a predictive model for motorcycle accidents at three-legged major-minor priority junctions of urban roads in Malaysia. The generalized linear modeling technique was used to develop the model. The final model reveals that motorcycle accidents are proportional to the power of traffic flow. An increase in nonmotorcycle and motorcycle flows entering the junctions is associated with an increase in motorcycle accidents. Nonmotorcycle flow on major roads had the highest effect on the probability of motorcycle accidents. Approach speed, lane width, number of lanes, shoulder width, and land use were found to be significant in explaining motorcycle accidents at the three-legged major-minor priority junctions. These findings should enable traffic engineers to specifically design appropriate junction treatment criteria for nonexclusive motorcycle lane facilities.
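"Accidents proportional to the power of traffic flow" corresponds to a multiplicative power-law form. The exponents and scale factor below are illustrative assumptions, not the fitted Malaysian model:

```python
# Junction model of the form A = k * Q_major**a * Q_minor**b * Q_moto**c,
# i.e. accident frequency proportional to powers of the entering flows.
def motorcycle_accidents(q_major, q_minor, q_moto, k=1e-4, a=0.6, b=0.2, c=0.5):
    """Expected motorcycle accidents at a three-legged priority junction;
    all parameters are hypothetical placeholders."""
    return k * q_major**a * q_minor**b * q_moto**c
```

Because the model is multiplicative, each exponent is an elasticity: a 1% increase in the corresponding flow raises expected accidents by roughly that exponent's percentage.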

  9. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    PubMed Central

    Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang

    2014-01-01

    To build a combined model that captures the variation in death-toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, the Verhulst model was built based on the number of road traffic accident deaths in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted prediction model. The Shapley value method was applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the death toll from 2002 to 2011, and it was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death-toll data but also quantify the degree of influence of each factor, with high accuracy and strong practicability. PMID:25610454
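The combination step itself is a convex weighting of the two component forecasts. A minimal sketch, with the weight supplied by the caller (in the paper it comes from Shapley-value contribution shares):

```python
def combined_forecast(verhulst_pred, regression_pred, w_verhulst):
    """Weighted combination of two model forecasts; weights sum to one.
    w_verhulst is assumed to be precomputed (e.g. by the Shapley value
    method); here it is simply an input."""
    if not 0.0 <= w_verhulst <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return w_verhulst * verhulst_pred + (1.0 - w_verhulst) * regression_pred
```

With equal weights, forecasts of 100 and 120 deaths combine to 110; shifting weight toward the historically more accurate component is what the Shapley allocation formalizes.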

  11. Impact of rainstorm and runoff modeling on predicted consequences of atmospheric releases from nuclear reactor accidents

    SciTech Connect

    Ritchie, L.T.; Brown, W.D.; Wayland, J.R.

    1980-05-01

    A general temperate latitude cyclonic rainstorm model is presented which describes the effects of washout and runoff on consequences of atmospheric releases of radioactive material from potential nuclear reactor accidents. The model treats the temporal and spatial variability of precipitation processes. Predicted air and ground concentrations of radioactive material and resultant health consequences for the new model are compared to those of the original WASH-1400 model under invariant meteorological conditions and for realistic weather events using observed meteorological sequences. For a specific accident under a particular set of meteorological conditions, the new model can give significantly different results from those predicted by the WASH-1400 model, but the aggregate consequences produced for a large number of meteorological conditions are similar.

  12. Sensitivity analysis of an accident prediction model by the fractional factorial method.

    PubMed

    Akgüngör, Ali P; Yildiz, Osman

    2007-01-01

    Sensitivity analysis of a model can help us determine relative effects of model parameters on model results. In this study, the sensitivity of the accident prediction model proposed by Zegeer et al. [Zegeer, C.V., Reinfurt, D., Hummer, J., Herf, L., Hunter, W., 1987. Safety Effect of Cross-section Design for Two-lane Roads, vols. 1-2. Report FHWA-RD-87/008 and 009 Federal Highway Administration, Department of Transportation, USA] to its parameters was investigated by the fractional factorial analysis method. The reason for selecting this particular model is that it incorporates both traffic and road geometry parameters besides terrain characteristics. The evaluation of sensitivity analysis indicated that average daily traffic (ADT), lane width (W), width of paved shoulder (PA), median (H) and their interactions (i.e., ADT-W, ADT-PA and ADT-H) have significant effects on number of accidents. Based on the absolute value of parameter effects at the three- and two-standard deviation thresholds ADT was found to be of primary importance, while the remaining identified parameters seemed to be of secondary importance. This agrees with the fact that ADT is among the most effective parameters to determine road geometry and therefore, it is directly related to number of accidents. Overall, the fractional factorial method was found to be an efficient tool to examine the relative importance of the selected accident prediction model parameters.
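The fractional factorial calculation behind the ranking above reduces, for a two-level design, to contrasting mean responses at the high and low levels of each factor. The run matrix and accident counts below are made up for illustration:

```python
def main_effect(levels, responses):
    """Main effect of a two-level factor: mean response at the high (+1)
    level minus mean response at the low (-1) level."""
    hi = [r for l, r in zip(levels, responses) if l == +1]
    lo = [r for l, r in zip(levels, responses) if l == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Illustrative half-fraction runs: ADT coded level and predicted accidents.
adt_levels = [-1, +1, -1, +1]
accidents  = [3.0, 9.0, 4.0, 10.0]
```

Effects whose absolute value exceeds a two- or three-standard-deviation threshold, as in the study, are flagged as significant; interaction effects are computed the same way using the product of the coded columns.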

  13. An Evaluation of the Hazard Prediction and Assessment Capability (HPAC) Software’s Ability to Model the Chornobyl Accident

    DTIC Science & Technology

    2002-03-01

    source term. Several publications provided a thorough accounting of the accident, including “Chernobyl Record” [Mould], and the NRC technical report...Report on the Accident at the Chernobyl Nuclear Power Station” [NUREG-1250]. The most comprehensive study of transport models to predict the...from the Chernobyl Accident: The ATMES Report” [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data

  14. Application of Gray Markov SCGM(1,1) c Model to Prediction of Accidents Deaths in Coal Mining.

    PubMed

    Lan, Jian-Yi; Zhou, Ying

    2014-01-01

    The prediction of mine accidents is the basis of mine safety assessment and decision making. Gray prediction is suitable for system objects with few data points, short time series, and little fluctuation, while Markov chain theory is suitable for forecasting stochastically fluctuating dynamic processes. Analyzing the human-error causes of coal mine accidents and combining the advantages of both Gray prediction and Markov theory, an amended Gray Markov SCGM(1,1) c model is proposed. The gray SCGM(1,1) c model is applied to imitate the development tendency of mine safety accidents, the amended model is adopted to improve prediction accuracy, and Markov prediction is used to predict the fluctuation along the tendency. Finally, the new model is applied to forecast mine safety accident deaths in China from 1990 to 2010, and coal accident deaths for 2011-2014 were predicted. The results show that the new model not only discovers the trend of the mine human-error accident death toll but also overcomes the random fluctuation of the data affecting precision. It has strong engineering applicability.
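The SCGM(1,1)c model with its Markov amendment refines the classic grey GM(1,1) scheme. As an illustration of that baseline only (the series below is synthetic, not the Chinese death-toll data):

```python
import math

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast: accumulate the series, fit
    x0[k] = -a*z[k] + b by least squares on the background values z,
    then forecast via the exponential response and de-accumulation."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Normal equations for the 2-parameter least squares fit.
    m = len(z)
    szz = sum(v * v for v in z); sz = sum(z)
    sy = sum(y); szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    neg_a = (szy * m - sz * sy) / det       # slope on z (equals -a)
    b = (szz * sy - sz * szy) / det
    a = -neg_a                              # development coefficient
    def x1_hat(k):                          # fitted accumulated value, 0-based
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n - 1 + s) - x1_hat(n - 2 + s) for s in range(1, steps + 1)]
```

On a near-exponentially declining series such as [100, 90, 81, 72.9], the one-step forecast continues the roughly 10% decay; the Markov step in the paper then corrects the residual fluctuation around this trend.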

  15. Compartment model for long-term contamination prediction in deciduous fruit trees after a nuclear accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A. )

    1990-06-01

    Radiocesium contamination from the Chernobyl accident of different parts (fruits, leaves, and shoots) of selected apricot trees in North Greece was systematically measured in 1987 and 1988. The results are presented and discussed in the framework of a simple compartment model describing the long-term contamination uptake mechanism of deciduous fruit trees after a nuclear accident.

  16. M5 model tree based predictive modeling of road accidents on non-urban sections of highways in India.

    PubMed

    Singh, Gyanendra; Sachdeva, S N; Pal, Mahesh

    2016-11-01

    This work examines the application of the M5 model tree and the conventionally used fixed/random effect negative binomial (FENB/RENB) regression models for accident prediction on non-urban sections of highway in Haryana (India). Road accident data for a period of 2-6 years on different sections of 8 National and State Highways in Haryana was collected from police records. Data related to road geometry, traffic and road environment variables was collected through field studies. In total, 222 data points were gathered by dividing the highways into sections with certain uniform geometric characteristics. For the prediction of accident frequencies using fifteen input parameters, two modeling approaches were used: FENB/RENB regression and the M5 model tree. Results suggest that both models perform comparably well in terms of correlation coefficient and root mean square error values. The M5 model tree provides simple linear equations that are easy to interpret and provide better insight, indicating that this approach can effectively be used as an alternative to the RENB approach if the sole purpose is to predict motor vehicle crashes. Sensitivity analysis using the M5 model tree also suggests that its results reflect the physical conditions. Both models clearly indicate that to improve safety on Indian highways, minor accesses to the highways need to be properly designed and controlled, the service roads need to be made functional, and the dispersion of speeds needs to be brought down. Copyright © 2016 Elsevier Ltd. All rights reserved.
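The "simple linear equations" an M5 model tree produces can be sketched as a split with a linear model in each leaf. The split point, predictors, and coefficients below are hypothetical, not the fitted Haryana model:

```python
# Minimal M5-style structure: one split on traffic volume, with a linear
# regression model in each leaf.  All numbers are illustrative.
def m5_predict(aadt, access_density):
    """Predicted annual accidents on a highway section."""
    if aadt < 8000:                       # leaf 1: low-volume sections
        return 0.5 + 0.0002 * aadt + 0.3 * access_density
    return 1.0 + 0.0004 * aadt + 0.5 * access_density  # leaf 2: high-volume
```

Each leaf equation can be read directly, which is the interpretability advantage the abstract notes over a single regression fitted to heterogeneous sections.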

  17. A combined M5P tree and hazard-based duration model for predicting urban freeway traffic accident durations.

    PubMed

    Lin, Lei; Wang, Qian; Sadek, Adel W

    2016-06-01

    The duration of freeway traffic accidents is an important factor, which affects traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normally distributed, the distribution for a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction that improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification, and avoids the incorrect assumption of normality for traffic accident durations. The proposed model was tested on two freeway accident datasets. For each dataset, the first 500 records were used to train three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM; the remainder of the data was used for testing. The results show that the proposed M5P-HBDM identified more significant and meaningful variables than either the M5P or the HBDM.
Moreover, the M5P-HBDM had the lowest overall mean
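A hazard-based duration leaf can be sketched with a Weibull survival function, a common nonsymmetrical choice for time-to-event data. The shape and scale parameters below are illustrative, not estimated from the paper's datasets:

```python
import math

# Accident-clearance duration as a Weibull random variable:
# S(t) = exp(-(t/scale)**shape).  Parameters are placeholders.
def survival(t, shape=1.5, scale=40.0):
    """Probability the incident is still ongoing after t minutes."""
    return math.exp(-((t / scale) ** shape))

def median_duration(shape=1.5, scale=40.0):
    """Duration t at which S(t) = 0.5."""
    return scale * math.log(2.0) ** (1.0 / shape)
```

Unlike a normal-errors regression, this distribution is right-skewed and supported on positive durations, which is the modeling advantage the abstract claims for HBDM leaves.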

  18. Predicting road accidents: Structural time series approach

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-07-01

    In this paper, a model for the occurrence of road accidents in Malaysia between 1970 and 2010 was developed, and with this model the number of road accidents was predicted using the structural time series approach. The models were developed using a stepwise method, and the residual of each step was analyzed. The accuracy of the model was assessed using the mean absolute percentage error (MAPE), and the best model was chosen based on the smallest Akaike information criterion (AIC) value. The structural time series approach found the local linear trend model to be the best model to represent the road accidents. This model allows the level and slope components to vary over time. In addition, this approach also provides useful information for improving the conventional time series method.
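A local linear trend model carries a time-varying level and slope. As a simple analogue (not the paper's Kalman-filter estimation), Holt's linear smoothing updates both components recursively and forecasts by extrapolating them; the smoothing constants below are illustrative:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear smoothing: a level and a slope, both updated over
    time, then extrapolated h steps ahead as level + h * slope."""
    level, slope = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + slope)
        slope = beta * (level - prev) + (1 - beta) * slope
    return [level + h * slope for h in range(1, horizon + 1)]
```

On an exactly linear series the recursion recovers the trend, so the forecasts continue the straight line; on accident counts, the varying level and slope adapt to gradual changes in the series.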

  19. Correspondence model of occupational accidents.

    PubMed

    Conte, Juan C; Rubio, Emilio A; García, Ana I; Cano, Francisco J

    2011-09-01

    We present a new generalized model for the diagnosis and prediction of accidents among the Spanish workforce. Based on observational data of the accident rate in all Spanish companies over eleven years (7,519,732 accidents), we classified them in a new risk-injury contingency table (19×19). Through correspondence analysis, we obtained a structure composed of three axes whose combination identifies three separate risk and injury groups, which we used as a general Spanish pattern. The most likely or frequent relationships between the risk and injuries identified in the pattern facilitated the decision-making process in companies at an early stage of risk assessment. Each risk-injury group has its own characteristics, which are understandable within the phenomenological framework of the accident. The main advantages of this model are its potential application to any other country and the feasibility of contrasting different country results. One limiting factor, however, is the need to set a common classification framework for risks and injuries to enhance comparison, a framework that does not exist today. The model aims to manage work-related accidents automatically at any level.

  20. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    PubMed Central

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

    Background: One of the significant dangers that threaten people’s lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as to forecast those occurring during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city was different for different months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September. Thus, more accidents occurred in the summer than in the other seasons. The number of accidents was predicted based on an auto-regressive, moving average (ARMA) model for April 2012. The number of accidents displayed a seasonal trend. The prediction for April of 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day during that month. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
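The autoregressive part of an ARMA forecast can be sketched in its simplest form, an AR(1) that pulls the next value toward the series mean. The coefficient below is illustrative, not a value estimated from the Tehran data:

```python
def ar1_forecast(series, phi=0.6):
    """One-step AR(1) forecast about the series mean:
    y_hat[t+1] = mu + phi * (y[t] - mu), with |phi| < 1."""
    mu = sum(series) / len(series)
    return mu + phi * (series[-1] - mu)
```

A full ARMA model adds moving-average terms on past forecast errors, and a seasonal variant (as the seasonal trend in the Tehran counts suggests) adds lags at the seasonal period.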

  1. Predicting cognitive impairment and accident risk.

    PubMed

    Raslear, Thomas G; Hursh, Steven R; Van Dongen, Hans P A

    2011-01-01

    Sleep and cognition are temporally regulated by a homeostatic process generating pressure for sleep as a function of sleep/wake history, and a circadian process generating pressure for wakefulness as a function of time of day. Under normal nocturnal sleep conditions, these two processes are aligned in such a manner as to provide optimal daytime performance and consolidated nighttime sleep. Under conditions of sleep deprivation, shift work or transmeridian travel, the two processes are misaligned, resulting in fatigue and cognitive deficits. Mathematical models of fatigue and performance have been developed to predict these cognitive deficits. Recent studies showing long-term effects on performance of chronic sleep restriction suggest that the homeostatic process undergoes gradual changes that are slow to recover. New developments in mathematical modeling of performance are focused on capturing these gradual changes and their effects on fatigue. Accident risk increases as a function of fatigue severity as well as the duration of exposure to fatigue. Work schedule and accident rate information from an operational setting can thus be used to calibrate a mathematical model of fatigue and performance to predict accident risk. This provides a fatigue risk management tool that helps to direct mitigation resources to where they would have the greatest mitigating effect. Copyright © 2011 Elsevier B.V. All rights reserved.
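The two-process structure described above can be sketched as a circadian oscillation minus a saturating homeostatic sleep pressure. All constants below are illustrative placeholders, not parameters from any validated fatigue model:

```python
import math

# Two-process sketch: predicted alertness = circadian component minus
# homeostatic sleep pressure that builds with time awake.
def alertness(hours_awake, time_of_day):
    """Higher is more alert; units are arbitrary for illustration."""
    circadian = math.cos(2 * math.pi * (time_of_day - 16) / 24)  # peak ~16:00
    homeostatic = 1 - math.exp(-hours_awake / 18.0)              # saturating build-up
    return circadian - 2.0 * homeostatic
```

Misalignment of the two processes (e.g. long wakefulness at the circadian low point) drives predicted alertness down, which is the quantity a fatigue risk management tool maps onto accident risk.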

  2. Simple computational model for the prediction of fuel pin failure during a transient-overpower accident. [LMFBR

    SciTech Connect

    Mast, P.K.

    1980-01-01

    A fuel pin failure model is developed and incorporated into a fast-running computer program. The model is designed to predict irradiated fuel-pin cladding rupture during a hypothetical transient-overpower (TOP) accident in a liquid metal fast breeder reactor. The principal failure mechanisms of fuel-cladding differential thermal expansion and fission-gas pressurization are accounted for. The prediction of cladding failure is based on a mechanistic calculation of the time-dependent cladding temperature and stress. A finite-difference thermal solution is used to obtain the radial temperature distribution in the pin. The pin mechanics calculation uses a very efficient few-fuel-node/single-cladding-node algorithm that utilizes the Tresca yield criterion to determine the onset of cladding plastic deformation. Comparisons are made between model predictions and the results of a number of Transient Reactor Test Facility TOP experiments. The importance of accurately modeling the fuel radial and circumferential crack characterization is investigated and discussed. The effect of model limitations is discussed and recommendations for future work are made.

  3. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1996-09-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation is confirmed by further tests at high temperatures as well as by finite element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation is confirmed by finite element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure is developed and validated by tests under varying temperature and pressure loading expected during severe accidents.

  4. Updating long-range transport model predictions using real-time monitoring data in case of nuclear accidents with release to the atmosphere

    NASA Astrophysics Data System (ADS)

    Raes, Frank; Tassone, Caterina; Grippa, Gianni; Zarimpas, Nicolas; Graziani, Giovanni

    A procedure is developed to reduce the uncertainties of long-range transport model predictions in the case of a large-scale nuclear accident. It is based on the availability in 'real time' of the concentrations of airborne radioactive aerosols from automatic on-line monitors, which are presently being installed throughout Europe. Essentially, the procedure consists of: (1) constructing new (area) source terms from the measured field data as they become available; and (2) restarting the prediction with these sources rather than with the original (point) source. The procedure is applied to the Chernobyl accident. It is shown that the procedure is feasible and can improve the predicted location of the cloud by several hundred kilometres and the predicted concentration levels by an order of magnitude. The weak point is the treatment of the vertical structure and transport of the cloud, which can only be resolved when 'real-time' upper-air observations are also available.

  5. Investigation of adolescent accident predictive variables in hilly regions.

    PubMed

    Mohanty, Malaya; Gupta, Ankit

    2016-09-01

    The study aims to determine the significant personal and environmental factors for predicting adolescent accidents in hilly regions, considering the two cities of Hamirpur and Dharamshala, which lie at an average elevation of 700-1000 metres above mean sea level (MSL). Detailed comparisons between the results for the two cities are also presented, and the analysis yields a list of the factors most strongly associated with adolescent accidents. Data were collected from different schools and colleges in each city through a questionnaire survey; around 690 responses from Hamirpur and 460 from Dharamshala were used for the analysis. Standard deviations (SD) of the various factors affecting accidents were calculated; factors with very low SD were discarded, and the remaining variables were considered for correlation analysis. Correlations were assessed using Kendall's tau and chi-square tests, and the significant factors were retained for modelling: the victim's age, the character of the road, the speed of the vehicle, and helmet use for Hamirpur, with the kind of vehicle involved as an additional significant variable for Dharamshala. A logistic regression was then performed to estimate the effect of each category within a variable on the occurrence of accidents. Although age and vehicle speed are considered important factors in Indian accident data records, helmet use also emerged as a major concern. The 15-18 and 18-21 year age groups were found to be more susceptible to accidents than older age groups. In hilly terrain, the character of the road is a major contributor to accidents, and the topography makes the kind of vehicle involved a key variable in determining accident severity.
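The logistic regression step described above can be illustrated with a minimal from-scratch fit. The predictor coding (helmet use, high speed) and the synthetic data below are hypothetical; a real analysis would use a statistics package:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression by batch gradient descent.

    Returns weights [intercept, coef_1, coef_2, ...].
    """
    n = len(X)
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi     # predicted prob - label
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Hypothetical coding: x = [helmet_used, high_speed]; y = 1 if accident occurred.
# Synthetic data generated with a protective helmet effect and a harmful speed effect.
random.seed(1)
X, y = [], []
for _ in range(400):
    helmet, speed = random.randint(0, 1), random.randint(0, 1)
    logit = -0.5 - 1.5 * helmet + 1.2 * speed
    X.append([helmet, speed])
    y.append(1 if random.random() < 1.0 / (1.0 + math.exp(-logit)) else 0)

w = fit_logistic(X, y)
# Expect a negative helmet coefficient (protective) and a positive speed coefficient.
```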

  6. Modelling road accidents: An approach using structural time series

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
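The local-level core of such a structural time series model can be sketched with a scalar Kalman filter. The seasonal component is omitted here, and the noise variances are illustrative assumptions; in practice a library such as statsmodels would estimate them by maximum likelihood:

```python
def local_level_filter(y, var_eps=1.0, var_eta=0.1):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t.
    Returns the filtered estimates of the level mu_t.
    """
    mu, P = y[0], 1e6                      # diffuse initialisation of the level
    levels = []
    for obs in y:
        P_pred = P + var_eta               # predict: level is a random walk
        F = P_pred + var_eps               # innovation variance
        K = P_pred / F                     # Kalman gain
        mu = mu + K * (obs - mu)           # update level toward observation
        P = (1 - K) * P_pred
        levels.append(mu)
    return levels

# Hypothetical monthly accident counts with a step change after one year.
series = [100] * 12 + [120] * 12
filtered = local_level_filter(series)
# The filtered level holds at 100, then tracks the step toward 120.
```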

  7. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  8. Do Cognitive Models Help in Predicting the Severity of Posttraumatic Stress Disorder, Phobia, and Depression After Motor Vehicle Accidents? A Prospective Longitudinal Study

    PubMed Central

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-01-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months later. Diagnoses were established with the Structured Clinical Interview for DSM–IV. Predictors included initial symptom severities; variables established as predictors of PTSD in E. J. Ozer, S. R. Best, T. L. Lipsey, and D. S. Weiss's (2003) meta-analysis; and variables derived from cognitive models of PTSD, phobia, and depression. Results of nonparametric multiple regression analyses showed that the cognitive variables predicted subsequent PTSD and depression severities over and above what could be predicted from initial symptom levels. They also showed greater predictive power than the established predictors, although the latter showed similar effect sizes as in the meta-analysis. In addition, the predictors derived from cognitive models of PTSD and depression were disorder-specific. The results support the role of cognitive factors in the maintenance of emotional disorders following trauma. PMID:18377119

  9. Do cognitive models help in predicting the severity of posttraumatic stress disorder, phobia, and depression after motor vehicle accidents? A prospective longitudinal study.

    PubMed

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-04-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months later. Diagnoses were established with the Structured Clinical Interview for DSM-IV. Predictors included initial symptom severities; variables established as predictors of PTSD in E. J. Ozer, S. R. Best, T. L. Lipsey, and D. S. Weiss's (2003) meta-analysis; and variables derived from cognitive models of PTSD, phobia, and depression. Results of nonparametric multiple regression analyses showed that the cognitive variables predicted subsequent PTSD and depression severities over and above what could be predicted from initial symptom levels. They also showed greater predictive power than the established predictors, although the latter showed similar effect sizes as in the meta-analysis. In addition, the predictors derived from cognitive models of PTSD and depression were disorder-specific. The results support the role of cognitive factors in the maintenance of emotional disorders following trauma.

  10. Do Cognitive Models Help in Predicting the Severity of Posttraumatic Stress Disorder, Phobia, and Depression after Motor Vehicle Accidents? A Prospective Longitudinal Study

    ERIC Educational Resources Information Center

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-01-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months…

  12. Assessing causality in multivariate accident models.

    PubMed

    Elvik, Rune

    2011-01-01

    This paper discusses the application of operational criteria of causality to multivariate statistical models developed to identify sources of systematic variation in accident counts, in particular the effects of variables representing safety treatments. Nine criteria of causality serving as the basis for the discussion have been developed. The criteria resemble criteria that have been widely used in epidemiology. To assess whether the coefficients estimated in a multivariate accident prediction model represent causal relationships or are non-causal statistical associations, all criteria of causality are relevant, but the most important criterion is how well a model controls for potentially confounding factors. Examples are given to show how the criteria of causality can be applied to multivariate accident prediction models in order to assess the relationships included in these models. It will often be the case that some of the relationships included in a model can reasonably be treated as causal, whereas for others such an interpretation is less supported. The criteria of causality are indicative only and cannot provide a basis for stringent logical proof of causality. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Modelling Accident Tolerant Fuel Concepts

    SciTech Connect

    Hales, Jason Dean; Gamble, Kyle Allan Lawrence

    2016-05-01

    The catastrophic events that occurred at the Fukushima-Daiichi nuclear power plant in 2011 have led to widespread interest in research on alternative fuels and claddings that are proposed to be accident tolerant. The United States Department of Energy (DOE), through its Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, has funded an Accident Tolerant Fuel (ATF) High Impact Problem (HIP). The ATF HIP is a three-year project to perform research on two accident tolerant concepts. The final outcome of the ATF HIP will be an in-depth report to the DOE Advanced Fuels Campaign (AFC) giving a recommendation on whether either of the two concepts should be included in their lead test assembly scheduled for placement into a commercial reactor in 2022. The two ATF concepts under investigation in the HIP are uranium silicide fuel and iron-chromium-aluminum (FeCrAl) alloy cladding. Utilizing the expertise of three national laboratory participants (Idaho National Laboratory, Los Alamos National Laboratory, and Argonne National Laboratory), a comprehensive multiscale approach to modeling is being used that includes atomistic modeling, molecular dynamics, rate theory, phase-field, and fuel performance simulations. Model development and fuel performance analysis are critical since a full suite of experimental studies will not be complete before AFC must prioritize concepts for focused development. In this paper, we present simulations of the two proposed accident tolerant fuel systems: U3Si2 fuel with Zircaloy-4 cladding, and UO2 fuel with FeCrAl cladding. Sensitivity analyses are completed using Sandia National Laboratories’ Dakota software to determine which input parameters (e.g., fuel specific heat) have the greatest influence on the output metrics of interest (e.g., fuel centerline temperature). We also outline the multiscale modelling approach being employed. Considerable additional work is required prior to preparing the recommendation report for the Advanced Fuels Campaign.

  14. Predicting at-fault car accidents of older drivers.

    PubMed

    De Raedt, R; Ponjaert-Kristoffersen, I

    2001-11-01

    Considerable research shows that car accidents are difficult to predict using screening tests. The objective of this exploratory study is to determine whether detailed accident analysis taking into account the specific accident type might enhance the predictive power of a standardised road test and a set of selected neuropsychological tests. Moreover, this study addresses the validity and reliability of performance-based driving evaluation. The sample consisted of 84 older drivers between 65 and 96 years of age who were referred for a fitness-to-drive evaluation. Using discriminant analyses, the subjects were classified as drivers with and without at-fault accidents. We compared the accuracy of neuropsychological tests and a road test for postdicting all accidents, accidents classified into two categories, and accidents classified into four different categories. The percentages of correctly classified subjects were highest at the level of the most detailed classification. These results suggest that, although accident prediction is difficult, the predictability of car accidents by neurocognitive measurements and a road test increases when the kind of accident is specified.

  15. Motorcycle accidents, rider behaviour, and psychological models.

    PubMed

    Özkan, Türker; Lajunen, Timo; Doğruyol, Burak; Yıldırım, Zümrüt; Çoymak, Ahmet

    2012-11-01

    The aims of the present study were to: (a) investigate the factor structure of the Motorcycle Rider Behaviour Questionnaire (MRBQ) [Elliott, M.A., Baughan, B.J., Sexton, B.F., 2007. Errors and violations in relation to motorcyclists' crash risk. Accident Analysis and Prevention 39, 491-499] among Turkish riders; (b) study the relationships between different types of rider behaviour and motorcyclists' active and passive accidents and offences; and (c) investigate the usefulness of the Theory of Planned Behaviour (TPB), the Health Belief Model (HBM), and Locus of Control (T-LOC) in explaining rider behaviours. The MRBQ was administered to a sample of motorcyclists (N=451). Principal components analysis yielded a 5-factor solution including traffic errors, control errors, speed violations, performance of stunts, and use of safety equipment. Annual mileage was related to a higher number of active and passive accidents and offences, whereas age was related to a lower number of active and passive accidents. Stunts were the main predictors of active accidents and offences. Speeding violations predicted offences. Stunts and speeding violations were associated with the fate factor of the T-LOC; with the attitudes, subjective norms, and intention components of the TPB; and with the cues to action and perceived severity components of the HBM. Use of safety equipment was related to high perceived behavioural control and intention (TPB), low perceived barriers (HBM), and a low fate factor (T-LOC). While traffic errors were associated with high scores on the perceived barriers and cues to action components of the HBM, control errors were related to a high score on the vehicle and environment factor of the T-LOC. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Preliminary Modeling of Accident Tolerant Fuel Concepts under Accident Conditions

    SciTech Connect

    Gamble, Kyle A.; Hales, Jason D.

    2016-12-01

    The catastrophic events that occurred at the Fukushima-Daiichi nuclear power plant in 2011 have led to widespread interest in research on alternative fuels and claddings that are proposed to be accident tolerant. Thus, the United States Department of Energy through its NEAMS (Nuclear Energy Advanced Modeling and Simulation) program has funded an Accident Tolerant Fuel (ATF) High Impact Problem (HIP). The ATF HIP is funded for a three-year period. The purpose of the HIP is to perform research into two potential accident tolerant concepts and provide an in-depth report to the Advanced Fuels Campaign (AFC) describing the behavior of the concepts, both of which are being considered for inclusion in a lead test assembly scheduled for placement into a commercial reactor in 2022. The initial focus of the HIP is on uranium silicide fuel and iron-chromium-aluminum (FeCrAl) alloy cladding. Utilizing the expertise of three national laboratory participants (INL, LANL, and ANL), a comprehensive multiscale approach to modeling is being used, including atomistic modeling, molecular dynamics, rate theory, phase-field, and fuel performance simulations. In this paper, we present simulations of two proposed accident tolerant fuel systems: U3Si2 fuel with Zircaloy-4 cladding, and UO2 fuel with FeCrAl cladding. The simulations investigate the fuel performance response of the proposed ATF systems under Loss of Coolant and Station Blackout conditions using the BISON code. Sensitivity analyses are completed using Sandia National Laboratories’ DAKOTA software to determine which input parameters (e.g., fuel specific heat) have the greatest influence on the output metrics of interest (e.g., fuel centerline temperature). Early results indicate that each concept has significant advantages as well as areas of concern. Further work is required prior to formulating the recommendation report for the Advanced Fuels Campaign.

  17. Development of a model to predict flow oscillations in low-flow sodium boiling. [Loss-of-Piping Integrity accidents]

    SciTech Connect

    Levin, A.E.; Griffith, P.

    1980-04-01

    Tests performed in a small-scale water loop showed that voiding oscillations, similar to those observed in sodium, were present in water as well. An analytical model, appropriate for either sodium or water, was developed and used to describe the water flow behavior. The experimental results indicate that water can be successfully employed as a sodium simulant and, further, that the condensation heat transfer coefficient varies significantly during the growth and collapse of vapor slugs during oscillations. It is this variation, combined with the temperature profile of the unheated zone above the heat source, which determines the oscillatory behavior of the system. The analytical program has produced a model that does a good job of qualitatively predicting the flow behavior in the water experiments. The amplitude discrepancies are attributable to experimental uncertainties and model inadequacies. Several parameters (heat transfer coefficient, unheated-zone temperature profile, mixing between hot and cold fluids during oscillations) are set by the user. Criteria for the comparison of water and sodium experiments have been developed.

  18. [Chest modelling and automotive accidents].

    PubMed

    Trosseille, Xavier

    2011-11-01

    Automobile development is increasingly based on mathematical modeling. Accurate models of the human body are now available and serve to develop new means of protection. These models used to consist of rigid, articulated bodies but are now made of several million finite elements. They are now capable of predicting some risks of injury. To develop these models, sophisticated tests were conducted on human cadavers. For example, chest modeling started with material characterization and led to complete validation in the automobile environment. Model personalization, based on medical imaging, will permit studies of the behavior and tolerances of the entire population.

  19. FASTGRASS: A mechanistic model for the prediction of Xe, I, Cs, Te, Ba, and Sr release from nuclear fuel under normal and severe-accident conditions

    SciTech Connect

    Rest, J.; Zawadzki, S.A.

    1992-09-01

    The primary physical/chemical models that form the basis of the FASTGRASS mechanistic computer model for calculating fission-product release from nuclear fuel are described. Calculated results are compared with test data and the major mechanisms affecting the transport of fission products during steady-state and accident conditions are identified.

  20. An exploration of the utility of mathematical modeling predicting fatigue from sleep/wake history and circadian phase applied in accident analysis and prevention: the crash of Comair Flight 5191.

    PubMed

    Pruchnicki, Shawn A; Wu, Lora J; Belenky, Gregory

    2011-05-01

    On 27 August 2006 at 0606 eastern daylight time (EDT) at Bluegrass Airport in Lexington, KY (LEX), the flight crew of Comair Flight 5191 inadvertently attempted to take off from a general aviation runway too short for their aircraft. The aircraft crashed, killing 49 of the 50 people on board. To better understand this accident and to aid in preventing similar accidents, we applied mathematical modeling predicting fatigue-related degradation in performance for the Air Traffic Controller on duty at the time of the crash. To provide the necessary input to the model, we attempted to estimate circadian phase and sleep/wake histories for the Captain, First Officer, and Air Traffic Controller. We were able to estimate with confidence the circadian phase for each. We were able to estimate with confidence the sleep/wake history for the Air Traffic Controller, but unable to do this for the Captain and First Officer. Using the sleep/wake history estimates for the Air Traffic Controller as input, the mathematical modeling predicted moderate fatigue-related performance degradation at the time of the crash. This prediction was supported by the presence of what appeared to be fatigue-related behaviors in the Air Traffic Controller during the 30 min prior to and in the minutes after the crash. Our modeling results do not definitively establish fatigue in the Air Traffic Controller as a cause of the accident; rather, they suggest that had he been less fatigued he might have detected Comair Flight 5191's lining up on the wrong runway. We were not able to perform a similar analysis for the Captain and First Officer because we were not able to estimate with confidence their sleep/wake histories. Our estimates of sleep/wake history and circadian rhythm phase for the Air Traffic Controller might generalize to other air traffic controllers and to flight crew operating in the early morning hours at LEX. Relative to other times of day, the modeling results suggest an elevated risk of fatigue.

  1. A new approach to modeling aviation accidents

    NASA Astrophysics Data System (ADS)

    Rao, Arjun Harsha

    This work views aviation accidents as a set of hazardous states of a system (pilot and aircraft), and triggers that cause the system to move between hazardous states. I used the NTSB's accident coding manual (which contains nearly 4000 different codes) to develop a "dictionary" of hazardous states, triggers, and information codes. Then, I created the "grammar", or set of rules, that: (1) orders the hazardous states in each accident; and (2) links the hazardous states using the appropriate triggers. This approach: (1) provides a more correct count of the causes for accidents in the NTSB database; and (2) checks for gaps or omissions in NTSB accident data, and fills in some of these gaps using logic-based rules. These rules also help identify and count causes for accidents that were not discernable from previous analyses of historical accident data. I apply the model to 6200 helicopter accidents that occurred in the US between 1982 and 2015. First, I identify the states and triggers that are most likely to be associated with fatal and non-fatal accidents. The results suggest that non-fatal accidents, which account for approximately 84% of the accidents, provide valuable opportunities to learn about the causes for accidents. Next, I investigate the causes of inflight loss of control (LOC) using both a conventional approach and the state-based approach. The conventional analysis provides little insight into the causal mechanism for LOC. For instance, the top cause of LOC is "aircraft control/directional control not maintained", which does not provide any insight. In contrast, the state-based analysis showed that pilots' tendency to clip objects frequently triggered LOC (16.7% of LOC accidents); this finding was not directly discernable from conventional analyses. Finally, I investigate the causes for improper autorotations using both a conventional approach and the state-based approach. The conventional approach uses modifiers (e.g., "improper", "misjudged") associated with "24520

  2. A critical review of macro models for road accidents.

    PubMed

    Hakim, S; Shefer, D; Hakkert, A S; Hocherman, I

    1991-10-01

    This paper presents a critical review of state-of-the-art macro models for road accidents. Such a review is meant to identify and establish the significance of policy and socioeconomic variables affecting the level of road accidents. The aim is to identify those variables associated with effective policies and interventions to enable decision makers to improve the level of road safety. The variables that appear to affect the number of fatalities or injuries are: vehicle miles travelled (VMT), vehicle population, income (in its various forms), percentage of young drivers, intervention policies such as speed limits, periodic vehicle inspection, and minimum alcohol-drinking age. Viewed critically, the state-of-the-art models being used to explain and predict road accidents are still deficient. One possible approach to correcting this deficiency draws from consumer utility theory, using analytical models built on a newly constructed theoretical framework. Success in estimating such models may improve predictions of road accidents, thus demonstrating the comparative cost effectiveness of alternative intervention policies.

  3. Dust mobilization and transport modeling for loss of vacuum accidents

    SciTech Connect

    P.W. Humrickhouse; J.P. Sharpe

    2007-10-01

    We develop a general continuum fluid dynamic model for dust transport in loss of vacuum accidents in fusion energy systems. The relationship between this general approach and established particle transport methods is clarified, in particular the relationship between the seemingly disparate treatments of aerosol dynamics and Lagrangian particle tracking. Constitutive equations for granular flow are found to be inadequate for prediction of mobilization, as these models essentially impose a condition of flow from the outset. Experiments confirm that at low shear, settled dust piles behave more like a continuum solid, and suitable solid models will be required to predict the onset of dust mobilization.

  4. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.; Hall, R.; Martinez, G.; Uryasev, S.

    1996-03-13

    This work explores the hypothesis that Service Difficulty Reports (SDR - primarily inspection reports) are related to Accident Incident Data System (AIDS - reports primarily compiled from National Transportation Safety Board (NTSB) accident investigations). This work sought and found relations between equipment operability reported in the SDR and aviation safety reported in AIDS. Equipment is not the only factor in aviation accidents, but it is the factor reported in the SDR. Two approaches to risk analysis were used: (1) The conventional method, in which reporting frequencies are taken from a data base (SDR), and used with an aircraft reliability block diagram model of the critical systems to predict aircraft failure, and (2) Shape analysis that uses the magnitude and shape of the SDR distribution compared with the AIDS distribution to predict aircraft failure.
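The reliability-block-diagram calculation in approach (1) can be sketched as follows; the component layout and per-flight failure probabilities are invented for illustration, not values derived from the SDR data:

```python
def series(*ps):
    """Failure probability of components in series: the system fails if any fails."""
    ok = 1.0
    for p in ps:
        ok *= (1.0 - p)
    return 1.0 - ok

def parallel(*ps):
    """Failure probability of redundant components: the system fails only if all fail."""
    fail = 1.0
    for p in ps:
        fail *= p
    return fail

# Hypothetical layout: two redundant hydraulic pumps (parallel), in series
# with a single flight-control unit. Probabilities are per flight.
p_system = series(parallel(1e-3, 1e-3), 5e-5)
# Redundancy drives the pump pair's contribution down to ~1e-6, so the
# flight-control unit dominates the system failure probability.
```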

  5. Predicting and analyzing the trend of traffic accidents deaths in Iran in 2014 and 2015

    PubMed Central

    Mehmandar, Mohammadreza; Soori, Hamid; Mehrabi, Yadolah

    2016-01-01

    Background: Predicting the trend in traffic accident deaths and analyzing it can be a useful tool for planning and policy-making, for conducting interventions appropriate to the death trend, and for taking the actions required to control and prevent future occurrences. Objective: To predict and analyze the trend of traffic accident deaths in Iran in 2014 and 2015. Settings and Design: A cross-sectional study. Materials and Methods: All information on fatal traffic accidents available in the database of the Iran Legal Medicine Organization from 2004 to the end of 2013 was used to determine the change points (multivariable time series analysis). Using an autoregressive integrated moving average (ARIMA) model, traffic accident death rates were predicted for 2014 and 2015, and the predicted rates were compared with the observed values to assess the efficiency of the model. Results: The predicted death rate for 2014 was close to the actual rate recorded for that year, while for 2015 a decrease relative to 2014 was predicted for all months, with a maximum decrease of 41% predicted for January and February 2015. Conclusion: Based on the prediction and analysis of the death trends, proper and continuous application of the interventions of previous years for road safety improvement and motor vehicle safety improvement, particularly training and culture-fostering interventions, together with the approval and enforcement of deterrent regulations to change organizational behavior, can significantly decrease the losses caused by traffic accidents. PMID:27308255
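ARIMA fitting is normally done with a library such as statsmodels; the autoregressive core of such a forecast can be sketched with a minimal AR(1) fit. The monthly counts below are invented:

```python
def fit_ar1(y):
    """Least-squares fit of a demeaned AR(1): (y_t - m) = phi * (y_{t-1} - m)."""
    m = sum(y) / len(y)
    d = [v - m for v in y]
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    return m, num / den

def forecast(y, steps):
    """Iterate the fitted AR(1) forward; forecasts decay toward the mean."""
    m, phi = fit_ar1(y)
    out, last = [], y[-1] - m
    for _ in range(steps):
        last = phi * last
        out.append(m + last)
    return out

# Hypothetical monthly death counts.
history = [210, 220, 215, 225, 218, 230, 224, 228, 222, 232, 226, 235]
preds = forecast(history, 3)
```

A full ARIMA adds differencing (the "I") for trends and moving-average terms for short-lived shocks, which is why a library implementation is preferable in practice.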

  6. The "killing zone" revisited: serial nonlinearities predict general aviation accident rates from pilot total flight hours.

    PubMed

    Knecht, William R

    2013-11-01

    Is there a "killing zone" (Craig, 2001) - a range of pilot flight time over which general aviation (GA) pilots are at greatest risk? More broadly, can we predict accident rates, given a pilot's total flight hours (TFH)? These questions interest pilots, aviation policy makers, insurance underwriters, and researchers alike. Most GA research studies implicitly assume that accident rates are linearly related to TFH, but that relation may actually be multiply nonlinear. This work explores the ability of serial nonlinear modeling functions to predict GA accident rates from noisy rate data binned by TFH. Two sets of National Transportation Safety Board (NTSB)/Federal Aviation Administration (FAA) data were log-transformed, then curve-fit to a gamma-pdf-based function. Despite high rate-noise, this produced weighted goodness-of-fit (R_w^2) estimates of .654 and .775 for non-instrument-rated (non-IR) and instrument-rated (IR) pilots, respectively. Serial-nonlinear models could be useful to directly predict GA accident rates from TFH, and as an independent variable or covariate to control for flight risk during data analysis. Applied to FAA data, these models imply that the "killing zone" may be broader than imagined. Relatively high risk for an individual pilot may extend well beyond the 2000-h mark before leveling off to a baseline rate. Published by Elsevier Ltd.
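
    A gamma-pdf-shaped rate function and a weighted goodness-of-fit statistic can be sketched as below. The shape and scale parameters, and the peak location they imply, are illustrative assumptions, not the fitted FAA values.

```python
import math

def gamma_rate(tfh, k=2.0, theta=800.0, scale=1.0):
    """Gamma-pdf-shaped accident rate vs. total flight hours (TFH).
    Peaks at (k - 1) * theta, then decays toward a low baseline."""
    return scale * tfh ** (k - 1) * math.exp(-tfh / theta) / (
        math.gamma(k) * theta ** k)

def weighted_r2(obs, pred, weights):
    """Weighted goodness of fit (R_w^2) between observed and predicted rates."""
    wmean = sum(w * o for w, o in zip(weights, obs)) / sum(weights)
    ss_res = sum(w * (o - p) ** 2 for w, o, p in zip(weights, obs, pred))
    ss_tot = sum(w * (o - wmean) ** 2 for w, o in zip(weights, obs))
    return 1.0 - ss_res / ss_tot
```

With k=2 and theta=800 the rate peaks near 800 h and falls off slowly, which is the qualitative "killing zone" shape the abstract describes.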

  7. Exact results for car accidents in a traffic model

    NASA Astrophysics Data System (ADS)

    Huang, Ding-wei

    1998-07-01

    Within the framework of a recent model for car accidents on single-lane highway traffic, we study analytically the probability of the occurrence of car accidents. Exact results are obtained. Various scaling behaviours are observed. The linear dependence of the occurrence of car accidents on density is understood as the dominance of a single velocity in the distribution.

  8. Investigation of Key Factors for Accident Severity at Railroad Grade Crossings by Using a Logit Model

    PubMed Central

    Hu, Shou-Ren; Li, Chin-Shang; Lee, Chi-Kang

    2009-01-01

    Although several studies have used logit or probit models and their variants to fit data on accident severity on roadway segments, few have investigated accident severity at a railroad grade crossing (RGC). Compared to accident risk analysis in terms of accident frequency and severity for a highway system, investigation of the factors contributing to traffic accidents at an RGC may be more complicated because of additional highway–railway interactions. Because the proportional odds assumption was violated when a cumulative logit model (proportional odds model) with stepwise variable selection was fitted to ordinal accident severity data collected at 592 RGCs in Taiwan, a generalized logit model with stepwise variable selection was used instead, as suggested by Stokes et al. (2000, p. 249), to identify explanatory variables (factors or covariates) that were significantly associated with the severity of collisions. The fitted model was then used to predict the level of accident severity, given a set of values of the explanatory variables. Number of daily trains, highway separation, number of daily trucks, obstacle detection device, and approaching crossing markings significantly affected levels of accident severity at an RGC (p-value = 0.0009, 0.0008, 0.0112, 0.0017, and 0.0003, respectively). Finally, a marginal effect analysis was conducted to evaluate the effect of the number of daily trains and the presence of a law enforcement camera on the potential accident severity. PMID:20161414
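
    The prediction step of a generalized (multinomial) logit model can be sketched as follows: each non-reference severity level gets its own linear predictor, and a softmax turns them into probabilities. The covariates and coefficients here are hypothetical placeholders, not the values fitted to the Taiwan RGC data.

```python
import math

def severity_probs(x, coefs):
    """Generalized-logit class probabilities.
    x: dict of covariates; coefs: one coefficient dict per non-reference
    severity level ('_int' holds the intercept). The last (reference)
    level has its linear predictor fixed at 0."""
    etas = [b.get("_int", 0.0) + sum(b.get(k, 0.0) * v for k, v in x.items())
            for b in coefs]
    etas.append(0.0)                       # reference severity level
    m = max(etas)                          # subtract max for stability
    exps = [math.exp(e - m) for e in etas]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical crossing and coefficients: [fatal, injury, no-injury (ref.)].
crossing = {"daily_trains": 60, "daily_trucks": 200, "obstacle_detector": 0}
coefs = [{"_int": -3.0, "daily_trains": 0.02, "obstacle_detector": -0.8},
         {"_int": -1.0, "daily_trains": 0.01}]
probs = severity_probs(crossing, coefs)
```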

  9. Modeling secondary accidents identified by traffic shock waves.

    PubMed

    Junhua, Wang; Boya, Liu; Lanfang, Zhang; Ragland, David R

    2016-02-01

    The high potential for occurrence and the negative consequences of secondary accidents make them an issue of great concern for freeway safety. Using accident records from a three-year period together with California interstate freeway loop data, a dynamic method based on traffic shock wave detection was used to identify secondary accidents more accurately. Spatio-temporal gaps between primary and secondary accidents were shown to follow a mixture of Weibull and normal distributions. A logistic regression model was developed to investigate the major factors contributing to secondary accident occurrence. Traffic shock wave speed and volume at the occurrence of a primary accident were explicitly considered in the model, as a secondary accident is defined as an accident that occurs within the spatio-temporal impact scope of the primary accident. Results show that the shock waves originating in the wake of a primary accident have a more significant impact on the likelihood of a secondary accident than the effects of traffic volume. Primary accidents with long durations can significantly increase the possibility of secondary accidents. Unsafe speed and weather are other factors contributing to secondary crash occurrence. It is strongly suggested that when police or rescue personnel arrive at the scene of an accident, they should not suddenly block, decrease, or unblock the traffic flow, but instead endeavor to control traffic in a smooth and controlled manner. It is also important to reduce accident processing time to lower the risk of secondary accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
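
    The logistic model form can be sketched as below. The coefficients are hypothetical, chosen only to reproduce the qualitative findings (shock-wave speed and long duration raise the risk more than volume); they are not the paper's estimates.

```python
import math

def secondary_accident_prob(shockwave_speed_kmh, duration_min, volume_vph):
    """Logistic probability of a secondary accident given primary-accident
    features (all coefficients are illustrative assumptions)."""
    eta = (-4.0
           + 0.08 * shockwave_speed_kmh   # upstream shock-wave speed
           + 0.02 * duration_min          # primary-accident duration
           + 0.0002 * volume_vph)         # traffic volume (weaker effect)
    return 1.0 / (1.0 + math.exp(-eta))
```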

  10. Predicted spatio-temporal dynamics of radiocesium deposited onto forests following the Fukushima nuclear accident

    PubMed Central

    Hashimoto, Shoji; Matsuura, Toshiya; Nanko, Kazuki; Linkov, Igor; Shaw, George; Kaneko, Shinji

    2013-01-01

    The majority of the area contaminated by the Fukushima Dai-ichi nuclear power plant accident is covered by forest. To facilitate effective countermeasure strategies to mitigate forest contamination, we simulated the spatio-temporal dynamics of radiocesium deposited into Japanese forest ecosystems in 2011 using a model that was developed after the Chernobyl accident in 1986. The simulation revealed that the radiocesium inventories in tree and soil surface organic layer components drop rapidly during the first two years after the fallout. Over a period of one to two years, the radiocesium is predicted to move from the tree and surface organic soil to the mineral soil, which eventually becomes the largest radiocesium reservoir within forest ecosystems. Although the uncertainty of our simulations should be considered, the results provide a basis for understanding and anticipating the future dynamics of radiocesium in Japanese forests following the Fukushima accident. PMID:23995073
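
    The tree-to-organic-layer-to-mineral-soil dynamics described above can be caricatured as a three-compartment model with first-order transfer, integrated by Euler steps. The rate constants are illustrative assumptions, not the parameters of the Chernobyl-derived model.

```python
def simulate(years, k_tree=1.5, k_org=0.8, dt=0.01):
    """Radiocesium inventory fractions in tree, organic layer, and mineral
    soil after `years`, with first-order transfer tree -> organic -> mineral.
    Rate constants (per year) are illustrative."""
    tree, organic, mineral = 1.0, 0.0, 0.0   # fallout starts on the canopy
    for _ in range(int(years / dt)):
        d_tree = -k_tree * tree
        d_org = k_tree * tree - k_org * organic
        d_min = k_org * organic
        tree += d_tree * dt
        organic += d_org * dt
        mineral += d_min * dt
    return tree, organic, mineral

tree2, org2, min2 = simulate(2.0)   # two years after fallout
```

With these rates the mineral soil already holds the largest share after two years, mirroring the paper's qualitative result.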

  11. PREDICTIVE MODELS

    SciTech Connect

    Ray, R.M.

    1986-12-01

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; 2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; 3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; 4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and 5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  12. Traffic accident reconstruction and an approach for prediction of fault rates using artificial neural networks: A case study in Turkey.

    PubMed

    Can Yilmaz, Ali; Aci, Cigdem; Aydin, Kadir

    2016-08-17

    Currently, in Turkey, fault rates in traffic accidents are determined at the discretion of accident experts (with no speed analysis of the vehicles, just consideration of the accident type), and there are no specific quantitative instructions on fault rates related to the progression of accidents beyond the type of collision (side impact, head-to-head, rear end, etc.) in the No. 2918 Turkish Highway Traffic Act (THTA 1983). The aim of this study is to introduce a scientific and systematic approach for the determination of fault rates in the most frequent property damage-only (PDO) traffic accidents in Turkey. In this study, data (police reports, skid marks, deformation, crush depth, etc.) collected from the most frequent and controversial accident types involving PDO (4 sample vehicle-vehicle scenarios) were inserted into a reconstruction software package called vCrash. Sample real-world scenarios were simulated in the software to generate different vehicle deformations that also correspond to energy-equivalent speed data just before the crash. These values were used to train a multilayer feedforward artificial neural network (MFANN), a function fitting neural network (FITNET, a specialized version of MFANN), and generalized regression neural network (GRNN) models within 10-fold cross-validation to predict fault rates without using the software. The performance of the artificial neural network (ANN) prediction models was evaluated using the mean square error (MSE) and the multiple correlation coefficient (R). It was shown that the MFANN model performed better for predicting fault rates (i.e., lower MSE and higher R) than the FITNET and GRNN models for accident scenarios 1, 2, and 3, whereas FITNET performed best for scenario 4. The FITNET model showed the second-best prediction results for the first 3 scenarios. Because there is no training phase in GRNN, the GRNN model produced results much faster than the MFANN and FITNET models. However, the GRNN model had the worst prediction results.
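
    The two evaluation metrics named above, MSE and the multiple correlation coefficient R, can be written out as below. The observed/predicted fault rates are illustrative, not values from the vCrash scenarios.

```python
import math

def mse(obs, pred):
    """Mean square error between observed and predicted values."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def corr(obs, pred):
    """Pearson correlation R between observed and predicted values."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

observed = [0.70, 0.50, 0.80, 0.30]   # illustrative fault rates (one party)
predicted = [0.68, 0.55, 0.75, 0.35]
```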

  13. Characterizing the Severe Turbulence Environments Associated With Commercial Aviation Accidents: A Real-Time Turbulence Model (RTTM) Designed for the Operational Prediction of Hazardous Aviation Turbulence Environments

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.

    2004-01-01

    Real-time prediction of environments predisposed to producing moderate-to-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for this prediction and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16,000 to 46,000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.
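
    One of the diagnostics listed above, the Richardson number, can be sketched as a bulk layer calculation: the ratio of buoyant suppression to shear production of turbulence, with Ri below roughly 0.25 flagging layers prone to shear instability. The profile values are illustrative.

```python
G = 9.81  # gravitational acceleration, m s^-2

def richardson(theta_low, theta_high, u_low, u_high, dz):
    """Bulk gradient Richardson number across a layer of depth dz (m).
    theta_*: potential temperature (K); u_*: wind speed (m/s)."""
    theta_mean = 0.5 * (theta_low + theta_high)
    n2 = (G / theta_mean) * (theta_high - theta_low) / dz  # buoyancy (N^2)
    shear2 = ((u_high - u_low) / dz) ** 2                  # shear squared
    return n2 / shear2

# Strongly sheared 1-km layer: small Ri, turbulence likely.
ri_sheared = richardson(300.0, 301.0, 20.0, 35.0, 1000.0)
```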

  14. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Behavioral accidents are a particular type of accident. They are caused by inappropriate individual behaviors and faulty reactions. Catastrophe theory is a means of mathematically modeling the dynamic processes that underlie behavioral accidents. Based on a comprehensive database of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that carry high risks of serious injury. Field tests with the model indicate that it has three important uses: it can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.

  15. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.R.; Hall, R.E.; Martinez-Guridi, G.; Uryasev, S.; Sampath, S.G.

    1996-10-01

    A synthetic model of scheduled-commercial U.S. aviation fatalities was constructed from linear combinations of the time-spectra of critical-systems reporting, using 5.5 years of Service Difficulty Reports (SDR) and Accident Incident Data System (AIDS) records. This model, used to predict near-future trends in aviation accidents, was tested by using the first 36 months of data to construct the synthetic model, which was then used to predict fatalities during the following eight months. These predictions were tested by comparison with the fatality data. A reliability block diagram (RBD) and third-order extrapolations were also used as predictive models and compared with actuality. The synthetic model was the best predictor because of its use of systems data. Other results of the study are a database of service difficulties for major aviation systems, and a rank ordering of systems according to their contribution to the synthesis. 4 refs., 8 figs., 3 tabs.

  16. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of radioactive material distribution in short range (about 50 km) is in urgent need for population sheltering and evacuation planning. However, the meteorological data and the source term which greatly influence the accuracy of the atmospheric dispersion models are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in the nuclear power plant accident emergency management but also in other similar situation where hazardous material is released into the atmosphere.
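
    The ensemble Kalman filter update at the heart of the method can be illustrated in a toy scalar form: each ensemble member is nudged toward a perturbed observation by a gain estimated from the ensemble spread. The prior, observation, and error values are illustrative, not the paper's dispersion setup.

```python
import random

def enkf_update(ensemble, obs, obs_err, rng):
    """Perturbed-observation EnKF update for a scalar state observed
    directly (H = identity)."""
    mean = sum(ensemble) / len(ensemble)
    var = sum((x - mean) ** 2 for x in ensemble) / (len(ensemble) - 1)
    gain = var / (var + obs_err ** 2)            # Kalman gain
    return [x + gain * (obs + rng.gauss(0.0, obs_err) - x) for x in ensemble]

rng = random.Random(0)
# Prior ensemble of, say, a source release rate (arbitrary units).
prior = [rng.gauss(10.0, 2.0) for _ in range(100)]
# A precise off-site measurement pulls the ensemble toward 4.0.
posterior = enkf_update(prior, 4.0, 0.5, rng)
```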

  17. Predicting Posttraumatic Stress Symptoms in Children after Road Traffic Accidents

    ERIC Educational Resources Information Center

    Landolt, Markus A.; Vollrath, Margarete; Timm, Karin; Gnehm, Hanspeter E.; Sennhauser, Felix H.

    2005-01-01

    Objective: To prospectively assess the prevalence, course, and predictors of posttraumatic stress symptoms (PTSSs) in children after road traffic accidents (RTAs). Method: Sixty-eight children (6.5-14.5 years old) were interviewed 4-6 weeks and 12 months after an RTA with the Child PTSD Reaction Index (response rate 58.6%). Their mothers (n = 60)…

  18. Usefulness of high resolution coastal models for operational oil spill forecast: the Full City accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-06-01

    Oil spill modeling is considered to be an important decision support system (DeSS) useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low resolution basin scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the Full City accident on the Norwegian south coast and compare three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but when an analysis based on a higher-resolution model (1.5 km resolution) for the area is included, the model system shows results that compare well with observations. The study also shows that an ensemble using three different models is useful when predicting/analyzing oil spills in coastal areas.

  19. Usefulness of high resolution coastal models for operational oil spill forecast: the "Full City" accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-11-01

    Oil spill modeling is considered to be an important part of a decision support system (DeSS) for oil spill response and is useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low resolution basin scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the "Full City" accident on the Norwegian south coast and compare operational simulations from three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but by applying ocean forcing data of higher resolution (1.5 km resolution), the model system shows results that compare well with observations. The study also shows that an ensemble of results from the three different models is useful when predicting/analyzing oil spills in coastal areas.

  20. A drug cost model for injuries due to road traffic accidents

    PubMed Central

    Riewpaiboon, Arthorn; Piyauthakit, Piyanuch; Srijariya, Witsanuchai; Chaikledkaew, Usa

    2007-01-01

    Objective This study aimed to develop a drug cost model for injuries due to road traffic accidents for patients receiving treatment at a regional hospital in Thailand. Methods The study was designed as a retrospective, descriptive analysis. The cases were all road traffic accident cases receiving treatment at a public regional hospital in the fiscal year 2004. Results Three thousand seven hundred and twenty-three road accident patients were included in the study. The mean drug cost per case was USD18.20 (SD=73.49, median=2.36). The fitted drug cost model had an adjusted R2 of 0.449. The significant positive predictors of drug cost were prolonged length of stay, age over 30 years, male sex, Universal Health Coverage Scheme, time of accident between 18:00 and 24:00, and motorcycle compared to bus. To forecast the drug budget for 2006, two approaches were identified: the mean drug cost and the predicted average drug cost. The predicted average drug cost was calculated based on the forecasted values of the statistically significant (p<0.05) predictor variables included in the fitted model; the predicted total drug cost was USD44,334. Alternatively, based on the mean cost, the predicted total drug cost in 2006 was USD63,408. This was 43% higher than the figure based on the predicted cost approach. Conclusions The planned drug budgets based on the mean cost and on the predicted average cost were meaningfully different. The application of a predicted average cost model could result in more accurate budget planning than a mean statistic approach. PMID:25170359

  1. Catastrophe model of the accident process, safety climate, and anxiety.

    PubMed

    Guastello, Stephen J; Lynn, Mark

    2014-04-01

    This study aimed (a) to address the evidence for situational specificity in the connection between safety climate and occupational accidents, (b) to resolve similar issues between anxiety and accidents, (c) to expand and develop the concept of safety climate to include a wider range of organizational constructs, and (d) to assess a cusp catastrophe model for occupational accidents in which safety climate and anxiety are treated as bifurcation variables and environmental hazards are asymmetry variables. Bifurcation, or trigger, variables can have a positive or negative effect on outcomes, depending on the levels of the asymmetry, or background, variables. The participants were 1262 production employees of two steel manufacturing facilities who completed a survey that measured safety management, anxiety, subjective danger, dysregulation, stressors and hazards. Nonlinear regression analyses showed, for this industry, that the accident process was explained by a cusp catastrophe model in which safety management and anxiety were bifurcation variables, and hazards, age and experience were asymmetry variables. The accuracy of the cusp model (R2 = .72) exceeded that of the next best log-linear model (R2 = .08) composed from the same survey variables. The results are thought to generalize to any industry where serious injuries could occur, although situationally specific effects should be anticipated as well.
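
    The geometry behind the cusp model can be sketched numerically: equilibria of the cusp are real roots of dV/dx = x^3 - b*x - a = 0, where b plays the role of the bifurcation variable (e.g. safety climate, anxiety) and a the asymmetry variable (e.g. hazards). When b is large enough, two stable regimes coexist, which is what lets a "normally safe" state flip abruptly. The root finder below is a generic sign-change bisection, not the paper's regression procedure.

```python
def cusp_equilibria(a, b, lo=-10.0, hi=10.0, n=20000):
    """Real roots of x^3 - b*x - a = 0 found by scanning for sign changes
    and refining each bracket by bisection."""
    f = lambda x: x ** 3 - b * x - a
    roots = []
    step = (hi - lo) / n
    x = lo
    for _ in range(n):
        if f(x) == 0.0 or f(x) * f(x + step) < 0.0:
            u, v = x, x + step
            for _ in range(60):            # bisection refinement
                m = 0.5 * (u + v)
                if f(u) * f(m) <= 0.0:
                    v = m
                else:
                    u = m
            roots.append(0.5 * (u + v))
        x += step
    return roots

bimodal = cusp_equilibria(a=0.0, b=3.0)    # inside the cusp: 3 equilibria
unimodal = cusp_equilibria(a=0.0, b=-1.0)  # outside the cusp: 1 equilibrium
```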

  2. Prediction of Severe Accident Counter Current Natural Circulation Flows in the Hot Leg of a Pressurized Water Reactor

    SciTech Connect

    Boyd, Christopher F.

    2006-07-01

    During certain phases of a severe accident in a pressurized water reactor (PWR), the core becomes uncovered and steam carries heat to the steam generators through natural circulation. For PWRs with U-tube steam generators and loop seals filled with water, a counter current flow pattern is established in the hot leg. This flow pattern has been experimentally observed and has been predicted using computational fluid dynamics (CFD). Predictions of severe accident behavior are routinely carried out using severe accident system analysis codes such as SCDAP/RELAP5 or MELCOR. These codes, however, were not developed for predicting the three-dimensional natural circulation flow patterns during this phase of a severe accident. CFD, along with a set of experiments at 1/7 scale, has historically been used to establish the flow rates and mixing for the system analysis tools. One important aspect of these predictions is the counter current flow rate in the nearly 30-inch-diameter hot leg between the reactor vessel and the steam generator. This flow rate is strongly related to the amount of energy that can be transported away from the reactor core. This energy transfer plays a significant role in the prediction of core failures as well as potential failures in other reactor coolant system piping. CFD is used to determine the counter current flow rate during a severe accident. Specific sensitivities are completed for parameters such as surge line flow rates, hydrogen content, and vessel and steam generator temperatures. The predictions are carried out for the reactor vessel upper plenum, hot leg, a portion of the surge line, and a steam generator blocked off at the outlet plenum. All predictions utilize the FLUENT V6 CFD code. The volumetric flow in the hot leg is assumed to be proportional to the square root of the product of the normalized density difference, gravity, and the hydraulic diameter to the 5th power. CFD is used to determine the proportionality constant in the range
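
    The scaling law quoted above, Q proportional to sqrt((delta_rho/rho) * g * D^5), can be written as a small helper. The discharge coefficient below is a placeholder (determining it was the point of the CFD study), and the density ratio is a hypothetical input.

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def hot_leg_flow(delta_rho_over_rho, diameter_m, c=0.1):
    """Counter current volumetric flow (m^3/s) in the hot leg from the
    Q = c * sqrt((delta_rho/rho) * g * D^5) scaling; c is illustrative."""
    return c * math.sqrt(delta_rho_over_rho * G * diameter_m ** 5)

# Roughly 30-inch (0.75 m) hot leg with a hypothetical density ratio.
q = hot_leg_flow(0.3, 0.75)
```

The scaling implies that doubling the hydraulic diameter multiplies the flow by 2^2.5, and quadrupling the density difference doubles it.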

  3. Monte-Carlo prediction of changes in areas of west Cumbria requiring restrictions on sheep following the Chernobyl accident.

    PubMed

    Wright, S M; Smith, J T; Beresford, N A; Scott, W A

    2003-04-01

    Following the 1986 Chernobyl accident radiocaesium levels in sheep meat in some upland areas of the United Kingdom were above the national intervention limit. West Cumbria was one of these areas and restrictions are currently still in place. In addition to deposition from the Chernobyl accident, Cumbria has been subject to radiocaesium deposition from atmospheric nuclear weapons tests, the 1957 Windscale accident and routine releases from the Sellafield nuclear reprocessing plant. A Monte-Carlo approach has been used to try to predict areas in west Cumbria where radiocaesium activity concentrations in lamb meat would require the imposition of restrictions at different times after the Chernobyl accident. The approach models the transfer of radiocaesium from soil to vegetation, based upon soil organic matter, and from vegetation to lamb meat. Spatial inputs are soil organic matter and total post-Chernobyl (137)Cs and (134)Cs deposition; a ratio of Chernobyl (137)Cs to (134)Cs deposition has been used to differentiate Chernobyl and pre-Chernobyl (137)Cs deposition. Comparisons of predicted radiocaesium transfer from soil-vegetation and the spatial variation in lamb (137)Cs activity concentrations are good and predicted restricted areas with time after Chernobyl compare well to the restricted areas set by UK government. We predict that restrictions may be required until 2024 and that in some areas the contribution of pre-Chernobyl (137)Cs to predicted lamb radiocaesium activity concentrations is significant, such that restrictions may only have been required until 1994 as a consequence of Chernobyl radiocaesium deposition alone. This work represents a novel implementation of a spatial radioecological model using a Monte-Carlo approach.
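
    The Monte-Carlo logic can be caricatured as below: sample a soil-to-lamb transfer factor per map cell and count the cells whose predicted lamb activity mostly exceeds the intervention limit. The deposition values, the lognormal transfer-factor distribution, and the 1000 Bq/kg limit used here are illustrative assumptions, not the study's parameterization (which also conditions transfer on soil organic matter).

```python
import random

LIMIT_BQ_KG = 1000.0   # illustrative intervention limit for lamb meat

def fraction_restricted(depositions_bq_m2, n_samples=2000, seed=1):
    """Fraction of cells restricted: a cell is restricted when more than
    half of the sampled lamb activity concentrations exceed the limit."""
    rng = random.Random(seed)
    restricted = 0
    for dep in depositions_bq_m2:
        exceed = 0
        for _ in range(n_samples):
            tf = rng.lognormvariate(-6.0, 0.5)   # transfer factor, m^2/kg
            if dep * tf > LIMIT_BQ_KG:
                exceed += 1
        if exceed / n_samples > 0.5:
            restricted += 1
    return restricted / len(depositions_bq_m2)

# Four illustrative cells with increasing Cs-137 deposition (Bq/m^2).
cells = [5e4, 2e5, 8e5, 1.5e6]
frac = fraction_restricted(cells)
```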

  4. The modelling of fuel volatilisation in accident conditions

    NASA Astrophysics Data System (ADS)

    Manenc, H.; Mason, P. K.; Kissane, M. P.

    2001-04-01

    For oxidising conditions, at high temperatures, the pressure of uranium vapour species at the fuel surface is predicted to be high. These vapour species can be transported away from the fuel surface, giving rise to significant amounts of volatilised fuel, as has been observed during small-scale experiments and taken into account in different models. Hence, fuel volatilisation must be taken into account in the conduct of a simulated severe accident such as the Phebus FPT-4 experiment. A large-scale in-pile test is designed to investigate the release of fission products and actinides from irradiated UO2 fuel in a debris bed and molten pool configuration. Best estimate predictions for fuel volatilisation were performed before the test. This analysis was used to assess the maximum possible loading of filters collecting emissions and the consequences for the filter-change schedule. Following successful completion of the experiment, blind post-test analysis is being performed; boundary conditions for the calculations are based on the preliminary post-test analysis with the core degradation code ICARE2 [J.C. Crestia, G. Repetto, S. Ederli, in: Proceedings of the Fourth Technical Seminar on the PHEBUS FP Programme, Marseille, France, 20-22 March 2000]. The general modelling approach is presented here and then illustrated by the analysis of fuel volatilisation in Phebus FPT4 (for which results are not yet available). Effort was made to reduce uncertainties in the calculations by improving the understanding of controlling physical processes and by using critically assessed thermodynamic data to determine uranium vapour pressures. The analysis presented here constitutes a preliminary, blind, post-test estimate of fuel volatilised during the test.

  5. Accidents and unpleasant incidents: worry in transport and prediction of travel behavior.

    PubMed

    Backer-Grøndahl, Agathe; Fyhri, Aslak; Ulleberg, Pål; Amundsen, Astrid Helene

    2009-09-01

    Worry on nine different means of transport was measured in a Norwegian sample of 853 respondents. The main aim of the study was to investigate differences in worry about accidents and worry about unpleasant incidents, and how these two sorts of worry relate to various means of transport as well as transport behavior. Factor analyses of worry about accidents suggested a division between rail transport, road transport, and nonmotorized transport, whereas analyses of worry about unpleasant incidents suggested a division between transport modes where you interact with other people and "private" transport modes. Moreover, mean ratings of worry showed that respondents worried more about accidents than unpleasant incidents on private transport modes, and more about unpleasant incidents than accidents on public transport modes. Support for the distinction between worry about accidents and unpleasant incidents was also found when investigating relationships between both types of worry and behavioral adaptations: worry about accidents was more important than worry about unpleasant incidents in relation to behavioral adaptations on private means of transport, whereas the opposite was true for public means of transport. Finally, predictors of worry were investigated. The models of worry about accidents and worry about unpleasant incidents differed as to what predictors turned out significant. Knowledge about peoples' worries on different means of transport is important with regard to understanding and influencing transport and travel behavior, as well as attending to commuters' welfare.

  6. Prediction of accidents at full green and green arrow traffic lights in Switzerland with the aid of configuration-specific features.

    PubMed

    Hubacher, Markus; Allenbach, Roland

    2004-09-01

    In this study it was endeavored to predict full green and green arrow accidents at traffic lights, using configuration-specific features. This was done using the statistical method known as Poisson regression. A total of 45 sets of traffic lights (criteria: in an urban area, with four approach roads) with 178 approach roads were investigated (the data from two approach roads was unable to be used). Configuration-specific features were surveyed on all approach roads (characteristics of traffic lanes, road signs, traffic lights, etc.), traffic monitored and accidents (full green and green arrow) recorded over a period of 5 consecutive years. It was demonstrated that only between 23 and 34% of variance could be explained with the models predicting both types of accidents. In green arrow accidents, the approach road topography was found to be the major contributory factor to an accident: if the approach road slopes downwards, the risk of a green arrow accident is approximately five and a half times greater (relative risk, RR = 5.56) than on a level or upward sloping approach road. With full green accidents, obstructed vision plays the major role: where vision can be obstructed by vehicles turning off, the accident risk is eight times greater (RR = 8.08) than where no comparable obstructed vision is possible. From the study it emerges that technical features of traffic lights are not able to control a driver's actions in such a way as to eradicate error. Other factors, in particular the personal characteristics of the driver (age, sex, etc.) and accident circumstances (lighting, road conditions, etc.), are likely to make an important contribution to explaining how an accident occurs. Copyright 2003 Elsevier Ltd.
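
    In a Poisson regression of this kind, each coefficient exponentiates to a rate ratio, so the reported relative risks correspond to RR = exp(beta). The sketch below back-derives illustrative coefficients from the two RRs quoted in the abstract (5.56 for a downhill approach, 8.08 for obstructed vision); it is not the fitted model, and the base rate is hypothetical.

```python
import math

def expected_accidents(base_rate, **effects):
    """Poisson mean: lambda = base_rate * exp(sum(beta * x)).
    Each keyword maps to a (beta, indicator) pair."""
    eta = sum(beta * flag for beta, flag in effects.values())
    return base_rate * math.exp(eta)

# Coefficients back-derived from the reported relative risks.
B_DOWNHILL = math.log(5.56)     # green-arrow model: downhill approach
B_OBSTRUCTED = math.log(8.08)   # full-green model: obstructed vision

lam = expected_accidents(0.2, downhill=(B_DOWNHILL, 1))
```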

  7. Advanced accident sequence precursor analysis level 2 models

    SciTech Connect

    Galyean, W.J.; Brownson, D.A.; Rempe, J.L.

    1996-03-01

    The U.S. Nuclear Regulatory Commission Accident Sequence Precursor program pursues the ultimate objective of performing risk significant evaluations on operational events (precursors) occurring in commercial nuclear power plants. To achieve this objective, the Office of Nuclear Regulatory Research is supporting the development of simple probabilistic risk assessment models for all commercial nuclear power plants (NPP) in the U.S. Presently, only simple Level 1 plant models have been developed which estimate core damage frequencies. In order to provide a true risk perspective, the consequences associated with postulated core damage accidents also need to be considered. With the objective of performing risk evaluations in an integrated and consistent manner, a linked event tree approach which propagates the front end results to back end was developed. This approach utilizes simple plant models that analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude and timing of a radioactive release to the environment, and calculate the consequences for a given release. Detailed models and results from previous studies, such as the NUREG-1150 study, are used to quantify these simple models. These simple models are then linked to the existing Level 1 models, and are evaluated using the SAPHIRE code. To demonstrate the approach, prototypic models have been developed for a boiling water reactor, Peach Bottom, and a pressurized water reactor, Zion.

  8. Accident sequence precursor analysis level 2/3 model development

    SciTech Connect

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-02-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models.

  9. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  10. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Based on a comprehensive data base of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident causing factors. It answers several longstanding questions about why some normally safe behaving persons may spontaneously engage in unsafe acts that have high risks of serious injury. Field tests with the model indicate that it has three important uses: It can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.
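
    The Bureau of Mines model itself is tied to its mining-accident data base, but the cusp-catastrophe mechanism such models build on can be illustrated in a few lines: behavior is an equilibrium of the cusp potential V(x) = x^4/4 + a*x^2/2 + b*x, and when the control parameters (e.g. psychological stress and group pressure, hypothetical labels here) enter the bistable cusp region, a gradual drift can flip a normally safe worker discontinuously into an unsafe state.

```python
import numpy as np

def equilibria(a, b):
    """Real stationary points of the cusp potential
    V(x) = x**4/4 + a*x**2/2 + b*x, i.e. real roots of x**3 + a*x + b = 0."""
    roots = np.roots([1.0, 0.0, a, b])
    return np.sort(roots[np.abs(roots.imag) < 1e-8].real)

# Outside the cusp region (4*a**3 + 27*b**2 > 0): a single behavioral state.
outside = equilibria(a=1.0, b=0.5)

# Inside the cusp region: safe and unsafe behaviors coexist (bistability),
# so a small change in the controls can produce a sudden behavioral jump.
inside = equilibria(a=-3.0, b=0.5)
```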

  11. An application of probabilistic safety assessment methods to model aircraft systems and accidents

    SciTech Connect

    Martinez-Guridi, G.; Hall, R.E.; Fullwood, R.R.

    1998-08-01

    A case study modeling the thrust reverser system (TRS) in the context of the fatal accident of a Boeing 767 is presented to illustrate the application of Probabilistic Safety Assessment methods. A simplified risk model consisting of an event tree with supporting fault trees was developed to represent the progression of the accident, taking into account the interaction between the TRS and the operating crew during the accident, and the findings of the accident investigation. A feasible sequence of events leading to the fatal accident was identified. Several insights about the TRS and the accident were obtained by applying PSA methods. Changes proposed for the TRS also are discussed.
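
    Event-tree quantification of this kind can be sketched generically as below. The top events and probabilities are purely illustrative assumptions, not values from the 767 thrust reverser study; in a real PSA the branch probabilities would come from the supporting fault trees.

```python
from itertools import product

init_freq = 1e-7   # assumed initiating-event frequency (in-flight TR deployment)

# Probability that each mitigating top event FAILS (hypothetical numbers).
p_fail = {"crew_recovery": 0.9,
          "autothrottle_limit": 0.5}

# Enumerate every branch combination of the event tree.
sequences = {}
for outcome in product([True, False], repeat=len(p_fail)):
    p = init_freq
    for (event, pf), failed in zip(p_fail.items(), outcome):
        p *= pf if failed else 1.0 - pf
    sequences[outcome] = p

# The sequence in which every mitigation fails is the severe end state.
severe_freq = sequences[(True, True)]
```

    Summing over all sequences recovers the initiating-event frequency, a standard consistency check on an event tree.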

  12. Boundary effects on car accidents in a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Ma, Yu-Qiang; Zhao, Yue-Min

    2004-04-01

    In this paper we numerically study the probability P_ac of occurrence of car accidents in the Nagel-Schreckenberg (NS) model with open boundary condition. In the deterministic NS model, numerical results show that there exists a critical value of the extinction rate β above which no car accidents occur, and below which the probability P_ac is independent of the speed limit vmax and the injection rate α, but is determined only by the extinction rate β. In the non-deterministic NS model, the probability P_ac is a non-monotonic function of β in the region of low β values, while it is independent of β in the region of high β values. The stochastic braking not only reduces the occurrence of car accidents, but splits the degenerate effects of vmax on the probability P_ac. Theoretical analyses give an agreement with numerical results in the deterministic NS model and in the non-deterministic NS model with vmax = 1 in the case of the low β value region. Qualitative differences between open and periodic systems in the relations of P_ac to the bulk density ρ imply that various correlations may exist between the two systems.
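
    A minimal sketch of this kind of measurement, using a periodic (ring) road for brevity rather than the paper's open boundaries, and the standard "dangerous situation" proxy for a car accident; all parameter values are assumptions.

```python
import numpy as np

def ns_accident_prob(L=200, n_cars=50, vmax=5, p=0.3, steps=2000, seed=1):
    """Nagel-Schreckenberg model on a ring, estimating the per-car, per-step
    probability of a 'dangerous situation' -- the usual accident proxy:
    gap to the leader at most vmax, leader moving at time t, and leader
    abruptly stopped at time t+1."""
    rng = np.random.default_rng(seed)
    pos = np.sort(rng.choice(L, n_cars, replace=False))  # ordered car positions
    vel = np.zeros(n_cars, dtype=int)
    dangerous = 0
    for _ in range(steps):
        gap = (np.roll(pos, -1) - pos - 1) % L   # empty cells to the leader
        leader_moving = np.roll(vel, -1) > 0     # leader's speed at time t
        vel = np.minimum(vel + 1, vmax)          # acceleration
        vel = np.minimum(vel, gap)               # deterministic braking
        vel = np.maximum(vel - (rng.random(n_cars) < p), 0)  # random braking
        pos = (pos + vel) % L
        leader_stopped = np.roll(vel, -1) == 0   # leader's speed at time t+1
        dangerous += np.sum((gap <= vmax) & leader_moving & leader_stopped)
    return dangerous / (n_cars * steps)

p_ac = ns_accident_prob()   # accident probability per car per time step
```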

  13. Effects of quenched randomness induced by car accidents on traffic flow in a cellular automata model.

    PubMed

    Yang, Xian-Qing; Ma, Yu-Qiang; Zhao, Yue-Min

    2004-10-01

    In this paper we numerically study the impact of quenched disorder induced by car accidents on traffic flow in the Nagel-Schreckenberg (NS) model. Car accidents occur when the necessary conditions proposed by [J. Phys. A 30, 3329 (1997)

  14. Car Accidents in the Deterministic and Nondeterministic Nagel-Schreckenberg Models

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Ma, Yu-Qiang

    In this paper, we study further the probability for the occurrence of car accidents in the Nagel-Schreckenberg model. By considering the braking probability, the conditions for car accidents to occur are modified to obtain accurate results. A universal phenomenological theory will also be presented to describe the probability for car accidents to occur in the deterministic and nondeterministic models, respectively.

  15. System analysis with improved thermo-mechanical fuel rod models for modeling current and advanced LWR materials in accident scenarios

    NASA Astrophysics Data System (ADS)

    Porter, Ian Edward

    A nuclear reactor systems code has the ability to model the system response in an accident scenario based on known initial conditions at the onset of the transient. However, there has been a tendency for these codes to lack the detailed thermo-mechanical fuel rod response models needed for accurate prediction of fuel rod failure. This proposed work will couple today's most widely used steady-state (FRAPCON) and transient (FRAPTRAN) fuel rod models with a systems code TRACE for best-estimate modeling of system response in accident scenarios such as a loss of coolant accident (LOCA). In doing so, code modifications will be made to model gamma heating in LWRs during steady-state and accident conditions and to improve fuel rod thermal/mechanical analysis by allowing axial nodalization of burnup-dependent phenomena such as swelling, cladding creep and oxidation. With the ability to model both burnup-dependent parameters and transient fuel rod response, a fuel dispersal study will be conducted using a hypothetical accident scenario under both PWR and BWR conditions to determine the amount of fuel dispersed under varying conditions. Due to the fuel fragmentation size and internal rod pressure both being dependent on burnup, this analysis will be conducted at beginning, middle and end of cycle to examine the effects that cycle time can play on fuel rod failure and dispersal. Current fuel rod and system codes used by the Nuclear Regulatory Commission (NRC) are compilations of legacy codes with only commonly used light water reactor materials, Uranium Dioxide (UO2), Mixed Oxide (U/PuO2) and zirconium alloys. However, the events at Fukushima Daiichi and Three Mile Island have shown the need for exploration into advanced materials possessing improved accident tolerance. This work looks to further modify the NRC codes to include silicon carbide (SiC), an advanced cladding material proposed by current DOE funded research on accident tolerant fuels (ATF). Several

  16. Modelling the oil spill track from Prestige-Nassau accident

    NASA Astrophysics Data System (ADS)

    Montero, P.; Leitao, P.; Penabad, E.; Balseiro, C. F.; Carracedo, P.; Braunschweig, F.; Fernandes, R.; Gomez, B.; Perez-Munuzuri, V.; Neves, R.

    2003-04-01

    On November 13th 2002, the tank ship Prestige-Nassau sent an SOS signal. The hull of the ship was damaged, producing an oil spill in front of the Galician coast (NW Spain). The damaged ship took a northerly course, spilling more fuel and affecting the western Galician coast. After this, it changed its track to the south. At this first stage of the accident, the ship had spilt around 10,000 tonnes by the 19th at the Galician Bank, 133 NM off the Galician coast. From the very beginning, monitoring and forecasting of the first slick was carried out. Afterwards, since southwesterly winds are frequent in wintertime, the slick from the initial spill started to move towards the Galician coast. This drift movement was followed by overflights. With the aim of forecasting the place and date of arrival at the coast, simulations with two different models were performed. The first was a very simple drift model forced with the surface winds generated by the ARPS operational model (1) at MeteoGalicia (regional weather forecast service). The second was a more complex hydrodynamic model, MOHID2000 (2,3), developed by MARETEC GROUP (Instituto Superior Técnico de Lisboa) in collaboration with GFNL (Grupo de Física Non Lineal, Universidade de Santiago de Compostela). On November 28th, some tarballs appeared to the south of the main slick. These observations could be explained by taking into account the below-surface water movement following Ekman dynamics. New simulations aimed at better understanding the physics underlying these observations were performed. Agreement between observations and simulations was achieved. We performed simulations with and without the slope current previously calculated by other authors, showing that this current introduces only subtle differences in the slick's arrival point at the coast, with wind as the primary forcing. (1) A two-dimensional particle tracking model for pollution dispersion in A Coruña and Vigo Rias (NW Spain). M. Gómez-Gesteira, P. Montero, R

  17. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    SciTech Connect

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility study purposes. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosion, and intensive radioactivity release, which had previously been analyzed and assumed as postulated accident progressions in PRA models, occurred to varying degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the “real” accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that could very reasonably describe the accident progression for a real and complicated nuclear accident. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for better representation of severe accident progression.

  18. Markov Model of Severe Accident Progression and Management

    SciTech Connect

    Bari, R.A.; Cheng, L.; Cuadra,A.; Ginsberg,T.; Lehner,J.; Martinez-Guridi,G.; Mubayi,V.; Pratt,W.T.; Yue, M.

    2012-06-25

    The earthquake and tsunami that hit the nuclear power plants at the Fukushima Daiichi site in March 2011 led to extensive fuel damage, including possible fuel melting, slumping, and relocation at the affected reactors. A so-called feed-and-bleed mode of reactor cooling was initially established to remove decay heat. The plan was to eventually switch over to a recirculation cooling system. Failure of feed and bleed was a possibility during the interim period. Furthermore, even if recirculation was established, there was a possibility of its subsequent failure. Decay heat has to be sufficiently removed to prevent further core degradation. To understand the possible evolution of the accident conditions and to have a tool for potential future hypothetical evaluations of accidents at other nuclear facilities, a Markov model of the state of the reactors was constructed in the immediate aftermath of the accident and was executed under different assumptions of potential future challenges. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accident. The work began in mid-March and continued until mid-May 2011. The analysis had the following goals: (1) To provide an overall framework for describing possible future states of the damaged reactors; (2) To permit an impact analysis of 'what-if' scenarios that could lead to more severe outcomes; (3) To determine approximate probabilities of alternative end-states under various assumptions about failure and repair times of cooling systems; (4) To infer the reliability requirements of closed loop cooling systems needed to achieve stable core end-states and (5) To establish the importance for the results of the various cooling system and physical phenomenological parameters via sensitivity calculations.
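
    The Markov approach described above can be sketched as a small continuous-time chain. The three states and the failure/switchover rates below are illustrative assumptions standing in for the report's actual cooling-system states and rates.

```python
import numpy as np

# Illustrative transition rates, per day (assumed values):
lam_fb = 0.05   # feed-and-bleed cooling fails
mu_sw  = 0.10   # successful switchover to recirculation cooling
lam_rc = 0.01   # recirculation cooling fails

# States: 0 = feed-and-bleed, 1 = recirculation, 2 = core degradation
# (absorbing). Q is the generator matrix: each row sums to zero.
Q = np.array([[-(lam_fb + mu_sw), mu_sw,   lam_fb],
              [0.0,              -lam_rc,  lam_rc],
              [0.0,               0.0,     0.0   ]])

def state_probs(t_days, dt=1e-3):
    """Integrate the forward Kolmogorov equation dP/dt = P @ Q by Euler steps."""
    P = np.array([1.0, 0.0, 0.0])        # accident starts in feed-and-bleed mode
    for _ in range(int(round(t_days / dt))):
        P = P + dt * (P @ Q)
    return P

P30 = state_probs(30.0)                  # state probabilities after 30 days
```

    The sensitivity calculations mentioned in the abstract amount to re-running the integration with perturbed failure and repair rates in Q.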

  19. Dynamic modelling of radionuclide uptake by marine biota: application to the Fukushima nuclear power plant accident.

    PubMed

    Vives i Batlle, Jordi

    2016-01-01

    The dynamic model D-DAT was developed to study the dynamics of radionuclide uptake and turnover in biota and sediments in the immediate aftermath of the Fukushima accident. This dynamics is determined by the interplay between the residence time of radionuclides in seawater/sediments and the biological half-lives of elimination by the biota. The model calculates time-variable activity concentration of (131)I, (134)Cs, (137)Cs and (90)Sr in seabed sediment, fish, crustaceans, molluscs and macroalgae from surrounding activity concentrations in seawater, with which to derive internal and external dose rates. A central element of the model is the inclusion of dynamic transfer of radionuclides to/from sediments by factorising the depletion of radionuclides adsorbed onto suspended particulates, molecular diffusion, pore water mixing and bioturbation, represented by a simple set of differential equations coupled with the biological uptake/turnover processes. In this way, the model is capable of reproducing activity concentration in sediment more realistically. The model was used to assess the radiological impact of the Fukushima accident on marine biota in the acute phase of the accident. Sediment and biota activity concentrations are within the wide range of actual monitoring data. Activity concentrations in marine biota are thus shown to be better calculated by a dynamic model than with the simpler equilibrium approach based on concentration factors, which tends to overestimate for the acute accident period. Modelled dose rates from external exposure from sediment are also significantly below equilibrium predictions. The model calculations confirm previous studies showing that radioactivity levels in marine biota have been generally below the levels necessary to cause a measurable effect on populations. The model was used in mass-balance mode to calculate total integrated releases of 103, 30 and 3 PBq for (131)I, (137)Cs and (90)Sr, reasonably in line with previous
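
    The core of such a dynamic transfer model is a first-order uptake/elimination balance. The rates and seawater time series below are illustrative assumptions, not D-DAT's calibrated parameters; the sketch reproduces the abstract's point that during the acute phase the dynamic result sits well below the equilibrium concentration-factor estimate.

```python
import numpy as np

k_u = 0.5                    # uptake rate from seawater into biota (1/day), assumed
k_e = 0.05                   # biological elimination rate (1/day), assumed
lam = np.log(2) / 8.02       # physical decay constant of 131I (1/day)
CF_eq = k_u / k_e            # equilibrium concentration factor

dt, days = 0.01, 30.0
t = np.arange(0.0, days, dt)
C_w = np.exp(-0.2 * t)       # decaying seawater activity (arbitrary units)

C_b = np.zeros_like(t)       # activity concentration in biota
for i in range(1, len(t)):
    # uptake from water, minus biological elimination and physical decay
    dC = k_u * C_w[i - 1] - (k_e + lam) * C_b[i - 1]
    C_b[i] = C_b[i - 1] + dt * dC

idx = int(round(2.0 / dt))   # two days after the release
ratio_day2 = C_b[idx] / (CF_eq * C_w[idx])   # dynamic vs. equilibrium estimate
```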

  20. Predicted spatio-temporal dynamics of radiocesium deposited on forests following the Fukushima Dai-ichi nuclear power plant accident

    NASA Astrophysics Data System (ADS)

    Hashimoto, Shoji; Matsuura, Toshiya; Nanko, Kazuki; Linkov, Igor; Shaw, George; Kaneko, Shinji

    2013-04-01

    Radiocesium (134Cs and 137Cs) released from the Fukushima Dai-ichi nuclear power plant to the atmosphere contaminated a large area of Japan's land surface, the majority of which is covered by forest. Here we simulated the dynamics of radiocesium deposited on Japanese forest ecosystems in 2011 using a model that was developed for tracking radionuclides in forest ecosystems after the Chernobyl accident in 1986. The fate of the radiocesium was simulated using the initial conditions observed following the Fukushima accident. In addition, the simulation results were incorporated with a spatial distribution map of deposited radionuclides that was based on an air-borne survey. The simulation demonstrated that in the first two years after initial deposition radiocesium is retained primarily in the soil surface organic layer. Over a period of five to ten years radiocesium is predicted to move from the surface organic soil to the deeper mineral soil, which will eventually become the largest reservoir of radiocesium within forest ecosystems. Spatial analysis clearly shows the reduction in the extent of contaminated areas which will occur as a result of natural decay of radiocesium, as well as the spatial distribution of radiocesium in each forest component. Considering the heavier rainfall and warmer conditions in Japan than in the countries contaminated by the Chernobyl accident, migration of radiocesium from organic to mineral soil may be faster than predicted. Although the uncertainty of our simulations should be taken into account, they provide a basis for understanding and anticipating the future dynamics of radiocesium in Japanese forests following the Fukushima accident.

  1. Car accidents in cellular automata models for one-lane traffic flow.

    PubMed

    Moussa, Najem

    2003-09-01

    Conditions for the occurrence of car accidents are introduced in the Nagel-Schreckenberg model. These conditions are based on the thought that a real accident depends on several parameters: an unexpected action of the car ahead (sudden stop or abrupt deceleration), the gap between the two cars, the velocity of the successor car and its delayed reaction time. We discuss then the effect of this delayed reaction time on the probability of traffic accidents. We find that these conditions for the occurrence of car accidents are necessary for modeling realistic accidents.


  3. Advanced accident sequence precursor analysis level 1 models

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.; Schroeder, J.A.; Siu, N.O.

    1996-03-01

    INEL has been involved in the development of plant-specific Accident Sequence Precursor (ASP) models for the past two years. These models were developed for use with the SAPHIRE suite of PRA computer codes. They contained event tree/linked fault tree Level 1 risk models for the following initiating events: general transient, loss-of-offsite-power, steam generator tube rupture, small loss-of-coolant accident, and anticipated transient without scram. Early in 1995 the ASP models were revised based on review comments from the NRC and an independent peer review. These models were released as Revision 1. The Office of Nuclear Regulatory Research has sponsored several projects at the INEL this fiscal year to further enhance the capabilities of the ASP models. The Revision 2 models incorporate more detailed plant information concerning plant response to station blackout conditions, information on battery life, and other unique features gleaned from an Office of Nuclear Reactor Regulation quick review of the Individual Plant Examination submittals. These models are currently being delivered to the NRC as they are completed. A related project is a feasibility study and model development of low power/shutdown (LP/SD) and external event extensions to the ASP models. This project will establish criteria for the selection of LP/SD and external initiator operational events for analysis within the ASP program. Prototype models for each pertinent initiating event (loss of shutdown cooling, loss of inventory control, fire, flood, seismic, etc.) will be developed. A third project concerns the development of enhancements to SAPHIRE. In relation to the ASP program, a new SAPHIRE module, GEM, was developed as a specific user interface for performing ASP evaluations. This module greatly simplifies the analysis process for determining the conditional core damage probability for a given combination of initiating events and equipment failures or degradations.

  4. Markov Model of Accident Progression at Fukushima Daiichi

    SciTech Connect

    Cuadra A.; Bari R.; Cheng, L-Y; Ginsberg, T.; Lehner, J.; Martinez-Guridi, G.; Mubayi, V.; Pratt, T.; Yue, M.

    2012-11-11

    On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trip for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of the Units 1 & 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation was likely to have occurred in the core of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: It is a probabilistic model that provides flexibility in scenario construction and incorporates time dependence of different model states. It also readily allows for sensitivity and uncertainty analyses of different failure and repair rates of cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.

  5. Uncertainty and sensitivity analysis of food pathway results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely-known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
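
    Latin hypercube sampling of the kind used in this study can be sketched in a few lines. The three-variable "dose model" is a toy stand-in for MACCS, and the rank correlation is a simple proxy for the partial correlation and stepwise regression screening described above; all coefficients are assumptions.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in d dimensions on [0, 1): each variable is stratified into
    n equal-probability bins with exactly one sample per bin, and the bins
    are shuffled independently per dimension."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(42)
X = latin_hypercube(100, 3, rng)

# Toy dose model dominated by the first input (hypothetical coefficients).
dose = 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(100)

# Rank correlation of each input with the output as an importance screen.
rank = lambda v: np.argsort(np.argsort(v))
importance = [np.corrcoef(rank(X[:, j]), rank(dose))[0, 1] for j in range(3)]
```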

  6. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    SciTech Connect

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs.

  7. ATMOSPHERIC MODELING IN SUPPORT OF A ROADWAY ACCIDENT

    SciTech Connect

    Buckley, R.; Hunter, C.

    2010-10-21

    The United States Forest Service-Savannah River (USFS) routinely performs prescribed fires at the Savannah River Site (SRS), a Department of Energy (DOE) facility located in southwest South Carolina. This facility covers ~800 square kilometers and is mainly wooded except for scattered industrial areas containing facilities used in managing nuclear materials for national defense and waste processing. Prescribed fires of forest undergrowth are necessary to reduce the risk of inadvertent wild fires which have the potential to destroy large areas and threaten nuclear facility operations. This paper discusses meteorological observations and numerical model simulations of an incident in early 2002 involving an early-morning multi-car accident caused by poor visibility along a major roadway on the northern border of the SRS. At the time of the accident, it was not clear whether the limited visibility was due solely to fog or whether smoke from a prescribed burn conducted the previous day just to the northwest of the crash site had contributed to the reduced visibility. Through use of available meteorological information and detailed modeling, it was determined that the primary reason for the low visibility on this night was fog induced by meteorological conditions.

  8. Modeling and analyses of postulated UF{sub 6} release accidents in gaseous diffusion plant

    SciTech Connect

    Kim, S.H.; Taleyarkhan, R.P.; Keith, K.D.; Schmidt, R.W.; Carter, J.C.; Dyer, R.H.

    1995-10-01

    Computer models have been developed to simulate the transient behavior of aerosols and vapors resulting from a postulated accident involving the release of uranium hexafluoride (UF₆) into the process building of a gaseous diffusion plant. UF₆ undergoes an exothermic chemical reaction with moisture (H₂O) in the air to form hydrogen fluoride (HF) and radioactive uranyl fluoride (UO₂F₂). As part of a facility-wide safety evaluation, this study evaluated source terms consisting of UO₂F₂ as well as HF during a postulated UF₆ release accident in a process building. In the postulated accident scenario, ~7900 kg (17,500 lb) of hot UF₆ vapor is released over a 5 min period from the process piping into the atmosphere of a large process building. UO₂F₂ mainly remains as airborne solid particles (aerosols), while HF is in vapor form. Some UO₂F₂ aerosols are removed from the air flow by gravitational settling. The HF and the remaining UO₂F₂ are mixed with air and exhausted through the building ventilation system. The MELCOR computer code was selected for simulating aerosol and vapor transport in the process building. The MELCOR model was first used to develop a single-volume representation of a process building, and its results were compared with those from past lumped-parameter models specifically developed for studying UF₆ release accidents. Preliminary results indicate that the MELCOR predictions (using a lumped formulation) are comparable to those from previously developed models.

  9. Analysis of traffic accident size for Korean highway using structural equation models.

    PubMed

    Lee, Ju-Yeon; Chung, Jin-Hyuk; Son, Bongsoo

    2008-11-01

    Accident size can be expressed as the number of involved vehicles, the number of damaged vehicles, the number of deaths and/or the number of injured. Accident size is one of the important indices for measuring the level of safety of transportation facilities. Factors such as road geometric conditions, driver characteristics and vehicle type may be related to traffic accident size. However, all these factors interact in complicated ways, so the interrelationships among the variables are not easily identified. A structural equation model is adopted to capture these complex relationships, because such a model can handle relationships among endogenous and exogenous variables simultaneously and can furthermore include latent variables. In this study, we use 2649 accident records from highways in Korea and estimate the relationships between exogenous factors and traffic accident size. The model suggests that road factors, driver factors and environment factors are strongly related to accident size.

  10. Mathematical modelling of patient flow through an accident and emergency department

    PubMed Central

    Coats, T; Michalis, S

    2001-01-01

    Objectives—The objectives of this project were: (1) to evaluate the method, (2) to assess the information required for a more detailed model, and (3) to determine whether it was worthwhile to undertake the data collection needed for a more detailed model. Methods—A mathematical model was constructed using the operational research method of discrete event simulation. The effect of different SHO shift patterns on waiting time was assessed with the model. Results—The model constructed was not an accurate representation of patient flow because of the large number of assumptions that had to be made in this preliminary model. However, the model predicted that an SHO shift pattern that more closely matched the patient arrival pattern would produce shorter waiting times. Conclusions—This method can be applied to an accident and emergency department. Extension of this approach with the collection of additional data and the development of more sophisticated models seems worthwhile. PMID:11354210

  11. Simulation Study of Traffic Accidents in Bidirectional Traffic Models

    NASA Astrophysics Data System (ADS)

    Moussa, Najem

    Conditions for the occurrence of bidirectional collisions are developed based on the Simon-Gutowitz bidirectional traffic model. Three types of dangerous situations can occur in this model; we analyze those corresponding to head-on collisions, rear-end collisions and lane-changing collisions. Using Monte Carlo simulations, we compute the probability of occurrence of these collisions for different values of the oncoming cars' density. It is found that the risk of collision is significant when the density of cars in one lane is small and that of the other lane is high enough. The influence of different proportions of heavy vehicles is also studied. We find that heavy vehicles cause a substantial reduction of traffic flow on the home lane and increase the risk of car accidents.

  12. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  13. Uncertainty and sensitivity analysis of early exposure results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
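
    The sampling-and-regression workflow described above can be sketched briefly. Everything below (the three input ranges and the stand-in consequence function) is a hypothetical illustration of Latin hypercube sampling plus correlation screening, not the MACCS model itself:

```python
import numpy as np
from scipy.stats import qmc, pearsonr

rng = np.random.default_rng(0)
n_vars, n_samples = 3, 200

# Latin hypercube sample on [0, 1)^3, scaled to illustrative input ranges
sampler = qmc.LatinHypercube(d=n_vars, seed=0)
lows = np.array([0.5, 0.001, 0.1])   # e.g. dispersion scaling, deposition velocity, shielding
highs = np.array([2.0, 0.01, 1.0])
X = qmc.scale(sampler.random(n_samples), lows, highs)

# Stand-in consequence measure (NOT a reactor consequence code): rises with
# dispersion scaling and deposition velocity, falls with the shielding factor
y = 2.0 * X[:, 0] + 100.0 * X[:, 1] - X[:, 2] + rng.normal(0, 0.1, n_samples)

# Screen inputs by absolute correlation with the output
for i in range(n_vars):
    r, _ = pearsonr(X[:, i], y)
    print(f"input {i}: r = {r:+.2f}")
```

In the studies summarized here, this kind of screening step is followed by partial correlation and stepwise regression to rank the dominant contributors to uncertainty.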

  14. Low predictive power of peritraumatic dissociation for PTSD symptoms in accident survivors.

    PubMed

    Wittmann, Lutz; Moergeli, Hanspeter; Schnyder, Ulrich

    2006-10-01

    To test the predictive power of peritraumatic dissociation for the development of psychopathology, the authors assessed symptoms of peritraumatic dissociation (Peritraumatic Dissociative Experiences Questionnaire; PDEQ), posttraumatic stress disorder (Clinician-Administered PTSD Scale; CAPS), and anxiety and depression (Hospital Anxiety and Depression Scale; HADS) in a sample of 214 accident victims 5 days postaccident (T1). Six months later (T2), the CAPS and HADS were administered again. Acute stress disorder (ASD) and PTSD symptom levels were surprisingly low. In sequential regression analyses, initial reexperiencing and hyperarousal significantly predicted PTSD symptom level at T2, with several possible confounding variables controlled for. Peritraumatic dissociation explained less than 3% of the variance. For PTSD scores, 38% of the overall variance was explained; the variance explained for HADS scores was low. Possible explanations for the low predictive power of peritraumatic dissociation for posttraumatic psychopathology in this sample are discussed.

  15. Health effects model for nuclear power plant accident consequence analysis. Part I. Introduction, integration, and summary. Part II. Scientific basis for health effects models

    SciTech Connect

    Evans, J.S.; Moeller, D.W.; Cooper, D.W.

    1985-07-01

    Analysis of the radiological health effects of nuclear power plant accidents requires models for predicting early health effects, cancers and benign thyroid nodules, and genetic effects. Since the publication of the Reactor Safety Study, additional information on radiological health effects has become available. This report summarizes the efforts of a program designed to provide revised health effects models for nuclear power plant accident consequence modeling. The new models for early effects address four causes of mortality and nine categories of morbidity. The models for early effects are based upon two-parameter Weibull functions. They permit evaluation of the influence of dose protraction and address the issue of variation in radiosensitivity among the population. The piecewise-linear dose-response models used in the Reactor Safety Study to predict cancers and thyroid nodules have been replaced by linear and linear-quadratic models. The new models reflect the most recently reported results of the follow-up of the survivors of the bombings of Hiroshima and Nagasaki and permit analysis of both morbidity and mortality. The new models for genetic effects allow prediction of genetic risks in each of the first five generations after an accident and include information on the relative severity of various classes of genetic effects. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of risks. An approach is outlined for summarizing the health consequences of nuclear power plant accidents. 298 refs., 9 figs., 49 tabs.
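
    The two-parameter Weibull dose-response form mentioned for the early-effects models can be written down directly. The D50 and shape values below are illustrative placeholders, not the report's fitted parameters:

```python
import math

def early_effect_risk(dose_gy, d50, shape):
    """Two-parameter Weibull risk model: risk = 1 - exp(-ln2 * (D/D50)^V).
    D50 is the dose producing the effect in half the exposed population;
    V (shape) controls the steepness of the dose-response curve."""
    if dose_gy <= 0:
        return 0.0
    return 1.0 - math.exp(-math.log(2) * (dose_gy / d50) ** shape)

# By construction the risk at D = D50 is 50%
print(early_effect_risk(3.0, d50=3.0, shape=5.0))  # ~0.5
```

A larger shape parameter makes the curve more threshold-like, which is how such models capture steep dose-response behavior for early mortality.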

  16. Uncertainty and sensitivity analysis of chronic exposure results with the MACCS reactor accident consequence model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.

  17. Prediction in Multilevel Models

    ERIC Educational Resources Information Center

    Afshartous, David; de Leeuw, Jan

    2005-01-01

    Multilevel modeling is an increasingly popular technique for analyzing hierarchical data. This article addresses the problem of predicting a future observable y[subscript *j] in the jth group of a hierarchical data set. Three prediction rules are considered and several analytical results on the relative performance of these prediction rules are…

  18. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
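
    As a concrete instance of the extrapolative approach, simple exponential smoothing (a one-parameter relative of the multiple exponential smoothing compared in the study) can be sketched as follows; the monthly counts and the smoothing constant are invented for illustration:

```python
def exp_smooth_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the forecast is a recursively
    updated level, with alpha weighting recent observations."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # flat one-step-ahead forecast

crimes = [120, 130, 125, 140, 150, 145]  # illustrative monthly counts
print(round(exp_smooth_forecast(crimes), 1))  # tracks the recent upward drift
```

The constant, quadratic, and exponential functional forms mentioned above differ in the trend they extrapolate; this flat-level forecast corresponds to the constant form.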

  19. Another Look at the Relationship Between Accident- and Encroachment-Based Approaches to Run-Off-the-Road Accidents Modeling

    SciTech Connect

    Miaou, Shaw-Pin

    1997-08-01

    The purpose of this study was to look for ways to combine the strengths of both approaches in roadside safety research. The specific objectives were (1) to present the encroachment-based approach in a more systematic and coherent way so that its limitations and strengths can be better understood from both statistical and engineering standpoints, and (2) to apply the analytical and engineering strengths of the encroachment-based thinking to the formulation of mean functions in accident-based models.

  20. BISON Modeling of Reactivity-Initiated Accident Experiments in a Static Environment

    SciTech Connect

    Folsom, Charles P.; Jensen, Colby B.; Williamson, Richard L.; Woolstenhulme, Nicolas E.; Ban, Heng; Wachs, Daniel M.

    2016-09-01

    In conjunction with the restart of the TREAT reactor and the design of test vehicles, modeling and simulation efforts are being used to model the response of Accident Tolerant Fuel (ATF) concepts under reactivity-initiated accident (RIA) conditions. The purpose of this work is to model a baseline case of a 10 cm long UO2-Zircaloy fuel rodlet using BISON and RELAP5 over a range of energy depositions and with varying reactor power pulse widths. The results show the effect of varying the pulse width and energy deposition on both thermal and mechanical parameters that are important for predicting failure of the fuel rodlet. The combined BISON/RELAP5 model captures coupled thermal and mechanical effects on the fuel-to-cladding gap conductance, the cladding-to-coolant heat transfer coefficient, and the water temperature and pressure that could not be captured by either code individually. These combined effects allow for more accurate modeling of the thermal and mechanical response of the fuel rodlet and the thermal-hydraulics of the test vehicle.

  1. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data often exhibit overdispersion and may also contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the dependent variable of the generated data following a given distribution, namely the Poisson or negative binomial distribution, for sample sizes from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model fits were compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
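
    A minimal sketch of the overdispersion issue, with invented parameters rather than the paper's simulation design: counts drawn from a negative binomial have variance well above the mean, which a Poisson model (mean = variance) cannot represent, and a method-of-moments fit recovers the NB parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n, r, p = 500, 2.0, 0.4                       # hypothetical NB parameters
counts = rng.negative_binomial(r, p, size=n)  # simulated accident counts

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, variance = {var:.2f}")  # variance > mean

# Method-of-moments fit: for the NB, mean = r(1-p)/p and var = mean/p,
# so p_hat = mean/var and r_hat = mean*p_hat/(1-p_hat)
p_hat = mean / var
r_hat = mean * p_hat / (1.0 - p_hat)
print(f"p_hat = {p_hat:.2f}, r_hat = {r_hat:.2f}")
```

A full regression treatment (Poisson, NB, and hurdle NB with covariates) would replace the constant mean with a log-linear function of junction characteristics.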

  2. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  3. [Integration of hospital social services in the rehabilitation of accident patients by the statutory accident insurance. Results of a one-year model project].

    PubMed

    Lukasczik, M; Geyer, S; Neuderth, S; Gerlich, C; Weis, I; Raiber, I; Weber-Falkensammer, H; Vogel, H

    2008-02-01

    In accident patient care, there is a substantial overlap between the scope of duties of hospital social services and the tasks fulfilled by the German statutory accident insurers' visiting staff, who regularly take care of accident patients. Therefore, a project on the integration of hospital social services into the organizational structures of the German statutory accident insurance was initiated, aiming to optimise communication and realise synergy effects. A formative evaluation of the project was conducted that provided process- and outcome-related data for a comprehensive evaluation of the strengths and potentials of the project. Report forms containing patient-related information were completed by hospital social services and evaluated in terms of their utility for case management by accident insurance administrators using a checklist. Project implementation and procedures were documented and evaluated using semi-structured interviews with social services staff and accident insurance employees. Through the model, comprehensive care for accident patients could be achieved. In one third of all cases reviewed, rehabilitation management could be improved by including hospital social services. Moreover, in one third of all cases, care-related activities initiated by accident insurance funds could be reduced by involving local hospital social services. The report form used by hospital social services was evaluated as a useful tool in the context of patient care and rehabilitation management. The model was evaluated by interview participants as a highly targeted approach to care management for accident patients. Implications of the study for improving health care are discussed.

  4. An aggregate accident model based on pooled, regional time-series data.

    PubMed

    Fridstrøm, L; Ingebrigtsen, S

    1991-10-01

    The determinants of personal injury road accidents and their severity are studied by means of generalized Poisson regression models estimated on the basis of combined cross-section/time-series data. Monthly data have been assembled for 18 Norwegian counties (every county but one), covering the period from January 1974 until December 1986. A rather wide range of potential explanatory factors are taken into account, including road use (exposure), weather, daylight, traffic density, road investment and maintenance expenditure, accident reporting routines, vehicle inspection, law enforcement, seat belt usage, proportion of inexperienced drivers, and alcohol sales. Separate probability models are estimated for the number of personal injury accidents, fatal accidents, injury victims, death victims, car occupants injured, and bicyclists and pedestrians injured. The fraction of personal injury accidents that are fatal is interpreted as an average severity measure and studied by means of a binomial logit model.

  5. Wind power prediction models

    NASA Technical Reports Server (NTRS)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  6. Model Valid Prediction Period

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2002-12-01

    A new concept, valid prediction period (VPP), is presented here to evaluate model predictability. VPP is defined as the time period when the prediction error first exceeds a pre-determined criterion (i.e., the tolerance level). It depends not only on the instantaneous error growth, but also on the noise level, the initial error, and the tolerance level. The model predictability skill is then represented by a single scalar, VPP. The longer the VPP, the higher the model predictability skill. A theoretical framework on the basis of the backward Fokker-Planck equation is developed to determine the probability density function (pdf) of VPP. Verification of a Gulf of Mexico nowcast/forecast model is used as an example to demonstrate the usefulness of VPP. Power law scaling is found in the mean square error of displacement between drifting buoy and model trajectories (both at 50 m depth). The pdf of VPP is asymmetric with a long and broad tail on the higher value side, which suggests long-term predictability. The calculations demonstrate that long-term (extremely long, such as 50-60 day) predictability is not an "outlier" and shares the same statistical properties as the short-term predictions. References: Chu, P. C., L. M. Ivanov, and C. W. Fan, Backward Fokker-Planck equation for determining model predictability with unknown initial error distribution. J. Geophys. Res., in press, 2002. Chu, P. C., L. M. Ivanov, T. M. Margolina, and O. V. Melnichenko, 2002b: On probabilistic stability of an atmospheric model to various amplitude perturbations. J. Atmos. Sci., in press. Chu, P. C., L. M. Ivanov, L. Kantha, O. V. Melnichenko, and Y. A. Poberezhny, 2002c: The long-term correlations and power decay law in model prediction skill. Geophys. Res. Lett., in press.
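
    The VPP definition above translates directly into code; the error-growth curve below is an invented exponential, not the Gulf of Mexico model's actual error statistics:

```python
import numpy as np

def valid_prediction_period(errors, times, tolerance):
    """Return the first time at which the prediction error exceeds the
    tolerance level; if it never does, return the last time available."""
    for t, e in zip(times, errors):
        if e > tolerance:
            return t
    return times[-1]

days = np.arange(1, 11)
errors = 0.5 * np.exp(0.3 * days)   # hypothetical exponential error growth
print(valid_prediction_period(errors, days, tolerance=3.0))  # day 6
```

Repeating this over an ensemble of initial errors and noise realizations yields the pdf of VPP discussed in the abstract.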

  7. Transport and fate modeling of nitrobenzene in groundwater after the Songhua River pollution accident.

    PubMed

    Zhang, Wenjing; Lin, Xueyu; Su, Xiaosi

    2010-11-01

    In 2005 a pollution accident occurred in the Songhua River, which is geographically adjacent to groundwater supply plants. This caused public concern about the transport and fate of nitrobenzene (NB) in the groundwater. This paper discusses the mechanisms and effects of the transport and fate of NB in groundwater based on pilot-scale experiments conducted in the laboratory, including a simulation experiment, bench-scale batch tests and a one-dimensional numerical model. Parallel batch tests showed that the adsorption of NB to clay and sand followed a Langmuir-type isotherm, and that clay had a greater NB adsorption capacity than sand. NB biodegradation under different conditions was well fitted by the Monod equation, and the q(max) values varied from 0.018 to 0.046 h(-1). Results indicated that NB's biodegradation was not affected by the initial NB concentration. Numerical modeling results indicated a good match between computed and observed data, and the prediction model indicated that NB entered the groundwater after the pollution accident. However, the highest predicted concentration of NB was much lower than the allowable limit set by the national standard (0.017 mg/L).
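
    The two relationships named above, the Langmuir isotherm for adsorption and the Monod equation for biodegradation, can be sketched with hypothetical parameters (only the q(max) range 0.018-0.046 h⁻¹ comes from the abstract; the other values are illustrative):

```python
def langmuir(c, q_max, k_l):
    """Langmuir isotherm: sorbed concentration as a function of aqueous
    concentration c; saturates at q_max for large c."""
    return q_max * k_l * c / (1.0 + k_l * c)

def monod_rate(c, mu_max, k_s):
    """Monod kinetics: specific degradation rate approaching mu_max as
    the substrate concentration c grows past the half-saturation k_s."""
    return mu_max * c / (k_s + c)

# A larger q_max gives clay a greater adsorption capacity than sand
print(langmuir(5.0, q_max=2.0, k_l=0.8))   # clay-like
print(langmuir(5.0, q_max=0.5, k_l=0.8))   # sand-like
print(monod_rate(10.0, mu_max=0.046, k_s=2.0))  # upper end of the fitted range
```

In a transport simulator these two terms enter the advection-dispersion equation as a retardation factor and a sink term, respectively.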

  8. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    NASA Astrophysics Data System (ADS)

    Brandt, J.; Christensen, J. H.; Frohn, L. M.

    2002-06-01

    A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases such as the Chernobyl accident. The model is a combination of a Lagrangian model that covers near-source dispersion and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using different relatively simple and comprehensive parameterizations. The performance of different combinations of wet and dry deposition parameterizations has been evaluated against measurements using different statistical tests.
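
    The deposition parameterizations being compared typically take the following simple forms; the coefficients below are generic literature-style values, not necessarily those used in DREAM:

```python
import math

def dry_deposition_flux(conc_bq_m3, v_d):
    """Dry deposition flux F = v_d * C (Bq m^-2 s^-1), with v_d the
    species-dependent dry deposition velocity in m/s."""
    return v_d * conc_bq_m3

def wet_scavenging_coeff(rain_mm_h, a=8.4e-5, b=0.79):
    """Power-law wet scavenging coefficient Lambda = a * I**b (s^-1),
    a common below-cloud parameterization for aerosols."""
    return a * rain_mm_h ** b

# Airborne fraction remaining after one hour of rain at 2 mm/h
lam = wet_scavenging_coeff(2.0)
print(math.exp(-lam * 3600.0))
```

The evaluation question in the paper is essentially which choices of v_d and of the scavenging coefficients best reproduce the measured 137Cs, 134Cs and 131I deposition fields.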

  9. VICTORIA: A mechanistic model of radionuclide behavior in the reactor coolant system under severe accident conditions. Revision 1

    SciTech Connect

    Heams, T J; Williams, D A; Johns, N A; Mason, A; Bixler, N E; Grimley, A J; Wheatley, C J; Dickson, L W; Osborn-Lee, I; Domagala, P; Zawadzki, S; Rest, J; Alexander, C A; Lee, R Y

    1992-12-01

    The VICTORIA model of radionuclide behavior in the reactor coolant system (RCS) of a light water reactor during a severe accident is described. It has been developed by the USNRC to define the radionuclide phenomena and processes that must be considered in systems-level models used for integrated analyses of severe accident source terms. The VICTORIA code, based upon this model, predicts fission product release from the fuel, chemical reactions involving fission products, vapor and aerosol behavior, and fission product decay heating. Also included is a detailed description of how the model is implemented in VICTORIA, the numerical algorithms used, and the correlations and thermochemical data necessary for determining a solution. A description of the code structure, input and output, and a sample problem are provided.

  10. Estimation Of 137Cs Using Atmospheric Dispersion Models After A Nuclear Reactor Accident

    NASA Astrophysics Data System (ADS)

    Simsek, V.; Kindap, T.; Unal, A.; Pozzoli, L.; Karaca, M.

    2012-04-01

    Nuclear energy will continue to play an important role in the production of electricity in the world as the need for energy grows. But the safety of power plants will always be a question mark for people because of the accidents that have happened in the past. The Chernobyl nuclear reactor accident, which happened on 26 April 1986, was the biggest nuclear accident ever. Because of the explosion and fire, large quantities of radioactive material were released to the atmosphere. The release of radioactive particles affected not only the surrounding region but the entire Northern hemisphere, though much of the radioactive material was spread over the western USSR and Europe. There are many studies on the distribution of radioactive particles and the deposition of radionuclides all over Europe, but this was not true for Turkey, especially regarding the deposition of radionuclides released after the Chernobyl accident and the radiation doses received by people. The aim of this study is to determine the radiation doses received by people living in Turkish territory after the Chernobyl nuclear reactor accident and to make this method available in case of an emergency. For this purpose, the Weather Research and Forecasting (WRF) Model was used to simulate meteorological conditions after the accident. The WRF results for the 12 days after the accident were used as input data for the HYSPLIT model. NOAA-ARL's (National Oceanic and Atmospheric Administration Air Resources Laboratory) dispersion model HYSPLIT was used to simulate the 137Cs distribution. The deposition values of 137Cs in our domain after the Chernobyl accident were between 1.2E-37 Bq/m2 and 3.5E+08 Bq/m2. The results showed that Turkey was affected by the accident, especially the Black Sea Region. Doses were then calculated using GENII-LIN, a multipurpose health physics code.
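
    The final step described, turning a ground deposition of 137Cs into a groundshine dose, has roughly the following shape. The dose-rate coefficient below is an order-of-magnitude placeholder, not a value taken from GENII-LIN, and decay and weathering are ignored:

```python
# Hypothetical external dose-rate coefficient for Cs-137 surface deposition,
# in (Sv/s) per (Bq/m^2); illustrative only
DOSE_RATE_COEFF = 5.8e-16

def first_year_groundshine_dose(deposition_bq_m2, occupancy=1.0):
    """Integrate a constant dose rate over one year, scaled by an
    occupancy/shielding factor."""
    seconds_per_year = 3.156e7
    return deposition_bq_m2 * DOSE_RATE_COEFF * seconds_per_year * occupancy

# For the most heavily affected grid cells in the simulated range above
print(first_year_groundshine_dose(3.5e8))  # Sv, order of magnitude only
```

A health physics code such as GENII-LIN refines this with nuclide-specific coefficients, decay, weathering, shielding, and additional pathways (inhalation, ingestion).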

  11. Principles of Predictive Modeling

    NASA Astrophysics Data System (ADS)

    Delignette-Muller, Marie Laure

    Mathematical models were first used in food microbiology in the early 20th century to describe the thermal destruction of pathogens in food, but the concept of predictive microbiology really emerged in the 1980s. This concept was first developed and extensively discussed by McMeekin and his colleagues at the University of Tasmania (Ratkowsky, Olley, McMeekin, & Ball, 1982; McMeekin, Olley, Ross, & Ratkowsky, 1993; McMeekin, Olley, Ratkowsky, & Ross, 2002). Predictive microbiology, or predictive modeling in foods, may now be considered a subdiscipline of food microbiology, with international meetings (the 5th conference on "Predictive Modelling in Foods" in 2007) gathering a scientific community from all over the world.

  12. Evaluation models and their influence on radiological consequences of hypothetical accidents in FFTF

    SciTech Connect

    Stepnewski, D.D.; Hale, J.P.; Martin, H.C.; Peak, R.D.; Franz, G.R.

    1980-04-01

    The influence of radiological evaluation models and assumptions on the off-site consequences of hypothetical core disruptive accidents is examined. The effects of initial source term, time of containment venting, meteorology, biological dose model, and aerosol fallout have been included. The analyses were based on two postulated scenarios of a severe hypothetical reactor vessel melt-through accident for a 400 MW(t) fast reactor. Within each accident scenario, the results show that, although other variables are significant, radiological consequences are strongly affected by the amount of aerosol fallout computed to occur in the incident.

  13. Innovative approach to modeling accident response of Gravel Gerties

    SciTech Connect

    Kramer, M.; McClure, P.; Sullivan, H.

    1997-08-01

    Recent safety analyses at nuclear explosive facilities have renewed interest in the accident phenomenology associated with explosions in nuclear explosive cells, which are commonly referred to as "Gravel Gerties." The cells are used for the assembly and disassembly of nuclear explosives and are located in the Device Assembly Facility (DAF) at the Nevada Test Site (NTS) and at the Pantex facility. The cells are designed to mitigate the release of special nuclear material to the environment in the event of a detonation of high explosive within the Gravel Gertie. Although there are some subtle differences between the cells at DAF and Pantex, their general design, geometry, and configuration are similar. The cells consist of a round room approximately 10.4 m in diameter and 5.2 m high enclosed by 0.3-m-thick concrete. Each cell has a wire-rope catenary roof overlain with gravel. The gravel is approximately 6.9 m deep at the center of the roof and decreases toward the outer edge of the cell. The cell is connected to a corridor and subsequent rooms through an interlocking blast door. In the event of an accidental explosion involving significant amounts of high explosive, the roof structure is lifted by the force of the explosion, the supporting cables break, the gravel is lifted by the blast (resulting in rapid venting of the cell), and the gravel roof collapses, filling the cell. The lifting and subsequent collapse of the gravel, which acts much like a piston, is very challenging to model.

  14. Highway accident severities and the mixed logit model: an exploratory empirical analysis.

    PubMed

    Milton, John C; Shankar, Venky N; Mannering, Fred L

    2008-01-01

    Many transportation agencies use accident frequencies, and statistical models of accident frequencies, as a basis for prioritizing highway safety improvements. However, the use of accident severities in safety programming has often been limited to the locational assessment of accident fatalities, with little or no emphasis placed on the full severity distribution of accidents (property damage only, possible injury, injury), which is needed to fully assess the benefits of competing safety-improvement projects. In this paper we demonstrate a modeling approach that can be used to better understand the injury-severity distributions of accidents on highway segments, and the effect that traffic, highway and weather characteristics have on these distributions. The approach we use allows for the possibility that estimated model parameters vary randomly across roadway segments to account for unobserved effects potentially relating to roadway characteristics, environmental factors, and driver behavior. Using highway-injury data from Washington State, a mixed (random parameters) logit model is estimated. Estimation findings indicate that volume-related variables such as average daily traffic per lane, average daily truck traffic, truck percentage, interchanges per mile and weather effects such as snowfall are best modeled as random parameters, while roadway characteristics such as the number of horizontal curves, number of grade breaks per mile and pavement friction are best modeled as fixed parameters. Our results show that the mixed logit model has considerable promise as a methodological tool in highway safety programming.
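
    The random-parameters idea above can be illustrated with a minimal simulation sketch (not the authors' estimation code; the single covariate and coefficient values are invented for illustration). A mixed logit probability is the plain logit probability averaged over draws from the coefficient's mixing distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_logit_prob(x, beta_mean, beta_sd, n_draws=5000):
    """Simulated binary-logit probability when the coefficient on x is
    normally distributed across road segments (the mixing distribution)."""
    draws = rng.normal(beta_mean, beta_sd, size=n_draws)
    p = 1.0 / (1.0 + np.exp(-draws * x))
    return p.mean()          # Monte Carlo average over the random parameter

p_fixed = mixed_logit_prob(1.0, 0.5, 0.0)   # zero spread: plain logit
p_mixed = mixed_logit_prob(1.0, 0.5, 1.0)   # heterogeneous segments
```

    With zero spread the simulated probability collapses to the ordinary logit value; a nonzero spread shifts it, which is exactly the unobserved-heterogeneity effect the abstract describes.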

  15. CFD modeling of debris melting phenomena during late phase Candu 6 severe accident

    SciTech Connect

    Nicolici, S.; Dupleac, D.; Prisecaru, I.

    2012-07-01

    The objective of this paper was to study the phase change of the debris formed on the Candu 6 calandria bottom in a postulated accident sequence. The molten pool and crust formation were studied employing the Ansys-Fluent code. The 3D model, using Large Eddy Simulation (LES), predicts the conjugate, radiative and convective heat transfer inside and from the corium pool. LES simulations require a very fine grid to capture the crust formation and the free convection flow. This fine-mesh requirement, combined with the long transient, has imposed the use of a slice of the 3D calandria geometry in order not to exceed the available computing resources. The preliminary results include heat transfer coefficients, temperature profiles and heat fluxes through the calandria wall. From the safety point of view it is very important to maintain the heat flux through the wall below the CHF, assuring the integrity of the calandria vessel. This can be achieved by proper cooling of the water in the tank that contains the vessel. The transient duration can also be estimated, which is important in developing guidelines for severe accident management. The debris physical structure and material properties have large uncertainties in the temperature range of interest. Thus, further sensitivity studies should be carried out in order to better understand the influence of these parameters on this complex phenomenon. (authors)

  16. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. It is still too early to know how this so-called 'machine intelligence' will evolve, and therefore how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  17. Modeling fault among accident-involved pedestrians and motorists in Hawaii.

    PubMed

    Kim, Karl; Brunner, I Made; Yamashita, Eric

    2008-11-01

    Using a comprehensive database of police-reported accidents in Hawaii, we describe the nature of pedestrian accidents over the period 2002-2005. Approximately 36% of the accidents occur in residential areas, while another 34% occur in business areas. Only 41.7% of the pedestrian accidents occur at intersections. More pedestrian crashes occur at non-intersection locations, including midblock locations, driveways, parking lots, and other off-roadway locations. Approximately 38.2% of the crashes occur at crosswalk locations, while proportionately more (61.8%) of the pedestrian accidents occur at non-crosswalk locations. Using this database, the human, temporal, roadway, and environmental factors associated with being "at-fault" are also examined for both pedestrians and drivers. Using techniques of logistic regression, several explanatory models are constructed to identify the factors associated with crashes producing fatalities and serious injuries. Finally, two pedestrian models (drunk males and young boys) and one driver model (male commuters) are developed to provide further understanding of pedestrian accident causation. Drunk male pedestrians who were jaywalking were more than 10 times as likely as other groups to be at fault in pedestrian accidents. Young boys in residential areas were also more likely to be at fault. Male commuters in business areas in the morning were also found to have higher odds of being classified at fault when involved in pedestrian accidents. The results of this study indicate that a combination of enforcement and educational programs should be implemented for both pedestrians and drivers, to show those at fault the consequences of their actions and to reduce the overall number of accidents.
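
    The kind of at-fault modeling described above can be sketched with a small logistic-regression fit on simulated data (the predictors, sample size, and coefficient values here are hypothetical, not taken from the Hawaii database):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical predictors of a pedestrian being at fault: intoxication and
# jaywalking indicators (invented for illustration).
n = 2000
X = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),    # intoxicated (0/1)
                     rng.integers(0, 2, n)])   # jaywalking (0/1)
true_beta = np.array([-1.0, 1.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

def neg_log_lik(beta):
    z = X @ beta
    # Bernoulli log-likelihood of the logit model (overflow-safe form)
    return -np.sum(y * z - np.logaddexp(0.0, z))

fit = minimize(neg_log_lik, np.zeros(3), method="BFGS")
odds_ratios = np.exp(fit.x[1:])   # odds multipliers for the two factors
```

    Exponentiated coefficients give the kind of odds statements quoted in the abstract (e.g. how many times more likely an intoxicated, jaywalking pedestrian is to be classified at fault).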

  18. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

  19. A finite element model of a six-year-old child for simulating pedestrian accidents.

    PubMed

    Meng, Yunzhu; Pak, Wansoo; Guleyupoglu, Berkan; Koya, Bharath; Gayzik, F Scott; Untaroiu, Costin D

    2017-01-01

    Child pedestrian protection deserves more attention in vehicle safety design, since children are the most vulnerable road users and face the highest mortality rate. Pediatric Finite Element (FE) models could be used to simulate and understand pedestrian injury mechanisms during crashes in order to mitigate them. Thus, the objective of the study was to develop a computationally efficient (simplified) six-year-old (6YO-PS) pedestrian FE model and validate it against the latest published pediatric data. The 6YO-PS FE model was developed by morphing the existing GHBMC adult pedestrian model. Retrospective scan data were used to locally adjust the geometry as needed for accuracy. Component test simulations focused only on the lower extremities and pelvis, which are the first body regions impacted during pedestrian accidents. Three-point bending test simulations were performed on the femur and tibia, first with adult material properties and then updated using child material properties. Pelvis impact and knee bending tests were also simulated. Finally, a series of pediatric Car-to-Pedestrian Collision (CPC) simulations were performed with pre-impact velocities ranging from 20 km/h up to 60 km/h. The bone models assigned pediatric material properties showed lower stiffness and a good match in terms of fracture force to the test data (less than 6% error). The pelvis impact force predicted by the child model showed a similar trend to the test data. The whole pedestrian model was stable during CPC simulations and predicted common pedestrian injuries. Overall, the 6YO-PS FE model developed in this study showed good biofidelity at the component level (lower extremity and pelvis) and stability in CPC simulations. While more validation would improve it, the current model could be used to investigate lower limb injury mechanisms and to predict the impact parameters specified in regulatory testing protocols. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Cellular automata model simulating traffic car accidents in the on-ramp system

    NASA Astrophysics Data System (ADS)

    Echab, H.; Lakouari, N.; Ez-Zahraouy, H.; Benyoussef, A.

    2015-01-01

    In this paper, using the Nagel-Schreckenberg model, we study the on-ramp system under an expanded open boundary condition. The phase diagram of the two-lane on-ramp system is computed. It is found that the expanded left-boundary insertion strategy enhances the flow in the on-ramp lane. Furthermore, we have studied the probability of the occurrence of car accidents. We distinguish two types of car accidents: the accident at the on-ramp site (Prc) and the rear-end accident on the main road (Pac). It is shown that car accidents at the on-ramp site are more likely to occur when traffic is free on road A. However, the rear-end accidents begin to occur above a critical injecting rate αc1. The influence of the on-ramp length (LB) and position (xC0) on the car accident probabilities is studied. We found that a large LB or xC0 causes a marked decrease in the probability Prc, whereas only a large xC0 provokes an increase in the probability Pac. The effect of stochastic randomization is also computed.
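
    For readers unfamiliar with the underlying cellular automaton, a minimal single-lane Nagel-Schreckenberg update is sketched below (without the on-ramp extension studied above; road length, car count, and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def nasch_step(pos, vel, L, vmax=5, p_brake=0.3):
    """One parallel update of the Nagel-Schreckenberg model on a ring of
    length L; pos and vel list car positions and speeds in ring order."""
    gaps = (np.roll(pos, -1) - pos - 1) % L      # empty cells to the car ahead
    vel = np.minimum(vel + 1, vmax)              # 1. accelerate
    vel = np.minimum(vel, gaps)                  # 2. brake to avoid collision
    slow = rng.random(len(pos)) < p_brake        # 3. random slowdown
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % L                        # 4. move
    return pos, vel

L_road, n_cars = 100, 30
pos = np.sort(rng.choice(L_road, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L_road)
```

    Accident studies such as the one above then ask when the braking step would be insufficient under some perturbation (an on-ramp insertion, a defect), rather than modifying these four basic rules.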

  1. A Finite Element Model of a mid-size male for simulating pedestrian accidents.

    PubMed

    Untaroiu, Costin D; Pak, Wansoo; Meng, Yunzhu; Schap, Jeremy M; Koya, Bharath; Gayzik, F Scott

    2017-09-06

    Pedestrians represent one of the most vulnerable road users and comprise nearly 22% of the road crash-related fatalities in the world. Therefore, protection of pedestrians in car-to-pedestrian collisions (CPC) has recently generated increased attention, with regulations involving three subsystem tests. The development of a finite element (FE) pedestrian model could provide a complementary component that characterizes the whole-body response of vehicle-pedestrian interactions and assesses the pedestrian injuries. The main goal of this study was to develop and to validate a simplified full-body FE model corresponding to a 50th percentile male pedestrian in standing posture (M50-PS). The FE model mesh and defined material properties are based on a 50th percentile male occupant model. The lower limb-pelvis and lumbar spine regions of the human model were validated against the post-mortem human surrogate (PMHS) test data recorded in four-point lateral knee bending tests, pelvic/abdominal/shoulder/thoracic impact tests, and lumbar spine bending tests. Then, a pedestrian-to-vehicle impact simulation was performed using the whole pedestrian model and the results were compared to corresponding PMHS tests. Overall, the simulation results showed that the lower leg response is mostly within the boundaries of the PMHS corridors. In addition, the model shows the capability to predict the most common lower extremity injuries observed in pedestrian accidents. Generally, the validated pedestrian model may be used by safety researchers in the design of the front ends of new vehicles in order to increase pedestrian protection.

  2. Modeling and Prediction Overview

    SciTech Connect

    Ermak, D L

    2002-10-18

    Effective preparation for and response to the release of toxic materials into the atmosphere hinges on accurate predictions of the dispersion pathway, concentration, and ultimate fate of the chemical or biological agent. Of particular interest is the threat to civilian populations within major urban areas, which are likely targets for potential attacks. The goals of the CBNP Modeling and Prediction area are: (1) Development of a suite of validated, multi-scale, atmospheric transport and fate modeling capabilities for chemical and biological agent releases within the complex urban environment; (2) Integration of these models and related user tools into operational emergency response systems. Existing transport and fate models are being adapted to treat the complex atmospheric flows within and around structures (e.g., buildings, subway systems, urban areas) and over terrain. Relevant source terms and the chemical and physical behavior of gas- and particle-phase species (e.g., losses due to deposition, bio-agent viability, degradation) are also being developed and incorporated into the models. Model validation is performed using both laboratory and field data. CBNP is producing and testing a suite of models with differing levels of complexity and fidelity to address the full range of user needs and applications. Lumped-parameter transport models are being developed for subway systems and building interiors, supplemented by the use of computational fluid dynamics (CFD) models to describe the circulation within large, open spaces such as auditoriums. Both sophisticated CFD transport models and simpler fast-response models are under development to treat the complex flow around individual structures and arrays of buildings. Urban parameterizations are being incorporated into regional-scale weather forecast, meteorological data assimilation, and dispersion models for problems involving larger-scale urban and suburban areas. Source term and dose response models are being

  3. A study of factors affecting highway accident rates using the random-parameters tobit model.

    PubMed

    Anastasopoulos, Panagiotis Ch; Mannering, Fred L; Shankar, Venky N; Haddock, John E

    2012-03-01

    A large body of previous literature has used a variety of count-data modeling techniques to study factors that affect the frequency of highway accidents over some time period on roadway segments of a specified length. An alternative approach to this problem views vehicle accident rates (accidents per mile driven) directly instead of their frequencies. Viewing the problem as continuous data instead of count data creates a problem in that roadway segments that do not have any observed accidents over the identified time period create continuous data that are left-censored at zero. Past research has appropriately applied a tobit regression model to address this censoring problem, but this research has been limited in accounting for unobserved heterogeneity because it has been assumed that the parameter estimates are fixed over roadway-segment observations. Using 9-year data from urban interstates in Indiana, this paper employs a random-parameters tobit regression to account for unobserved heterogeneity in the study of motor-vehicle accident rates. The empirical results show that the random-parameters tobit model outperforms its fixed-parameters counterpart and has the potential to provide a fuller understanding of the factors determining accident rates on specific roadway segments.
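
    The left-censoring logic can be made concrete with a small maximum-likelihood sketch of a tobit model on simulated data (a toy illustration with invented coefficients, not the paper's random-parameters specification):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Simulated accident rates, left-censored at zero (segments with no
# observed accidents contribute only "latent rate <= 0" information).
n = 1500
x = rng.normal(size=n)
y = np.maximum(0.5 + 1.0 * x + rng.normal(size=n), 0.0)

def neg_log_lik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)                  # parameterize sigma > 0
    mu = b0 + b1 * x
    ll = np.where(y <= 0,
                  norm.logcdf(-mu / s),                   # censored segments
                  norm.logpdf((y - mu) / s) - np.log(s))  # observed rates
    return -ll.sum()

fit = minimize(neg_log_lik, np.zeros(3), method="BFGS")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

    A plain least-squares fit on the same data would be biased toward zero; the censored likelihood recovers the latent-rate coefficients, which is the point of using tobit here.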

  4. Modeling of the TMI-2 (Three Mile Island Unit-2) accident with MELPROG/TRAC and calculation results for Phases 1 and 2

    SciTech Connect

    Motley, F.E.; Jenks, R.P.

    1988-01-01

    Work has been performed to develop a Three Mile Island Unit-2 (TMI-2) simulation model for MELPROG/TRAC capable of predicting the observed plant behavior that took place during the accident of March 1979. A description of the TMI-2 plant model is presented and calculation results through 174 min of the accident are discussed. Using the ICBC boundary conditions, the calculation predicts pressurizer draining and core uncovery prior to fuel-rod damage. A parametric calculation (reduced makeup flow) is currently underway and is in better agreement with the observed plant behavior. Efforts are underway to resolve current discrepancies and proceed with an accurate simulation through Phases 3 and 4 of the accident (174-227 min and 227-300 min, respectively). 13 refs., 11 figs., 2 tabs.

  5. Freeze Prediction Model

    NASA Technical Reports Server (NTRS)

    Morrow, C. T. (Principal Investigator)

    1981-01-01

    Measurements of wind speed, net irradiation, and air, soil, and dew point temperatures in an orchard at the Rock Springs Agricultural Research Center, as well as topographical and climatological data and a description of the major apple-growing regions of Pennsylvania, were supplied to the University of Florida for use in running the P-model freeze prediction program. Results show that the P-model appears to have considerable applicability to conditions in Pennsylvania. Even though modifications may have to be made for use in the fruit-growing regions, the model offers advantages to fruit growers in its present form.

  6. Analysis of 2004 German general aviation aircraft accidents according to the HFACS model.

    PubMed

    Dambier, Michael; Hinkelbein, Jochen

    2006-01-01

    The number of aircraft accidents has remained at a constant level since the late 1990s, yet routine detailed analysis of the causative factors is not carried out in Germany. The analysis of flight mishaps has been demonstrated to be an important basis for flight safety, and the Human Factors Analysis and Classification System (HFACS) model is well suited to aircraft accident analysis. The aim of this study was to classify aircraft accidents in German General Aviation (GA) according to the HFACS model and to identify the underlying causes. The analysis was performed with the HFACS model on the basis of the regularly published reports of the German federal bureau of aircraft accident investigation (BFU), including accidents (but not incidents) of GA aircraft flown by German pilots in Germany and in other countries. The underlying reasons were classified as follows: pilot errors, organizational factors, ergonomic factors, aeromedical problems, and crew resource management. Additionally, the phase of flight was classified. Two hundred thirty-nine GA aircraft accidents were registered in 2004 in Germany. Eighty-seven (36%) were reported in the class up to 2 tons, six (3%) in the class of 2.0 to 5.7 tons, 28 (12%) for Touring Motor Gliders (TMG), and 118 (49%) for gliders. Of these accidents, 54 aircraft occupants (35 crewmembers and 19 passengers) survived with slight injuries, 35 (23 crewmembers and 12 passengers) were seriously injured, and 34 (21 crewmembers and 13 passengers) were killed. Data for uninjured aircraft occupants were not available. Most accidents happened on summer weekends during approach and landing (53%) due to pilot errors (84%). Our data mainly seem to be in concordance with previously published data on GA. An improvement of flight safety can be achieved only with a detailed analysis of the accident data. Therefore, more data on aircraft accidents in Germany are needed, for example, by adapting the German aircraft accident report form. Pilots

  7. Quantifying the risk of extreme aviation accidents

    NASA Astrophysics Data System (ADS)

    Das, Kumer Pial; Dey, Asim Kumer

    2016-12-01

    Air travel is considered a safe means of transportation. But when aviation accidents do occur, they often result in fatalities. Fortunately, the most extreme accidents occur rarely. However, 2014 was the deadliest year of the past decade, with 111 plane crashes; the four worst caused 298, 239, 162 and 116 deaths. In this study, we assess the risk of catastrophic aviation accidents by studying historical aviation accidents. Applying a generalized Pareto model, we predict the maximum fatalities from an aviation accident in the future. The fitted model is compared with some of its competitors. The uncertainty in the inferences is quantified using simulated aviation accident series generated by bootstrap resampling and Monte Carlo simulations.
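
    The peaks-over-threshold idea behind a generalized Pareto analysis can be sketched with scipy on simulated threshold exceedances (the shape and scale values are invented; real fatality data would replace the simulated sample):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)

# Simulated exceedances of a fatality threshold (stand-in for real data).
true_shape, true_scale = 0.3, 50.0
excess = genpareto.rvs(true_shape, scale=true_scale, size=2000,
                       random_state=rng)

# Fit the GPD to the exceedances; loc is pinned at 0 for threshold excesses.
shape_hat, _, scale_hat = genpareto.fit(excess, floc=0)

# A high quantile of the fitted excess distribution (99th percentile):
q99 = genpareto.ppf(0.99, shape_hat, loc=0, scale=scale_hat)
```

    A positive fitted shape parameter indicates a heavy tail, i.e. that the worst accidents are far more extreme than the typical one, which is what motivates extreme-value methods over averages here.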

  8. Effects of a type of quenched randomness on car accidents in a cellular automaton model.

    PubMed

    Yang, Xian-qing; Zhang, Wei; Qiu, Kang; Zhao, Yue-min

    2006-01-01

    In this paper we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a defect. In the deterministic NS model, numerical results show that there exists a critical value of the car density below which no car accidents happen. The critical density ρc1 depends not only on the maximum speed of cars but also on the braking probability at the defect. The braking probability at the defect can enhance, rather than suppress, the occurrence of car accidents when its value is small; only when the braking probability at the defect is very large are car accidents reduced by the bottleneck. In the nondeterministic NS model, the probability Pac exhibits the same behaviors as in the deterministic model, except in the case vmax=1, in which the probability Pac is only reduced by the defect. The defect also induces an inhomogeneous distribution of car accidents over the whole road. Theoretical analyses agree with the numerical results in the deterministic NS model and in the nondeterministic NS model with vmax=1 in the case of a large defect braking probability.

  9. Effects of a type of quenched randomness on car accidents in a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Zhang, Wei; Qiu, Kang; Zhao, Yue-Min

    2006-01-01

    In this paper we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a defect. In the deterministic NS model, numerical results show that there exists a critical value of the car density below which no car accidents happen. The critical density ρc1 depends not only on the maximum speed of cars but also on the braking probability at the defect. The braking probability at the defect can enhance, rather than suppress, the occurrence of car accidents when its value is small; only when the braking probability at the defect is very large are car accidents reduced by the bottleneck. In the nondeterministic NS model, the probability Pac exhibits the same behaviors as in the deterministic model, except in the case vmax=1, in which the probability Pac is only reduced by the defect. The defect also induces an inhomogeneous distribution of car accidents over the whole road. Theoretical analyses agree with the numerical results in the deterministic NS model and in the nondeterministic NS model with vmax=1 in the case of a large defect braking probability.

  10. Inter-comparison of dynamic models for radionuclide transfer to marine biota in a Fukushima accident scenario.

    PubMed

    Vives I Batlle, J; Beresford, N A; Beaugelin-Seiller, K; Bezhenar, R; Brown, J; Cheng, J-J; Ćujić, M; Dragović, S; Duffa, C; Fiévet, B; Hosseini, A; Jung, K T; Kamboj, S; Keum, D-K; Kryshev, A; LePoire, D; Maderich, V; Min, B-I; Periáñez, R; Sazykina, T; Suh, K-S; Yu, C; Wang, C; Heling, R

    2016-03-01

    We report an inter-comparison of eight models designed to predict the radiological exposure of radionuclides in marine biota. The models were required to simulate dynamically the uptake and turnover of radionuclides by marine organisms. Model predictions of radionuclide uptake and turnover using kinetic calculations based on biological half-life (TB1/2) and/or more complex metabolic modelling approaches were used to predict activity concentrations and, consequently, dose rates of (90)Sr, (131)I and (137)Cs to fish, crustaceans, macroalgae and molluscs under circumstances where the water concentrations are changing with time. For comparison, the ERICA Tool, a model commonly used in environmental assessment, and which uses equilibrium concentration ratios, was also used. As input to the models we used hydrodynamic forecasts of water and sediment activity concentrations using a simulated scenario reflecting the Fukushima accident releases. Although model variability is important, the intercomparison gives logical results, in that the dynamic models consistently predict a pattern of delayed rise of activity concentration in biota and slow decline, instead of the instantaneous equilibrium with the activity concentration in seawater predicted by the ERICA Tool. The differences between ERICA and the dynamic models increase the shorter TB1/2 becomes; however, there is significant variability between models, underpinned by parameter and methodological differences between them. The need to validate the dynamic models used in this intercomparison has been highlighted, particularly with regard to optimisation of the model biokinetic parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
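
    The delayed rise toward the equilibrium level predicted by a concentration-ratio model, described above, follows from a simple first-order biokinetic balance. A minimal sketch (with invented parameter values; TB1/2 is the biological half-life):

```python
import numpy as np

def biota_activity(t, c_water, k_uptake, t_half_bio):
    """Activity concentration in an organism under a constant water
    concentration, from dC/dt = k_uptake*c_water - k_elim*C with C(0)=0."""
    k_elim = np.log(2.0) / t_half_bio        # elimination rate from TB1/2
    c_eq = k_uptake * c_water / k_elim       # equilibrium (CR-based) level
    return c_eq * (1.0 - np.exp(-k_elim * t))

t = np.linspace(0.0, 200.0, 201)             # days
c = biota_activity(t, c_water=1.0, k_uptake=0.5, t_half_bio=30.0)
```

    The shorter the biological half-life, the faster the curve reaches the equilibrium value, which is why the abstract notes that the gap between the dynamic models and the equilibrium ERICA Tool grows as TB1/2 shortens relative to the changing water concentration.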

  11. Inter-comparison of dynamic models for radionuclide transfer to marine biota in a Fukushima accident scenario

    SciTech Connect

    Vives i Batlle, J.; Beresford, N A; Beaugelin-Seiller, K; Bezhenar, R.; Brown, J.; Cheng, J.-J.; Cujic, M.; Dragovic, S.; Duffa, C.; Fievet, B.; Hosseini, A; Jung, K. T.; Kamboj, S.; Keum, D.-K.; Kryshev, A.; LePoire, D.; Maderich, V.; Min, B.-I.; Perianez, R.; Sazykina, T; Suh, K.-S.; Yu, C.; Wang, C.; Heling, R

    2016-03-01

    We report an inter-comparison of eight models designed to predict the radiological exposure of radionuclides in marine biota. The models were required to simulate dynamically the uptake and turnover of radionuclides by marine organisms. Model predictions of radionuclide uptake and turnover using kinetic calculations based on biological half-life (TB1/2) and/or more complex metabolic modelling approaches were used to predict activity concentrations and, consequently, dose rates of 90Sr, 131I and 137Cs to fish, crustaceans, macroalgae and molluscs under circumstances where the water concentrations are changing with time. For comparison, the ERICA Tool, a model commonly used in environmental assessment, and which uses equilibrium concentration ratios, was also used. As input to the models we used hydrodynamic forecasts of water and sediment activity concentrations using a simulated scenario reflecting the Fukushima accident releases. Although model variability is important, the intercomparison gives logical results, in that the dynamic models consistently predict a pattern of delayed rise of activity concentration in biota and slow decline, instead of the instantaneous equilibrium with the activity concentration in seawater predicted by the ERICA Tool. The differences between ERICA and the dynamic models increase the shorter TB1/2 becomes; however, there is significant variability between models, underpinned by parameter and methodological differences between them. The need to validate the dynamic models used in this intercomparison has been highlighted, particularly with regard to optimisation of the model biokinetic parameters.

  12. Development of comprehensive accident models for two-lane rural highways using exposure, geometry, consistency and context variables.

    PubMed

    Cafiso, Salvatore; Di Graziano, Alessandro; Di Silvestro, Giacomo; La Cava, Grazia; Persaud, Bhagwant

    2010-07-01

    In Europe, approximately 60% of road accident fatalities occur on two-lane rural roads. Thus, research to develop and enhance explanatory and predictive models for this road type continues to be of interest in mitigating these accidents. To this end, this paper describes a novel and extensive data collection and modeling effort to define accident models for two-lane road sections based on a unique combination of exposure, geometry, consistency and context variables directly related to the safety performance. The first part of the paper documents how these were identified for the segmentation of highways into homogeneous sections. Next is a description of the extensive data collection effort, which utilized differential cinematic GPS surveys to define the horizontal alignment variables, and road safety inspections (RSIs) to quantify the other road characteristics related to safety. The final part of the paper focuses on the calibration of models for estimating the expected number of accidents on homogeneous sections that can be characterized by constant values of the explanatory variables. Several candidate models were considered for calibration using the Generalized Linear Modeling (GLM) approach. After considering the statistical significance of the parameters related to exposure, geometry, consistency and context factors, and goodness-of-fit statistics, 19 models were ranked and three were selected as the recommended models. The first of the three is a base model, with length and traffic as the only predictor variables; since these variables are the only ones likely to be available network-wide, this base model can be used in an empirical Bayesian calculation to conduct network screening for ranking "sites with promise" of safety improvement. The other two models represent the best statistical fits with different combinations of significant variables related to exposure, geometry, consistency and context factors. These multiple variable models can be used, with
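
    A minimal negative binomial (NB2) calibration of an accident-frequency model, in the spirit of the GLM approach described above, can be sketched on simulated section data (the covariates, coefficient values, and dispersion parameter are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)

# Simulated homogeneous sections: length (km) and AADT act as exposure.
n = 2000
length = rng.uniform(0.5, 5.0, n)
aadt = rng.uniform(1000, 20000, n)
X = np.column_stack([np.ones(n), np.log(length), np.log(aadt / 1000.0)])
true_beta = np.array([-1.5, 1.0, 0.8])
mu = np.exp(X @ true_beta)
alpha = 0.5                                    # overdispersion parameter
y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

def neg_log_lik(theta):
    beta, r = theta[:3], 1.0 / np.exp(theta[3])   # r = 1/alpha, kept positive
    m = np.exp(X @ beta)
    # NB2 log-likelihood with mean m and dispersion alpha = 1/r
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + m)) + y * np.log(m / (r + m)))
    return -ll.sum()

fit = minimize(neg_log_lik, np.zeros(4), method="BFGS")
beta_hat, alpha_hat = fit.x[:3], np.exp(fit.x[3])
```

    The fitted mean function exp(Xβ) is the expected accident count per section; the dispersion parameter captures the extra-Poisson variation that makes the negative binomial the usual choice for accident counts.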

  13. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems

    DTIC Science & Technology

    2008-01-01

    deviance, and structural secrecy in NASA. (CAIB, 2003: Chap. 8). This paper provides a review of key traditional accident modelling approaches and... models that consider the simultaneous interactions of technical, human, social, cultural and organisational aspects of modern complex systems. DSTO...consider the organisational, social, and complex interactions between the various system components. Sequential models assume that the cause-effect

  14. Importance of emergency response actions to reactor accidents within a probabilistic consequence assessment model

    SciTech Connect

    Mubayi, V.; Neymotin, L.

    1997-03-01

    An uncertainty and sensitivity analysis of the early health consequences of severe accidents at nuclear power plants, as a function of the emergency response parameters, has been performed using a probabilistic consequence assessment code. The importance of various emergency response parameters in predicting the consequences for a range of accident source terms was determined by training a neural network algorithm that relates the sensitivity of the output to various choices of the input. Extensions of this approach should be helpful to planners in prioritizing the emergency responses at nuclear power plants.

  15. Object-Oriented Bayesian Networks (OOBN) for Aviation Accident Modeling and Technology Portfolio Impact Assessment

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.

    2012-01-01

    The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. The paper focuses on the rationale for selecting Object-Oriented Bayesian Networks (OOBN) as the technique, and the associated commercial software, for the accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF), constructed as an influence diagram, is presented. An OOBN approach not only simplifies the construction and maintenance of complex causal networks for modelers, but also offers a well-organized hierarchical network that makes it easier for decision makers to exploit the model when examining the effectiveness of risk mitigation strategies through technology insertions.
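
    The core mechanics of a (non-object-oriented) Bayesian network can be shown in a few lines: a toy two-cause network with invented probability tables (not LOCAF's structure or numbers), queried by exhaustive enumeration:

```python
import itertools

# Invented two-cause toy network (not LOCAF's structure or numbers):
# Malfunction -> LossOfControl <- PilotError, all variables boolean.
p_malf = {True: 0.05, False: 0.95}
p_err = {True: 0.10, False: 0.90}
p_loc = {  # P(LossOfControl=True | Malfunction, PilotError)
    (True, True): 0.9, (True, False): 0.4,
    (False, True): 0.3, (False, False): 0.01,
}

def joint(m, e, l):
    """Joint probability of one full assignment of the three variables."""
    pl = p_loc[(m, e)]
    return p_malf[m] * p_err[e] * (pl if l else 1.0 - pl)

# P(PilotError=True | LossOfControl=True) by enumerating hidden variables.
num = sum(joint(m, True, True) for m in (True, False))
den = sum(joint(m, e, True)
          for m, e in itertools.product((True, False), repeat=2))
posterior = num / den
```

    An OOBN packages fragments like this into reusable classes with interface nodes, so a large accident model is assembled hierarchically instead of as one flat table of conditional probabilities.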

  16. Influence of main variables modifications on accident transient based on AP1000-like MELCOR model

    NASA Astrophysics Data System (ADS)

    Malicki, M.; Pieńkowski, L.

    2016-09-01

Analysis of Severe Accidents (SA) is one of the most important areas of nuclear safety research. MELCOR is a validated system code for severe accident analysis and as such was used to obtain the presented results. The analysed AP1000 model is based on publicly available data only. A sensitivity analysis was performed on the main variables of the primary reactor coolant system to find their influence on the accident transient. This kind of analysis helps to find weak points of the reactor design and of the model itself. The performed analysis is the basis for creating a generic Small Modular Reactor (SMR) model, which will be the next step of an investigation aimed at estimating the safety level of different reactors. The results help to establish a range of boundary conditions for the main variables in the future SMR model.

  17. Effects of quenched randomness induced by car accidents on traffic flow in a cellular automata model

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Ma, Yu-Qiang; Zhao, Yue-Min

    2004-10-01

In this paper we numerically study the impact of quenched disorder induced by car accidents on traffic flow in the Nagel-Schreckenberg (NS) model. Car accidents occur when the necessary conditions proposed by Boccara et al. [J. Phys. A 30, 3329 (1997)] are satisfied. Two realistic situations for the cars involved in accidents have been considered. In model A, the accident cars become temporarily stuck. Our studies exhibit the “inverse-λ form”, or metastable state, of traffic flow in the fundamental diagram, and wide moving jam waves in the space-time pattern. In model B, the “wrecked” cars stay in place forever, and the cars behind pass through the sites occupied by the “wrecked” cars with a given transmission rate. A four-stage transition of traffic flow, from a maximum flow through a sharp-decrease phase and a density-independent phase to a high-density jamming phase, has been observed. The density profiles and the effects of the transmission rate and of the probability of the occurrence of car accidents in model B are also discussed.
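
    The NS update rule with a "model A"-style stuck car can be sketched as follows. This is a minimal parallel-update implementation on a ring road; the parameter names and the fixed blocking duration are our assumptions, not the paper's.

```python
import random

V_MAX = 5
P_SLOW = 0.3      # random-slowdown probability
STUCK_STEPS = 10  # how long a wrecked car blocks its cell (assumption)

def ns_step(positions, speeds, stuck, road_len, rng):
    """One parallel Nagel-Schreckenberg step; stuck cars keep speed 0."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = list(speeds)
    for idx, i in enumerate(order):
        if stuck[i] > 0:               # wrecked car: frozen in place
            stuck[i] -= 1
            new_speeds[i] = 0
            continue
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, V_MAX)  # acceleration
        v = min(v, gap)                # brake to avoid the car ahead
        if v > 0 and rng.random() < P_SLOW:
            v -= 1                     # random slowdown
        new_speeds[i] = v
    for i in range(n):                 # parallel position update
        positions[i] = (positions[i] + new_speeds[i]) % road_len
        speeds[i] = new_speeds[i]

rng = random.Random(42)
road_len = 50
positions = [0, 10, 20, 30, 40]
speeds = [0] * 5
stuck = [0] * 5
stuck[2] = STUCK_STEPS  # one car starts wrecked, forcing a local jam
for _ in range(100):
    ns_step(positions, speeds, stuck, road_len, rng)
```

    Because each car's new speed is capped by the gap measured on the old positions, no two cars ever occupy the same cell, so a frozen car simply forces the platoon behind it to queue, which is the disorder mechanism the paper studies.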

  18. Development and Validation of Accident Models for FeCrAl Cladding

    SciTech Connect

    Gamble, Kyle Allan Lawrence; Hales, Jason Dean

    2016-08-01

The purpose of this milestone report is to present the work completed with regard to material model development for FeCrAl cladding and to highlight the results of applying these models to Loss of Coolant Accidents (LOCA) and Station Blackouts (SBO). With the limited experimental data available (essentially only the data used to create the models), true validation is not possible. In the absence of an alternative, qualitative comparisons during postulated accident scenarios between FeCrAl- and Zircaloy-4-clad rods have been completed, demonstrating the superior performance of FeCrAl.

  19. Predictive Surface Complexation Modeling

    SciTech Connect

    Sverjensky, Dimitri A.

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  20. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  1. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    SciTech Connect

    Not Available

    1988-12-15

The Accident Model Document (AMD) is the second volume of the three-volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed the General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population as a result of postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  2. A Statistical Approach to Predict the Failure Enthalpy and Reliability of Irradiated PWR Fuel Rods During Reactivity-Initiated Accidents

    SciTech Connect

    Nam, Cheol; Jeong, Yong-Hwan; Jung, Youn-Ho

    2001-11-15

During the last decade, the failure behavior of high-burnup fuel rods under reactivity-initiated accident (RIA) conditions has been a serious concern, since fuel rod failures at low enthalpy have been observed. This has resulted in the reassessment of existing licensing criteria and in studies of failure modes. To address the issue, a statistics-based methodology is suggested to predict the failure probability of irradiated fuel rods under an RIA. Based on RIA simulation results in the literature, a failure enthalpy correlation for an irradiated fuel rod is constructed as a function of oxide thickness, fuel burnup, and pulse width. Using the failure enthalpy correlation, a new concept of ''equivalent enthalpy'' is introduced to reflect the effects of the three primary factors, as well as peak fuel enthalpy, in a single damage parameter. Moreover, the failure distribution function in terms of equivalent enthalpy is derived by applying a two-parameter Weibull statistical model. Finally, a sensitivity analysis is carried out to estimate the effects of burnup, corrosion, peak fuel enthalpy, pulse width, and the cladding materials used.
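
    The two-parameter Weibull failure distribution the abstract describes has the standard closed form F(H) = 1 - exp(-(H/η)^β) in the equivalent enthalpy H. The sketch below uses that form with placeholder scale and shape values (the paper's fitted parameters are not reproduced here).

```python
import math

ETA = 120.0   # Weibull scale parameter, cal/g (illustrative placeholder)
BETA = 4.0    # Weibull shape parameter (illustrative placeholder)

def failure_probability(equiv_enthalpy: float) -> float:
    """Two-parameter Weibull CDF: F(H) = 1 - exp(-(H/eta)^beta), H >= 0."""
    if equiv_enthalpy <= 0:
        return 0.0
    return 1.0 - math.exp(-((equiv_enthalpy / ETA) ** BETA))
```

    By construction F(η) = 1 - e⁻¹ ≈ 0.632, and the shape parameter β controls how sharply the failure probability rises around the scale value.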

  3. Severe accident modeling of a PWR core with different cladding materials

    SciTech Connect

    Johnson, S. C.; Henry, R. E.; Paik, C. Y.

    2012-07-01

The MAAP v.4 software has been used to model two severe accident scenarios in nuclear power reactors with three different materials as fuel cladding. The TMI-2 severe accident was modeled with Zircaloy-2 and SiC as clad materials, and an SBO accident in a Zion-like, 4-loop, Westinghouse PWR was modeled with Zircaloy-2, SiC, and 304 stainless steel as clad materials. TMI-2 modeling results indicate that lower peak core temperatures, less H{sub 2}(g) produced, and a smaller mass of molten material would result if SiC were substituted for Zircaloy-2 as cladding. SBO modeling results indicate that the calculated time to RCS rupture would increase by approximately 20 minutes if SiC were substituted for Zircaloy-2. Additionally, when an extended SBO accident (RCS creep rupture failure disabled) was modeled, significantly lower peak core temperatures, less H{sub 2}(g) produced, and a smaller mass of molten material were predicted when substituting SiC for Zircaloy-2 or stainless steel cladding. Because the rate of the SiC oxidation reaction with elevated-temperature H{sub 2}O(g) was set to 0 for this work, these results should be considered preliminary. However, the benefits of SiC as a more accident-tolerant clad material have been shown, and additional investigation of SiC as an LWR core material is warranted, specifically investigation of the oxidation kinetics of SiC in H{sub 2}O(g) over the range of temperatures and pressures relevant to severe accidents in LWRs. (authors)

  4. [Model of Analysis and Prevention of Accidents - MAPA: tool for operational health surveillance].

    PubMed

    de Almeida, Ildeberto Muniz; Vilela, Rodolfo Andrade de Gouveia; da Silva, Alessandro José Nunes; Beltran, Sandra Lorena

    2014-12-01

The analysis of work-related accidents is important for accident surveillance and prevention. Current methods of analysis seek to overcome reductionist views that see these occurrences as simple events explained by operator error. The objective of this paper is to analyze the Model of Analysis and Prevention of Accidents (MAPA) and its use in monitoring interventions, highlighting aspects of practical experience with the tool. The descriptive analytical method was used, introducing the steps of the model. To illustrate contributions and/or difficulties, cases where the tool was used in service settings were selected. MAPA integrates theoretical approaches that have already been tried in accident studies, providing useful conceptual support from the data collection stage through to the conclusion and intervention stages. Besides revealing weaknesses of the traditional approach, it helps identify organizational determinants involved in the accident, such as management failings, system design and safety management. The main challenges lie in users' grasp of the concepts, in exploring organizational aspects upstream in the chain of decisions or at higher levels of the hierarchy, and in intervening to change the determinants of these events.

  5. Collective responsibility for freeway rear-ending accidents? An application of probabilistic causal models.

    PubMed

    Davis, Gary A; Swenson, Tait

    2006-07-01

    Determining whether or not an event was a cause of a road accident often involves determining the truth of a counterfactual conditional, where what happened is compared to what would have happened had the supposed cause been absent. Using structural causal models, Pearl and his associates have recently developed a rigorous method for posing and answering causal questions, and this approach is especially well suited to the reconstruction and analysis of road accidents. Here, we applied these methods to three freeway rear-end collisions. Starting with video recordings of the accidents, trajectory information for a platoon of vehicles involved in and preceding the collision was extracted from the video record, and this information was used to estimate each driver's initial speed, following distance, reaction time, and braking rate. Using Brill's model of rear-end accidents, it was then possible to simulate what would have happened, other things being equal, had certain driver actions been other than they were. In each of the three accidents we found evidence that: (1) short following headways by the colliding drivers were probable causal factors for the collisions, (2) for each collision, at least one driver ahead of the colliding vehicles probably had a reaction time that was longer than his or her following headway, and (3) had that driver's reaction time been equal to his or her following headway, the rear-end collision probably would not have happened.
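
    The counterfactual test described above can be sketched with simple constant-deceleration kinematics standing in for Brill's model: simulate the factual braking event, then rerun it with the follower's reaction time set equal to its time headway. All numbers below are illustrative, not reconstructed from the paper's video data.

```python
def collides(v0, gap, reaction_time, lead_decel, foll_decel, dt=0.001):
    """Leader brakes at t=0, follower starts braking `reaction_time` s later.
    Returns True if the spacing ever closes to zero."""
    x_lead, x_foll = gap, 0.0
    v_lead = v_foll = v0
    t = 0.0
    while v_lead > 0 or v_foll > 0:
        v_lead = max(0.0, v_lead - lead_decel * dt)
        if t >= reaction_time:
            v_foll = max(0.0, v_foll - foll_decel * dt)
        x_lead += v_lead * dt
        x_foll += v_foll * dt
        if x_foll >= x_lead:
            return True   # follower has reached the leader: collision
        t += dt
    return False

v0 = 25.0              # initial speed of both cars, m/s
gap = 15.0             # following distance, m (headway = 0.6 s)
headway = gap / v0
# Factual scenario: long reaction time relative to headway.
factual = collides(v0, gap, reaction_time=1.5, lead_decel=6.0, foll_decel=7.0)
# Counterfactual: reaction time shortened to the time headway.
counterfactual = collides(v0, gap, reaction_time=headway,
                          lead_decel=6.0, foll_decel=7.0)
```

    With these illustrative numbers the factual run collides and the counterfactual run does not, mirroring the paper's finding that a reaction time no longer than the following headway would probably have avoided the rear-end collision.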

  6. Illustration interface of accident progression in PWR by quick inference based on multilevel flow models

    SciTech Connect

    Yoshikawa, H.; Ouyang, J.; Niwa, Y.

    2006-07-01

In this paper, a new accident inference method is proposed, using a goal- and function-oriented modeling method called the Multilevel Flow Model, that focuses on explaining the cause-consequence relations and the objectives of automatic actions during an accident at a nuclear power plant. Users can easily grasp how the various plant parameters will behave and how the various safety facilities will be activated sequentially to cope with the accident until the nuclear power plant settles into a safe state, i.e., the shutdown state. The applicability of the developed method was validated by conducting an internet-based 'view' experiment with voluntary respondents; in the future, the interface design will be further elaborated and instructional content introduced to make it a usable CAI system. (authors)

  7. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    NASA Astrophysics Data System (ADS)

    Brandt, J.; Christensen, J. H.; Frohn, L. M.

    2002-12-01

A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling the transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases such as the Chernobyl accident. The model is a combination of a Lagrangian model that covers near-source dispersion and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the total deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using different relatively simple and comprehensive parameterizations for dry and wet deposition. Model performance using the combinations of two different wet deposition parameterizations and three different dry deposition parameterizations has been evaluated against measurements using several statistical tests. The best performance is obtained when the total deposition is parameterized as a combination of a simple method for dry deposition and a subgrid-scale averaging scheme for wet deposition based on relative humidity. The same major conclusion holds for all three radioactive isotopes and for two different deposition measurement databases. Large differences are seen between the results obtained with the two wet deposition parameterizations, based on precipitation rates and on relative humidity, respectively. The parameterization based on subgrid-scale averaging performs better in all cases than the parameterization based on precipitation rates. 
This indicates that the in-cloud scavenging process is more important than the below cloud scavenging process for the submicron particles and that the precipitation rates are relatively uncertain in the
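
    The two families of wet-deposition parameterizations compared above can be sketched as scavenging coefficients: one driven by precipitation rate, one ramping up with relative humidity above a threshold. The functional forms are standard for this kind of scheme, but the constants and the exact shapes below are illustrative placeholders, not the DREAM coefficients.

```python
import math

def lambda_precip(p_mm_per_h, a=8.4e-5, b=0.79):
    """Scavenging coefficient (1/s) as a power law of precipitation rate
    (constants are illustrative)."""
    return a * (p_mm_per_h ** b)

def lambda_humidity(rh_percent, lam0=3.5e-5, rh_threshold=80.0):
    """Scavenging coefficient (1/s) increasing linearly above a relative
    humidity threshold, zero below it (constants are illustrative)."""
    if rh_percent <= rh_threshold:
        return 0.0
    return lam0 * (rh_percent - rh_threshold) / (100.0 - rh_threshold)

def deplete(c0, lam, dt_s):
    """Air concentration left after dt_s seconds of scavenging at rate lam."""
    return c0 * math.exp(-lam * dt_s)
```

    The humidity-based form acts wherever the air is near saturation, including inside clouds where no surface rain is recorded, which is one way to read the paper's finding that it outperforms the precipitation-rate form for in-cloud scavenging of submicron particles.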

  8. Radiological assessment by compartment model POSEIDON-R of radioactivity released in the ocean following Fukushima Daiichi accident

    NASA Astrophysics Data System (ADS)

    Bezhenar, Roman; Maderich, Vladimir; Heling, Rudie; Jung, Kyung Tae; Myoung, Jung-Goo

    2013-04-01

The modified compartment model POSEIDON-R (Lepicard et al., 2004) was applied to the North-Western Pacific and adjacent seas. This is the first time that a compartment model has been used in this region, where 25 Nuclear Power Plants (NPP) are operated. The aim of this study is to perform a radiological assessment of the releases of radioactivity due to the Fukushima Daiichi accident. The model predicts the dispersion of radioactivity in the water column and in the sediments, the transfer of radionuclides throughout the marine food web, and the subsequent doses to the population due to the consumption of fishery products. A generic predictive dynamical food-chain model is used instead of the concentration factor (CF) approach. The radionuclide uptake model for fish has as its central feature the accumulation of radionuclides in the target tissue. A three-layer structure of the water column makes it possible to describe deep-water transport adequately. In total, 175 boxes cover the Northwestern Pacific, the East China Sea, the Yellow Sea and the East/Japan Sea. Water fluxes between boxes were calculated by averaging three-dimensional currents obtained with the hydrodynamic model ROMS over a 10-year period. Tidal mixing between boxes was parameterized. The model was validated against observations of Cs-137 in water for the period 1945-2004. The source terms from nuclear weapon tests comprise a regional source term from the bomb tests at Enewetak and Bikini Atolls and the global deposition from weapons tests. The correlation coefficient between predicted and observed concentrations of Cs-137 in the surface water is 0.925 and RMSE = 1.43 Bq/m3. A local-scale coastal box was used, according to POSEIDON's methodology, to describe local processes of activity transport, deposition and the food web around the Fukushima Daiichi NPP. 
The source term to the ocean from the Fukushima accident includes a 10-days release of Cs-134 (5 PBq) and Cs-137 (4 PBq) directly into the ocean and 6 and 5 PBq of Cs-134 and

  9. [Guilty victims: a model to perpetuate impunity for work-related accidents].

    PubMed

    Vilela, Rodolfo Andrade Gouveia; Iguti, Aparecida Mari; Almeida, Ildeberto Muniz

    2004-01-01

    This article analyzes reports and data from the investigation of severe and fatal work-related accidents by the Regional Institute of Criminology in Piracicaba, São Paulo State, Brazil. Some 71 accident investigation reports were analyzed from 1998, 1999, and 2000. Accidents involving machinery represented 38.0% of the total, followed by high falls (15.5%), and electric shocks (11.3%). The reports conclude that 80.0% of the accidents are caused by "unsafe acts" committed by workers themselves, while the lack of safety or "unsafe conditions" account for only 15.5% of cases. Victims are blamed even in situations involving high risk in which not even minimum safety conditions are adopted, thus favoring employers' interests. Such conclusions reflect traditional reductionist explanatory models, in which accidents are viewed as simple, unicausal phenomena, generally focused on slipups and errors by the workers themselves. Despite criticism in recent decades from the technical and academic community, this concept is still hegemonic, thus jeopardizing the development of preventive policies and the improvement of work conditions.

  10. A graph model for preventing railway accidents based on the maximal information coefficient

    NASA Astrophysics Data System (ADS)

    Shao, Fubo; Li, Keping

    2017-01-01

A number of factors influence railway safety. It is important to identify the most influential factors and to model the relationship between railway accidents and those factors. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships that can capture a wide range of associations. Employing MIC, a graph model is proposed for preventing railway accidents that avoids complex mathematical computation. In the graph, nodes denote influencing factors of railway accidents and edges represent the dependence of the two linked factors. As the dependence threshold increases, the graph changes from a globally coupled graph to isolated points. The important influencing factors, which are the key quantities to monitor, are thereby identified from among many factors. The relationship between railway accidents and the important influencing factors is then obtained by employing artificial neural networks. With this relationship, a warning mechanism is built by delimiting a dangerous zone: if the related factors fall into the dangerous zone during railway operations, the warning level should be raised. The warning mechanism can help prevent railway accidents and promote railway safety.
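
    The graph-building step can be sketched as follows: score every factor pair for dependence and link the pair when the score exceeds a threshold. A simple grid-based mutual information estimate stands in here for MIC (which the paper actually uses); the threshold and binning are assumptions.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Plug-in mutual information estimate (nats) from an equal-width
    2-D histogram; a crude stand-in for MIC."""
    def binned(vs):
        lo, hi = min(vs), max(vs)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in vs]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    pxy = Counter(zip(bx, by))
    px, py = Counter(bx), Counter(by)
    return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

def dependence_graph(factors, threshold):
    """factors: {name: samples}. Returns the edge set {(a, b)} whose
    dependence score meets the threshold."""
    names = sorted(factors)
    return {(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if mutual_information(factors[a], factors[b]) >= threshold}
```

    Raising `threshold` prunes edges until only the strongest dependences remain, reproducing the paper's transition from a globally coupled graph toward isolated points.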

  11. Melanoma risk prediction models.

    PubMed

    Nikolić, Jelena; Loncar-Turukalo, Tatjana; Sladojević, Srdan; Marinković, Marija; Janjić, Zlata

    2014-08-01

The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and of screening the population at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for the identification and differentiation of individuals at risk. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination to assess risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models, which were assessed for their usefulness in identifying patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model was presented. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (for 1 to 10 dysplastic naevi the OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi the OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were only present in melanoma patients and thus
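
    One common way to turn logistic-regression output into an additive risk score is to award each present factor ln(OR) points, i.e. its regression coefficient. The odds ratios below are those quoted in the abstract; treating the MRS as a plain sum of log-odds points (and any cut-off applied to it) is our assumption, since the abstract does not give the score's construction.

```python
import math

# Odds ratios quoted in the abstract (reference categories omitted there).
ODDS_RATIOS = {
    "sometimes_used_sunbeds": 4.018,
    "severe_solar_damage": 8.274,
    "light_brown_or_blond_hair": 3.222,
    "over_100_common_naevi": 3.57,
    "1_to_10_dysplastic_naevi": 2.672,
    "over_10_dysplastic_naevi": 6.487,
}

def risk_score(present_factors):
    """Sum of log-odds-ratio points for the risk factors present."""
    return sum(math.log(ODDS_RATIOS[f]) for f in present_factors)

low = risk_score(["light_brown_or_blond_hair"])
high = risk_score(["severe_solar_damage", "over_10_dysplastic_naevi"])
```

    Because the points are log odds ratios, summing them corresponds to multiplying the odds, which is exactly how independent terms combine in the fitted LR model.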

  12. Code package "SVECHA": Modeling of core degradation phenomena at severe accidents

    SciTech Connect

    Veshchunov, M.S.; Kisselev, A.E.; Palagin, A.V.

    1995-09-01

The code package SVECHA for the modeling of in-vessel core degradation (CD) phenomena in severe accidents is being developed at the Nuclear Safety Institute, Russian Academy of Sciences (NSI RAS). The code package provides a detailed mechanistic description of the phenomenology of severe accidents in a reactor core. The modules of the package were developed and validated against separate-effect test data. These modules were then successfully implemented in the ICARE2 code and validated against a wide range of integral tests. Validation results have shown good agreement with separate-effect test data and with the integral tests CORA-W1/W2, CORA-13, and PHEBUS-B9+.

  13. Multilevel modelling for the regional effect of enforcement on road accidents.

    PubMed

    Yannis, George; Papadimitriou, Eleonora; Antoniou, Constantinos

    2007-07-01

This paper investigates the effect of the intensification of police enforcement on the number of road accidents at the national and regional levels in Greece, focusing on one of the most important road safety violations: drinking and driving. Multilevel negative binomial models are developed to describe the effect of the intensification of alcohol enforcement on the reduction of road accidents in different regions of Greece. Moreover, two approaches to regional clustering are explored: the first is an ad hoc geographical clustering, and the second is based on the results of mathematical cluster analysis using demographic, transport and road safety characteristics. Results indicate that there are significant spatial dependences between road accidents and enforcement. Additionally, it is shown that these dependences are more efficiently interpreted when regions are determined on the basis of qualitative similarities rather than geographical adjacency.
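
    The negative binomial accident-count model underlying both this record and the crossing model in this collection can be sketched as follows: a log-linear mean driven by covariates, and an NB2 probability mass function with an overdispersion parameter. The coefficient values and signs are illustrative assumptions, not the fitted Greek estimates.

```python
import math

def nb_pmf(y, mu, alpha):
    """NB2 probability of observing y accidents given mean mu and
    dispersion alpha, computed in log space to avoid overflow."""
    r = 1.0 / alpha
    p = r / (r + mu)
    log_pmf = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(p) + y * math.log(1 - p))
    return math.exp(log_pmf)

def expected_accidents(enforcement_intensity, beta0=-0.5, beta1=-0.2):
    """Log-linear mean; negative beta1 encodes the assumed direction that
    more enforcement lowers expected accident counts."""
    return math.exp(beta0 + beta1 * enforcement_intensity)

mu = expected_accidents(enforcement_intensity=2.0)
total = sum(nb_pmf(y, mu, alpha=0.8) for y in range(200))
```

    The multilevel aspect of the paper amounts to letting coefficients such as `beta1` vary by region; the single-level sketch above is the building block each region shares.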

  14. Computer program predicts thermal and flow transients experienced in a reactor loss- of-flow accident

    NASA Technical Reports Server (NTRS)

    Hale, C. J.

    1967-01-01

The program analyzes the consequences of a loss-of-flow accident in the primary cooling system of a heterogeneous light-water-moderated and -cooled nuclear reactor. It produces a 36 x 41 (x, y) temperature matrix that includes fuel surface temperatures relative to the time at which pump power was lost.

  15. Traffic accidents in a cellular automaton model with a speed limit zone

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Yang, Xian-qing; Sun, Da-peng; Qiu, Kang; Xia, Hui

    2006-07-01

In this paper, we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a speed limit zone. Numerical results show that in the deterministic NS model Pac is determined by the maximum speed v'max of the speed limit zone but is independent of the zone's length Lv. In the nondeterministic NS model, however, Pac is determined not only by the maximum speed v'max but also by the length Lv. In the low-density region, Pac increases with the maximum speed of the speed limit zone but decreases with the length of the zone. In the case of v'max = 1, however, Pac increases with the length in the low-density region but decreases in the interval between the low-density and high-density regions. The speed limit zone also causes an inhomogeneous distribution of car accidents over the road. Theoretical analyses agree with the numerical results in the nondeterministic NS model with v'max = 1 and vmax = 5.

  16. Using meteorological ensembles for atmospheric dispersion modelling of the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Korsakissok, Irène; Mallet, Vivien; Mathieu, Anne; Sekiyama, Thomas; Didier, Damien; Kajino, Mizuo; Igarashi, Yasuhito; Adachi, Kouji

    2016-04-01

Dispersion models are used in response to an accidental atmospheric release of radionuclides to inform mitigation actions and to complement field measurements in the assessment of short- and long-term environmental and health impacts. However, the predictions of these models are subject to large uncertainties, especially in input data such as meteorological fields or the source term. This is still the case more than four years after the Fukushima disaster (Korsakissok et al., 2012; Girard et al., 2014). In the framework of the SAKURA project, an MRI-IRSN collaboration, a meteorological ensemble of 20 members designed by MRI (Sekiyama et al., 2013) was used with IRSN's atmospheric dispersion models. Another ensemble, retrieved from ECMWF and comprising 50 members, was also used for comparison. The MRI ensemble is 3-hour assimilated, with a 3-kilometer resolution, and was designed to reduce the meteorological uncertainty in the Fukushima case. The ECMWF ensemble is a 24-hour forecast on a coarser grid, representative of the uncertainty of the data available in a crisis context. First, it was necessary to assess the quality of the ensembles for our purpose, to ensure that their spread was representative of the uncertainty of the meteorological fields. Meteorological observations allowed us to characterize the ensembles' spread with tools such as Talagrand diagrams. Then, the uncertainty was propagated through atmospheric dispersion models. The underlying question is whether the output spread is larger than the input spread, that is, whether small uncertainties in meteorological fields can produce large differences in atmospheric dispersion results. Here again, the use of field observations was crucial in order to characterize the spread of the ensemble of atmospheric dispersion simulations. In the case of the Fukushima accident, gamma dose rates, air activities and deposition data were available. Based on these data, selection criteria for the ensemble members were

  17. A dynamic model to estimate the activity concentration and whole body dose rate of marine biota as consequences of a nuclear accident.

    PubMed

    Keum, Dong-Kwon; Jun, In; Kim, Byeong-Ho; Lim, Kwang-Muk; Choi, Yong-Ho

    2015-02-01

This paper describes a dynamic compartment model (K-BIOTA-DYN-M) to assess the activity concentration and whole-body dose rate of marine biota following a nuclear accident. The model considers the transport of radioactivity between marine biota through the food chain, and applies a first-order kinetic model for the sedimentation of radionuclides from seawater onto sediment. A set of ordinary differential equations representing the model is solved simultaneously to calculate the activity concentrations of the biota and the sediment, and subsequently the dose rates, given the seawater activity concentration. The model was applied to investigate the long-term effect of the Fukushima nuclear accident on marine biota using (131)I, (134)Cs, and (137)Cs activity concentrations of seawater measured for up to about 2.5 years after the accident at two locations in the port of the Fukushima Daiichi Nuclear Power Station (FDNPS), which was the most highly contaminated area. The predicted results showed that the accumulated dose for the 3 months after the accident was about 4-4.5 Gy, indicating the possibility of an acute radiation effect in the early phase after the Fukushima accident; however, the total dose rate for most organisms studied was usually below the UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation) benchmark level for chronic exposure, except in the initial phase of the accident, suggesting a very limited radiological effect on the marine biota at the population level. The sediment Cs activity predicted by the first-order kinetic model for sedimentation was in good agreement with the measured activity concentration. By varying the ecological parameter values, the present model was able to predict the widely scattered (137)Cs activity concentrations of fishes measured in the port of FDNPS. Conclusively, the present dynamic model can be usefully applied to estimate the activity concentration and whole
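
    The first-order kinetic seawater-to-sediment transfer described above can be sketched as a single ODE, dC_sed/dt = k·C_water − λ·C_sed, integrated with a plain Euler step. The transfer rate constant and the constant-seawater simplification are illustrative assumptions; only the Cs-137 decay constant is a physical value.

```python
import math

# Cs-137 physical decay constant (half-life 30.17 y), 1/s.
LAMBDA_CS137 = math.log(2) / (30.17 * 365.25 * 86400)

def sediment_activity(c_water, k_sed=1e-7, days=365, dt=3600.0):
    """Euler-integrate dC_sed/dt = k_sed * C_water - lambda * C_sed,
    with the seawater activity c_water held constant for the sketch
    (units left schematic)."""
    c_sed = 0.0
    steps = int(days * 86400 / dt)
    for _ in range(steps):
        c_sed += (k_sed * c_water - LAMBDA_CS137 * c_sed) * dt
    return c_sed
```

    On accident timescales the decay term is nearly negligible for Cs-137, so the sediment inventory grows almost linearly with the seawater exposure, which is why the sediment compartment retains a record of the release long after the water column clears.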

  18. MELCOR analysis of the TMI-2 accident

    SciTech Connect

    Boucheron, E.A.

    1990-01-01

    This paper describes the analysis of the Three Mile Island-2 (TMI-2) standard problem that was performed with MELCOR. The MELCOR computer code is being developed by Sandia National Laboratories for the Nuclear Regulatory Commission for the purpose of analyzing severe accidents in nuclear power plants. The primary role of MELCOR is to provide realistic predictions of severe accident phenomena and the radiological source term. The analysis of the TMI-2 standard problem allowed for comparison of the model predictions in MELCOR to plant data and to the results of more mechanistic analyses. This exercise was, therefore, valuable for verifying and assessing the models in the code. The major trends in the TMI-2 accident are reasonably well predicted with MELCOR, even with its simplified modeling. Comparison of the calculated and measured results is presented and, based on this comparison, conclusions can be drawn concerning the applicability of MELCOR to severe accident analysis. 5 refs., 10 figs., 3 tabs.

  19. Input-output model for MACCS nuclear accident impacts estimation

    SciTech Connect

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N.

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
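
    The Input-Output logic behind this kind of estimation can be sketched with the Leontief relation x = (I - A)^(-1) d. The technical coefficients and demand figures below are a toy three-sector economy, not REAcct data:

```python
import numpy as np

# Leontief Input-Output sketch: total output x satisfies x = A x + d,
# so x = (I - A)^(-1) d. Coefficients and demands are a toy economy,
# not REAcct data.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.10]])   # technical coefficients
d = np.array([100.0, 50.0, 80.0])    # final demand by sector
x = np.linalg.solve(np.eye(3) - A, d)

# A disruption removing 10% of sector 0's final demand: the total
# output loss exceeds the direct demand loss (10 units) because the
# shock propagates through inter-sector purchases.
d_hit = d.copy()
d_hit[0] *= 0.9
loss = x - np.linalg.solve(np.eye(3) - A, d_hit)
```

    The excess of the total loss over the direct loss is the multiplier effect that an Input-Output model captures and a purely direct-loss accounting misses.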

  20. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  1. ON PREDICTION AND MODEL VALIDATION

    SciTech Connect

    M. MCKAY; R. BECKMAN; K. CAMPBELL

    2001-02-01

    Quantification of prediction uncertainty is an important consideration when using mathematical models of physical systems. This paper proposes a way to incorporate ''validation data'' in a methodology for quantifying uncertainty of the mathematical predictions. The report outlines a theoretical framework.

  2. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
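
    The two-competing-random-variables idea reduces to estimating the probability that performance time exceeds phenomenological time. A Monte Carlo sketch follows; the lognormal distributions and parameters are illustrative assumptions, not values from the MNR study:

```python
import numpy as np

# Reliability-physics sketch: a human error occurs when the operators'
# performance time exceeds the phenomenological time available. The
# lognormal distributions and parameters are illustrative assumptions.
rng = np.random.default_rng(42)
n = 100_000
phenomenological = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)  # minutes available
performance = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=n)       # minutes needed

hep = np.mean(performance > phenomenological)  # human error probability
```

    In the actual study the phenomenological-time distribution came from thermal-hydraulic simulation and goodness-of-fit tests, and the performance-time distribution from operator interviews; only the comparison step is shown here.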

  3. Accident Sequence Precursor Program Large Early Release Frequency Model Development

    SciTech Connect

    Brown, T.D.; Brownson, D.A.; Duran, F.A.; Gregory, J.J.; Rodrick, E.G.

    1999-01-04

    The objective of the ASP large early release frequency (LERF) model development work is to build a Level 2 containment response model that captures all of the events necessary to define LERF as outlined in Regulatory Guide 1.174, can be directly interfaced with the existing Level 1 models, is technically correct, can be readily modified to incorporate new information or to represent another plant, and can be executed in SAPHIRE. The ASP LERF models being developed will meet these objectives while providing the NRC with the capability to independently assess the risk impact of plant-specific changes proposed by the utilities that change the nuclear power plants' licensing basis. Together with the ASP Level 1 models, the ASP LERF models provide the NRC with the capability of performing equipment and event assessments to determine their impact on a plant's LERF for internal events during power operation. In addition, the ASP LERF models are capable of being updated to reflect changes in information regarding the system operations and phenomenological events, and of being updated to assess the potential for early fatalities for each LERF sequence. As the ASP Level 1 models evolve to include more analysis capabilities, the LERF models will also be refined to reflect the appropriate level of detail needed to demonstrate the new capabilities. An approach was formulated for the development of detailed LERF models using the NUREG-1150 APET models as a guide. The modifications to the SAPHIRE computer code have allowed the development of these detailed models and the ability to analyze these models in a reasonable time. Ten reference LERF plant models, including six PWR models and four BWR models, which cover a wide variety of containment and nuclear steam supply system designs, will be complete in 1999. These reference models will be used as the starting point for developing the LERF models for the remaining nuclear power plants.

  4. Prediction models in cancer care.

    PubMed

    Vickers, Andrew J

    2011-01-01

    Prediction is ubiquitous across the spectrum of cancer care from screening to hospice. Indeed, oncology is often primarily a prediction problem; many of the early stage cancers cause no symptoms, and treatment is recommended because of a prediction that tumor progression would ultimately threaten a patient's quality of life or survival. Recent years have seen attempts to formalize risk prediction in cancer care. In place of qualitative and implicit prediction algorithms, such as cancer stage, researchers have developed statistical prediction tools that provide a quantitative estimate of the probability of a specific event for an individual patient. Prediction models generally have greater accuracy than reliance on stage or risk groupings, can incorporate novel predictors such as genomic data, and can be used more rationally to make treatment decisions. Several prediction models are now widely used in clinical practice, including the Gail model for breast cancer incidence or the Adjuvant! Online prediction model for breast cancer recurrence. Given the burgeoning complexity of diagnostic and prognostic information, there is simply no realistic alternative to incorporating multiple variables into a single prediction model. As such, the question should not be whether but how prediction models should be used to aid decision-making. Key issues will be integration of models into the electronic health record and more careful evaluation of models, particularly with respect to their effects on clinical outcomes.

  5. MODELING OF 2LIBH4 PLUS MGH2 HYDROGEN STORAGE SYSTEM ACCIDENT SCENARIOS USING EMPIRICAL AND THEORETICAL THERMODYNAMICS

    SciTech Connect

    James, C.; Tamburello, D.; Gray, J.; Brinkman, K.; Hardy, B.; Anton, D.

    2009-04-01

    It is important to understand and quantify the potential risk resulting from accidental environmental exposure of condensed-phase hydrogen storage materials under differing exposure scenarios. This paper describes a modeling and experimental study with the aim of predicting consequences of the accidental release of 2LiBH{sub 4}+MgH{sub 2} from hydrogen storage systems. The methodology and results developed in this work are directly applicable to any solid hydride material and/or accident scenario using appropriate boundary conditions and empirical data. The ability to predict hydride behavior for hypothesized accident scenarios facilitates an assessment of the risk associated with the utilization of a particular hydride. To this end, an idealized finite volume model was developed to represent the behavior of dispersed hydride from a breached system. Semiempirical thermodynamic calculations and substantiating calorimetric experiments were performed in order to quantify the energy released, the energy release rates, and the reaction products resulting from water and air exposure of a lithium borohydride and magnesium hydride combination. The hydrides, LiBH{sub 4} and MgH{sub 2}, were studied individually in the as-received form and in the 2:1 'destabilized' mixture. Liquid water hydrolysis reactions were performed in a Calvet calorimeter equipped with a mixing cell using neutral water. Water vapor and oxygen gas phase reactivity measurements were performed at varying relative humidities and temperatures by modifying the calorimeter and utilizing a gas circulating flow cell apparatus. The results of these calorimetric measurements were compared with standardized United Nations (UN) based test results for air and water reactivity and used to develop quantitative kinetic expressions for hydrolysis and air oxidation in these systems.
Thermodynamic parameters obtained from these tests were then inputted into a computational fluid dynamics model to

  6. Underreporting in traffic accident data, bias in parameters and the structure of injury severity models.

    PubMed

    Yamamoto, Toshiyuki; Hashiji, Junpei; Shankar, Venkataraman N

    2008-07-01

    Injury severities in traffic accidents are usually recorded on ordinal scales, and statistical models have been applied to investigate the effects of driver factors, vehicle characteristics, road geometrics and environmental conditions on injury severity. The unknown parameters in the models are in general estimated assuming random sampling from the population. Traffic accident data, however, suffer from underreporting effects, especially for lower injury severities. As a result, traffic accident data can be regarded as outcome-based samples with unknown population shares of the injury severities. An outcome-based sample over-represents accidents of higher severities, and consequently yields biased parameter estimates that skew our inferences on the effect of key safety variables such as safety belt usage. The pseudo-likelihood function for the case with unknown population shares, which is the same as the conditional maximum likelihood for the case with known population shares, is applied in this study to examine the effects of severity underreporting on the parameter estimates. Sequential binary probit models and ordered-response probit models of injury severity are developed and compared in this study. Sequential binary probit models assume that the factors determining the severity change according to the level of the severity itself, while ordered-response probit models assume that the same factors correlate across all levels of severity. Estimation results suggest that the sequential binary probit models outperform the ordered-response probit models, and that the coefficient estimates for lap and shoulder belt use are biased if underreporting is not considered. Mean parameter bias due to underreporting can be significant. The findings show that underreporting on the outcome dimension may induce bias in inferences on a variety of factors.
In particular, if underreporting is not accounted for, the marginal impacts of a variety of factors appear
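
    The outcome-based sampling effect described above can be illustrated with a toy simulation; the true severity shares and severity-dependent reporting probabilities below are hypothetical:

```python
import numpy as np

# Toy illustration of outcome-based sampling: with severity-dependent
# reporting probabilities (hypothetical values), the reported sample
# over-represents severe accidents relative to the true population.
rng = np.random.default_rng(0)
true_shares = np.array([0.70, 0.25, 0.05])   # no injury / injury / fatal
report_prob = np.array([0.30, 0.80, 1.00])   # underreporting worst at low severity

severity = rng.choice(3, size=100_000, p=true_shares)
reported = severity[rng.random(severity.size) < report_prob[severity]]
reported_shares = np.bincount(reported, minlength=3) / reported.size
```

    A model fitted to the reported sample as if it were a random sample would treat the inflated share of severe outcomes as real, which is the parameter bias the pseudo-likelihood correction addresses.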

  7. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
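
    The AR-derived feature described above can be sketched as follows: fit a 5th-order AR model to one signal window by least squares and average the magnitudes of the resulting poles. The synthetic AR(1) test signal here stands in for real SEMG data:

```python
import numpy as np

# Sketch of the AR-pole feature: least-squares fit of a 5th-order AR
# model to a signal window, then the mean magnitude of its poles.
def ar_pole_mean_magnitude(x, order=5):
    # Fit x[t] = a_1 x[t-1] + ... + a_p x[t-p] by least squares
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    # Poles are the roots of z^p - a_1 z^(p-1) - ... - a_p
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.mean(np.abs(poles))

# Synthetic stable AR(1) signal standing in for an SEMG window.
rng = np.random.default_rng(3)
noise = rng.standard_normal(2000)
x = np.empty(2000)
x[0] = noise[0]
for t in range(1, 2000):
    x[t] = 0.8 * x[t - 1] + noise[t]
m = ar_pole_mean_magnitude(x)
```

    For a stationary signal the fitted poles lie inside the unit circle; the study tracked how this mean magnitude evolves over successive exercise repetitions.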

  8. A flammability and combustion model for integrated accident analysis. [Advanced light water reactors

    SciTech Connect

    Plys, M.G.; Astleford, R.D.; Epstein, M.

    1988-01-01

    A model for flammability characteristics and combustion of hydrogen and carbon monoxide mixtures is presented for application to severe accident analysis of Advanced Light Water Reactors (ALWR's). Flammability of general mixtures for thermodynamic conditions anticipated during a severe accident is quantified with a new correlation technique applied to data for several fuel and inertant mixtures and using accepted methods for combining these data. Combustion behavior is quantified by a mechanistic model consisting of a continuity and momentum balance for the burned gases, and considering an uncertainty parameter to match the idealized process to experiment. Benchmarks against experiment demonstrate the validity of this approach for a single recommended value of the flame flux multiplier parameter. The models presented here are equally applicable to analysis of current LWR's. 21 refs., 16 figs., 6 tabs.

  9. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    SciTech Connect

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and, the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from ''Case 5'' of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. ''Case 5'' of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to ''effects of the numerical calculation method rather than the physical model''[IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. 
The participants should read this document

  10. A Time Series Model for Assessing the Trend and Forecasting the Road Traffic Accident Mortality

    PubMed Central

    Yousefzadeh-Chabok, Shahrokh; Ranjbar-Taklimie, Fatemeh; Malekpouri, Reza; Razzaghi, Alireza

    2016-01-01

    Background: Road traffic accident (RTA) is one of the main causes of trauma and a growing public health concern worldwide, especially in developing countries. Assessing the trend of fatalities in past years and forecasting it enables us to make appropriate planning for prevention and control. Objectives: This study aimed to assess the trend of RTAs and forecast it for the coming years by using time series modeling. Materials and Methods: In this historical analytical study, the RTA mortalities in Zanjan Province, Iran, were evaluated during 2007 - 2013. Time series analyses, including Box-Jenkins models, were used to assess the trend of accident fatalities in previous years and forecast it for the next 4 years. Results: The mean age of the victims was 37.22 years (SD = 20.01). From a total of 2571 deaths, 77.5% (n = 1992) were males and 22.5% (n = 579) were females. The study models showed a descending trend of fatalities in the study years. The SARIMA(1,1,3)(0,1,0)12 model was recognized as the best-fit model for forecasting the trend of fatalities. The forecasting model also showed a descending trend of traffic accident mortalities in the next 4 years. Conclusions: There was a decreasing trend in the study and the future years. It seems that implementation of some interventions in the recent decade has had a positive effect on the decline of RTA fatalities. Nevertheless, there is still a need to pay more attention in order to prevent the occurrence of and the mortalities related to traffic accidents. PMID:27800467

  11. A Time Series Model for Assessing the Trend and Forecasting the Road Traffic Accident Mortality.

    PubMed

    Yousefzadeh-Chabok, Shahrokh; Ranjbar-Taklimie, Fatemeh; Malekpouri, Reza; Razzaghi, Alireza

    2016-09-01

    Road traffic accident (RTA) is one of the main causes of trauma and a growing public health concern worldwide, especially in developing countries. Assessing the trend of fatalities in past years and forecasting it enables us to make appropriate planning for prevention and control. This study aimed to assess the trend of RTAs and forecast it for the coming years by using time series modeling. In this historical analytical study, the RTA mortalities in Zanjan Province, Iran, were evaluated during 2007 - 2013. Time series analyses, including Box-Jenkins models, were used to assess the trend of accident fatalities in previous years and forecast it for the next 4 years. The mean age of the victims was 37.22 years (SD = 20.01). From a total of 2571 deaths, 77.5% (n = 1992) were males and 22.5% (n = 579) were females. The study models showed a descending trend of fatalities in the study years. The SARIMA(1,1,3)(0,1,0)12 model was recognized as the best-fit model for forecasting the trend of fatalities. The forecasting model also showed a descending trend of traffic accident mortalities in the next 4 years. There was a decreasing trend in the study and the future years. It seems that implementation of some interventions in the recent decade has had a positive effect on the decline of RTA fatalities. Nevertheless, there is still a need to pay more attention in order to prevent the occurrence of and the mortalities related to traffic accidents.
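
    The (d=1, D=1, s=12) part of a SARIMA(1,1,3)(0,1,0)12 specification applies one lag-12 seasonal difference and one regular difference before the ARMA terms are fitted. The differencing step can be sketched on a synthetic monthly series (not the Zanjan data):

```python
import numpy as np

# Differencing step of SARIMA(1,1,3)(0,1,0)12: seasonal (lag-12)
# difference followed by a regular first difference.
def sarima_difference(y, season=12):
    seasonal = y[season:] - y[:-season]  # (1 - B^12) y
    return np.diff(seasonal)             # (1 - B)(1 - B^12) y

# Synthetic series: linear trend plus an exact period-12 pattern.
t = np.arange(60)
pattern = np.array([1, -1, 0, 2, 0, -2, 1, 0, -1, 0, 1, -1], dtype=float)
y = 2.0 + 0.5 * t + np.tile(pattern, 5)
d = sarima_difference(y)  # trend and seasonality both removed
```

    For this idealized series the doubly differenced output is identically zero; on real data the residual series is what the ARMA(1,3) terms then model.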

  12. Time series count data models: an empirical application to traffic accidents.

    PubMed

    Quddus, Mohammed A

    2008-09-01

    Count data are primarily categorised as cross-sectional, time series, and panel. Over the past decade, Poisson and Negative Binomial (NB) models have been used widely to analyse cross-sectional and time series count data, and random effect and fixed effect Poisson and NB models have been used to analyse panel count data. However, recent literature suggests that although the underlying distributional assumptions of these models are appropriate for cross-sectional count data, they are not capable of taking into account the effect of serial correlation often found in pure time series count data. Real-valued time series models, such as the autoregressive integrated moving average (ARIMA) model introduced by Box and Jenkins, have been used in many applications over the last few decades. However, when modelling non-negative integer-valued data such as traffic accidents at a junction over time, Box and Jenkins models may be inappropriate. This is mainly due to the normality assumption of errors in the ARIMA model. Over the last few years, a new class of time series models, known as integer-valued autoregressive (INAR) Poisson models, has been studied by many authors. This class of models is particularly applicable to the analysis of time series count data as these models hold the properties of Poisson regression, are able to deal with serial correlation, and therefore offer an alternative to the real-valued time series models. The primary objective of this paper is to introduce the class of INAR models for the time series analysis of traffic accidents in Great Britain. Different types of time series count data are considered: aggregated time series data where both the spatial and temporal units of observation are relatively large (e.g., Great Britain and years) and disaggregated time series data where both the spatial and temporal units are relatively small (e.g., congestion charging zone and months).
The performance of the INAR models is compared with the class of Box and
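
    An INAR(1) Poisson process replaces the real-valued autoregression with binomial thinning: X_t = α ∘ X_{t-1} + ε_t, where ε_t ~ Poisson(λ). A minimal simulation sketch, with illustrative parameters rather than estimates from the British accident data:

```python
import numpy as np

# INAR(1) Poisson sketch: X_t = alpha ∘ X_{t-1} + e_t, where ∘ is
# binomial thinning and e_t ~ Poisson(lam). Parameters are illustrative.
def simulate_inar1(alpha=0.5, lam=3.0, n=5000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))  # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning of last count
        x[t] = survivors + rng.poisson(lam)        # plus new Poisson arrivals
    return x

counts = simulate_inar1()
```

    The process stays integer-valued by construction, with stationary mean λ/(1-α) and lag-1 autocorrelation α, which is why it suits serially correlated accident counts better than a Gaussian ARIMA.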

  13. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    SciTech Connect

    Collin, Blaise Paul

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  14. Modelling of human alarm handling response times: a case study of the Ladbroke Grove rail accident in the UK.

    PubMed

    Stanton, Neville A; Baber, Christopher

    2008-04-01

    The purpose of the paper was to address the timeliness of the signaller's intervention in the Ladbroke Grove rail incident in the UK, as well as to consider the utility of human performance time modelling more generally. Human performance response time modelling is a critical area for Human Factors and Ergonomics research. This research applied two approaches to the same problem to see if they arrived at the same conclusion. The first modelling approach used the alarm initiated activity (AIA) model. This approach is useful for indicating general response times in emergency events, but it cannot comment in detail on any specific case. The second modelling approach employed a multi-modal critical path analysis (CPA) technique. The advantage of the latter approach is that it can be used to model a specific incident on the basis of the known factors from the accident inquiry. The results show that the AIA model produced an estimated response time of 17 s, whereas the CPA model produced an estimated response time of 19 s. This compares with the actual response time of the signaller of 18 s. The response time data from both approaches are concordant and suggest that the signaller's response time in the Ladbroke Grove rail accident was reasonable. This research has application to the modelling of human responses to emergency events in all domains. Rather than the forensic reconstruction approach used in this paper, the models could be used in a predictive manner to anticipate how long human operators of safety-critical systems might take to respond in emergency scenarios.
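
    Multi-modal CPA of this kind reduces to a longest-path computation over the task precedence network; the tasks and durations below are hypothetical stand-ins, not the Ladbroke Grove CPA values:

```python
# Critical path sketch: earliest-finish propagation over a task network.
# Durations (seconds) and precedences are hypothetical examples.
def critical_path_duration(durations, predecessors):
    """Tasks must be listed in topological order; returns the longest path."""
    finish = {}
    for task, dur in durations.items():
        start = max((finish[p] for p in predecessors.get(task, [])), default=0.0)
        finish[task] = start + dur
    return max(finish.values())

# Perceive the alarm, then check the display and reach for the control
# in parallel, then respond.
durations = {"perceive": 3.0, "check": 6.0, "reach": 4.0, "respond": 8.0}
preds = {"check": ["perceive"], "reach": ["perceive"], "respond": ["check", "reach"]}
total = critical_path_duration(durations, preds)
```

    Parallel activities in different modalities contribute only their longest branch to the total, which is how CPA produces a single predicted response time from a set of overlapping perceptual, cognitive, and motor tasks.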

  15. Modeling the early-phase redistribution of radiocesium fallouts in an evergreen coniferous forest after Chernobyl and Fukushima accidents.

    PubMed

    Calmon, P; Gonze, M-A; Mourlon, Ch

    2015-10-01

    Following the Chernobyl accident, the scientific community gained numerous data on the transfer of radiocesium in European forest ecosystems, including information regarding the short-term redistribution of atmospheric fallout onto forest canopies. In the course of international programs, the French Institute for Radiological Protection and Nuclear Safety (IRSN) developed a forest model, named TREE4 (Transfer of Radionuclides and External Exposure in FORest systems), 15 years ago. Recently published papers on a Japanese evergreen coniferous forest contaminated by Fukushima radiocesium fallout provide interesting and quantitative data on radioactive mass fluxes measured within the forest in the months following the accident. The present study determined whether the approach adopted in the TREE4 model provides satisfactory results for Japanese forests or whether it requires adjustments. This study focused on the interception of airborne radiocesium by the forest canopy, and the subsequent transfer to the forest floor through processes such as litterfall, throughfall, and stemflow, in the months following the accident. We demonstrated that TREE4 quite satisfactorily predicted the interception fraction (20%) and the canopy-to-soil transfer (70% of the total deposit in 5 months) in the Tochigi forest. These dynamics were similar to those observed in the Höglwald spruce forest. However, the unexpectedly high contribution of litterfall (31% in 5 months) in the Tochigi forest could not be reproduced in our simulations (2.5%). Possible reasons for this discrepancy are discussed, and the sensitivity of the results to uncertainty in deposition conditions was analyzed.

  16. Real-time EEG-based detection of fatigue driving danger for accident prediction.

    PubMed

    Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun

    2015-03-01

    This paper proposes a real-time electroencephalogram (EEG)-based method for detecting potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect fatigue danger, the neural mechanism of driver fatigue was analyzed. Functional brain networks were employed to track the impact of fatigue on the processing capacity of the brain. The results show that the overall functional connectivity of the subjects is weakened after long driving tasks, a regularity summarized as the fatigue convergence phenomenon. Based on this phenomenon, we combined the input and global synchronizations of the brain to calculate the residual information processing capacity of the brain and so obtain the dangerous points in real time. Finally, the danger detection system for driver fatigue based on this neural mechanism was validated using accident EEG. The time distributions of the danger points output by the system agree well with those of the real accident points.

  17. Accident investigation

    NASA Technical Reports Server (NTRS)

    Brunstein, A. I.

    1979-01-01

    Aircraft accident investigations are discussed with emphasis on those accidents that involved weather as a contributing factor. The organization of the accident investigation board for air carrier accidents is described along with the hearings, and formal report preparation. Statistical summaries of the investigations of general aviation accidents are provided.

  18. Manned space flight nuclear system safety. Volume 3: Reactor system preliminary nuclear safety analysis. Part 2: Accident Model Document (AMD)

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Accident Model Document is one of three documents of the Preliminary Safety Analysis Report (PSAR) - Reactor System as applied to a Space Base Program. Potential terrestrial nuclear hazards involving the zirconium hydride reactor-Brayton power module are identified for all phases of the Space Base program. The accidents/events that give rise to the hazards are defined, and abort sequence trees are developed to determine the sequence of events leading to each hazard and the associated probabilities of occurrence. Source terms are calculated to determine the magnitude of the hazards. These data are used in the mission accident analysis to determine the most probable and significant accidents/events in each mission phase. The only significant hazards during the prelaunch and launch ascent phases of the mission are those which arise from criticality accidents. Fission product inventories during this time period were found to be very low due to very limited low-power acceptance testing.

  19. A crash-prediction model for multilane roads.

    PubMed

    Caliendo, Ciro; Guida, Maurizio; Parisi, Alessandra

    2007-07-01

    Considerable research has been carried out in recent years to establish relationships between crashes and traffic flow, geometric infrastructure characteristics and environmental factors for two-lane rural roads. Crash-prediction models focused on multilane rural roads, however, have rarely been investigated. In addition, most research has paid little attention to the safety effects of variables such as stopping sight distance and pavement surface characteristics. Moreover, the statistical approaches have generally included Poisson and Negative Binomial regression models, whilst the Negative Multinomial regression model has been used to a lesser extent. Finally, as far as the authors are aware, prediction models involving all the above-mentioned factors had not previously been developed in Italy for multilane roads such as motorways. Thus, in this paper crash-prediction models for a four-lane median-divided Italian motorway were set up on the basis of accident data observed during a 5-year monitoring period from 1999 to 2003. The Poisson, Negative Binomial and Negative Multinomial regression models, applied separately to tangents and curves, were used to model the frequency of accident occurrence. Model parameters were estimated by the Maximum Likelihood Method, and the Generalized Likelihood Ratio Test was applied to detect the significant variables to be included in the model equation. Goodness-of-fit was measured by means of both the explained fraction of total variation and the explained fraction of systematic variation. The Cumulative Residuals Method was also used to test the adequacy of a regression model throughout the range of each variable. The candidate set of explanatory variables was: length (L), curvature (1/R), annual average daily traffic (AADT), sight distance (SD), side friction coefficient (SFC), longitudinal slope (LS) and the presence of a junction (J). Separate prediction models for total crashes and for fatal and injury crashes were also developed.
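
    As a minimal sketch of this family of models, a Poisson regression with a log link can be fitted by Newton-Raphson on simulated section data; the covariates stand in for standardized log-AADT and curvature, and the coefficients are illustrative assumptions, not the paper's estimates (which also include Negative Binomial and Negative Multinomial forms).

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Poisson regression with a log link, fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)             # expected accident frequency
        score = X.T @ (y - mu)            # gradient of the log-likelihood
        fisher = X.T @ (X * mu[:, None])  # expected information matrix
        beta = beta + np.linalg.solve(fisher, score)
    return beta

# Simulated road sections with two standardized covariates
# (stand-ins for log-AADT and curvature; coefficients are illustrative).
rng = np.random.default_rng(42)
n = 2000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x1, x2])
true_beta = np.array([0.2, 0.5, -0.3])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = fit_poisson_glm(X, y)
```

    The Negative Binomial variant adds a dispersion parameter to absorb the over-dispersion typical of crash counts; the fitting loop is otherwise similar.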

  20. Phase-Change Modelling in Severe Nuclear Accidents

    NASA Astrophysics Data System (ADS)

    Pain, Christopher; Pavlidis, Dimitrios; Xie, Zhihua; Percival, James; Gomes, Jefferson; Matar, Omar; Moatamedi, Moji; Tehrani, Ali; Jones, Alan; Smith, Paul

    2014-11-01

    This paper describes progress on a consistent approach for multi-phase flow modelling with phase change. Although the developed methods are general purpose, the applications presented here cover core melt phenomena at the lower vessel head, including corium pool formation, coolability and solidification. With respect to external cooling, a comparison with the LIVE experiments (from Karlsruhe) is undertaken. Preliminary re-flooding simulation results are also presented; these include water injection into porous media (debris bed) and boiling. The numerical simulations follow IRSN's PEARL experimental programme on quenching/re-flooding. The authors wish to thank Prof. Timothy Haste of IRSN. Dr. D. Pavlidis is funded by EPSRC Consortium ``Computational Modelling for Advanced Nuclear Plants,'' Grant Number EP/I003010/1.

  1. Initial VHTR accident scenario classification: models and data.

    SciTech Connect

    Vilim, R. B.; Feldman, E. E.; Pointer, W. D.; Wei, T. Y. C.; Nuclear Engineering Division

    2005-09-30

    Nuclear systems codes are being prepared for use as computational tools for conducting performance/safety analyses of the Very High Temperature Reactor. The thermal-hydraulic codes are RELAP5/ATHENA for one-dimensional systems modeling and FLUENT and/or Star-CD for three-dimensional modeling. We describe a formal qualification framework, the development of Phenomena Identification and Ranking Tables (PIRTs), the initial filtering of the experiment databases, and a preliminary screening of these codes for use in the performance/safety analyses. In the second year of this project we focused on the development of PIRTs. Two events that result in maximum fuel and vessel temperatures, the Pressurized Conduction Cooldown (PCC) event and the Depressurized Conduction Cooldown (DCC) event, were selected for PIRT generation. A third event that may result in significant thermal stresses, the Load Change event, was also selected for PIRT generation. Gas reactor design experience and engineering judgment were used to identify the important phenomena in the primary system for these events. Sensitivity calculations performed with the RELAP5 code were used as an aid to rank the phenomena in order of importance with respect to the approach of the plant response to safety limits. The overall code qualification methodology was illustrated by focusing on the Reactor Cavity Cooling System (RCCS). The mixed convection mode of heat transfer and pressure drop was identified as an important phenomenon for RCCS operation. Scaling studies showed that the mixed convection mode is likely to occur in the RCCS air duct during normal operation and during conduction cooldown events. The RELAP5/ATHENA code was found not to treat the mixed convection regime adequately. Readying the code will require adding models for the turbulent mixed convection regime while possibly performing new experiments for the laminar mixed convection regime.
Candidate correlations for the turbulent

  2. Simulation Modeling Requirements for Loss-of-Control Accident Prevention of Turboprop Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Crider, Dennis; Foster, John V.

    2012-01-01

    In-flight loss of control remains the leading contributor to aviation accident fatalities, with stall upsets being the leading causal factor. The February 12, 2009, Colgan Air, Inc., Continental Express flight 3407 accident outside Buffalo, New York, brought this issue to the forefront of public consciousness and resulted in recommendations from the National Transportation Safety Board to conduct training that incorporates fully developed stalls and to develop simulator standards to support such training. In 2010, Congress responded to this accident with Public Law 111-216 (Section 208), which mandates full stall training for Part 121 flight operations. Efforts are currently in progress to develop recommendations on the implementation of stall training for airline pilots. The International Committee on Aviation Training in Extended Envelopes (ICATEE) is currently defining the simulator fidelity standards that will be necessary for effective stall training. These recommendations will apply to all civil transport aircraft, including straight-wing turboprop aircraft. Government-funded research over the previous decade provides a strong foundation for stall/post-stall simulation of swept-wing, conventional-tail jets in response to this mandate, but turboprops present additional and unique modeling challenges. First among these is the effect of power, which can provide enhanced flow attachment behind the propellers. Furthermore, turboprops tend to operate for longer periods in an environment more susceptible to icing. As a result, a significant number of turboprop accidents have resulted from early (lower angle of attack) stalls in icing. The vulnerability of turboprop configurations to icing has led to studies on ice accumulation and its effects on flight behavior. Piloted simulations of these effects have highlighted important training needs for the recognition and mitigation of icing effects, including the reduction of stall margins.

  3. Categorical and dimensional study of the predictive factors of the development of a psychotrauma in victims of car accidents.

    PubMed

    Berna, G; Vaiva, G; Ducrocq, F; Duhem, S; Nandrino, J L

    2012-01-01

    This study aimed to evaluate the predictive factors of the emergence of complete PTSD and subsyndromal PTSD (defined as exposure to a traumatic event with at least one psychopathological impact, such as hyperarousal, avoidance or persistent re-experiencing) following a motor vehicle accident (MVA). We recruited 155 adult MVA patients, physically injured and admitted to a trauma service, over two years. In the week following the accident, patients were asked to complete questionnaires assessing their social situation (sex, age, marital and employment status, prior MVA or trauma), comorbidity (MINI), distress (PDI) and dissociation (PDEQ) experienced during and immediately after the trauma. An evaluation using the CAPS was conducted six months after the trauma to assess possible PTSD. At six months, 25.8% of the participants had developed subsyndromal symptoms and 7.74% had developed complete PTSD. The three symptoms that best discriminated the groups were dysphoric emotion, perceived life threat and dissociation. Logistic regression results showed that the strongest predictor of PTSD was the perceived life threat. In addition, a dimensional approach to the results revealed significant correlations between (1) peritraumatic distress and persistent re-experiencing or hyperarousal and (2) dissociation score and avoidance strategy. The presence of a prior traumatic event reinforces avoidance strategies. Our results stress that peritraumatic factors (especially the perception of a life threat) are good predictors of PTSD development. A dimensional perspective allows better identification of psychological complications following an MVA. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Light-Weight Radioisotope Heater Unit final safety analysis report (LWRHU-FSAR): Volume 2: Accident Model Document (AMD)

    SciTech Connect

    Johnson, E.W.

    1988-10-01

    The purpose of this volume of the LWRHU SAR, the Accident Model Document (AMD), are to: Identify all malfunctions, both singular and multiple, which can occur during the complete mission profile that could lead to release outside the clad of the radioisotopic material contained therein; Provide estimates of occurrence probabilities associated with these various accidents; Evaluate the response of the LWRHU (or its components) to the resultant accident environments; and Associate the potential event history with test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  5. A Complex Network Model for Analyzing Railway Accidents Based on the Maximal Information Coefficient

    NASA Astrophysics Data System (ADS)

    Shao, Fu-Bo; Li, Ke-Ping

    2016-10-01

    Identifying influential factors is a key issue in railway accident analysis. In this paper, employing the maximal information coefficient (MIC), a good measure of dependence for two-variable relationships that can capture a wide range of associations, a complex network model for railway accident analysis is designed in which nodes denote factors of railway accidents and an edge connects two factors whose MIC value is greater than or equal to a dependence criterion. The variation of the network structure is studied: as the dependence criterion increases, the network approaches an approximately scale-free structure. Moreover, the proposed network is used to identify important influencing factors; the annual track density-gross tonnage factor is found to be an important factor, being a cut vertex when the dependence criterion equals 0.3. The network also indicates that railway development is unbalanced across different states, which is consistent with observed fact. Supported by the Fundamental Research Funds for the Central Universities under Grant No. 2016YJS087, the National Natural Science Foundation of China under Grant No. U1434209, and the Research Foundation of State Key Laboratory of Railway Traffic Control and Safety, Beijing Jiaotong University under Grant No. RCS2016ZJ001
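
    The construction can be sketched as follows. Since MIC needs a dedicated estimator, the absolute Pearson correlation is used here as a stand-in dependence measure, and the factor names and data are hypothetical; the cut-vertex check mirrors the paper's identification of critical factors.

```python
import numpy as np
from itertools import combinations

def dependence_network(series, threshold):
    """Link two factors when their dependence reaches the criterion.
    |Pearson r| stands in for MIC in this sketch."""
    names = list(series)
    edges = set()
    for a, b in combinations(names, 2):
        r = abs(np.corrcoef(series[a], series[b])[0, 1])
        if r >= threshold:
            edges.add((a, b))
    return names, edges

def cut_vertices(nodes, edges):
    """Nodes whose removal disconnects the graph (brute-force check)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def n_components(skip):
        seen, comps = set(), 0
        for start in nodes:
            if start == skip or start in seen:
                continue
            comps += 1
            stack = [start]
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                stack.extend(v for v in adj[u] if v != skip and v not in seen)
        return comps

    base = n_components(None)
    return {n for n in nodes if n_components(n) > base}

# Hypothetical factor data: 'traffic_density' bridges two clusters of factors.
rng = np.random.default_rng(0)
c1, c2 = rng.standard_normal(3000), rng.standard_normal(3000)
noise = lambda: 0.3 * rng.standard_normal(3000)
series = {
    'track_geometry': c1 + noise(),
    'maintenance': c1 + noise(),
    'traffic_density': c1 + c2 + noise(),
    'signalling': c2 + noise(),
    'human_factors': c2 + noise(),
}
nodes, edges = dependence_network(series, threshold=0.5)
```

    For this data cut_vertices(nodes, edges) returns the bridging factor alone: removing it splits the network into two components, which is exactly how a cut vertex flags a structurally important factor.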

  6. Melanoma Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Radionuclides from the Fukushima accident in the air over Lithuania: measurement and modelling approaches.

    PubMed

    Lujanienė, G; Byčenkienė, S; Povinec, P P; Gera, M

    2012-12-01

    Analyses of (131)I, (137)Cs and (134)Cs in airborne aerosols were carried out in daily samples in Vilnius, Lithuania after the Fukushima accident during the period of March-April, 2011. The activity concentrations of (131)I and (137)Cs ranged from 12 μBq/m(3) and 1.4 μBq/m(3) to 3700 μBq/m(3) and 1040 μBq/m(3), respectively. The activity concentration of (239,240)Pu in one aerosol sample collected from 23 March to 15 April, 2011 was found to be 44.5 nBq/m(3). The two maxima found in radionuclide concentrations were related to complicated long-range air mass transport from Japan across the Pacific, the North America and the Atlantic Ocean to Central Europe as indicated by modelling. HYSPLIT backward trajectories and meteorological data were applied for interpretation of activity variations of measured radionuclides observed at the site of investigation. (7)Be and (212)Pb activity concentrations and their ratios were used as tracers of vertical transport of air masses. Fukushima data were compared with the data obtained during the Chernobyl accident and in the post Chernobyl period. The activity concentrations of (131)I and (137)Cs were found to be by 4 orders of magnitude lower as compared to the Chernobyl accident. The activity ratio of (134)Cs/(137)Cs was around 1 with small variations only. The activity ratio of (238)Pu/(239,240)Pu in the aerosol sample was 1.2, indicating a presence of the spent fuel of different origin than that of the Chernobyl accident. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    SciTech Connect

    Kao, S.P.; Chang, S.K.; Huang, H.C.

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity to simulate adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small-and large-break loss-of-coolant accidents (LOCA). Detailed comparisons in the phenomena of reflux-condensation, phase separation, and two-phase natural circulation are discussed.

  9. Predictive models of moth development

    USDA-ARS?s Scientific Manuscript database

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
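
    The core of a degree-day model is a running accumulation of heat above a base temperature. The sketch below uses the simple daily-average method; the base temperature and the degree-day requirement are hypothetical placeholders, not values for Acrobasis vaccinii.

```python
def daily_degree_days(t_min, t_max, base):
    """Simple-average degree-days for one day, floored at the base temperature."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def predict_event_day(daily_temps, base, required_dd):
    """First day on which accumulated degree-days reach the development threshold."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_temps, start=1):
        total += daily_degree_days(t_min, t_max, base)
        if total >= required_dd:
            return day
    return None
```

    Feeding in a season of forecast or observed min/max temperatures gives the predicted date of a life-stage transition, which is what lets growers time scouting and treatment.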

  10. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    PubMed

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for out signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  11. Effort test failure: toward a predictive model.

    PubMed

    Webb, James W; Batchelor, Jennifer; Meares, Susanne; Taylor, Alan; Marsh, Nigel V

    2012-01-01

    Predictors of effort test failure were examined in an archival sample of 555 traumatically brain-injured (TBI) adults. Logistic regression models were used to examine whether compensation-seeking, injury-related, psychological, demographic, and cultural factors predicted effort test failure (ETF). ETF was significantly associated with compensation-seeking (OR: 3.51, 95% CI [1.25, 9.79]), low education (OR: 0.83 [0.74, 0.94]), self-reported mood disorder (OR: 5.53 [3.10, 9.85]), exaggerated displays of behavior (OR: 5.84 [2.15, 15.84]), psychotic illness (OR: 12.86 [3.21, 51.44]), being foreign-born (OR: 5.10 [2.35, 11.06]), having sustained a workplace accident (OR: 4.60 [2.40, 8.81]), and mild traumatic brain injury severity compared with very severe traumatic brain injury severity (OR: 0.37 [0.13, 0.995]). ETF was associated with a broader range of statistical predictors than has previously been identified, and the relative importance of psychological and behavioral predictors of ETF was evident in the logistic regression model. Variables that might potentially extend the model of ETF are identified for future research efforts.
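
    The odds ratios above are exponentiated logistic-regression coefficients, with confidence limits built from the coefficient standard errors. The sketch below back-derives an illustrative standard error from the reported compensation-seeking interval; it reproduces the arithmetic only, not the study's fitted model.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Reported: compensation-seeking OR = 3.51, 95% CI [1.25, 9.79].
beta = math.log(3.51)
se = (math.log(9.79) - math.log(1.25)) / (2 * 1.96)  # back-derived, illustrative
or_, lo, hi = odds_ratio_ci(beta, se)
```

    Note that the interval is symmetric on the log scale, which is why reported CIs around an OR look skewed on the natural scale.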

  12. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  14. Proton Fluence Prediction Models

    NASA Technical Reports Server (NTRS)

    Feynman, Joan

    1996-01-01

    Many spacecraft anomalies are caused by positively charged high energy particles impinging on the vehicle and its component parts. Here we review the current knowledge of the interplanetary particle environment in the energy ranges that are most important for these effects, 10 to 100 MeV/amu. The emphasis is on the particle environment at 1 AU. State-of-the-art engineering models are briefly described along with comments on the future work required in this field.

  15. Predictive Modeling in Race Walking

    PubMed Central

    Wiktorowicz, Krzysztof; Przednowek, Krzysztof; Lassota, Lesław; Krzeszowski, Tomasz

    2015-01-01

    This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, as it eliminates some of the predictors. PMID:26339230
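
    The model-selection step can be sketched with ordinary least squares and leave-one-out cross-validation on synthetic data; the single "training load" feature and the quadratic ground truth are assumptions for illustration, not the race-walking dataset.

```python
import numpy as np

def loocv_mse(X, y):
    """Leave-one-out cross-validated mean squared error of a least-squares fit."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i               # hold out observation i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append(float(y[i] - X[i] @ beta) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(7)
load = rng.uniform(0.0, 1.0, 60)               # hypothetical training-load feature
result = 2.0 + 1.5 * load + 3.0 * load ** 2 + rng.normal(0.0, 0.1, 60)
X_linear = np.column_stack([np.ones(60), load])
X_quadratic = np.column_stack([np.ones(60), load, load ** 2])
```

    Here loocv_mse(X_quadratic, result) comes out smaller than loocv_mse(X_linear, result), which is the criterion by which a model with quadratic terms would be preferred.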

  16. Analysis of 320 coal mine accidents using structural equation modeling with unsafe conditions of the rules and regulations as exogenous variables.

    PubMed

    Zhang, Yingyu; Shao, Wei; Zhang, Mengjia; Li, Hejun; Yin, Shijiu; Xu, Yingjun

    2016-07-01

    Mining has historically been considered a high-risk industry worldwide. Deaths caused by coal mine accidents exceed the sum of those from all other types of accidents in China. Statistics on 320 coal mine accidents in Shandong province show that all accidents contain indicators of "unsafe conditions of the rules and regulations," with a frequency of 1590, accounting for 74.3% of the total frequency of 2140. "Unsafe behaviors of the operator" is another important contributory factor, which mainly includes "operator error" and "venturing into dangerous places." A systems analysis approach was applied by using structural equation modeling (SEM) to examine the interactions between the contributory factors of coal mine accidents. The analysis leads to three conclusions. (i) "Unsafe conditions of the rules and regulations" affect the "unsafe behaviors of the operator," "unsafe conditions of the equipment," and "unsafe conditions of the environment." (ii) The three influencing factors of coal mine accidents (with the frequency of effect relations in descending order) are "lack of safety education and training," "rules and regulations of safety production responsibility," and "rules and regulations of supervision and inspection." (iii) The three influenced factors (with the frequency in descending order) of coal mine accidents are "venturing into dangerous places," "poor workplace environment," and "operator error."

  17. Application of a predictive Bayesian model to environmental accounting.

    PubMed

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
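
    A minimal sketch of a predictive Bayesian treatment of contingent accident costs: a Gamma prior on the accident rate is updated with a historical count record, and the posterior-predictive cost distribution is sampled directly (the quantity of interest, not its parameters). The accident history, the prior, and the lognormal severity parameters are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

history = np.array([0, 1, 0, 0, 2, 0, 1, 0, 0, 1])  # accidents/year, illustrative

# Gamma(a0, b0) prior on the annual accident rate; conjugate Poisson update.
a0, b0 = 0.5, 1.0
a_post = a0 + history.sum()
b_post = b0 + len(history)

# Posterior-predictive simulation of next year's total accident cost:
# draw a rate, draw a count, then draw lognormal severities (parameters assumed).
n_draws = 20000
rates = rng.gamma(a_post, 1.0 / b_post, n_draws)
counts = rng.poisson(rates)
costs = np.array([rng.lognormal(mean=11.0, sigma=1.0, size=k).sum()
                  for k in counts])
expected_cost = costs.mean()
p_exceed = float((costs > 500_000.0).mean())
```

    Risk measures such as the expected annual cost or the exceedance probability p_exceed can then feed a decision like the transformer phase-out discussed above.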

  18. Predictive models of battle dynamics

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2001-09-01

    The application of control and game theories to improve battle planning and execution requires models, which allow military strategists and commanders to reliably predict the expected outcomes of various alternatives over a long horizon into the future. We have developed probabilistic battle dynamics models, whose building blocks in the form of Markov chains are derived from the first principles, and applied them successfully in the design of the Model Predictive Task Commander package. This paper introduces basic concepts of our modeling approach and explains the probability distributions needed to compute the transition probabilities of the Markov chains.
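
    The building blocks can be sketched as a discrete-time Markov chain over engagement states; the four states and the transition matrix below are illustrative assumptions, not the Model Predictive Task Commander's actual model.

```python
import numpy as np

# Illustrative engagement states: 0 = contested, 1 = Blue advantage,
# 2 = Red advantage, 3 = battle decided (absorbing).
P = np.array([[0.70, 0.15, 0.10, 0.05],
              [0.10, 0.60, 0.05, 0.25],
              [0.10, 0.05, 0.60, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

def predict_distribution(p0, P, steps):
    """Propagate a state distribution `steps` transitions into the future."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p

p10 = predict_distribution([1.0, 0.0, 0.0, 0.0], P, 10)
```

    Long-horizon outcome probabilities such as p10[3] (the chance the battle is decided within ten steps) are the kind of prediction a planner would compare across alternative courses of action.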

  19. A model for the release, dispersion and environmental impact of a postulated reactor accident from a submerged commercial nuclear power plant

    NASA Astrophysics Data System (ADS)

    Bertch, Timothy Creston

    1998-12-01

    Nuclear power plants are inherently suitable for submerged applications and could provide power to the shore power grid or support future underwater applications. The technology exists today and the construction of a submerged commercial nuclear power plant may become desirable. A submerged reactor is safer for humans because the effectively unlimited supply of water for heat removal, particulate retention in the water column, sedimentation to the ocean floor and inherent shielding by the aquatic environment would significantly mitigate the effects of a reactor accident. A better understanding of reactor operation in this new environment is required to quantify the radioecological impact and to determine the suitability of this concept. The impact of release to the environment from a severe reactor accident is a new aspect of the field of marine radioecology. Current efforts have been centered on radioecological impacts of nuclear waste disposal, nuclear weapons testing fallout and shore nuclear plant discharges. This dissertation examines the environmental impact of a severe reactor accident in a submerged commercial nuclear power plant, modeling a postulated site on the Atlantic continental shelf adjacent to the United States. This effort models the effects of geography, decay, particle transport/dispersion, bioaccumulation and elimination with associated dose commitment. The use of a source term equivalent to the release from Chernobyl allows comparison between the impacts of that accident and the postulated submerged commercial reactor plant accident. All input parameters are evaluated using sensitivity analysis. The effect of the release on marine biota is determined. Study of the pathways to humans from gaseous radionuclides, consumption of contaminated marine biota and direct exposure as contaminated water reaches the shoreline is conducted.
The model developed by this effort predicts a significant mitigation of the radioecological impact of the reactor accident release.

  20. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    SciTech Connect

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
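
    The rapid-assessment style of calculation can be sketched with a ground-level, centreline Gaussian plume dilution factor evaluated at IRDAM's six fixed distances; the power-law dispersion coefficients below are illustrative placeholders, not the parameterisation actually used by IRDAM.

```python
import math

def chi_over_q(x_m, u=1.0, ay=0.08, az=0.06):
    """Ground-level centreline dilution factor chi/Q (s/m^3) for a ground
    release; sigma_y and sigma_z use simple power-law fits (illustrative)."""
    sigma_y = ay * x_m ** 0.9
    sigma_z = az * x_m ** 0.85
    return 1.0 / (math.pi * sigma_y * sigma_z * u)

# The six fixed downwind distances used by IRDAM (metres).
distances = [500, 1000, 2000, 5000, 10000, 20000]
dilution = {x: chi_over_q(x) for x in distances}
```

    Multiplying chi/Q by a release rate and dose-conversion factors for the noble gas and radioiodine mix would yield whole-body and thyroid dose estimates of the kind the tool reports.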

  1. A radioecological model for thyroid dose reconstruction of the Belarus population following the Chernobyl accident.

    PubMed

    Kruk, J E; Pröhl, G; Kenigsberg, J I

    2004-07-01

    A radioecological model was developed to estimate thyroid exposures of the Belarus population following the Chernobyl accident. The input of the model includes an extensive data set of the (137)Cs activity per unit area deposited during the Chernobyl accident, the rainfall data for different regions of Belarus, the (131)I/(137)Cs ratio in the deposit and the start of the grazing period in Belarus in April/May 1986. The output of the model is the age-dependent thyroid exposure due to the intake of (131)I with fresh milk. Age-dependent average thyroid doses were assessed for selected regions of Belarus. The maximum thyroid doses were estimated for the inhabitants of Gomel oblast where the highest deposition was observed among the regions considered here. The lowest doses were estimated for Vitebsk oblast with the lowest level of depositions. The mean exposures for the oblasts of Grodno, Minsk, Mogilev and Brest were very similar. The results were compared with estimations of thyroid exposure that were based on (131)I measurements in human thyroids, and they are in good agreement. The model may be used for the assessment of thyroid doses in Belarus for areas where no (131)I measurements are available.
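
    The milk-pathway dose calculation at the heart of such a model can be sketched as a first-order decline of (131)I in milk integrated over the consumption period. The 8.02-day physical half-life is standard; the weathering half-life, initial milk concentration, intake rate and the age-dependent dose coefficients are assumed, order-of-magnitude values only.

```python
import math

T_PHYS = 8.02    # I-131 physical half-life, days
T_WEATH = 10.0   # pasture weathering half-life, days (assumed)
LAM = math.log(2.0) * (1.0 / T_PHYS + 1.0 / T_WEATH)  # effective rate, 1/day

def thyroid_dose(c0_milk, litres_per_day, dose_coeff, days=60):
    """Thyroid dose (Sv) from fresh-milk intake with first-order decline.
    c0_milk in Bq/L, dose_coeff in Sv/Bq (age dependent)."""
    time_integral = c0_milk * (1.0 - math.exp(-LAM * days)) / LAM  # Bq*day/L
    return time_integral * litres_per_day * dose_coeff

# Illustrative age-dependent ingestion dose coefficients (Sv/Bq to thyroid).
dose_child = thyroid_dose(1000.0, 0.5, 3.7e-6)
dose_adult = thyroid_dose(1000.0, 0.5, 4.3e-7)
```

    The roughly order-of-magnitude gap between the child and adult coefficients is why age dependence dominates assessments of the Chernobyl thyroid burden.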

  2. Model aids cuttings transport prediction

    SciTech Connect

    Gavignet, A.A. ); Sobey, I.J. )

    1989-09-01

    Drilling of highly deviated wells can be complicated by the formation of a thick bed of cuttings at low flow rates. The model proposed in this paper shows what mechanisms control the thickness of such a bed, and the model predictions are compared with experimental results.

  3. A re-parameterisation of the Power Model of the relationship between the speed of traffic and the number of accidents and accident victims.

    PubMed

    Elvik, Rune

    2013-01-01

    This paper presents a re-analysis of the Power Model of the relationship between the mean speed of traffic and road safety. Past evaluations of the model, most recently in 2009, have broadly speaking supported it. However, the most recent evaluation of the model indicated that the relationship between speed and road safety depends not only on the relative change in speed, as suggested by the Power Model, but also on initial speed. This implies that the exponent describing, for example, a 25% reduction in speed will not be the same when speed changes from 100km/h to 75km/h as it will when speed changes from 20km/h to 15km/h. This paper reports an analysis leading to a re-parameterisation of the Power Model in terms of continuously varying exponents which depend on initial speed. The re-parameterisation was accomplished by fitting exponential functions to data points in which changes in speed and accidents were sorted in groups of 10km/h according to initial speed, starting with data points referring to the highest initial speeds. The exponential functions fitted the data extremely well and imply that the effect on accidents of a given relative change in speed is largest when initial speed is highest. Copyright © 2012 Elsevier Ltd. All rights reserved.
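    The re-parameterisation can be sketched as follows; the exponential form follows the abstract, but the coefficients below are illustrative placeholders, not Elvik's fitted values:

```python
import math

def accident_ratio(v_before, v_after, a=0.5, b=0.015):
    """Re-parameterised Power Model sketch.

    Classic Power Model: accidents_after / accidents_before
        = (v_after / v_before) ** exponent, with a fixed exponent.
    The re-parameterisation lets the exponent vary continuously with
    initial speed, here via an exponential function (a, b hypothetical).
    """
    exponent = a * math.exp(b * v_before)
    return (v_after / v_before) ** exponent
```

    With this form, the same 25% speed reduction yields a larger estimated accident reduction when initial speed is 100 km/h than when it is 20 km/h, as the abstract describes.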

  4. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    PubMed

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis.

  5. Validation of the long-term exposure pathway models in the NRC's accident consequence code MACCS

    SciTech Connect

    Tveten, U. )

    1992-01-01

    The task described in this paper was performed for the U.S. Nuclear Regulatory Commission. The chronic exposure pathway models implemented in the MELCOR Accident Consequence Code System (MACCS) were compared to post-Chernobyl data from various sources, though mainly from Norway, for verification or identification of areas for possible improvement. The reason for choosing data from Norway for this purpose is partly that Chernobyl fallout levels in Norway are higher than in any other country in western Europe and partly that Norway has been deeply involved in many different types of experiments examining the behavior of radioactive materials in the environment since the early 1960s.

  6. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Owing to the extremely difficult on-site situation, it has not been possible to determine the emissions of radioactive material directly. However, during the following days and weeks radionuclides of 137-Caesium and 131-Iodine (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to the location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, the time-dependent source-term for the fourteen days following the accident has been estimated using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
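    The source-term estimation step can be viewed as a linear inverse problem: transport-model runs with unit releases in each time interval yield a source-receptor matrix, and the observed concentrations are inverted for the release rates. A minimal sketch using NumPy; the matrix construction and any regularisation used in the actual study are omitted:

```python
import numpy as np

def estimate_release_rates(source_receptor, observed_conc):
    """Solve observed ≈ source_receptor @ release for time-dependent
    release rates in a least-squares sense. Rows of source_receptor are
    samples, columns are release intervals. Negative solutions are
    clipped to zero as a crude stand-in for a proper non-negativity
    constraint."""
    release, *_ = np.linalg.lstsq(source_receptor, observed_conc, rcond=None)
    return np.clip(release, 0.0, None)
```

    In practice the source-receptor matrix is far from identity and the problem is ill-conditioned, which is why dense observation time series and careful regularisation matter.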

  7. Computational modeling of the accident in the fourth power-generating unit of the Chernobyl nuclear power plant

    SciTech Connect

    Podlazov, L.N.; Trekhov, V.E.; Cherkashov, Y.M.

    1995-02-01

    During the accident in the fourth power-generating unit of the Chernobyl nuclear power plant, complicated spatially distributed processes (neutron-physical, thermohydrodynamic, chemical, and thermomechanical) interacted and became intertwined. This has made it difficult to model the accident numerically and has made international collaboration in this field urgent. As a result, specialists in three different countries performed a series of methodological investigations of the effect of different factors on the positive reactivity arising from the insertion of the safety and control rods. These works confirmed that the positive reactivity is highly sensitive to the state of the core prior to the accident, and they substantiated the need to reproduce the preliminary initial conditions in detail during computational modeling of the first phase of the accident. The first stage of a combined comprehensive computational analysis of the Chernobyl accident consisted of quasistatic estimates of the positive reactivity according to the DINA and CITATION codes. The results of the reconstruction of the three-dimensional neutron fields, based on information recorded approximately 2 minutes prior to the accident by the SKALA system, were used as the initial information for constructing the pre-accident state of the reactor.

  8. Bayesian spatial and ecological models for small-area accident and injury analysis.

    PubMed

    MacNab, Ying C

    2004-11-01

    In this article, recently developed Bayesian spatial and ecological regression models are applied to analyse small-area variation in accident and injury. This study serves to demonstrate how Bayesian modelling techniques can be implemented to assess potential risk factors measured at group (e.g. area) level. Presented here is a unified modelling framework that enables thorough investigations into associations between injury rates and regional characteristics, residual variation and spatial autocorrelation. Using hospital separation data for 83 local health areas in British Columbia (BC), Canada, in 1990-1999, we explore and examine ecological/contextual determinants of motor vehicle accident injury (MVAI) among male children and youth aged 0-24 and for those of six age groups (<1, 1-4, 5-9, 10-14, 15-19 and 20-24). Eighteen local health area characteristics are studied. They include a broad spectrum of socio-economic indicators, residential environment indicators (roads and parks), medical services availability and utilisation, population health, proportion of recent immigrants, crime rates, rates of speeding charge and rates of seatbelt violation. Our study indicates a large regional variation in MVAI in males aged 0-24 in British Columbia, Canada, in 1990-1999, and that adjusting for appropriate risk factors eliminates nearly all the variation observed. Socio-economic influence on MVAI was profoundly apparent in young males of all ages with the injury being more common in communities of lower socio-economic status. High adult male crime rates were significantly associated with high injury rates of boys aged 1-14. Seatbelt violations and excess speeding charges were found to be positively associated with the injury rates of young men aged 20-24. This and similar ecological studies shed light on reasons for regional variations in accident occurrence as well as in the resulting injuries and hospital utilisation. Thereby they are potentially useful in identifying

  9. Mesoscale modelling of radioactive contamination formation in Ukraine caused by the Chernobyl accident.

    PubMed

    Talerko, Nikolai

    2005-01-01

    This work is devoted to the reconstruction of time-dependent radioactive contamination fields in the territory of Ukraine in the initial period of the Chernobyl accident using the model of atmospheric transport LEDI (Lagrangian-Eulerian DIffusion model). The modelling results were compared with available 137Cs air and ground contamination measurement data. The 137Cs atmospheric transport over the territory of Ukraine was simulated during the first 12 days after the accident (from 26 April to 7 May 1986) using real aerological information and rain measurement network data. A detailed scenario of the release from the damaged unit of the Chernobyl nuclear plant was built (including time-dependent radioactivity release intensity and time-varied height of the release). The calculations made it possible to explain the main features of spatial and temporal variations of radioactive contamination fields over the territory of Ukraine on the regional scale, including the formation of the major large-scale spots of radioactive contamination caused by dry and wet deposition.

  10. Low-power and shutdown models for the accident sequence precursor (ASP) program

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.

    1997-02-01

    The US Nuclear Regulatory Commission (NRC) has been using full-power, Level 1, limited-scope risk models for the Accident Sequence Precursor (ASP) program for over fifteen years. These models have evolved and matured over the years, as have probabilistic risk assessment (PRA) and computer technologies. Significant upgrading activities have been undertaken over the past three years, with involvement from the Offices of Nuclear Reactor Regulation (NRR), Analysis and Evaluation of Operational Data (AEOD), and Nuclear Regulatory Research (RES), and several national laboratories. Part of these activities was an RES-sponsored feasibility study investigating the ability to extend the ASP models to include contributors to core damage from events initiated with the reactor at low power or shutdown (LP/SD), both internal events and external events. This paper presents only the LP/SD internal event modeling efforts.

  11. Model simulations of the radioactive material plumes in the Fukushima nuclear power station accident

    NASA Astrophysics Data System (ADS)

    Nakajima, Teruyuki; Goto, Daisuke; Morino, Yu; Misawa, Shota; Tsuruta, Haruo; Uchida, Junya; Takemura, Toshihiko; Ohara, Toshimasa; Oura, Yasuji; Ebihara, Mitsuru; Satoh, Masaki

    2017-04-01

    We present a comparison of model simulations and observations depicting the atmospheric transport of the 137Cs emitted from the Fukushima Daiichi Nuclear Power Station accident. This method employs a combination of the results of two aerosol model ensembles and the hourly observed atmospheric 137Cs concentration during 14-23 March 2011 at 90 sites in the Suspended Particulate Matter monitoring network. The result elucidates accurate transport routes and the distribution of the surface-level atmospheric 137Cs relevant to eight plume events that were previously identified. The model ensemble simulates the main features of the observed distribution of surface-level atmospheric 137Cs. However, significant differences were found in some cases. Through the analysis we discuss the important processes controlling the characteristic shape and movement of each plume. We also report the status of the 2nd international model intercomparison, which is in progress.

  12. Hydrometeorological model for streamflow prediction

    USGS Publications Warehouse

    Tangborn, Wendell V.

    1979-01-01

    The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers, in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominantly as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model, a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
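    The core bookkeeping the abstract describes—storage approximately equals accumulated basin precipitation less observed runoff, followed by a regression of seasonal flow on that storage—can be sketched in a few lines. The regression coefficients below are hypothetical, not the fitted values for the Skokomish, Nisqually, or Cowlitz basins:

```python
def basin_storage(daily_precip_mm, daily_runoff_mm):
    """Absolute water-storage estimate: accumulated basin precipitation
    minus accumulated observed runoff (both as basin-average depths in mm)."""
    return sum(daily_precip_mm) - sum(daily_runoff_mm)

def predict_seasonal_runoff(storage_mm, intercept=50.0, slope=0.8):
    """Linear prediction of seasonal runoff from storage; intercept and
    slope are illustrative placeholders, not calibrated values."""
    return intercept + slope * storage_mm
```

    The pre-forecast test season mentioned in the abstract would, in this sketch, correspond to adjusting the storage estimate before the prediction is issued.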

  13. Predictive models of forest dynamics.

    PubMed

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  14. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; (2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; (3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; (4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and (5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  15. Modeling the impact of the components of long work hours on injuries and "accidents".

    PubMed

    Folkard, Simon; Lombardi, David A

    2006-11-01

    Many of the industrial disasters of the last few decades, including Three Mile Island, Chernobyl, Bhopal, Exxon Valdez, and the Estonia ferry, have occurred in the early hours of the morning. Follow-up investigations concluded that they were at least partially attributable to human fatigue and/or error. The potential impact of long work hours on health and safety is a major concern that has resulted in various work hour regulations. The risks of injuries and "accidents" (incidents) associated with features of work schedules from published epidemiological studies are pooled using an additive model to form a "Risk Index." The estimated risks of an incident for various standard work schedules are presented using the proposed model. The estimated risk of an injury or accident associated with any given number of weekly work hours varies substantially depending on how the work hours are comprised. The risk depends on the length and type of shift, as well as the frequency of rest breaks. We conclude that placing a limit on the risk associated with a particular work schedule is likely more effective than setting daily, weekly or monthly work hour regulations in keeping workplace safety within acceptable limits. Copyright (c) 2006 Wiley-Liss, Inc.
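    One simple way to pool relative risks additively, in the spirit of the Risk Index described above, is to sum the excess risks of each schedule feature. The exact pooling in the paper may differ, and the relative-risk values in the test below are made up:

```python
def risk_index(relative_risks):
    """Additive pooling sketch: the excess risks (RR - 1) contributed by
    each schedule feature (shift length, shift type, rest-break
    frequency, ...) are summed and added back to a baseline of 1."""
    return 1.0 + sum(rr - 1.0 for rr in relative_risks)
```

    Under this form, a schedule combining a long shift (RR 1.3), night work (RR 1.2), and infrequent breaks (RR 1.1) would carry a combined index of about 1.6 relative to a baseline schedule.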

  16. What do saliency models predict?

    PubMed Central

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  17. A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.

    PubMed

    Yu, Hongyang; Khan, Faisal; Veitch, Brian

    2017-02-28

    Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault trees and event trees (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry.
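    As a flavour of the Bayesian machinery involved—though far simpler than the paper's hierarchical model—a conjugate Gamma-Poisson update for a rare-event rate fits in a few lines. The prior parameters below are arbitrary illustrations:

```python
def gamma_poisson_update(alpha, beta, n_events, exposure):
    """A Gamma(alpha, beta) prior on an accident rate combined with a
    Poisson likelihood for n_events observed over a given exposure gives
    a Gamma(alpha + n_events, beta + exposure) posterior. Returns the
    updated parameters and the posterior mean rate."""
    a_post = alpha + n_events
    b_post = beta + exposure
    return a_post, b_post, a_post / b_post
```

    A full hierarchical model would place further priors on alpha and beta so that sparse data from one source can borrow strength from others, which is how source-to-source variability is handled.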

  18. Predictive Models of Liver Cancer

    EPA Science Inventory

    Predictive models of chemical-induced liver cancer face the challenge of bridging causative molecular mechanisms to adverse clinical outcomes. The latent sequence of intervening events from chemical insult to toxicity are poorly understood because they span multiple levels of bio...

  20. COMPARING SAFE VS. AT-RISK BEHAVIORAL DATA TO PREDICT ACCIDENTS

    SciTech Connect

    Jeffrey C. Joe

    2001-11-01

    The Safety Observations Achieve Results (SOAR) program at the Idaho National Laboratory (INL) encourages employees to perform in-field observations of each other’s behaviors. One purpose for performing these observations is that it gives the observers the opportunity to correct, if needed, their co-workers’ at-risk work practices and habits (i.e., behaviors). The underlying premise of doing this is that major injuries (e.g., OSHA-recordable events) are prevented from occurring because the lower level at-risk behaviors are identified and corrected before they can propagate into culturally accepted unsafe behaviors that result in injuries or fatalities. However, unlike other observation programs, SOAR also emphasizes positive reinforcement for safe behaviors observed. The underlying premise of doing this is that positive reinforcement of safe behaviors helps establish a strong positive safety culture. Since the SOAR program collects both safe and at-risk leading indicator data, this provides a unique opportunity to assess and compare the two kinds of data in terms of their ability to predict future adverse safety events. This paper describes the results of analyses performed on SOAR data to assess their relative predictive ability. Implications are discussed.

  1. Development and Validation of Accident Models for U3Si2

    SciTech Connect

    Gamble, Kyle Allan Lawrence; Hales, Jason Dean

    2016-07-01

    The purpose of this milestone report is to present the work completed regarding material model development for U3Si2 fuel and to highlight the results of applying these models to Reactivity Initiated Accidents (RIA), Loss of Coolant Accidents (LOCA), and Station Blackouts (SBO). With the limited experimental data available (essentially only the data used to create the models), true validation is not possible. In the absence of another alternative, code-to-code comparisons have been completed. Qualitative comparisons during postulated accident scenarios between U3Si2 and UO2 fueled rods have also been completed, demonstrating the superior performance of U3Si2.

  2. Influence of the meteorological input on the atmospheric transport modelling with FLEXPART of radionuclides from the Fukushima Daiichi nuclear accident.

    PubMed

    Arnold, D; Maurer, C; Wotawa, G; Draxler, R; Saito, K; Seibert, P

    2015-01-01

    In the present paper the role of precipitation as FLEXPART model input is investigated for one possible release scenario of the Fukushima Daiichi accident. Precipitation data from the European Centre for Medium-Range Weather Forecasts (ECMWF), NOAA's National Centers for Environmental Prediction (NCEP), the Japan Meteorological Agency's (JMA) mesoscale analysis and a JMA radar-rain gauge precipitation analysis product were utilized. The accident of Fukushima in March 2011 and the following observations enable us to assess the impact of these precipitation products, at least for this single case. As expected, the differences in the statistical scores are visible but not large. Increasing the ECMWF resolution of all the fields from 0.5° to 0.2° raises the correlation from 0.71 to 0.80 and the overall rank from 3.38 to 3.44. Substituting ECMWF precipitation, while the rest of the variables remains unmodified, with the JMA mesoscale precipitation analysis and the JMA radar-rain gauge precipitation data yields the best results on a regional scale, especially when a new and more robust wet deposition scheme is introduced. The best results are obtained with a combination of ECMWF 0.2° data with precipitation from JMA mesoscale analyses and the modified wet deposition, with a correlation of 0.83 and an overall rank of 3.58. NCEP-based results with the same source term are generally poorer, giving correlations around 0.66, comparatively large negative biases and an overall rank of 3.05 that worsens when regional precipitation data is introduced. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Applying STAMP in Accident Analysis

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Daouk, Mirna; Dulac, Nicolas; Marais, Karen

    2003-01-01

    Accident models play a critical role in accident investigation and analysis. Most traditional models are based on an underlying chain of events. These models, however, have serious limitations when used for complex, socio-technical systems. Previously, Leveson proposed a new accident model (STAMP) based on system theory. In STAMP, the basic concept is not an event but a constraint. This paper shows how STAMP can be applied to accident analysis using three different views or models of the accident process and proposes a notation for describing this process.

  4. Predicting Abraham model solvent coefficients.

    PubMed

    Bradley, Jean-Claude; Abraham, Michael H; Acree, William E; Lang, Andrew Sid

    2015-01-01

    The Abraham general solvation model can be used in a broad set of scenarios involving partitioning and solubility, yet is limited to a set of solvents with measured Abraham coefficients. Here we extend the range of applicability of Abraham's model by creating open models that can be used to predict the solvent coefficients for all organic solvents. We created open random forest models for the solvent coefficients e, s, a, b, and v that had out-of-bag R(2) values of 0.31, 0.77, 0.92, 0.47, and 0.63 respectively. The models were used to suggest sustainable solvent replacements for commonly used solvents. For example, our models predict that propylene glycol may be used as a general sustainable solvent replacement for methanol. The solvent coefficient models extend the range of applicability of the Abraham general solvation equations to all organic solvents. The models were developed under Open Notebook Science conditions, which makes them open, reproducible, and as useful as possible. Graphical abstract: Chemical space for solvents with known Abraham coefficients.
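    Once solvent coefficients are available (measured or predicted by the random forest models), the Abraham general solvation equation itself is a simple linear form, log SP = c + eE + sS + aA + bB + vV. A sketch; the coefficient and descriptor values used in the test are arbitrary, not real solvent data:

```python
def abraham_log_sp(coeffs, solute):
    """Abraham general solvation equation:
        log SP = c + e*E + s*S + a*A + b*B + v*V
    coeffs: solvent coefficients {'c','e','s','a','b','v'};
    solute: solute descriptors {'E','S','A','B','V'}."""
    return (coeffs["c"]
            + coeffs["e"] * solute["E"]
            + coeffs["s"] * solute["S"]
            + coeffs["a"] * solute["A"]
            + coeffs["b"] * solute["B"]
            + coeffs["v"] * solute["V"])
```

    The paper's contribution is supplying the coeffs dictionary for solvents that lack measured values; the equation itself is unchanged.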

  5. MODELLING OF FUEL BEHAVIOUR DURING LOSS-OF-COOLANT ACCIDENTS USING THE BISON CODE

    SciTech Connect

    Pastore, G.; Novascone, S. R.; Williamson, R. L.; Hales, J. D.; Spencer, B. W.; Stafford, D. S.

    2015-09-01

    This work presents recent developments to extend the BISON code to enable fuel performance analysis during LOCAs. This newly developed capability accounts for the main physical phenomena involved, as well as the interactions among them and with the global fuel rod thermo-mechanical analysis. Specifically, new multiphysics models are incorporated in the code to describe (1) transient fission gas behaviour, (2) rapid steam-cladding oxidation, (3) Zircaloy solid-solid phase transition, (4) hydrogen generation and transport through the cladding, and (5) Zircaloy high-temperature non-linear mechanical behaviour and failure. Basic model characteristics are described, and a demonstration BISON analysis of a LWR fuel rod undergoing a LOCA is presented. Also, as a first step of validation, the code with the new capability is applied to the simulation of experiments investigating cladding behaviour under LOCA conditions. The comparison of the results with the available experimental data on cladding failure due to burst is presented.

  6. A model for nonvolatile fission product release during reactor accident conditions

    SciTech Connect

    Lewis, B.J.; Andre, B.; Ducros, G.; Maro, D.

    1996-10-01

    An analytical model has been developed to describe the release kinetics of nonvolatile fission products (e.g., molybdenum, cerium, ruthenium, and barium) from uranium dioxide fuel under severe reactor accident conditions. This treatment considers the rate-controlling process of release in accordance with diffusional transport in the fuel matrix and fission product vaporization from the fuel surface into the surrounding gas atmosphere. The effect of the oxygen potential in the gas atmosphere on the chemical form and volatility of the fission product is considered. A correlation is also developed to account for the trapping effects of antimony and tellurium in the Zircaloy cladding. This model interprets the release behavior of fission products observed in Commissariat a l`Energie Atomique experiments conducted in the HEVA/VERCORS facility at high temperature in a hydrogen and steam atmosphere.

  7. Accident liability.

    PubMed Central

    Kuné, J B

    1985-01-01

    The idea of accident proneness, which originated in the early 1900s, has proved to be ineffectual as an operational concept. Discrete econometric methods may be useful to find out which factors are at work in the process that leads to accidents and whether there are individuals who are more liable to accidents than others. PMID:3986144

  8. Predictive Capability Maturity Model (PCMM).

    SciTech Connect

    Swiler, Laura Painton; Knupp, Patrick Michael; Urbina, Angel

    2010-10-01

    The Predictive Capability Maturity Model (PCMM) is a communication tool that must include a discussion of the supporting evidence. PCMM is a tool for managing risk in the use of modeling and simulation, in the service of organizing evidence to help tell the modeling and simulation (M&S) story. The PCMM table describes what activities within each element are undertaken at each of the levels of maturity. Target levels of maturity can be established based on the intended application. The assessment is meant to inform what level has been achieved compared to the desired level, to help prioritize the VU activities and to allocate resources.

  9. Assessment and prediction of the nearshore wave propagation in the case of mv prestige accident

    NASA Astrophysics Data System (ADS)

    Rusu, E.; Silva, R.; Pinto, J.; Rusu, L.; Soares, C.; Vitorino, J.

    2003-04-01

    As part of implementing the SWAN spectral model on the Portuguese coast of the Atlantic Ocean, an interface with both pre- and post-processing capabilities was devised. This tool, developed in the Matlab environment, allows both rapid implementation of the model in a specific area and better evaluation of its output. The breakdown of the Prestige in November 2002, close to the NW Spanish coast, was an unhappy opportunity to test the viability and effectiveness of such a system of numerical wave models for nearshore wave forecasting, as well as the utility of the computational environment developed. In this case SWAN was implemented in spherical coordinates, first on a coarse grid covering the entire coastal environment of the NW Iberian Peninsula. Inside this domain, two high-resolution areas were nested, covering the western coast (between Figueira da Foz and Santiago de Compostela) and the NW side (in the vicinity of A Coruña), respectively. For the coarse area the SWAN model was nested in WW3, with the NOGAPS wind field used as forcing. In the high-resolution simulations, where the boundary conditions were generated by the previous SWAN runs, the high-resolution Aladdin wind field (provided by the Portuguese Institute of Meteorology) was given as input, along with a current field obtained from simulations with the HOPS model. The SWAN simulations were performed in non-stationary mode, and some results were made available on the Internet on the web page of the Instituto Hidrografico, http://www.ih.marinha.pt/hidrografico/ (related to the project MOCASSIM). Finally, a systematic comparison with measurements from two buoys (Silleiro and Leixões, both located on the western coast of the Iberian Peninsula) showed generally good agreement.

  10. Retrospective dosimetry with the MAX/EGS4 exposure model for the radiological accident in Nesvizh-Belarus

    NASA Astrophysics Data System (ADS)

    Santos, A. M.; Kramer, R.; Brayner, C. A.; Khoury, H. J.; Vieira, J. W.

    2007-09-01

    On October 26, 1991 a fatal radiological accident occurred in a 60Co irradiation facility in the town of Nesvizh in Belarus. Following a jam in the product transport system, the operator entered the facility to clear the fault. On entering the irradiation room the operator bypassed a number of safety features, which prevented him from perceiving that the source rack was in the irradiation position. After the accident, average whole-body absorbed doses between 8 and 16 Gy were determined by TLD measurements, by isodose rate distributions, by biological dosimetry and by ESR measurements of clothes and teeth. In an earlier investigation the MAX/EGS4 exposure model had been used to calculate absorbed dose distributions for the radiological accident in Yanango/Peru, which represented the simulation of exposure from a point source on the surface of the body. After updating the phantom as well as the Monte Carlo code, the MAX/EGS4 exposure model was used to calculate the absorbed dose distribution for the worker involved in the radiological accident in Nesvizh/Belarus. For this purpose, the arms of the MAX phantom had to be raised above the head, and a rectangular 60Co source was designed to represent the source rack used in the irradiation facility. Average organ absorbed doses, depth-absorbed doses, the maximum absorbed dose and the average whole-body absorbed dose have been calculated and compared with the corresponding data given in the IAEA report on the accident.

  11. Regional long-term model of radioactivity dispersion and fate in the Northwestern Pacific and adjacent seas: application to the Fukushima Dai-ichi accident.

    PubMed

    Maderich, V; Bezhenar, R; Heling, R; de With, G; Jung, K T; Myoung, J G; Cho, Y-K; Qiao, F; Robertson, L

    2014-05-01

    The compartment model POSEIDON-R was modified and applied to the Northwestern Pacific and adjacent seas to simulate the transport and fate of radioactivity in the period 1945-2010, and to perform a radiological assessment on the releases of radioactivity due to the Fukushima Dai-ichi accident for the period 2011-2040. The model predicts the dispersion of radioactivity in the water column and in sediments, the transfer of radionuclides throughout the marine food web, and subsequent doses to humans due to the consumption of marine products. A generic predictive dynamic food-chain model is used instead of the biological concentration factor (BCF) approach. The radionuclide uptake model for fish has as a central feature the accumulation of radionuclides in the target tissue. The three layer structure of the water column makes it possible to describe the vertical structure of radioactivity in deep waters. In total 175 compartments cover the Northwestern Pacific, the East China and Yellow Seas and the East/Japan Sea. The model was validated from (137)Cs data for the period 1945-2010. Calculated concentrations of (137)Cs in water, bottom sediments and marine organisms in the coastal compartment, before and after the accident, are in close agreement with measurements from the Japanese agencies. The agreement for water is achieved when an additional continuous flux of 3.6 TBq y(-1) is used for underground leakage of contaminated water from the Fukushima Dai-ichi NPP, during the three years following the accident. The dynamic food web model predicts that due to the delay of the transfer throughout the food web, the concentration of (137)Cs for piscivorous fishes returns to background level only in 2016. For the year 2011, the calculated individual dose rate for Fukushima Prefecture due to consumption of fishery products is 3.6 μSv y(-1). Following the Fukushima Dai-ichi accident the collective dose due to ingestion of marine products for Japan increased in 2011 by a
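The difference between the static biological concentration factor (BCF) approach and a dynamic food-chain model, including the transfer delay that keeps piscivorous fish contaminated for years, can be illustrated with a one-compartment uptake equation. The rate constants below are hypothetical illustrations, not POSEIDON-R parameters:

```python
def fish_activity(c_water, k_uptake, k_elim, dt, steps):
    """Forward-Euler integration of dC_fish/dt = k_uptake*C_water - k_elim*C_fish.
    Unlike a static BCF, the fish concentration lags changes in water
    activity and only approaches equilibrium (k_uptake/k_elim * C_water)."""
    c_fish = 0.0
    history = []
    for _ in range(steps):
        c_fish += dt * (k_uptake * c_water - k_elim * c_fish)
        history.append(c_fish)
    return history

# Hypothetical rate constants (per day), constant water activity, one year.
hist = fish_activity(c_water=100.0, k_uptake=0.05, k_elim=0.01, dt=1.0, steps=365)
```

With these illustrative values the equilibrium is 500 units; after a year the fish compartment is still below it, which is the delay effect the abstract describes.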

  12. Nuclear fuel in a reactor accident.

    PubMed

    Burns, Peter C; Ewing, Rodney C; Navrotsky, Alexandra

    2012-03-09

    Nuclear accidents that lead to melting of a reactor core create heterogeneous materials containing hundreds of radionuclides, many with short half-lives. The long-lived fission products and transuranium elements within damaged fuel remain a concern for millennia. Currently, accurate fundamental models for the prediction of release rates of radionuclides from fuel, especially in contact with water, after an accident remain limited. Relatively little is known about fuel corrosion and radionuclide release under the extreme chemical, radiation, and thermal conditions during and subsequent to a nuclear accident. We review the current understanding of nuclear fuel interactions with the environment, including studies over the relatively narrow range of geochemical, hydrological, and radiation environments relevant to geological repository performance, and discuss priorities for research needed to develop future predictive models.

  13. Cloud diagnosis impact on deposition modelling applied to the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne

    2017-04-01

    The accident at the Fukushima Daiichi Nuclear Power Plant in Japan in March 2011 resulted in the release of several hundred PBq of activity into the environment. Most of the radioactivity was released over a period of about 40 days. Radioactivity was dispersed in the atmosphere and the ocean, and traces of radionuclides were subsequently detected all over Japan. At the Fukushima airport, for instance, a deposit as large as 36 kBq/m2 of Cs-137 was measured, resulting from atmospheric deposition of the plume. Both dry and wet deposition were probably involved, since a rain event occurred on the 15th of March while the plume was passing nearby. The accident scenario has given rise to a number of scientific investigations. Atmospheric deposition, for example, was studied by utilizing atmospheric transport models. In such models, some parameters, such as cloud diagnosis, are derived from meteorological data. This cloud diagnosis is a key issue for wet deposition modelling since it distinguishes between two processes: in-cloud scavenging, which corresponds to the collection of radioactive particles within the cloud, and below-cloud scavenging, the removal of radioactive material by falling drops. Several parametrizations of cloud diagnosis exist in the literature, using different input data such as relative humidity and liquid water content. All these diagnoses return a large range of cloud base heights and cloud top heights. In this study, computed cloud diagnostics are compared to the observations at the Fukushima airport. Atmospheric dispersion simulations at the scale of Japan are then performed utilizing the most reliable ones. Impacts on the results are discussed.
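Below-cloud scavenging of the kind distinguished above is commonly parametrized as a power law of rain intensity. The sketch below uses one literature-style fit chosen for illustration; the coefficients are not taken from this paper, and real schemes differ widely (which is precisely the sensitivity being studied):

```python
import math

def belowcloud_lambda(rain_mmh, a=8.4e-5, b=0.79):
    """Below-cloud scavenging coefficient Lambda (1/s) as a power law of
    rain intensity I (mm/h): Lambda = a * I**b.  The a, b values are one
    illustrative fit among many published parametrizations."""
    return a * rain_mmh ** b

def fraction_remaining(rain_mmh, hours):
    """Fraction of airborne activity surviving wet removal for the given
    duration, assuming a constant rain intensity under the cloud."""
    lam = belowcloud_lambda(rain_mmh)
    return math.exp(-lam * hours * 3600.0)
```

Because Lambda enters exponentially, modest differences between scheme coefficients translate into large differences in deposited activity, hence the need for sensitivity analysis.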

  14. The five-factor model and driving behavior: personality and involvement in vehicular accidents.

    PubMed

    Cellar, D F; Nelson, Z C; Yorke, C M

    2000-04-01

    Participants completed both the NEO-PI-R personality measure and measures of prior involvement in driving accidents. Significant negative correlations were found between the factor of Agreeableness and the total number of driving tickets received as well as the sum of combined at-fault accidents, not-at-fault accidents, and driving tickets received by participants. Implications and potential future directions for research are discussed.

  15. Modeling operator actions during a small break loss-of-coolant accident in a Babcock and Wilcox nuclear power plant

    SciTech Connect

    Ghan, L.S.; Ortiz, M.G.

    1991-01-01

    A small break loss-of-coolant accident (SBLOCA) in a typical Babcock and Wilcox (B&W) nuclear power plant was modeled using RELAP5/MOD3. This work was performed as part of the United States Nuclear Regulatory Commission's (USNRC) Code, Scaling, Applicability and Uncertainty (CSAU) study. The break was initiated by severing one high pressure injection (HPI) line at the cold leg. Thus, the small break was further aggravated by reduced HPI flow. Comparisons between scoping runs with minimal operator action and full operator action clearly showed that the operator plays a key role in recovering the plant. Operator actions were modeled based on the emergency operating procedures (EOPs) and the Technical Bases Document for the EOPs. The sequence of operator actions modeled here is only one of several possibilities. Different sequences of operator actions are possible for a given accident because of the subjective decisions the operator must make when determining the status of the plant and, hence, which branch of the EOP to follow. To assess the credibility of the modeled operator actions, these actions and the results of the simulated accident scenario were presented to operator examiners who are familiar with B&W nuclear power plants. They agreed that, in general, the modeled operator actions conform to the requirements set forth in the EOPs and are therefore plausible. This paper presents the method for modeling the operator actions and discusses the simulated accident scenario from the viewpoint of operator actions.

  16. Modeling operator actions during a small break loss-of-coolant accident in a Babcock and Wilcox nuclear power plant

    SciTech Connect

    Ghan, L.S.; Ortiz, M.G.

    1991-12-31

    A small break loss-of-coolant accident (SBLOCA) in a typical Babcock and Wilcox (B&W) nuclear power plant was modeled using RELAP5/MOD3. This work was performed as part of the United States Nuclear Regulatory Commission's (USNRC) Code, Scaling, Applicability and Uncertainty (CSAU) study. The break was initiated by severing one high pressure injection (HPI) line at the cold leg. Thus, the small break was further aggravated by reduced HPI flow. Comparisons between scoping runs with minimal operator action and full operator action clearly showed that the operator plays a key role in recovering the plant. Operator actions were modeled based on the emergency operating procedures (EOPs) and the Technical Bases Document for the EOPs. The sequence of operator actions modeled here is only one of several possibilities. Different sequences of operator actions are possible for a given accident because of the subjective decisions the operator must make when determining the status of the plant and, hence, which branch of the EOP to follow. To assess the credibility of the modeled operator actions, these actions and the results of the simulated accident scenario were presented to operator examiners who are familiar with B&W nuclear power plants. They agreed that, in general, the modeled operator actions conform to the requirements set forth in the EOPs and are therefore plausible. This paper presents the method for modeling the operator actions and discusses the simulated accident scenario from the viewpoint of operator actions.

  17. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid with 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid with 19 vertical levels but realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to the Atlas), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39 layers run due to the increase of

  18. Progresses in tritium accident modelling in the frame of IAEA EMRAS II

    SciTech Connect

    Galeriu, D.; Melintescu, A.

    2015-03-15

    The assessment of the environmental impact of tritium released from nuclear facilities is a topic of interest in many countries. In the IAEA's Environmental Modelling for Radiation Safety (EMRAS I) programme, progress was made for routine releases, and in the EMRAS II programme a dedicated working group (WG 7 - Tritium Accidents) focused on potential accidental releases (liquid and atmospheric pathways). The progress achieved in WG 7 was included in a complex report - a technical document of the IAEA covering the consequences of both liquid and atmospheric accidental releases. A brief description of the progress achieved in the frame of EMRAS II WG 7 is presented. Important results have been obtained concerning the washout rate, the deposition of HTO and HT on the soil, the HTO uptake by leaves and the subsequent conversion to OBT (organically bound tritium) during daylight. Further needs for process understanding and experimental effort are emphasised.

  19. Recursive modeling of loss of control in human and organizational processes: a systemic model for accident analysis.

    PubMed

    Kontogiannis, Tom; Malakis, Stathis

    2012-09-01

    A recursive model of accident investigation is proposed by exploiting earlier work in systems thinking. Safety analysts can understand better the underlying causes of decision or action flaws by probing into the patterns of breakdown in the organization of safety. For this deeper analysis, a cybernetic model of organizational factors and a control model of human processes have been integrated in this article (i.e., the viable system model and the extended control model). The joint VSM-ECOM framework has been applied to a case study to help safety practitioners with the analysis of patterns of breakdown with regard to how operators and organizations manage goal conflicts, monitor work progress, recognize weak signals, align goals across teams, and adapt plans on the fly. The recursive accident representation brings together several organizational issues (e.g., the dilemma of autonomy versus compliance, or the interaction between structure and strategy) and addresses how operators adapt to challenges in their environment by adjusting their modes of functioning and recovery. Finally, it facilitates the transfer of knowledge from diverse incidents and near misses within similar domains of practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation

    SciTech Connect

    Evans, J.S. (School of Public Health)

    1990-01-01

    This report describes dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. In addition, models are included for assessing the risks of several nonlethal early and continuing effects -- including prodromal vomiting and diarrhea, hypothyroidism and radiation thyroiditis, skin burns, reproductive effects, and pregnancy losses. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other." The category of "other" cancers is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also developed. For most cancers, both incidence and mortality are addressed. The models of cancer risk are derived largely from information summarized in BEIR III -- with some adjustment to reflect more recent studies. 64 refs., 18 figs., 46 tabs.
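A Weibull (hazard-function) dose-response of the kind recommended above for early effects can be sketched as follows. The D50 and shape values below are placeholders for illustration, not the report's fitted parameters:

```python
import math

def weibull_risk(dose_gy, d50, shape):
    """Hazard-function (Weibull) dose-response:
    risk = 1 - exp(-ln2 * (D/D50)**V),
    where d50 is the dose at 50% effect probability and the shape V
    controls the steepness of the sigmoid.  Placeholder parameters only."""
    return 1.0 - math.exp(-math.log(2.0) * (dose_gy / d50) ** shape)

# By construction the risk at D = D50 is exactly one half.
r = weibull_risk(dose_gy=3.0, d50=3.0, shape=5.0)
```

Large shape parameters give the threshold-like behaviour typical of deterministic early effects, in contrast to the linear and linear-quadratic forms used for cancer risks.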

  1. United States Department of Energy severe accident research following the Fukushima Daiichi accidents

    SciTech Connect

    Farmer, M. T.; Corradini, M.; Rempe, J.; Reister, R.; Peko, D.

    2016-11-02

    The U.S. Department of Energy (DOE) has played a major role in the U.S. response to the events at Fukushima Daiichi. During the first several weeks following the accident, U.S. assistance efforts were guided by results from a significant and diverse set of analyses. In the months that followed, a coordinated analysis activity aimed at gaining a more thorough understanding of the accident sequence was completed using laboratory-developed, system-level best-estimate accident analysis codes, while a parallel analysis was conducted by U.S. industry. A comparison of predictions for Unit 1 from these two studies indicated significant differences between MAAP and MELCOR results for key plant parameters, such as in-core hydrogen production. On that basis, a crosswalk was completed to determine the key modeling variations that led to these differences. In parallel with these activities, it became clear that there was a need to perform a technology gap evaluation on accident-tolerant components and severe accident analysis methodologies with the goal of identifying any data and/or knowledge gaps that may exist given the current state of light water reactor (LWR) severe accident research and augmented by insights from Fukushima. In addition, there is growing international recognition that data from Fukushima could significantly reduce uncertainties related to severe accident progression, particularly for boiling water reactors. On these bases, a group of U.S. experts in LWR safety and plant operations was convened by the DOE Office of Nuclear Energy (DOE-NE) to complete technology gap analysis and Fukushima forensics data needs identification activities. The results from these activities were used as the basis for refining DOE-NE's severe accident research and development (R&D) plan. Finally, this paper provides a high-level review of DOE-sponsored R&D efforts in these areas, including planned activities on accident-tolerant components and accident analysis methods.

  2. United States Department of Energy severe accident research following the Fukushima Daiichi accidents

    DOE PAGES

    Farmer, M. T.; Corradini, M.; Rempe, J.; ...

    2016-11-02

    The U.S. Department of Energy (DOE) has played a major role in the U.S. response to the events at Fukushima Daiichi. During the first several weeks following the accident, U.S. assistance efforts were guided by results from a significant and diverse set of analyses. In the months that followed, a coordinated analysis activity aimed at gaining a more thorough understanding of the accident sequence was completed using laboratory-developed, system-level best-estimate accident analysis codes, while a parallel analysis was conducted by U.S. industry. A comparison of predictions for Unit 1 from these two studies indicated significant differences between MAAP and MELCOR results for key plant parameters, such as in-core hydrogen production. On that basis, a crosswalk was completed to determine the key modeling variations that led to these differences. In parallel with these activities, it became clear that there was a need to perform a technology gap evaluation on accident-tolerant components and severe accident analysis methodologies with the goal of identifying any data and/or knowledge gaps that may exist given the current state of light water reactor (LWR) severe accident research and augmented by insights from Fukushima. In addition, there is growing international recognition that data from Fukushima could significantly reduce uncertainties related to severe accident progression, particularly for boiling water reactors. On these bases, a group of U.S. experts in LWR safety and plant operations was convened by the DOE Office of Nuclear Energy (DOE-NE) to complete technology gap analysis and Fukushima forensics data needs identification activities. The results from these activities were used as the basis for refining DOE-NE's severe accident research and development (R&D) plan. Finally, this paper provides a high-level review of DOE-sponsored R&D efforts in these areas, including planned activities on accident-tolerant components and accident analysis methods.

  3. Bus accident analysis of routes with/without bus priority.

    PubMed

    Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David

    2014-04-01

    This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed a significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests the effect of bus priority in addressing manoeuvrability issues for buses. Mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents, considering wider influences on accident rates at a route-section level, also revealed significant safety benefits when bus priority is provided. Sensitivity analyses on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance of the MENB model suggests merit in adopting a mixed-effects modelling approach for accident count prediction in practice, given its capability to account for unobserved location- and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes.
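Once a negative binomial model of this kind has been fitted, prediction per route section is a one-step log-linear calculation. The coefficients and dispersion parameter below are invented for illustration only, not the paper's estimates:

```python
import math

# Hypothetical fitted coefficients for illustration; a real model would be
# estimated from crash records by maximum likelihood.
beta = {"intercept": -1.2, "log_bus_volume": 0.45, "bus_priority": -0.30}

def predicted_accidents(log_bus_volume, bus_priority, alpha=0.5):
    """Negative binomial (NB2) prediction: mean mu = exp(X*beta) and
    variance mu + alpha*mu**2, where alpha > 0 is the overdispersion
    parameter that distinguishes NB from Poisson."""
    eta = (beta["intercept"]
           + beta["log_bus_volume"] * log_bus_volume
           + beta["bus_priority"] * bus_priority)
    mu = math.exp(eta)
    return mu, mu + alpha * mu ** 2

# With a negative bus_priority coefficient, priority sections are
# predicted to have fewer accidents at the same bus volume.
mu_with, _ = predicted_accidents(log_bus_volume=8.0, bus_priority=1)
mu_without, _ = predicted_accidents(log_bus_volume=8.0, bus_priority=0)
```

The variance exceeding the mean is what makes the NB family suitable for overdispersed accident counts, the same rationale given for the alternative highway-rail model in record 1 above.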

  4. TRACE/PARCS Core Modeling of a BWR/5 for Accident Analysis of ATWS Events

    SciTech Connect

    Cuadra, A.; Baek, J.; Cheng, L.; Aronson, A.; Diamond, D.; Yarsky, P.

    2013-11-10

    The TRACE/PARCS computational package [1, 2] is designed to be applicable to the analysis of light water reactor operational transients and accidents where the coupling between the neutron kinetics (PARCS) and the thermal-hydraulics and thermal-mechanics (TRACE) is important. TRACE/PARCS has been assessed for its applicability to anticipated transients without scram (ATWS) [3]. The challenge, addressed in this study, is to develop a sufficiently rigorous input model that would be acceptable for use in ATWS analysis. Two types of ATWS events were of interest: a turbine trip and a closure of main steam isolation valves (MSIVs). In the first type, initiated by turbine trip, the concern is that the core will become unstable and large power oscillations will occur. In the second type, initiated by MSIV closure, the concern is the amount of energy being placed into containment and the resulting emergency depressurization. Two separate TRACE/PARCS models of a BWR/5 were developed to analyze these ATWS events at MELLLA+ (maximum extended load line limit plus) operating conditions. One model [4] was used for analysis of ATWS events leading to instability (ATWS-I); the other [5] for ATWS events leading to emergency depressurization (ATWS-ED). Both models included a large portion of the nuclear steam supply system and controls, and a detailed core model, presented henceforth.

  5. An interactive framework for developing simulation models of hospital accident and emergency services.

    PubMed

    Codrington-Virtue, Anthony; Whittlestone, Paul; Kelly, John; Chaussalet, Thierry

    2005-01-01

    Discrete-event simulation can be a valuable tool in modelling health care systems. This paper describes an interactive framework to model and simulate a hospital accident and emergency department. An interactive spreadsheet (Excel) facilitated the user-friendly input of data such as patient pathways, arrival times, service times and resources into the discrete event simulation package (SIMUL8). The framework was enhanced further by configuring SIMUL8 to visually show patient flow and activity on a schematic plan of an A&E. The patient flow and activity information included patient icons flowing along A&E corridors and pathways, processes undertaken in A&E work areas and queue activity. One major benefit of visually showing patient flow and activity was that modellers and decision makers could visually gain a dynamic insight into the performance of the overall system and visually see changes over the model run cycle. Another key benefit of the interactive framework was the ability to quickly and easily change model parameters to trial, test and compare different scenarios.
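The discrete-event logic behind such a framework can be sketched with a minimal single-server queue in plain Python. The real system described here uses SIMUL8 with an Excel front end; the arrival and service parameters below are hypothetical, and a realistic A&E model would add pathways, multiple resources and schedules:

```python
import random

def simulate_ae(n_patients, mean_interarrival, mean_service, seed=1):
    """Minimal single-server discrete-event sketch of an A&E queue with
    exponential interarrival and service times; returns the mean wait.
    Events are processed in arrival order, so a sorted event list
    suffices in place of a full event calendar."""
    rng = random.Random(seed)
    t_arrive, arrivals = 0.0, []
    for _ in range(n_patients):
        t_arrive += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t_arrive)
    server_free, total_wait = 0.0, 0.0
    for t in arrivals:
        start = max(t, server_free)          # wait if the server is busy
        total_wait += start - t
        server_free = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_patients

# Hypothetical workload: one arrival every 10 min, 8 min mean treatment.
wait = simulate_ae(1000, mean_interarrival=10.0, mean_service=8.0)
```

Even this toy model reproduces the qualitative behaviour decision makers look for: mean waiting time grows sharply as service time approaches the interarrival time.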

  6. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    SciTech Connect

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data.

  7. Mars solar conjunction prediction modeling

    NASA Astrophysics Data System (ADS)

    Srivastava, Vineet K.; Kumar, Jai; Kulshrestha, Shivali; Kushvah, Badam Singh

    2016-01-01

    During the Mars solar conjunction, telecommunication and tracking between the spacecraft and the Earth degrades significantly. The radio signal degradation depends on the angular separation between the Sun, Earth and probe (SEP), the signal frequency band and the solar activity. All radiometric tracking data types display increased noise and signatures for smaller SEP angles. Due to scintillation, telemetry frame errors increase significantly when solar elongation becomes small enough. This degradation in telemetry data return starts at solar elongation angles of around 5° at S-band, around 2° at X-band and about 1° at Ka-band. This paper presents a mathematical model for predicting Mars superior solar conjunction for any Mars orbiting spacecraft. The described model is simulated for the Mars Orbiter Mission which experienced Mars solar conjunction during May-July 2015. Such a model may be useful to flight projects and design engineers in the planning of Mars solar conjunction operational scenarios.
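The SEP geometry described above reduces to the angle at Earth between the directions to the Sun and to the probe. A minimal sketch, using the per-band degradation thresholds quoted in the abstract (the example vectors are arbitrary illustrations):

```python
import math

def sep_angle_deg(earth_sun, earth_probe):
    """Sun-Earth-Probe angle (degrees): the angle at Earth between the
    direction vectors to the Sun and to the probe."""
    dot = sum(a * b for a, b in zip(earth_sun, earth_probe))
    n1 = math.sqrt(sum(a * a for a in earth_sun))
    n2 = math.sqrt(sum(b * b for b in earth_probe))
    return math.degrees(math.acos(dot / (n1 * n2)))

def conjunction_band(sep_deg, band_threshold_deg):
    """True when the SEP angle is inside the degradation threshold
    (about 5 deg at S-band, 2 deg at X-band, 1 deg at Ka-band)."""
    return sep_deg < band_threshold_deg

# Arbitrary example vectors giving a SEP angle of roughly 2.9 degrees:
# degraded at S-band but not yet at X-band.
sep = sep_angle_deg((1.0, 0.0, 0.0), (1.0, 0.05, 0.0))
```

A prediction model of the kind described sweeps this calculation over ephemeris time to find the entry and exit dates of each band's degradation window.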

  8. Sensitivity study of the wet deposition schemes in the modelling of the Fukushima accident.

    NASA Astrophysics Data System (ADS)

    Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne; Kajino, Mizuo; Sekiyama, Thomas; Adachi, Kouji; Didier, Damien; Igarashi, Yasuhito

    2016-04-01

    The Fukushima-Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. In particular, the atmospheric deposition onto the ground may be studied through the map of measured Cs-137 established following the accident. The limits of detection were low enough to make measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions in varying weather conditions make it a real challenge. Moreover, these measurements represent the accumulated deposition of Cs-137 over the whole period and do not indicate which deposition mechanisms were involved: in-cloud scavenging, below-cloud scavenging or dry deposition. A comprehensive sensitivity analysis is performed in order to understand the wet deposition mechanisms. It has been shown in a previous study (Quérel et al., 2016) that the choice of the wet deposition scheme has a strong impact on the assessment of the deposition patterns. Nevertheless, a "best" scheme could not be identified, as the ranking depends on the selected criteria: it differs according to the statistical indicators considered (correlation, figure of merit in space and factor of 2). One possible explanation for the difficulty in discriminating between schemes is the uncertainty in the modelling, resulting for instance from the meteorological data. If the movement of the plume is not properly modelled, the deposition processes are applied to an inaccurate activity in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, allowing the previous study to be revisited. An updated study including these new meteorological data is presented. In addition, attention was focused on several releases that caused deposition in localized areas during known periods. This helps to better understand the mechanisms of deposition involved following the

  9. Oil Spill Detection and Modelling: Preliminary Results for the Cercal Accident

    NASA Astrophysics Data System (ADS)

    da Costa, R. T.; Azevedo, A.; da Silva, J. C. B.; Oliveira, A.

    2013-03-01

    Oil spill research has increased significantly, mainly as a result of the severe consequences of industry accidents. Oil spill models are currently able to simulate the processes that determine the fate of oil slicks, playing an important role in disaster prevention, control and mitigation, and generating valuable information for decision makers and the population in general. On the other hand, satellite Synthetic Aperture Radar (SAR) imagery has demonstrated significant potential in accidental oil spill detection, provided slicks are accurately differentiated from look-alikes. The combination of both tools can lead to breakthroughs, particularly in the development of Early Warning Systems (EWS). This paper presents a hindcast simulation of the oil slick resulting from the Motor Tanker (MT) Cercal oil spill, listed by the Portuguese Navy as one of the major oil spills on the Portuguese Atlantic Coast. The accident took place near Leixões Harbour, north of the Douro River, Porto (Portugal), on the 2nd of October 1994. The oil slick was segmented from available European Remote Sensing (ERS) satellite SAR images, using an algorithm based on a simplified version of the K-means clustering formulation. The image-acquired information, added to the initial conditions and forcings, provided the necessary inputs for the oil spill model. Simulations considered the three-dimensional hydrodynamics in a cross-scale domain, from the interior of the Douro River Estuary to the open ocean on the Iberian Atlantic shelf. Atmospheric forcings (from ECMWF - the European Centre for Medium-Range Weather Forecasts and NOAA - the National Oceanic and Atmospheric Administration), river forcings (from SNIRH - the Portuguese National Information System of the Hydric Resources) and tidal forcings (from LNEC - the National Laboratory for Civil Engineering), including baroclinic gradients (NOAA), were considered. The lack of data for validation purposes only allowed the use of the
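The slick segmentation step described in this record rests on a simplified K-means clustering of SAR backscatter intensities. A toy one-dimensional, two-cluster version on synthetic intensities (not ERS data), where dark pixels stand in for the slick:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20, seed=0):
    """Plain K-means on scalar pixel intensities: the dark cluster
    approximates the slick, the bright cluster the clean sea surface."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)       # random initial centers
    for _ in range(iters):
        # Assign each pixel to the nearest center.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Recompute each center as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

# Synthetic SAR-like intensities: dark "slick" pixels vs. brighter sea.
x = np.concatenate([np.full(50, 0.1), np.full(50, 0.8)])
labels, centers = kmeans_1d(x)
```

A real implementation would also need speckle filtering and look-alike discrimination, which this sketch omits.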

  10. Investigation of shipping accident injury severity and mortality.

    PubMed

    Weng, Jinxian; Yang, Dong

    2015-03-01

    Shipping movements are operated in a complex and high-risk environment, and fatal shipping accidents are the nightmares of seafarers. Using ten years of worldwide ship accident data, this study develops a binary logistic regression model and a zero-truncated binomial regression model to predict the probability of fatal shipping accidents and the corresponding mortalities. The model results show that both the probability of a fatal accident and the number of mortalities are greater for collision, fire/explosion, contact, grounding and sinking accidents occurring in adverse weather and darkness. Sinking has the largest effect on the increase in fatal accident probability and mortalities. The results also show that higher mortality is associated with shipping accidents occurring far from the coastal area/harbor/port. In addition, cruise ships are found to have more mortalities than non-cruise ships. The results of this study can help policy-makers propose efficient strategies to prevent fatal shipping accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
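A binary logistic regression of the kind used here models P(fatal) as a sigmoid of a linear predictor. A minimal sketch fitted by gradient descent on synthetic data (a single made-up "darkness" indicator, not the study's dataset or coefficients):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=2000):
    """Binary logistic regression by plain gradient descent:
    P(fatal) = sigmoid(w0 + w . x). A sketch, not the paper's model."""
    Xb = np.hstack([np.ones((len(X), 1)), X])     # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))         # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)         # gradient of the log-loss
    return w

# Synthetic accidents: one column, a darkness indicator (1 = dark).
X = np.array([[0], [0], [0], [0], [1], [1], [1], [1]], dtype=float)
y = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=float)    # 1 = fatal outcome
w = fit_logistic(X, y)
p_dark = 1.0 / (1.0 + np.exp(-(w[0] + w[1])))     # P(fatal | dark)
p_light = 1.0 / (1.0 + np.exp(-w[0]))             # P(fatal | daylight)
```

With the synthetic data above, the fitted model reproduces the higher fatality probability in darkness that the study reports.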

  11. Climate Modeling and Prediction at NSIPP

    NASA Technical Reports Server (NTRS)

    Suarez, Max; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The talk will review modeling and prediction efforts undertaken as part of NASA's Seasonal to Interannual Prediction Project (NSIPP). The focus will be on atmospheric model results, including its use for experimental seasonal prediction and the diagnostic analysis of climate anomalies. The model's performance in coupled experiments with land and atmosphere models will also be discussed.

  12. Key factors contributing to accident severity rate in construction industry in Iran: a regression modelling approach.

    PubMed

    Soltanzadeh, Ahmad; Mohammadfam, Iraj; Moghimbeigi, Abbas; Ghiasvand, Reza

    2016-03-01

    Construction industry involves the highest risk of occupational accidents and bodily injuries, which range from mild to very severe. The aim of this cross-sectional study was to identify the factors associated with accident severity rate (ASR) in the largest Iranian construction companies based on data about 500 occupational accidents recorded from 2009 to 2013. We also gathered data on safety and health risk management and training systems. Data were analysed using Pearson's chi-squared coefficient and multiple regression analysis. Median ASR (and the interquartile range) was 107.50 (57.24-381.25). Fourteen of the 24 studied factors stood out as most affecting construction accident severity (p<0.05). These findings can be applied in the design and implementation of a comprehensive safety and health risk management system to reduce ASR.

  13. World Meteorological Organization's model simulations of the radionuclide dispersion and deposition from the Fukushima Daiichi nuclear power plant accident.

    PubMed

    Draxler, Roland; Arnold, Dèlia; Chino, Masamichi; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, René; Shimbori, Toshiki; Solazzo, Efisio; Wotawa, Gerhard

    2015-01-01

    Five different atmospheric transport and dispersion models' (ATDM) deposition and air concentration results for atmospheric releases from the Fukushima Daiichi nuclear power plant accident were evaluated over Japan using regional (137)Cs deposition measurements and (137)Cs and (131)I air concentration time series at one location about 110 km from the plant. Some of the ATDMs used the same meteorological data and others used different data, consistent with their normal operating practices. There were four global meteorological analysis data sets available and two regional high-resolution analyses. Not all of the ATDMs were able to use all of the meteorological data combinations. The ATDMs were configured as identically as possible with respect to the release duration, release height, concentration grid size, and averaging time. However, each ATDM retained its unique treatment of the vertical velocity field and the wet and dry deposition, one of the largest uncertainties in these calculations. There were 18 ATDM-meteorology combinations available for evaluation. The deposition results showed that even when using the same meteorological analysis, each ATDM can produce quite different deposition patterns. The better calculations in terms of both deposition and air concentration were associated with the smoother ATDM deposition patterns. The best model with respect to the deposition was not always the best model with respect to air concentrations. The use of high-resolution mesoscale analyses improved ATDM performance; however, high-resolution precipitation analyses did not improve ATDM predictions. Although some ATDMs could be identified as better performers for either deposition or air concentration calculations, overall, the ensemble mean of a subset of better performing members provided more consistent results for both types of calculations.
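The closing point, that an ensemble mean of a subset of better-performing members gives more consistent results, can be sketched as a rank-and-average step. All fields and skill scores below are synthetic, not the study's 18 model-meteorology combinations:

```python
import numpy as np

def best_subset_ensemble_mean(fields, scores, n_best=3):
    """Average the deposition fields of the n_best members, ranked by a
    skill score where higher means better. Illustrative only."""
    order = np.argsort(scores)[::-1]               # best members first
    return np.mean([fields[i] for i in order[:n_best]], axis=0)

rng = np.random.default_rng(1)
truth = rng.random((4, 4))                         # synthetic "measured" deposition
# Five synthetic "members": truth plus noise of varying amplitude (varying skill).
noise_scales = [0.05, 0.1, 0.2, 0.5, 1.0]
fields = [truth + s * rng.standard_normal(truth.shape) for s in noise_scales]
scores = [-np.abs(f - truth).mean() for f in fields]   # negative MAE: higher = better
ens = best_subset_ensemble_mean(fields, scores, n_best=3)
```

By the triangle inequality, the error of this subset mean cannot exceed the largest member error, which is the qualitative behaviour the abstract describes.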

  14. Estimating Loss-of-Coolant Accident Frequencies for the Standardized Plant Analysis Risk Models

    SciTech Connect

    S. A. Eide; D. M. Rasmuson; C. L. Atwood

    2008-09-01

    The U.S. Nuclear Regulatory Commission maintains a set of risk models covering the U.S. commercial nuclear power plants. These standardized plant analysis risk (SPAR) models include several loss-of-coolant accident (LOCA) initiating events such as small (SLOCA), medium (MLOCA), and large (LLOCA). All of these events involve a loss of coolant inventory from the reactor coolant system. In order to maintain a level of consistency across these models, initiating event frequencies generally are based on plant-type average performance, where the plant types are boiling water reactors and pressurized water reactors. For certain risk analyses, these plant-type initiating event frequencies may be replaced by plant-specific estimates. Frequencies for SPAR LOCA initiating events previously were based on results presented in NUREG/CR-5750, but the newest models use results documented in NUREG/CR-6928. The estimates in NUREG/CR-6928 are based on historical data from the initiating events database for pressurized water reactor SLOCA or an interpretation of results presented in the draft version of NUREG-1829. The information in NUREG-1829 can be used several ways, resulting in different estimates for the various LOCA frequencies. Various ways NUREG-1829 information can be used to estimate LOCA frequencies were investigated and this paper presents two methods for the SPAR model standard inputs, which differ from the method used in NUREG/CR-6928. In addition, results obtained from NUREG-1829 are compared with actual operating experience as contained in the initiating events database.

  15. Using the Data From Accidents and Natural Disasters to Improve Marine Debris Modeling

    NASA Astrophysics Data System (ADS)

    Maximenko, N. A.; Hafner, J.; MacFadyen, A.; Kamachi, M.; Murray, C. C.

    2016-02-01

    In the absence of a satisfactory marine debris observing system, drift models provide a unique tool for identifying the main pathways and accumulation areas of natural and anthropogenic debris, including the plastic pollution that has an increasing impact on the environment and raises growing public concern. The main problems limiting the utility of model simulations are the lack of accurate information on the distribution, timing, strength and composition of marine debris sources, and the complexity of the hydrodynamics of an object floating on the surface of a rough sea. To calculate the drift, models commonly estimate surface currents first and then add the object's motion relative to the water. Importantly, ocean surface velocity cannot be measured directly with existing instruments. For various applications it is derived from subsurface data (such as trajectories of drifters drogued at 15 m) and satellite data (altimetry, scatterometry) using simple theories (geostrophy, Ekman spiral, etc.). Similarly, even the best ocean general circulation models (OGCMs), using different parameterizations of the mixed layer, disagree significantly on ocean surface velocities. Understanding debris motion under direct wind forcing and in interaction with breaking wind waves is a task of even greater complexity. In this presentation, we demonstrate how data from documented natural disasters (such as tsunamis, hurricanes and floods) and other accidents that generate marine debris with known times and coordinates of the start and/or end points of the trajectories can be used to calibrate drift models and obtain meaningful quantitative results that can be generalized to other sources of debris and used to plan a future marine debris observing system. With these examples we also demonstrate how the oceanic and atmospheric circulations couple to determine the pathways and destination areas of different types of floating marine debris.
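The drift computation described, surface current plus the object's motion relative to the water (often parameterized as a "windage" fraction of the wind), can be sketched as a forward-Euler trajectory integration. All velocities and the 3% windage factor below are illustrative assumptions:

```python
import numpy as np

def integrate_drift(x0, current, wind, windage=0.03, dt=3600.0, steps=24):
    """Forward-Euler drift integration: velocity = surface current +
    windage * wind. `current` and `wind` are callables of (t, x)
    returning 2-D velocities in m/s; positions are in metres."""
    x = np.asarray(x0, dtype=float)
    t = 0.0
    track = [x.copy()]
    for _ in range(steps):
        v = current(t, x) + windage * wind(t, x)
        x = x + dt * v
        t += dt
        track.append(x.copy())
    return np.array(track)

# One day of drift under a uniform 0.2 m/s eastward current
# and a uniform 10 m/s northward wind.
track = integrate_drift(
    x0=[0.0, 0.0],
    current=lambda t, x: np.array([0.2, 0.0]),
    wind=lambda t, x: np.array([0.0, 10.0]),
)
```

Operational models add Stokes drift, diffusion and object-specific leeway; this sketch only shows the basic superposition the text describes.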

  16. Predictive Modeling of Cardiac Ischemia

    NASA Technical Reports Server (NTRS)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  17. Predictive Modeling of Tokamak Configurations*

    NASA Astrophysics Data System (ADS)

    Casper, T. A.; Lodestro, L. L.; Pearlstein, L. D.; Bulmer, R. H.; Jong, R. A.; Kaiser, T. B.; Moller, J. M.

    2001-10-01

    The Corsica code provides comprehensive toroidal plasma simulation and design capabilities with current applications [1] to tokamak, reversed field pinch (RFP) and spheromak configurations. It calculates fixed and free boundary equilibria coupled to Ohm's law, sources, transport models and MHD stability modules. We are exploring operations scenarios for both the DIII-D and KSTAR tokamaks. We will present simulations of the effects of electron cyclotron heating (ECH) and current drive (ECCD) relevant to the Quiescent Double Barrier (QDB) regime on DIII-D, exploring long pulse operation issues. KSTAR simulations using ECH/ECCD in negative central shear configurations explore evolution to steady state, while shape evolution studies during current ramp-up using a hyper-resistivity model investigate startup scenarios and limitations. Studies of high bootstrap fraction operation stimulated by recent ECH/ECCD experiments on DIII-D will also be presented. [1] Pearlstein, L.D., et al, Predictive Modeling of Axisymmetric Toroidal Configurations, 28th EPS Conference on Controlled Fusion and Plasma Physics, Madeira, Portugal, June 18-22, 2001. * Work performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  18. Predicting Consequences of Technological Disasters from Natural Hazard Events: Challenges and Opportunities Associated with Industrial Accident Data Sources

    NASA Astrophysics Data System (ADS)

    Wood, M.

    2009-04-01

    The increased focus on the possibility of technological accidents caused by natural events (Natech) is foreseen to continue for years to come. Experts in the prevention, mitigation and preparedness activities associated with natural events will therefore increasingly need to borrow data and expertise traditionally associated with the technological fields. An important question is how useful the available data are for understanding the consequences of such Natech events. Data and case studies on major industrial accidents tend to focus on lessons learned for re-engineering the process. While consequence data are reported at least nominally in most reports, their precision, quality and completeness are often lacking. Consequence data that are only sometimes provided include the severity and type of injuries, the distance of victims from the source, exposure measurements, the volume of the release, the population in potentially affected zones, and weather conditions. Yet these are precisely the data that would aid natural hazard experts in land-use planning and emergency response activities when a Natech event may be foreseen. This work discusses the results of a study of consequence data from accidents involving toxic releases reported in the EU's MARS accident database. The study analysed the precision, quality and completeness of three categories of reported consequence data: the description of health effects, consequence assessment and chemical risk assessment factors, and emergency response information. The paper reports the findings of this study and discusses how natural hazard experts might interact with industrial accident experts to promote more consistent and accurate reporting of the data that are useful in consequence-based activities.

  19. Measurements and modelling of 137Cs distribution on ground due to the Chernobyl accident: a 27-y follow-up study in Northern Greece.

    PubMed

    Clouvas, A; Xanthos, S; Kadi, S; Antonopoulos-Domis, M

    2014-08-01

    Following the Chernobyl accident, an area of ∼1000 m(2) in the University farm of the Aristotle University of Thessaloniki was used as a test ground for radioecological measurements. The radiocesium deposition in this area due to the Chernobyl accident was 20 kBq m(-2). The profile of (137)Cs in the soil of this area was measured systematically from 1987 to 2012, and the form of the profile has changed over the years. During the 1987-2000 period the (137)Cs distribution was reproducible by a sum of two exponentials; however, at least since 2005 the (137)Cs distribution can be successfully fitted by a single exponential function. The long-term (∼27 y) study of the evolution of the (137)Cs distribution in soil permits one to extract, with the use of a simple compartment model, the mean vertical migration velocity of (137)Cs. Vertical migration of (137)Cs in soil is a very slow process: the mean vertical migration velocity is estimated to be 0.14 cm y(-1). The relatively good agreement between the time dependence of the (137)Cs distribution in soil and the model predictions indicates that the simple model used is realistic. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
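A single-exponential depth profile of the kind reported for the later years, A(z) = A0 * exp(-z / L), can be fitted by log-linear least squares. A sketch on synthetic depth data (the values are illustrative, not the Thessaloniki measurements):

```python
import numpy as np

# Synthetic 137Cs activity vs. depth following A(z) = A0 * exp(-z / L).
depth = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # cm
A0_true, L_true = 100.0, 3.0                          # Bq/kg, cm (made-up values)
activity = A0_true * np.exp(-depth / L_true)

# Log-linear least squares: ln A = ln A0 - z / L is linear in z.
slope, intercept = np.polyfit(depth, np.log(activity), 1)
L_fit = -1.0 / slope          # relaxation depth, cm
A0_fit = np.exp(intercept)    # surface activity, Bq/kg
```

On real, noisy profiles a weighted or nonlinear fit would be preferable, since the log transform distorts the error structure.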

  20. Comparing the Hematopoetic Syndrome Time Course in the NHP Animal Model to Radiation Accident Cases From the Database Search.

    PubMed

    Graessle, Dieter H; Dörr, Harald; Bennett, Alexander; Shapiro, Alla; Farese, Ann M; MacVittie, Thomas J; Meineke, Viktor

    2015-11-01

    Since controlled clinical studies on drug administration for the acute radiation syndrome are lacking, clinical data of human radiation accident victims as well as experimental animal models are the main sources of information. This leads to the question of how to compare and link clinical observations collected after human radiation accidents with experimental observations in non-human primate (NHP) models. Using the example of granulocyte counts in the peripheral blood following radiation exposure, approaches for adaptation between NHP and patient databases on data comparison and transformation are introduced. As a substitute for studying the effects of administration of granulocyte-colony stimulating factor (G-CSF) in human clinical trials, the method of mathematical modeling is suggested using the example of G-CSF administration to NHP after total body irradiation.

  1. Using Numerical Models in the Development of Software Tools for Risk Management of Accidents with Oil and Inert Spills

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.

    2012-04-01

    The increasing ship traffic and maritime transport of dangerous substances make it more difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though security rules are becoming more restrictive (ships with double hulls, etc.) and surveillance systems more developed (VTS, AIS). The problems associated with spills are, and will remain, a central topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, but some (in much smaller numbers) become authentic media phenomena in this information era, due to their large dimensions, their environmental and socio-economic impacts on ecosystems and local communities, and the spectacular or shocking pictures they generate. The adverse consequences of this type of accident heighten the motivation to avoid them in the future, or to minimize their impacts, using not only surveillance and monitoring tools but also an increased capacity to predict the fate and behaviour of bodies, objects or substances in the hours following an accident. Numerical models can now play a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis for oil, inert (ship debris or floating containers) and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency and planning issues associated with pollution risks, and in contingency and mitigation measures. Before a spill, in the planning stage, modelling simulations are used in environmental impact studies and risk maps, using historical data, reference situations and typical scenarios. After a spill, the use of fast and simple modelling applications allows understanding of the fate and behaviour of the spilt

  2. Response of Soviet VVER-440 accident localization systems to overpressurization

    SciTech Connect

    Kulak, R.F.; Fiala, C.; Sienicki, J.J.

    1989-01-01

    The Soviet-designed VVER-440 model V230 and VVER-440 model V213 reactors do not use full containments to mitigate the effects of accidents. Instead, these VVER-440 units employ a sealed set of interconnected compartments, collectively called the accident localization system (ALS), to reduce the release of radionuclides to the atmosphere during accidents. Descriptions of the VVER accident localization structures may be found in the report DOE NE-0084. The objective of this paper is to evaluate the structural integrity of the VVER-440 ALS at the Soviet design pressure, and to determine their response to pressure loadings beyond the design value. Complex, three-dimensional, nonlinear, finite element models were developed to represent the major structural components of the localization systems of the VVER-440 models V230 and V213. The interior boundary of the localization system was incrementally pressurized in the calculations until gross failure was predicted. 6 refs., 9 figs.

  3. Curve Estimation of Number of People Killed in Traffic Accidents in Turkey

    NASA Astrophysics Data System (ADS)

    Berkhan Akalin, Kadir; Karacasu, Murat; Altin, Arzu Yavuz; Ergül, Bariş

    2016-10-01

    Events involving one or more vehicles in motion on the highway that result in death, injury or material loss are called accidents. As a result of increasing population and traffic density, traffic accidents continue to increase, leading to human losses, harm to the economy and social problems. Millions of people die in traffic accidents every year, and a great majority of these accidents occur in developing countries. One of the most important tasks of transportation engineers is to reduce traffic accidents by creating a specific system. For that reason, statistical information about traffic accidents occurring in past years should be organized by experts. Factors affecting traffic accidents are analysed in various ways. In this study, the number of people killed in traffic accidents in Turkey is modelled. Fatalities were modelled with the curve-fitting method using the dataset of people killed in traffic accidents in Turkey between 1990 and 2014, and the number of fatalities was also predicted for future years using various models. The linear model was found to be suitable for these estimates.
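The curve-fitting step can be sketched as a linear trend fitted to yearly fatality counts and extrapolated forward. The numbers below are a noiseless synthetic series, not the Turkish 1990-2014 dataset:

```python
import numpy as np

# Synthetic yearly fatality counts with a purely linear downward trend.
years = np.arange(1990, 2015)                 # 25 annual observations
t = (years - 1990).astype(float)              # centre time to improve conditioning
deaths = 9000.0 - 120.0 * t                   # made-up counts, noiseless sketch

# Linear model deaths ~ a*t + b, then extrapolate to a future year.
coeffs = np.polyfit(t, deaths, 1)
forecast_2020 = np.polyval(coeffs, 2020 - 1990)
```

In practice one would compare this against higher-order or nonlinear candidate curves, as the study does, before trusting an extrapolation.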

  4. Persistence of airline accidents.

    PubMed

    Barros, Carlos Pestana; Faria, Joao Ricardo; Gil-Alana, Luis Alberiko

    2010-10-01

    This paper expands on air travel accident research by examining the relationship between air travel accidents and airline traffic or volume in the period 1927-2006. The theoretical model is based on a representative airline company that aims to maximise its profits, and it utilises a fractional integration approach in order to determine whether there is a persistent pattern over time with respect to air accidents and air traffic. Furthermore, the paper analyses how airline accidents are related to traffic using a fractional cointegration approach. It finds that airline accidents are persistent and that a (non-stationary) fractional cointegration relationship exists between total airline accidents and airline passengers, airline miles and airline revenues, with shocks that affect the long-run equilibrium disappearing in the very long term. Moreover, this relation is negative, which might be due to the fact that air travel is becoming safer and there is greater competition in the airline industry. Policy implications are derived for countering accident events, based on competition and regulation. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.

  5. Validation and verification of the ICRP biokinetic model of 32P: the criticality accident at Tokai-Mura, Japan.

    PubMed

    Miyamoto, K; Takeda, H; Nishimura, Y; Yukawa, M; Watanabe, Y; Ishigure, N; Kouno, F; Kuroda, N; Akashi, M

    2003-01-01

    Regrettably, a criticality accident occurred at a uranium conversion facility in Tokai-mura, Ibaraki, Japan, on 30 September 1999. Radioactivities of 32P in urine, blood and bone samples of the victims, who were severely exposed to neutrons, were measured. 32P was induced in their whole bodies at the moment of the first nuclear release by the reaction 31P (n, gamma) 32P and 32S (n, p) 32P. A realistic biokinetic model was assumed, as the exchange of 32P between the extracellular fluid compartment and the soft tissue compartment occurs only through the intracellular compartment, and the model was used for preliminary calculations. Some acute excretion of 32P, caused by decomposition or elution of tissues which occurred at the time of the accident, may have happened in the victims' bodies in the first few days. The working hypotheses in the present work should initiate renewed discussion of 32P biokinetics.

  6. Modeling and analysis of the unprotected loss-of-flow accident in the Clinch River Breeder Reactor

    SciTech Connect

    Morris, E.E.; Dunn, F.E.; Simms, R.; Gruber, E.E.

    1985-01-01

    The influence of fission-gas-driven fuel compaction on the energetics resulting from a loss-of-flow accident was estimated with the aid of the SAS3D accident analysis code. The analysis was carried out as part of the Clinch River Breeder Reactor licensing process. The TREAT tests L6, L7, and R8 were analyzed to assist in the modeling of fuel motion and the effects of plenum fission-gas release on coolant and clad dynamics. Special, conservative modeling was introduced to evaluate the effect of fission-gas pressure on the motion of the upper fuel pin segment following disruption. For the nominal sodium-void worth, fission-gas-driven fuel compaction did not adversely affect the outcome of the transient. When uncertainties in the sodium-void worth were considered, however, it was found that if fuel compaction occurs, loss-of-flow driven transient overpower phenomenology could not be precluded.

  7. Prediction of groundwater contamination with 137Cs and 131I from the Fukushima nuclear accident in the Kanto district.

    PubMed

    Ohta, Tomoko; Mahara, Yasunori; Kubota, Takumi; Fukutani, Satoshi; Fujiwara, Keiko; Takamiya, Koichi; Yoshinaga, Hisao; Mizuochi, Hiroyuki; Igarashi, Toshifumi

    2012-09-01

    We measured the concentrations of (131)I, (134)Cs, and (137)Cs released from the Fukushima nuclear accident in soil and rainwater samples collected March 30-31, 2011, in Ibaraki Prefecture, Kanto district, bordering Fukushima Prefecture to the south. Column experiments revealed that all (131)I in rainwater samples was adsorbed onto an anion-exchange resin. However, 30% of (131)I was not retained by the resin after it passed through a soil layer, suggesting that a portion of (131)I became bound to organic matter from the soil. The (137)Cs migration rate was estimated to be approximately 0.6 mm/y in the Kanto area, which indicates that contamination of groundwater by (137)Cs is not likely to occur in rainwater infiltrating into the surface soil after the Fukushima accident.

  8. WASTE-ACC: A computer model for analysis of waste management accidents

    SciTech Connect

    Nabelssi, B.K.; Folga, S.; Kohout, E.J.; Mueller, C.J.; Roglans-Ribas, J.

    1996-12-01

    In support of the U.S. Department of Energy's (DOE's) Waste Management Programmatic Environmental Impact Statement, Argonne National Laboratory has developed WASTE-ACC, a computational framework and integrated PC-based database system, to assess atmospheric releases from facility accidents. WASTE-ACC facilitates the many calculations for the accident analyses necessitated by the numerous combinations of waste types, waste management process technologies, facility locations, and site consolidation strategies in the waste management alternatives across the DOE complex. WASTE-ACC is a comprehensive tool that can effectively test future DOE waste management alternatives and assumptions. The computational framework can access several relational databases to calculate atmospheric releases. The databases contain throughput volumes, waste profiles, treatment process parameters, and accident data such as frequencies of initiators, conditional probabilities of subsequent events, and source term release parameters of the various waste forms under accident stresses. This report describes the computational framework and supporting databases used to conduct accident analyses and to develop source terms to assess potential health impacts that may affect on-site workers and off-site members of the public under various DOE waste management alternatives.

  9. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
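
    The trigger logic described in the abstract can be caricatured in a few lines; the kinematic altitude-loss estimate and all names below are assumptions for illustration, not the system's actual equations:

```python
def should_trigger_go_around(altitude_ft, sink_rate_fps, spool_time_s,
                             min_altitude_ft=50.0):
    """Crude sketch of the recovery trigger: the aircraft keeps sinking
    while the engines spool up, so projected loss ~ sink rate x spool
    time; intervene if the projected altitude violates the floor."""
    projected_loss_ft = sink_rate_fps * spool_time_s
    return altitude_ft - projected_loss_ft < min_altitude_ft
```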

  10. Predictive Model Assessment for Count Data

    DTIC Science & Technology

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and... Figure 5: Boxplots for various scores for patent data count regressions. Table 1: Four predictive models for larynx cancer counts in Germany, 1998-2002
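
    Proper scoring rules commonly used for this kind of count-data model assessment (the logarithmic score and the Dawid-Sebastiani score) can be sketched as follows; whether this report uses exactly these two scores is an assumption:

```python
import math

def poisson_log_score(y, lam):
    """Negative log predictive density of a Poisson(lam) forecast for
    observed count y (smaller is better)."""
    return lam - y * math.log(lam) + math.lgamma(y + 1)

def dawid_sebastiani_score(y, mean, var):
    """Score that uses only the predictive mean and variance
    (smaller is better)."""
    return (y - mean) ** 2 / var + math.log(var)
```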

  11. Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors

    NASA Astrophysics Data System (ADS)

    Seguin, Nicolas

    2014-05-01

    In view of the simulation of the water flows in pressurized water reactors (PWR), many models are available in the literature and their complexity deeply depends on the required accuracy, see for instance [1]. The loss-of-coolant accident (LOCA) may appear when a pipe is broken through. The coolant is composed by light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar), it then flashes and becomes instantaneously vapor in case of LOCA. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose accurate models for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Due to the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes separately each phase by its own density, velocity and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still, see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical
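
    The Baer-Nunziato system referred to above can be written schematically as follows, with the mechanical relaxation, drag, and mass-transfer sources lumped into the right-hand-side terms (closures as in the cited references; the saturation constraint is α₁ + α₂ = 1):

```latex
\begin{align*}
&\partial_t \alpha_1 + u_I\,\partial_x \alpha_1 = \mu\,(p_1 - p_2),\\
&\partial_t(\alpha_k \rho_k) + \partial_x(\alpha_k \rho_k u_k) = \Gamma_k,\\
&\partial_t(\alpha_k \rho_k u_k) + \partial_x\!\left(\alpha_k \rho_k u_k^2 + \alpha_k p_k\right)
   = p_I\,\partial_x \alpha_k + D_k,\\
&\partial_t(\alpha_k \rho_k E_k) + \partial_x\!\big(\alpha_k u_k(\rho_k E_k + p_k)\big)
   = p_I\,u_I\,\partial_x \alpha_k + S_k, \qquad k = 1,2.
\end{align*}
```

    Here $u_I$ and $p_I$ are the interfacial velocity and pressure, and the non-conservative products $p_I\,\partial_x \alpha_k$ couple the phases through gradients of the void fractions, as described in the abstract.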

  12. Nuclear accidents

    SciTech Connect

    Mobley, J.A.

    1982-05-01

    A nuclear accident with radioactive contamination can happen anywhere in the world. Because expert nuclear emergency teams may take several hours to arrive at the scene, local authorities must have a plan of action for the hours immediately following an accident. The site should be left untouched except to remove casualties. Treatment of victims includes decontamination and meticulous wound debridement. Acute radiation syndrome may be an overwhelming sequela.

  13. A review and test of predictive models for the bioaccumulation of radiostrontium in fish.

    PubMed

    Smith, J T; Sasina, N V; Kryshev, A I; Belova, N V; Kudelsky, A V

    2009-11-01

    Empirical relations between the (90)Sr concentration factor (CF) and the calcium concentration in freshwater aquatic systems have previously been determined in studies based on data obtained prior to the Chernobyl accident. The purpose of the present research is to review and compare these models, and to test them against a database of post-Chernobyl measurements from rivers and lakes in Ukraine, Russia, Belarus and Finland. It was found that two independently developed models, based on pre-Chernobyl empirical data, are in close agreement with each other, and with empirical data. Testing of both models against new data obtained after the Chernobyl accident confirms the models' predictive ability. An investigation of the influence of fish size on (90)Sr accumulation showed no significant relationship, though the data set was somewhat limited.
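
    The empirical CF-calcium relations discussed here are commonly expressed as a power law in the calcium concentration; a hedged sketch of fitting such a relation (the log-log linear form is assumed, and the coefficients come from whatever data are supplied, not from the published models):

```python
import numpy as np

def fit_cf_model(ca_mg_per_l, cf_obs):
    """Fit log10(CF) = a + b*log10([Ca]) by least squares."""
    x = np.log10(ca_mg_per_l)
    y = np.log10(cf_obs)
    b, a = np.polyfit(x, y, 1)   # slope b, intercept a
    return a, b

def predict_cf(a, b, ca_mg_per_l):
    """Predicted 90Sr concentration factor at a given [Ca]."""
    return 10 ** (a + b * np.log10(ca_mg_per_l))
```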

  14. Dynamical jamming transition induced by a car accident in traffic-flow model of a two-lane roadway

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1994-01-01

    A deterministic cellular automaton model is presented to simulate the traffic jam induced by a car accident in a two-lane roadway. We study the traffic flow of the system when the translation invariance is broken by the insertion of a blockage which is induced by a car accident in the first lane. Using computer simulation, it is shown that dynamical jamming transitions occur successively from phase 1 to phase 4 with increasing car density. In phase 1, no cars exist in the first lane and the cars in the second lane move with the maximal velocity. In phase 2, a discontinuity appears in the traffic-flow pattern and the cars move with the maximal current. This discontinuity segregates the system into two regions with different densities. In phase 3, the discontinuity disappears. In phase 4, the cars do not move ahead but vibrate between the first and second lanes.
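
    A toy version of such a deterministic two-lane CA with a fixed blockage (vmax = 1, periodic boundary, and simplified lane-change rules; not the paper's exact update scheme) can be written as:

```python
import numpy as np

def step(lanes, blocked):
    """One parallel update of a minimal two-lane CA.  `lanes` is a 2xL
    0/1 occupancy array; cell (0, blocked) holds a permanent blockage.
    Each car advances one cell if the cell ahead is free, otherwise
    tries to change lanes, otherwise stays."""
    L = lanes.shape[1]
    new = np.zeros_like(lanes)
    new[0, blocked] = 1                       # blockage never moves
    for lane in range(2):
        for x in range(L):
            if lanes[lane, x] == 1 and not (lane == 0 and x == blocked):
                ahead = (x + 1) % L
                if lanes[lane, ahead] == 0 and new[lane, ahead] == 0:
                    new[lane, ahead] = 1      # advance
                elif lanes[1 - lane, x] == 0 and new[1 - lane, x] == 0:
                    new[1 - lane, x] = 1      # change lanes beside blockage
                else:
                    new[lane, x] = 1          # blocked: stay put
    return new
```

    Iterating `step` and measuring the current per lane reproduces, qualitatively, the segregation of the road into dense and free-flowing regions around the blockage.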

  15. A two-stage optimization model for emergency material reserve layout planning under uncertainty in response to environmental accidents.

    PubMed

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Liu, Rentao; Wang, Peng

    2016-06-05

    In emergency management for pollution accidents, the efficiency of emergency rescues can be strongly influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, a two-stage optimization framework is developed for emergency material reserve layout planning under uncertainty, to identify material warehouse locations and emergency material reserve schemes in the pre-accident phase when coping with potential environmental accidents. This framework is based on an integration of a hierarchical clustering analysis - improved center of gravity (HCA-ICG) model and a material warehouse location - emergency material allocation (MWL-EMA) model. First, decision alternatives are generated using HCA-ICG to identify, in a time-effective manner, newly built emergency material warehouses for risk sources that cannot be served by existing ones. Second, an emergency material reserve plan is obtained using MWL-EMA so that emergency materials are prepared in advance in a cost-effective manner. The optimization framework is then applied to emergency management system planning in Jiangsu province, China. The results demonstrate that the developed framework not only facilitates material warehouse selection but also effectively provides emergency materials for emergency operations in a quick response. Copyright © 2016. Published by Elsevier B.V.
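
    The classic center-of-gravity step underlying models of this type locates a candidate warehouse at the demand-weighted mean of the risk-source coordinates; a generic sketch (not the paper's improved ICG variant):

```python
def center_of_gravity(points, weights):
    """Candidate warehouse site as the weighted mean of risk-source
    (x, y) coordinates, weights being e.g. expected material demand."""
    wtot = sum(weights)
    x = sum(w * p[0] for p, w in zip(points, weights)) / wtot
    y = sum(w * p[1] for p, w in zip(points, weights)) / wtot
    return x, y
```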

  16. Multi-scale approach to the modeling of fission gas discharge during hypothetical loss-of-flow accident in gen-IV sodium fast reactor

    SciTech Connect

    Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.; Jansen, K. E.; Antal, S. P.; Podowski, M. Z.

    2012-07-01

    The required technological and safety standards for future Gen IV Reactors can only be achieved if advanced simulation capabilities become available, which combine high performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena, which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS), combined with the Level Set method, of two-phase turbulent flow have been performed by the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as the predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of SFR during a partial loss-of-flow accident. (authors)

  17. Predictive models of radiative neutrino masses

    NASA Astrophysics Data System (ADS)

    Julio, J.

    2016-06-01

    We discuss two models of radiative neutrino mass generation. The first model features one-loop Zee model with Z4 symmetry. The second model is the two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.

  18. Predictive models of radiative neutrino masses

    SciTech Connect

    Julio, J.

    2016-06-21

    We discuss two models of radiative neutrino mass generation. The first model features one-loop Zee model with Z4 symmetry. The second model is the two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.

  19. How to Establish Clinical Prediction Models.

    PubMed

    Lee, Yong Ho; Bang, Heejung; Kim, Dae Jung

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for the use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
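
    The model-generation and evaluation steps can be illustrated with a minimal logistic fit and the c-statistic (concordance) commonly used in validation; this is a teaching sketch, not a substitute for the careful workflow the review describes:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Minimal logistic regression by gradient descent on the
    negative log-likelihood; returns [intercept, coefficients...]."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def c_statistic(y, scores):
    """Concordance (AUC): probability a case outranks a control."""
    y = np.asarray(y); scores = np.asarray(scores, float)
    pos, neg = scores[y == 1], scores[y == 0]
    better = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return float(better) / (len(pos) * len(neg))
```

    In practice the c-statistic would be computed on held-out data, not the development set, as the review emphasizes.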

  20. Packet loss rate prediction using the sparse basis prediction model.

    PubMed

    Atiya, Amir E; Yoo, Sung Goo; Chong, Kil To; Kim, Hyongsuk

    2007-05-01

    The quality of multimedia communicated through the Internet is highly sensitive to packet loss. In this letter, we develop a time-series prediction model for the end-to-end packet loss rate (PLR). The estimate of the PLR is needed in several transmission control mechanisms such as the TCP-friendly congestion control mechanism for UDP traffic. In addition, it is needed to estimate the amount of redundancy for the forward error correction (FEC) mechanism. An accurate prediction would therefore be very valuable. We used a relatively novel prediction model called sparse basis prediction model. It is an adaptive nonlinear prediction approach, whereby a very large dictionary of possible inputs are extracted from the time series (for example, through moving averages, some nonlinear transformations, etc.). Only few of the very best inputs among the dictionary are selected and are combined linearly. An algorithm adaptively updates the input selection (as well as updates the weights) each time a new time sample arrives in a computationally efficient way. Simulation experiments indicate significantly better prediction performance for the sparse basis approach, as compared to other traditional nonlinear approaches.
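
    The sparse basis idea (build a large dictionary of inputs derived from the time series, keep only the few best, combine them linearly) can be sketched as below; the dictionary contents and the greedy selection rule are illustrative stand-ins for the letter's adaptive algorithm:

```python
import numpy as np

def build_dictionary(series, max_lag=5):
    """Candidate inputs derived from the series: raw lags, moving
    averages, and squared lags (illustrative choices)."""
    series = np.asarray(series, float)
    rows = range(max_lag, len(series))
    cols = []
    for k in range(1, max_lag + 1):
        cols.append([series[t - k] for t in rows])            # lag k
        cols.append([series[t - k:t].mean() for t in rows])   # MA over last k
        cols.append([series[t - k] ** 2 for t in rows])       # squared lag
    return np.array(cols).T, series[max_lag:]

def greedy_select(X, y, n_keep=3):
    """Keep the few dictionary columns that best reduce the residual
    (orthogonal-matching-pursuit style), refitting linearly each time."""
    chosen, resid, w = [], y.astype(float).copy(), None
    for _ in range(n_keep):
        scores = np.abs(X.T @ resid) / (np.linalg.norm(X, axis=0) + 1e-12)
        if chosen:
            scores[chosen] = -np.inf          # never pick a column twice
        chosen.append(int(np.argmax(scores)))
        w, *_ = np.linalg.lstsq(X[:, chosen], y, rcond=None)
        resid = y - X[:, chosen] @ w
    return chosen, w
```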

  1. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.

  2. Radiation risk models for all solid cancers other than those types of cancer requiring individual assessments after a nuclear accident.

    PubMed

    Walsh, Linda; Zhang, Wei

    2016-03-01

    In the assessment of health risks after nuclear accidents, some health consequences require special attention. For example, in their 2013 report on health risk assessment after the Fukushima nuclear accident, the World Health Organisation (WHO) panel of experts considered risks of breast cancer, thyroid cancer and leukaemia. For these specific cancer types, use was made of already published excess relative risk (ERR) and excess absolute risk (EAR) models for radiation-related cancer incidence fitted to the epidemiological data from the Japanese A-bomb Life Span Study (LSS). However, it was also considered important to assess all other types of solid cancer together and the WHO, in their above-mentioned report, stated "No model to calculate the risk for all other solid cancer excluding breast and thyroid cancer risks is available from the LSS data". Applying the LSS models for all solid cancers along with the models for the specific sites means that some cancers have an overlap in the risk evaluations. Thus, calculating the total solid cancer risk plus the breast cancer risk plus the thyroid cancer risk can overestimate the total risk by several per cent. Therefore, the purpose of this paper was to publish the required models for all other solid cancers, i.e. all solid cancers other than those types of cancer requiring special attention after a nuclear accident. The new models presented here have been fitted to the same LSS data set from which the risks provided by the WHO were derived. Although it is known already that the EAR and ERR effect modifications by sex are statistically significant for the outcome "all solid cancer", it is shown here that sex modification is not statistically significant for the outcome "all solid cancer other than thyroid and breast cancer". It is also shown here that the sex-averaged solid cancer risks with and without the sex modification are very similar once breast and thyroid cancers are factored out. Some other notable model
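
    LSS-style ERR models of the kind referred to here typically take a linear-in-dose form with age-at-exposure and attained-age modifiers; a sketch with placeholder parameter values (not the fitted values from this paper):

```python
import math

def excess_relative_risk(dose_gy, age_at_exposure, attained_age,
                         beta=0.5, gamma=-0.3, eta=-1.4):
    """Typical LSS-style model: ERR = beta * D * exp(gamma * e*) *
    (a/70)**eta, with e* = (age-at-exposure - 30)/10.  All parameter
    values here are illustrative placeholders."""
    e_star = (age_at_exposure - 30.0) / 10.0
    return beta * dose_gy * math.exp(gamma * e_star) * (attained_age / 70.0) ** eta
```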

  3. A model for predicting delay in discharge of stroke patients.

    PubMed

    San Segundo, R M; Aguilar, J J; Santos, F; Usabiaga, T

    2007-01-01

    To study the factors that predict delay in discharge (DD) for stroke victims when they are admitted to hospital and to build a model for predicting DD in our hospital. A retrospective study of 214 stroke victims admitted to the Physical Medicine and Rehabilitation Service (PMRS) of a general hospital between January 1, 1994, and December 31, 2001. Seventeen clinical and sociodemographic variables were studied to determine which factors were predictors of DD: age, sex, type of stroke, side affected, sphincter control, ability to communicate, level of consciousness, deep sensitivity, antecedents of cardiovascular risk, delay before admission to the PMRS, initial functional state and solitude, whether the patient was employed prior to the cerebrovascular accident, and whether the patient's place of residence had any exterior architectural barriers. A total of 26.6% of patients experienced DD. Factors influencing DD were solitude (odds ratio [OR] 6; 95% confidence interval [CI] 2.2-16.1), an initial functional independence measure (FIM) below 50 (OR 4.5; 95% CI 2.3-8.9) and age greater than 75 years (OR 2.7; 95% CI 1.2-6.1). The best model for predicting DD comprises seven variables: solitude, initial FIM below 50, age older than 75 years, left hemiparesis, exterior architectural barriers at home, cardiovascular antecedents and sex (male). This model has a specificity of 89% and a sensitivity of 40%. Solitude, low initial FIM and age older than 75 years influence DD for patients with stroke admitted to hospital. A model for predicting DD is described.
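
    Given the reported odds ratios, a predicted probability can be assembled from a logistic model in the usual way; the intercept and the treatment of the factors as independent additive log-odds terms below are illustrative assumptions, not the paper's fitted model:

```python
import math

def dd_probability(intercept, odds_ratios, present):
    """Probability of delayed discharge from a logistic model: add
    log(OR) for each risk factor that is present, then apply the
    inverse-logit.  Intercept and independence are assumptions."""
    logit = intercept + sum(math.log(r)
                            for r, flag in zip(odds_ratios, present) if flag)
    return 1.0 / (1.0 + math.exp(-logit))
```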

  4. Predicting the long-term (137)Cs distribution in Fukushima after the Fukushima Dai-ichi nuclear power plant accident: a parameter sensitivity analysis.

    PubMed

    Yamaguchi, Masaaki; Kitamura, Akihiro; Oda, Yoshihiro; Onishi, Yasuo

    2014-09-01

    than those of the other rivers. Annual sediment outflows from the Abukuma River and the total from the other 13 river basins were calculated as 3.2 × 10(4)-3.1 × 10(5) and 3.4 × 10(4)-2.1 × 10(5) t y(-1), respectively. The values vary between calculation cases because of the critical shear stress, the rainfall factor, and other differences. On the other hand, contributions of those parameters were relatively small for (137)Cs concentration within transported soil. This indicates that the total amount of (137)Cs outflow into the ocean would mainly be controlled by the amount of soil erosion and transport and the total amount of (137)Cs concentration remaining within the basin. Outflows of (137)Cs from the Abukuma River and the total from the other 13 river basins during the first year after the accident were calculated to be 2.3 × 10(11)-3.7 × 10(12) and 4.6 × 10(11)-6.5 × 10(12) Bq y(-1), respectively. The former results were compared with the field investigation results, and the order of magnitude was matched between the two, but the value of the investigation result was beyond the upper limit of model prediction.

  5. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  6. Childhood asthma prediction models: a systematic review.

    PubMed

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  7. Development of Models for Predicting the Predominant Taste and Odor Compounds in Taihu Lake, China

    PubMed Central

    Sun, Xiaoxue; Deng, Xuwei; Niu, Yuan; Xie, Ping

    2012-01-01

    Taste and odor (T&O) problems, which have adversely affected the quality of water supplied to millions of residents, have repeatedly occurred in Taihu Lake (e.g., a serious odor accident occurred in 2007). Because these accidents are difficult for water resource managers to forecast in a timely manner, there is an urgent need to develop optimum models to predict these T&O problems. For this purpose, various biotic and abiotic environmental parameters were monitored monthly for one year at 30 sites across Taihu Lake. This is the first investigation of this huge lake to sample T&O compounds at the whole-lake level. Certain phytoplankton taxa were important variables in the models; for instance, the concentrations of the particle-bound 2-methylisoborneol (p-MIB) were correlated with the presence of Oscillatoria, whereas those of the p-β-cyclocitral and p-β-ionone were correlated with Microcystis levels. Abiotic factors such as nitrogen (TN, TDN, NO3-N, and NO2-N), pH, DO, COND, COD and Chl-a also contributed significantly to the T&O predictive models. The dissolved (d) T&O compounds were related to both the algal biomass and to certain abiotic environmental factors, whereas the particle-bound (p) T&O compounds were more strongly related to the algal presence. We also tested the validity of these models using an independent data set that was previously collected from Taihu Lake in 2008. In comparing the concentrations of the T&O compounds observed in 2008 with those concentrations predicted from our models, we found that most of the predicted data points fell within the 90% confidence intervals of the observed values. This result supported the validity of these models in the studied system. These models, based on easily collected environmental data, will be of practical value to the water resource managers of Taihu Lake for evaluating the probability of T&O accidents. PMID:23284835

  8. Numerical 3D modelling of oil dispersion in the sea due to different accident scenarios

    NASA Astrophysics Data System (ADS)

    Guandalini, Roberto; Agate, Giordano; Moia, Fabio

    2017-04-01

    The purpose of the study has been the development of a methodology, based on a numerical 3D approach, for the analysis of oil dispersion in the sea, in order to simulate with a high level of accuracy the dynamic behavior of the oil plume and its displacement in the environment. As a matter of fact, the numerical simulation is the only approach currently able to analyse in detail possible accident scenarios, even with a high degree of complexity, of different type and intensity, allowing to follow their evolution both in time and space, and to evaluate the effectiveness of suggested prevention or recovery actions. The software for these calculations is therefore an essential tool in order to simulate the impact effects in the short, medium and long period, able to account for the complexity of the sea system involved in the dispersion process and its dependency on the meteorological, marine and morphological local conditions. This software, generally based on fluid dynamic 3D simulators and modellers, is therefore extremely specialized and requires expertise for an appropriate usage, but at the same time it allows detailed scenario analyses and design verifications. It takes into account different parameters such as the sea current field and its turbulence, the wind acting on the sea surface, the salinity and temperature gradients, the local coastal morphology, the seabed bathymetry and the tide. The applied methodology is based on the Integrated Fluid Dynamic Simulation System HyperSuite developed by RSE. This simulation system includes the consideration of all the parameters previously listed, in the frame of a 3D Eulerian finite element fluid dynamic model, whose accuracy is guaranteed by a very detailed spatial mesh and by an automatically optimized time step management. In order to assess the methodology features, an area of more than 2500 km2 and depth of 200 m located in the middle Adriatic Sea has been modelled. The information required for the simulation in

  9. Hybrid approaches to physiologic modeling and prediction

    NASA Astrophysics Data System (ADS)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
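
    The linear (autoregressive) variant of the hybrid idea can be sketched in a few lines: fit an AR model to the first-principles model's past residuals, then add the predicted residual to the next physics prediction. The variable names and the least-squares AR fit below are assumptions for illustration:

```python
import numpy as np

def hybrid_correction(physics_pred, observed, order=2):
    """Predict the next residual of a first-principles model with an
    AR(order) model fitted to its historical residuals; the returned
    value is added to the next physics prediction."""
    resid = np.asarray(observed, float) - np.asarray(physics_pred, float)
    n = len(resid)
    # Design matrix: row t holds [resid[t-order], ..., resid[t-1]].
    X = np.column_stack([resid[k:n - order + k] for k in range(order)])
    y = resid[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(resid[-order:] @ coef)     # one-step-ahead residual
```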

  10. Health effects models for nuclear power plant accident consequence analysis. Part 1, Introduction, integration, and summary: Revision 2

    SciTech Connect

    Evans, J.S.; Abrahmson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-10-01

    This report is a revision of NUREG/CR-4214, Rev. 1, Part 1 (1990), Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. This revision has been made to incorporate changes to the Health Effects Models recommended in two addenda to the NUREG/CR-4214, Rev. 1, Part II, 1989 report. The first of these addenda provided recommended changes to the health effects models for low-LET radiations based on recent reports from UNSCEAR, ICRP and NAS/NRC (BEIR V). The second addendum presented changes needed to incorporate alpha-emitting radionuclides into the accident exposure source term. As in the earlier version of this report, models are provided for early and continuing effects, cancers and thyroid nodules, and genetic effects. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating the risks of seven types of cancer in adults - leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other". For most cancers, both incidence and mortality are addressed. Five classes of genetic diseases -- dominant, x-linked, aneuploidy, unbalanced translocations, and multifactorial diseases -- are also considered. Data are provided that should enable analysts to consider the timing and severity of each type of health risk.
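
    Weibull dose-response functions for early effects are conventionally written as a hazard H = ln(2)·(D/D50)^V with risk = 1 − exp(−H), so that risk is 50% at D = D50; a sketch with placeholder parameters (not the report's fitted D50 and shape values):

```python
import math

def weibull_risk(dose_gy, d50_gy, shape):
    """Weibull hazard dose-response: risk = 1 - exp(-ln2 * (D/D50)^shape).
    d50_gy and shape are illustrative placeholders."""
    hazard = math.log(2.0) * (dose_gy / d50_gy) ** shape
    return 1.0 - math.exp(-hazard)
```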

  11. Status report of advanced cladding modeling work to assess cladding performance under accident conditions

    SciTech Connect

    B.J. Merrill; Shannon M. Bragg-Sitton

    2013-09-01

    Scoping simulations performed using a severe accident code can be applied to investigate the influence of advanced materials on beyond design basis accident progression and to identify any existing code limitations. In 2012 an effort was initiated to develop a numerical capability for understanding the potential safety advantages that might be realized during severe accident conditions by replacing Zircaloy components in light water reactors (LWRs) with silicon carbide (SiC) components. To this end, a version of the MELCOR code, under development at the Sandia National Laboratories in New Mexico (SNL/NM), was modified by substituting SiC for Zircaloy in the MELCOR reactor core oxidation and material properties routines. The modified version of MELCOR was benchmarked against available experimental data to ensure that present SiC oxidation theory in air and steam was correctly implemented in the code. Additional modifications have been implemented in the code in 2013 to improve the specificity in defining components fabricated from non-standard materials. An overview of these modifications and the status of their implementation are summarized below.

  12. Incorporating uncertainty in predictive species distribution modelling

    PubMed Central

    Beale, Colin M.; Lennon, Jack J.

    2012-01-01

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387

  13. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  14. Constructing Simple Predictive Models from Paleoclimate Records

    NASA Astrophysics Data System (ADS)

    Amrhein, D. E.; Hakim, G. J.; Thompson, L.

    2016-12-01

    Predicting regional climate variability on multidecadal time scales is a central challenge in climate science. To this end, paleoclimatological records have provided reconstructions of climate variability, estimates of climate sensitivity to external forcings, and a testing ground for climate model simulations run under paleoclimate boundary conditions. The direct use of paleoclimate proxy records to quantify predictability and construct and improve climate forecast models is a natural next step. This work seeks to construct simple (linear inverse) climate prediction models directly from marine and terrestrial paleoclimate proxies from the last 400 years with a focus on Atlantic multidecadal variability (AMV). Key questions are: Do paleoproxy records reveal predictability in AMV? If not, is the implication that AMV predictability is low, or are data noise and sparsity limiting factors? If there is predictability, can we diagnose time scales and regional patterns associated with climate dynamics?
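
    The "simple (linear inverse) climate prediction models" the abstract refers to can be sketched in a few lines: a linear inverse model (LIM) fits a lag-tau propagator G(tau) = C(tau) C(0)^(-1) from lag covariances and forecasts by applying it to the current state. The data here are a synthetic stand-in for proxy indices, not the authors' records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for two annually resolved proxy indices:
# an AR(1) system x_{t+1} = M x_t + noise.
M_true = np.array([[0.9, 0.1],
                   [0.0, 0.7]])
n = 4000
x = np.zeros((n, 2))
for t in range(n - 1):
    x[t + 1] = M_true @ x[t] + 0.1 * rng.standard_normal(2)

def fit_lim(x, tau=1):
    """Linear inverse model: G(tau) = C(tau) @ C(0)^{-1},
    where C(tau) is the lag-tau covariance of the data."""
    x0, xt = x[:-tau], x[tau:]
    C0 = x0.T @ x0 / len(x0)
    Ct = xt.T @ x0 / len(x0)
    return Ct @ np.linalg.inv(C0)

G = fit_lim(x, tau=1)
forecast = G @ x[-1]        # one-step-ahead prediction from the last state
print(np.round(G, 2))       # should be close to M_true
```

    Predictability diagnostics (e.g. how quickly forecast skill decays with lag) then follow from the eigenstructure of G, which is how such a model can address the abstract's question of whether AMV is predictable at all.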

  15. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  17. Predicting freakish sea state with an operational third-generation wave model

    NASA Astrophysics Data System (ADS)

    Waseda, T.; In, K.; Kiyomatsu, K.; Tamura, H.; Miyazawa, Y.; Iyama, K.

    2014-04-01

    The understanding of freak wave generation mechanisms has advanced, and the community has reached a consensus that spectral geometry plays an important role. Numerous marine accident cases have been studied, revealing that narrowing of the directional spectrum is a good indicator of a dangerous sea. However, estimation of the directional spectrum depends on the performance of the third-generation wave model. In this work, a well-studied marine accident case in Japan in 1980 (the Onomichi-Maru incident) is revisited and the sea states are hindcasted using both the DIA (discrete interaction approximation) and SRIAM (Simplified Research Institute of Applied Mechanics) nonlinear source terms. The results indicate that the temporal evolution of the basic parameters (directional spreading and frequency bandwidth) agrees reasonably well between the two schemes, and therefore the most commonly used DIA method is qualitatively sufficient to predict a freakish sea state. The analyses revealed that in the case of the Onomichi-Maru, a moving gale system caused the spectrum to grow in energy with limited downshifting at the accident site. This conclusion contradicts the marine inquiry report's speculation that two swell systems crossed at the accident site. The unimodal wave system grew under the strong influence of local wind with a peculiar energy transfer.
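
    The basic parameters tracked in such hindcasts (steepness and frequency bandwidth) are often condensed into a single freak-wave indicator, the Benjamin-Feir index. A minimal sketch under one common deep-water convention; the exact normalisation varies between authors, so treat the scaling here as an assumption:

```python
import math

g = 9.81  # gravitational acceleration [m/s^2]

def benjamin_feir_index(Hs, Tp, rel_bandwidth):
    """BFI ~ wave steepness / relative spectral bandwidth (one common
    deep-water convention; normalisations differ between authors).
    Hs: significant wave height [m]; Tp: peak period [s];
    rel_bandwidth: delta_f / f_p (dimensionless)."""
    omega_p = 2.0 * math.pi / Tp
    k_p = omega_p ** 2 / g            # deep-water dispersion: omega^2 = g*k
    steepness = k_p * Hs / 2.0        # peak wavenumber times an amplitude scale
    return steepness / rel_bandwidth

# A narrowing spectrum (smaller relative bandwidth) raises the BFI,
# i.e. a more freakish sea, all else being equal.
broad = benjamin_feir_index(Hs=6.0, Tp=10.0, rel_bandwidth=0.3)
narrow = benjamin_feir_index(Hs=6.0, Tp=10.0, rel_bandwidth=0.1)
print(round(broad, 2), round(narrow, 2))
```

    This captures, in scalar form, why the narrowing of the spectrum reported in the abstract serves as a danger indicator.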

  18. Model predictions and trend analysis

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Individual perturbations in atmospheric models are discussed. These are hypothetical perturbations determined by model computation in which it is assumed that one particular input or set of inputs to the model is changed while all others are held constant. The best estimates of past time dependent variations of globally averaged total ozone, and upper tropospheric and stratospheric ozone were determined along with geographical differences in the variations.

  19. Uncertainties propagation in the framework of a Rod Ejection Accident modeling based on a multi-physics approach

    SciTech Connect

    Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C.

    2012-07-01

    The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis was conducted with the URANIE platform developed by the CEA: for each identified response of the model (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network was produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed the computation of high-order sensitivity indices, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
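
    The Sobol-index machinery the abstract describes can be illustrated without the reactor model: the sketch below estimates first-order indices with a Saltelli-style pick-freeze estimator on a toy additive response whose indices are known analytically (1/5 and 4/5). The surrogate-network and polynomial-chaos layers of the actual methodology are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    """Toy stand-in for a reactor response: additive, so the first-order
    Sobol indices are analytic (S1 = 1/5, S2 = 4/5 for unit-variance inputs)."""
    return x[:, 0] + 2.0 * x[:, 1]

# Saltelli-style pick-freeze estimation of first-order indices.
N, d = 100_000, 2
A = rng.standard_normal((N, d))
B = rng.standard_normal((N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                          # freeze input i from B
    S.append(np.mean(fB * (model(AB) - fA)) / var)

print([round(s, 3) for s in S])
```

    Each S[i] estimates the fraction of output variance attributable to input i alone; interaction effects would show up as the indices failing to sum to one.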

  20. Incorporating model uncertainty into spatial predictions

    SciTech Connect

    Handcock, M.S.

    1996-12-31

    We consider a modeling approach for spatially distributed data. We are concerned with aspects of statistical inference for Gaussian random fields when the ultimate objective is to predict the value of the random field at unobserved locations. However, the exact statistical model is seldom known beforehand and is usually estimated from the very same data relative to which the predictions are made. Our objective is to assess the effect of the fact that the model is estimated, rather than known, on the prediction and the associated prediction uncertainty. We describe a method for achieving this objective. In essence, we consider the best linear unbiased prediction procedure based on the model within a Bayesian framework. These ideas are implemented for the spring temperature over a region in the northern United States based on the stations in the United States historical climatological network reported in Karl, Williams, Quinlan & Boden.
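
    The "best linear unbiased prediction procedure" here is kriging; the sketch below applies simple kriging with an assumed exponential covariance. The paper's point is precisely that this plug-in treatment, where the covariance model is taken as known rather than estimated, understates the prediction uncertainty:

```python
import numpy as np

def exp_cov(d, sill=1.0, corr_len=2.0):
    """Exponential covariance model: C(d) = sill * exp(-d / corr_len)."""
    return sill * np.exp(-d / corr_len)

def simple_krige(obs_xy, obs_z, pred_xy, mean=0.0):
    """Simple kriging with a KNOWN (here: assumed) covariance model --
    exactly the plug-in step whose ignored estimation uncertainty
    the paper analyses."""
    d_obs = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    d_pred = np.linalg.norm(pred_xy[None, :] - obs_xy, axis=-1)
    K = exp_cov(d_obs)                # covariance among observations
    k = exp_cov(d_pred)               # covariance obs <-> prediction point
    w = np.linalg.solve(K, k)         # kriging weights
    pred = mean + w @ (obs_z - mean)
    var = exp_cov(0.0) - k @ w        # plug-in kriging variance
    return pred, var

obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs_z = np.array([1.2, 0.7, 0.9])
pred, var = simple_krige(obs_xy, obs_z, np.array([0.5, 0.5]))
print(round(float(pred), 3), round(float(var), 3))
```

    At an observed location this predictor reproduces the observation exactly with zero reported variance; the Bayesian treatment in the paper widens that variance to account for the estimated covariance parameters.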

  1. Modeling of long range transport pathways for radionuclides to Korea during the Fukushima Dai-ichi nuclear accident and their association with meteorological circulations.

    PubMed

    Lee, Kwan-Hee; Kim, Ki-Hyun; Lee, Jin-Hong; Yun, Ju-Yong; Kim, Cheol-Hee

    2015-10-01

    The Lagrangian FLEXible PARTicle (FLEXPART) dispersion model and National Centers for Environmental Prediction/Global Forecast System (NCEP/GFS) meteorological data were used to simulate the long range transport pathways of three artificial radionuclides, (131)I, (137)Cs, and (133)Xe, coming into the Korean Peninsula during the Fukushima Dai-ichi nuclear accident. Using emission rates of these radionuclides estimated from previous studies, three distinctive transport routes toward the Korean Peninsula for the period from 10 March to 20 April 2011 were identified at three spatial scales: 1) intercontinental scale - a plume released since mid-March 2011 and transported to the North, arriving in Korea on 23 March 2011; 2) global (hemispherical) scale - a plume traveling over the whole northern hemisphere, passing through the Pacific Ocean/Europe to reach the Korean Peninsula with relatively low concentrations in late March 2011; and 3) regional scale - a plume released in early April 2011 that arrived at the Korean Peninsula via the southwest sea of Japan, influenced directly by veering mesoscale wind circulations. Our identification of these transport routes at three different scales of meteorological circulation suggests the feasibility of a multi-scale approach for more accurate prediction of radionuclide transport in the study area. In light of the fact that the observed arrival/duration times of peaks were explained well by the FLEXPART model coupled with NCEP/GFS input data, our approach can be used meaningfully as a decision support model for radiation emergency situations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Impact of a program for the prevention of traffic accidents in a Southern Brazilian city: a model for implementation in a developing country.

    PubMed

    Salvarani, Cármine Porcelli; Colli, Benedicto Oscar; Carlotti Júnior, Carlos Gilberto

    2009-07-01

    Traffic accidents constitute the main cause of death in the first decades of life. Traumatic brain injury (TBI) is the event most responsible for the severity of these accidents. In 1995 the SBN started an educational program for the prevention of traffic accidents, adapted to the Brazilian environment from the American "Think First" model, with special effort devoted to the prevention of TBI through the use of seat belts and motorcycle helmets. The objective of the present study was to set up a traffic accident prevention program based on the adapted Think First and to evaluate its impact by comparing epidemiological variables before and after the beginning of the program. The program was executed in the city of Maringá from September 2004 to August 2005, with educational actions targeting the entire population, especially teenagers and young adults. It was implemented by building a network of information facilitators and multipliers inside organized civil society, with widespread dissemination to the population. To measure the impact of the program, specific software was developed for the storage and processing of the epidemiological variables. The results showed a reduction in trauma severity due to traffic accidents, mainly TBI, after the execution of the program. The adapted Think First was systematically implemented and its impact measured for the first time in Brazil, revealing the usefulness of the program for reducing trauma and TBI severity in traffic accidents through public education and representing a standardized model of implementation in a developing country.

  3. Mapping and modelling of radionuclide distribution on the ground due to the Fukushima accident.

    PubMed

    Saito, Kimiaki

    2014-08-01

    A large-scale environmental monitoring effort, construction of detailed contamination maps based on the monitoring data, studies on radiocaesium migration in natural environments, construction of a prediction model for the air dose rate distribution in the 80 km zone, and construction of a database to preserve and keep open the obtained data have been implemented as national projects. Temporal changes in contamination conditions were analysed. It was found that air dose rates above roads have decreased much faster than those above undisturbed flat fields. Further, the decreasing tendency was found to depend on land uses, magnitudes of initial dose rates and some other factors.

  4. TRENDS (Transport and Retention of Nuclides in Dominant Sequences): A code for modeling iodine behavior in containment during severe accidents

    SciTech Connect

    Weber, C.F.; Beahm, E.C.; Kress, T.S.; Daish, S.R.; Shockley, W.E.

    1989-01-01

    The ultimate aim of a description of iodine behavior in severe LWR accidents is a time-dependent accounting of iodine species released into containment and to the environment. Factors involved in the behavior of iodine can be conveniently divided into four general categories: (1) initial release into containment, (2) interaction of iodine species in containment not directly involving water pools, (3) interaction of iodine species in, or with, water pools, and (4) interaction with special systems such as ice condensers or gas treatment systems. To fill the large gaps in knowledge and to provide a means for assaying the iodine source term, this program has proceeded along two paths: (1) Experimental studies of the chemical behavior of iodine under containment conditions. (2) Development of TRENDS (Transport and Retention of Nuclides in Dominant Sequences), a computer code for modeling the behavior of iodine in containment and its release from containment. The main body of this report consists of a description of TRENDS. These two parts to the program are complementary in that models within TRENDS use data that were produced in the experimental program; therefore, these models are supported by experimental evidence that was obtained under conditions expected in severe accidents. 7 refs., 1 fig., 2 tabs.

  5. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

    This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. The algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict the target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average than all the comparison algorithms on all performance measures. PMID:25045325
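
    The model-averaging idea (though not the ISMB algorithm itself, whose Markov-blanket search is more involved) can be sketched with generic BIC-weighted Bayesian model averaging over feature-subset models; all data and model choices below are illustrative:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic data: only features 0 and 1 are informative.
X = rng.standard_normal((400, 3))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(int)

# Candidate "models" = nonempty feature subsets (a crude stand-in for
# the paper's space of Markov blanket structures).
subsets = [s for r in (1, 2, 3) for s in combinations(range(3), r)]

models, bics = [], []
n = len(y)
for s in subsets:
    clf = LogisticRegression(max_iter=1000).fit(X[:, list(s)], y)
    p = np.clip(clf.predict_proba(X[:, list(s)])[:, 1], 1e-12, 1 - 1e-12)
    ll = float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
    bics.append((len(s) + 1) * np.log(n) - 2.0 * ll)  # BIC ~ -2 log evidence
    models.append((s, clf))

# Posterior model weights under a uniform prior: softmax of -BIC/2.
b = np.array(bics)
w = np.exp(-(b - b.min()) / 2.0)
w /= w.sum()

def bma_predict(x_new):
    """P(y=1 | x_new) averaged over models, weighted by posterior weight."""
    return sum(wi * clf.predict_proba(x_new[None, list(s)])[0, 1]
               for wi, (s, clf) in zip(w, models))

print(round(bma_predict(np.array([2.0, 2.0, 0.0])), 3))
```

    The instance-specific twist of ISMB is that the set of models averaged over is chosen per test instance rather than once globally, as this global sketch does.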

  6. A Global Model for Bankruptcy Prediction

    PubMed Central

    Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. PMID:27880810
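
    A hedged sketch of the paper's setup: a pooled ("global") versus a region-specific logistic regression on synthetic financial ratios. The variables, coefficients, and regions are invented; only the logistic-regression framework matches the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic firms: two "regions" sharing one underlying relation between
# financial ratios and failure, as the global-model hypothesis posits.
def make_region(n):
    X = rng.standard_normal((n, 3))          # e.g. liquidity, leverage, ROA
    logit = -1.0 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
    return X, y

Xa, ya = make_region(500)                    # "Asia"
Xe, ye = make_region(500)                    # "Europe"

# Global model: pooled data from both regions, held-out evaluation.
Xg, yg = np.vstack([Xa, Xe]), np.concatenate([ya, ye])
X_tr, X_te, y_tr, y_te = train_test_split(Xg, yg, random_state=0)
global_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Regional model: trained on one region, applied to the other.
regional_acc = LogisticRegression().fit(Xa, ya).score(Xe, ye)
print(round(global_acc, 3), round(regional_acc, 3))
```

    When the regions truly share one relation, as simulated here, the pooled model matches or beats the transferred regional one thanks to its larger sample, which is the intuition behind the paper's finding that the global model dominates.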

  7. Fingerprint verification prediction model in hand dermatitis.

    PubMed

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
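
    The derived model is a simple decision rule, which can be written down directly from the criteria reported in the abstract (the function below illustrates that rule; it is not the published scoring instrument):

```python
def fingerprint_verification_risk(dystrophy_pct, long_horizontal, long_vertical):
    """Decision rule paraphrased from the abstract:
    - major criterion: dystrophy area >= 25% -> almost always fails
    - both minor criteria (long horizontal AND vertical lines) -> high risk
    - one minor criterion -> low risk
    - no criteria -> almost always passes"""
    if dystrophy_pct >= 25:
        return "almost always fails"
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"

print(fingerprint_verification_risk(30, False, False))  # -> almost always fails
print(fingerprint_verification_risk(10, True, True))    # -> high risk of failure
print(fingerprint_verification_risk(5, False, False))   # -> almost always passes
```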

  8. Predictive Model of Systemic Toxicity (SOT)

    EPA Science Inventory

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  9. Cervical Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Predictive Modeling in Adult Education

    ERIC Educational Resources Information Center

    Lindner, Charles L.

    2011-01-01

    The current economic crisis, a growing workforce, the increasing lifespan of workers, and demanding, complex jobs have made organizations highly selective in employee recruitment and retention. It is therefore important, to the adult educator, to develop models of learning that better prepare adult learners for the workplace. The purpose of…

  11. Prostate Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Esophageal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Liver Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Testicular Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Breast Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Bladder Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Lung Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Colorectal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Pancreatic Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Ovarian Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Predictive Modeling in Adult Education

    ERIC Educational Resources Information Center

    Lindner, Charles L.

    2011-01-01

    The current economic crisis, a growing workforce, the increasing lifespan of workers, and demanding, complex jobs have made organizations highly selective in employee recruitment and retention. It is therefore important, to the adult educator, to develop models of learning that better prepare adult learners for the workplace. The purpose of…

  2. Anthropometric dependence of the response of a thorax FE model under high speed loading: validation and real world accident replication.

    PubMed

    Roth, Sébastien; Torres, Fabien; Feuerstein, Philippe; Thoral-Pierre, Karine

    2013-05-01

    Finite element analysis is frequently used in fields such as automotive simulation and biomechanics. It helps researchers and engineers understand the mechanical behaviour of complex structures. The development of computer science has brought the possibility of developing realistic computational models that behave like physical ones, avoiding the difficulties and costs of experimental tests. In the framework of biomechanics, many FE models have been developed in the last few decades, enabling investigation of the behaviour of the human body subjected to severe loading such as road traffic accidents or ballistic impact. In both cases, the thorax/abdomen/pelvis system is frequently injured, and understanding the behaviour of this complex system is of extreme importance. In order to explore the dynamic response of this system to impact loading, a finite element model of the human thorax/abdomen/pelvis system has therefore been developed, including the main organs: heart, lungs, kidneys, liver, spleen, the skeleton (with vertebrae, intervertebral discs and ribs), stomach, intestines, muscles, and skin. The FE model is based on a 3D reconstruction made from the medical records of anonymous patients who had undergone medical scans with no relation to the present study. Several scans were analyzed, and specific attention was paid to the anthropometry of the reconstructed model, which can be considered a 50th percentile male model. The biometric parameters and laws were implemented in the dynamic FE code (Radioss, Altair Hyperworks 11©) used for the simulations. The 50th percentile model was then validated against experimental data available in the literature in terms of deflection and force, whose curves must lie within experimental corridors. However, for other anthropometries (small male or large male models), questions can be raised about the validation and the results of numerical accident replications. Copyright © 2012 Elsevier

  3. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Wellfare, Michael R.; Foster, Joseph; Owens, Monte A.; Vechinski, Douglas A.; Richards, Mike; Resnick, Andrew; Underwood, Vincent

    1998-07-01

    The Irma synthetic signature model was one of the first high resolution infrared (IR) target and background signature models to be developed for tactical weapons applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory, the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated, including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator which supported correlated frame-to-frame imagery was incorporated into the Irma model. This and other improvements were released in Irma 2.2. Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. Currently, upgrades are underway to include a near IR (NIR)/visible channel; a facet editor; utilities to support image viewing and scaling; and additional target/data files. The Irma 4.1 software development effort is nearing completion. The purpose of this paper is to illustrate the results of the development. Planned upgrades for Irma 5.0 will be presented as well. Irma is being developed to facilitate multi-sensor research and development. It is currently being used to support a number of civilian and military applications. The current Irma user base includes over 100 agencies within the Air Force, Army, Navy, DARPA, NASA, the Department of Transportation, academia, and industry.

  4. Source term estimation using air concentration measurements and a Lagrangian dispersion model - Experiments with pseudo and real cesium-137 observations from the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Chai, Tianfeng; Draxler, Roland; Stein, Ariel

    2015-04-01

    A transfer coefficient matrix (TCM) was created in a previous study using a Lagrangian dispersion model to provide plume predictions under different emission scenarios. The TCM estimates the contribution of each emission period to all sampling locations and can be used to estimate source terms by adjusting emission rates to match the model prediction with the measurements. In this paper, the TCM is used to formulate a cost functional that measures the differences between the model predictions and the actual air concentration measurements. The cost functional also includes a background term which adds the differences between a first guess and the updated emission estimates. Uncertainties of the measurements, as well as those for the first guess of source terms are both considered in the cost functional. In addition, a penalty term is added to create a smooth temporal change in the release rate. The method is first tested with pseudo observations generated using the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model at the same location and time as the actual observations. The inverse estimation system is able to accurately recover the release rates and performs better than a direct solution using singular value decomposition (SVD). It is found that computing ln(c) differences between model and observations is better than using the original concentration c differences in the cost functional. The inverse estimation results are not sensitive to artificially introduced observational errors or different first guesses. To further test the method, daily average cesium-137 air concentration measurements around the globe from the Fukushima nuclear accident are used to estimate the release of the radionuclide. Compared with the latest estimates by Katata et al. (2014), the recovered release rates successfully capture the main temporal variations. When using subsets of the measured data, the inverse estimation method still manages to identify most of the
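
    The inversion the abstract describes, a cost functional with misfit, background, and smoothness terms built on the TCM, can be sketched as one stacked linear least-squares system. This toy uses plain concentration differences (the paper found ln(c) differences work better) and invented matrix sizes and weights:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy transfer-coefficient matrix: measurements = TCM @ release_rates.
n_obs, n_src = 40, 10                       # invented sizes
TCM = rng.random((n_obs, n_src)) * 0.5
x_true = np.array([0., 1., 4., 9., 8., 3., 1., 0.5, 0.2, 0.1])
c_obs = TCM @ x_true * np.exp(0.02 * rng.standard_normal(n_obs))  # noisy obs

# Cost terms stacked as one least-squares system:
#   misfit:      TCM x ~ c_obs
#   background:  x ~ x_b (first guess)
#   smoothness:  x_{k+1} - x_k ~ 0 (penalise abrupt changes in release rate)
x_b = np.full(n_src, 2.0)                   # crude first guess
alpha, beta = 0.05, 0.05                    # regularisation weights (hand-tuned)
D = np.diff(np.eye(n_src), axis=0)          # first-difference operator
A = np.vstack([TCM, alpha * np.eye(n_src), beta * D])
b = np.concatenate([c_obs, alpha * x_b, np.zeros(n_src - 1)])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_hat, 2))
```

    In the paper's formulation, the weights come from measurement and first-guess uncertainties rather than hand tuning, and the misfit is computed on log concentrations.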

  5. Posterior Predictive Model Checking in Bayesian Networks

    ERIC Educational Resources Information Center

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  7. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-01

    The public, stakeholders, and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the ergonomic risk factors underlying human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers and bus accidents should inform the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analyzed to identify those most strongly associated with accidents. The research was performed on the east coast of peninsular Malaysia using variance-based structural equation modeling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and among 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns on the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, ergonomic workplace, physiological factors, stress at the workplace, physical fatigue, and near-miss accidents. The correlations and significance values between latent constructs (near-miss accidents) were analyzed using the SmartPLS SEM software. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at the workplace, t = 2.81; physiological factors, t = 2.08) are significant predictors of physical fatigue, which in turn mediates near-miss accidents (t = 2.14), at p < 0.05 and t > 1.96. The results show that physical fatigue arising from these ergonomic risk factors contributes to the human error behind express bus accidents.
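
    The study fits its structural model in SmartPLS; as a rough illustration of the regression core of the technique, here is a minimal single-response PLS (PLS1) in plain NumPy using the classic NIPALS deflation. This is a textbook sketch, not the authors' SEM procedure.

```python
import numpy as np

def pls1(X, y, n_components=2):
    """Minimal PLS1 (single response) via NIPALS deflation.
    Returns regression coefficients B in the original (centered) X space."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xk.T @ yk                 # weight vector: covariance direction
        w /= np.linalg.norm(w)
        t = Xk @ w                    # scores
        tt = t @ t
        p = Xk.T @ t / tt             # X loadings
        qk = yk @ t / tt              # y loading
        Xk = Xk - np.outer(t, p)      # deflate X
        yk = yk - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)
```

    With as many components as predictors and noiseless data, the PLS1 coefficients coincide with ordinary least squares; with fewer components PLS acts as a regularized fit, which is why it suits small survey samples like the one in this study.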

  8. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    SciTech Connect

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-03

    The public, stakeholders, and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the ergonomic risk factors underlying human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers and bus accidents should inform the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analyzed to identify those most strongly associated with accidents. The research was performed on the east coast of peninsular Malaysia using variance-based structural equation modeling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and among 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns on the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, ergonomic workplace, physiological factors, stress at the workplace, physical fatigue, and near-miss accidents. The correlations and significance values between latent constructs (near-miss accidents) were analyzed using the SmartPLS SEM software. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at the workplace, t = 2.81; physiological factors, t = 2.08) are significant predictors of physical fatigue, which in turn mediates near-miss accidents (t = 2.14), at p < 0.05 and t > 1.96. The results show that physical fatigue arising from these ergonomic risk factors contributes to the human error behind express bus accidents.

  9. A Computerized Prediction Model of Hazardous Inflammatory Platelet Transfusion Outcomes

    PubMed Central

    Nguyen, Kim Anh; Hamzeh-Cognasse, Hind; Sebban, Marc; Fromont, Elisa; Chavarin, Patricia; Absi, Lena; Pozzetto, Bruno; Cognasse, Fabrice; Garraud, Olivier

    2014-01-01

    Background: Platelet component (PC) transfusion occasionally leads to inflammatory hazards. Certain biological response modifiers (BRMs) that are secreted by the platelets themselves during storage may bear some responsibility. Methodology/Principal Findings: First, we identified non-stochastic arrangements of platelet-secreted BRMs in platelet components that led to acute transfusion reactions (ATRs). These data provide formal clinical evidence that platelets generate secretion profiles under both sterile activation and pathological conditions. We next aimed to predict the risk of hazardous outcomes by establishing statistical models based on the associations of BRMs within the incriminated platelet components and using decision trees. We investigated a large (n = 65) series of ATRs after platelet component transfusions reported through a very homogeneous system at one university hospital. Herein, we used a combination of clinical observations, ex vivo and in vitro investigations, and mathematical modeling systems. We calculated the statistical association of a large variety (n = 17) of cytokines, chemokines, and physiologically likely factors with acute inflammatory potential in patients presenting with severe hazards. We then generated an accident prediction model that proved to be dependent on the level (amount) of a given cytokine-like platelet product within the indicated component, e.g., soluble CD40-ligand (>289.5 pg/10^9 platelets), or the presence of another secreted factor (IL-13, >0). We further modeled the risk of the patient presenting either a febrile non-hemolytic transfusion reaction or an atypical allergic transfusion reaction, depending on the amount of the chemokine MIP-1α (<20.4 or >20.4 pg/10^9 platelets, respectively). Conclusions/Significance: This allows the modeling of a policy of risk prevention for severe inflammatory outcomes in PC transfusion. PMID:24830754
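
    The abstract quotes explicit cut-points: soluble CD40-ligand above 289.5 pg/10^9 platelets, IL-13 above 0, and MIP-1α around 20.4 pg/10^9 platelets separating the two reaction types. The hypothetical rule function below merely restates those thresholds for illustration; it is not the authors' fitted decision tree.

```python
def classify_reaction(scd40l, il13, mip1a):
    """Illustrative rule set restating the cut-points quoted in the abstract
    (units: pg per 10^9 platelets); NOT the authors' fitted decision tree."""
    if scd40l <= 289.5 and il13 <= 0:
        return "low inflammatory risk"
    # MIP-1alpha level separates the two hazard phenotypes
    return "febrile non-hemolytic" if mip1a < 20.4 else "atypical allergic"
```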

  10. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  12. A Predictive Model of Group Panic Behavior.

    ERIC Educational Resources Information Center

    Weinberg, Sanford B.

    1978-01-01

    Reports results of a study which tested the following model to predict group panic behavior: that panic reactions are characterized by the exercise of inappropriate leadership behaviors in situations of high stress. (PD)

  13. A Course in... Model Predictive Control.

    ERIC Educational Resources Information Center

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  15. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1991-01-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport, and adhesion mechanisms, understanding fouling, and subsequently controlling it better through appropriate manipulation of the various coupled, nonlinear processes in a complex fluid-mechanics environment, will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context, the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal-burning industry.

  16. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1990-02-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport, and adhesion mechanisms, understanding fouling, and subsequently controlling it better through appropriate manipulation of the various coupled, nonlinear processes in a complex fluid-mechanics environment, will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context, the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal-burning industry.

  17. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Flynn, David S.; Wellfare, Michael R.; Richards, Mike; Prestwood, Lee

    1995-06-01

    The Irma synthetic signature model was one of the first high resolution synthetic infrared (IR) target and background signature models to be developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory (WL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. This and other improvements were released in Irma 2.2. Recently, Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed. Currently, upgrades are underway to include an active MMW channel. Designated Irma 4.0, this code will serve as a cornerstone of sensor fusion research in the laboratory from 6.1 concept development to 6.3 technology demonstration programs for precision guided munitions. Several significant milestones have been reached in this development process and are demonstrated. The Irma 4.0 software design has been developed and interim results are available. Irma is being developed to facilitate multi-sensor smart weapons research and development. It is currently in distribution to over 80 agencies within the U.S. Air Force, U.S. Army, U.S. Navy, ARPA, NASA, Department of Transportation, academia, and industry.

  18. Predicting and Modeling RNA Architecture

    PubMed Central

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  19. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  20. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Wellfare, Michael R.; Chenault, David B.; Talele, Sunjay E.; Blume, Bradley T.; Richards, Mike; Prestwood, Lee

    1997-06-01

    Development of target acquisition and target recognition algorithms in highly cluttered backgrounds in a variety of battlefield conditions demands a flexible, high fidelity capability for synthetic image generation. Cost effective smart weapons research and testing also requires extensive scene generation capability. The Irma software package addresses this need through a first principles, phenomenology based scene generator that enhances research into new algorithms, novel sensors, and sensor fusion approaches. Irma was one of the first high resolution synthetic infrared target and background signature models developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory, the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1987, Nichols Research Corporation took over the maintenance of Irma and has since added substantial capabilities. The development of Irma has culminated in a program that includes not only passive visible, IR, and millimeter wave (MMW) channels but also active MMW and ladar channels. Each of these channels is co-registered, providing the capability to develop algorithms for multi-band sensor fusion concepts and associated algorithms. In this paper, the capabilities of the latest release of Irma, Irma 4.0, will be described. A brief description of the elements of the software that are common to all channels will be provided. Each channel will be described briefly, including a summary of the phenomenological effects and the sensor effects modeled in the software. Examples of Irma multi-channel imagery will be presented.

  1. Hybrid modeling and prediction of dynamical systems

    PubMed Central

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642

  2. Risk terrain modeling predicts child maltreatment.

    PubMed

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
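
    The core of RTM is combining layered environmental risk factors into one cumulative risk surface whose highest-scoring cells are flagged as highest risk. A toy sketch of that idea, with made-up indicator layers and weights:

```python
import numpy as np

def risk_terrain(layers, weights):
    """Composite risk surface: weighted sum of binary environmental risk layers
    (each an H x W grid of 0/1 indicators, e.g. proximity to a known risk factor)."""
    return sum(w * lay for w, lay in zip(weights, layers))

# Two made-up 2x2 indicator layers; the cell at row 1, col 0 accumulates both risks.
layers = [np.array([[1, 0], [1, 1]]), np.array([[0, 0], [1, 0]])]
surface = risk_terrain(layers, [1.0, 2.0])
```

    In a real RTM application the layers would be rasterized environmental features and the weights would come from the model fit, not be chosen by hand as here.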

  3. Modelling the global atmospheric transport and deposition of radionuclides from the Fukushima Dai-ichi nuclear accident.

    NASA Astrophysics Data System (ADS)

    Christoudias, Theodoros; Lelieveld, Jos

    2013-04-01

    We modeled the global atmospheric dispersion and deposition of radionuclides released from the Fukushima Dai-ichi nuclear power plant accident. The EMAC atmospheric chemistry - general circulation model was used, with circulation dynamics nudged towards ERA-Interim reanalysis data. We applied a resolution of approximately 0.5 degrees in latitude and longitude (T255). The model accounts for emissions and transport of the radioactive isotopes 131I and 137Cs, and removal processes through precipitation, particle sedimentation and dry deposition. In addition, we simulated the release of 133Xe, a noble gas that can be regarded as a passive transport tracer of contaminated air. The source terms are based on Chino et al. (2011) and Stohl et al. (2012); especially the emission estimates of 131I are associated with a high degree of uncertainty. The calculated concentrations have been compared to station observations by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO).

  4. Survival Regression Modeling Strategies in CVD Prediction

    PubMed Central

    Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Background: A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors into prediction models can be directly translated into added predictive value remains an area of dispute. The difference between the predictive power of a predictive model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested when comparing the predictive performances of predictive models with and without novel biomarkers. Objectives: User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking. This shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the aforementioned statistical indices. Materials and Methods: We have written Stata commands that are intended to help researchers obtain the following: (1) the Nam-D'Agostino χ2 goodness-of-fit test; (2) cut-point-free and cut-point-based net reclassification improvement index (NRI), relative absolute integrated discriminatory improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine whether information relating to a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve the predictive performance of Framingham’s general CVD risk algorithm. Results: The command is adpredsurv for survival models. Conclusions: Herein we have described the Stata package “adpredsurv” for calculation of the Nam-D'Agostino χ2 goodness-of-fit test as well as cut-point-free and cut-point-based NRI, relative

  5. Model predictive control: A new approach

    NASA Astrophysics Data System (ADS)

    Nagy, Endre

    2017-01-01

    New methods are proposed in this paper for the solution of the model predictive control problem. Nonlinear state-space design techniques are also treated. For nonlinear state prediction (state evolution computation), a new predictor given by an operator is introduced and tested. The model predictive control problem can be settled by applying the principle of "direct stochastic optimum tracking" with a simple algorithm, which can be derived from a previously developed optimization procedure. The final result is obtained through iterations. Two examples show the applicability and advantages of the method.
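
    For readers unfamiliar with the problem being addressed, a minimal receding-horizon step for a linear system can be written in closed form. This is a generic textbook formulation (the quadratic tracking cost with input penalty r is my assumption), not the stochastic-tracking algorithm the paper proposes.

```python
import numpy as np

def mpc_step(A, B, x0, x_ref, horizon=5, r=0.1):
    """One receding-horizon step for x_{k+1} = A x_k + B u_k: stack the predicted
    states over the horizon as a linear function of the input sequence, minimize
    sum |x_k - x_ref|^2 + r |u|^2 in closed form, apply only the first input."""
    n, m = B.shape
    # Free response: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    G = np.zeros((horizon * n, horizon * m))
    for k in range(horizon):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    e = np.tile(x_ref, horizon) - F @ x0          # tracking error of the free response
    H = G.T @ G + r * np.eye(horizon * m)         # quadratic cost Hessian
    u = np.linalg.solve(H, G.T @ e)
    return u[:m]                                  # first control move only
```

    Applied in closed loop, the controller recomputes the whole input sequence at every step but commits only to the first move, which is the defining feature of model predictive control.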

  6. Radiological dose assessment for bounding accident scenarios at the Critical Experiment Facility, TA-18, Los Alamos National Laboratory

    SciTech Connect

    1991-09-01

    A computer modeling code, CRIT8, was written to allow prediction of the radiological doses to workers and members of the public resulting from postulated maximum-effect accidents at the facility. The code accounts for the relationship of the initial parent radionuclide inventory at the time of the accident to the growth of radioactive daughter products, and considers the atmospheric conditions at the time of release. The code then calculates a dose at chosen receptor locations for the sum of radionuclides produced as a result of the accident. Both criticality and non-criticality accidents are examined.
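
    The daughter in-growth that CRIT8 accounts for is conventionally described by the Bateman equations. Below is a sketch of the standard two-member solution; this is an assumed textbook form, since CRIT8's actual internals are not given in the abstract.

```python
import numpy as np

def daughter_atoms(n1_0, lam1, lam2, t):
    """Two-member Bateman solution: number of daughter atoms at time t,
    starting from n1_0 parent atoms and zero daughter atoms.
    lam1, lam2 are the parent and daughter decay constants (1/s)."""
    return n1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

    For a short-lived daughter (lam2 much larger than lam1), the daughter activity lam2*N2 approaches the parent activity after a few daughter half-lives, the familiar secular-equilibrium limit.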

  7. Long term radiocesium contamination of fruit trees following the Chernobyl accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1996-12-01

    Radiocesium contamination from the Chernobyl accident of fruits and leaves from various fruit trees was systematically studied from 1990 to 1995 on two agricultural experimentation farms in Northern Greece. The results are discussed in the framework of a previously published model describing the long-term radiocesium contamination mechanism of deciduous fruit trees after a nuclear accident. The results of the present work qualitatively verify the model predictions. 11 refs., 5 figs., 1 tab.

  8. Modeling of Radionuclides from the Fukushima Dai-ichi Nuclear Accident to Korea

    NASA Astrophysics Data System (ADS)

    Lee, K.; Yun, J. Y.

    2016-12-01

    The FLEXPART Lagrangian model and NCEP/GFS meteorological data were employed, and the transport of radionuclides from the Fukushima Dai-ichi nuclear plant toward the Korean Peninsula was simulated for three key artificial radionuclides (Cs-137, I-131, and Xe-133). By simulating horizontal distributions and tracking the trajectories of the radionuclides for the period of 10 March 2011 to 20 April 2011, the following three distinct arrival pathways were detected: 1) intercontinental scale - a plume released from mid-March 2011 and transported to the north, arriving in Korea on 23 March 2011; 2) global (hemispheric) scale - a plume traveling over the whole northern hemisphere, passing through the Pacific Ocean/Europe to reach the Korean Peninsula with relatively low concentrations in late March 2011; and 3) regional scale - a plume released in early April 2011 that arrived at the Korean Peninsula via the southwest sea of Japan, influenced directly by veering mesoscale wind circulations. Our identification of these transport routes at three different scales of meteorological circulation suggests the feasibility of a multi-scale approach for more accurate prediction of radionuclide transport in the study area. In light of the fact that the observed arrival and duration times of the peaks were explained well by the FLEXPART model coupled with NCEP/GFS input data, our approach can be used meaningfully as a decision support model for radiation emergency situations.

  9. Interpretable Deep Models for ICU Outcome Prediction

    PubMed Central

    Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan

    2016-01-01

    The exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from the intensive care unit (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack interpretability, which is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as deep learning models. Experimental results on a Pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches for mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832

  10. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Pilsner, B. H.; Hillery, R. V.; Mcknight, R. L.; Cook, T. S.; Kim, K. S.; Duderstadt, E. C.

    1986-01-01

    The objectives of this program are to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The program is divided into two phases, each consisting of several tasks. The work in Phase 1 is aimed at identifying the relative importance of the various failure modes, and at developing and verifying life prediction model(s) for the predominant mode for a thermal barrier coating system. Two possible predominant failure mechanisms being evaluated are bond coat oxidation and bond coat creep. The work in Phase 2 will develop design-capable, causal life prediction models for thermomechanical and thermochemical failure modes, and for the exceptional conditions of foreign object damage and erosion.

  11. Predictive Validation of an Influenza Spread Model

    PubMed Central

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  12. Predictive validation of an influenza spread model.

    PubMed

    Hyder, Ayaz; Buckeridge, David L; Leung, Brian

    2013-01-01

    Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive ability.
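
    The deviation between an expected epidemic curve and an observed one can be summarized with simple error metrics. The two measures below (RMSE and peak-week offset) are chosen for illustration and are not necessarily the exact error measures used in the study.

```python
import numpy as np

def validation_metrics(predicted, observed):
    """Deviation between a simulated epidemic curve and the observed one:
    root-mean-square error plus the offset (in weeks) between the two peaks."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = float(np.sqrt(np.mean((predicted - observed) ** 2)))
    peak_shift = int(predicted.argmax()) - int(observed.argmax())
    return rmse, peak_shift
```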

  13. Application of the Life Change Unit model for the prevention of accident proneness among small to medium sized industries in Korea.

    PubMed

    Kang, Youngsig; Hahm, Hyojoon; Yang, Sunghwan; Kim, Taegu

    2008-10-01

    Behavior models have provided an accident proneness concept based on life change unit (LCU) factors. This paper describes the development of a Korean Life Change Unit (KLCU) model for workers and managers in fatal accident areas, as well as an evaluation of its application. Results suggest that the death of parents is the highest stress-giving factor for employees of small and medium sized industries, a rational finding from the viewpoint of Korean culture. The next stress-giving factors were shown to be the death of a spouse or loved ones, followed by the death of close family members, the death of close friends, changes in family members' health, unemployment, and jail terms. It turned out that these factors have a serious effect on industrial accidents and work-related diseases. The deaths of parents and close friends are ranked higher in the KLCU model than in Western society. This work provides crucial information for industrial accident prevention in real settings, which will be useful for safety management programs related to accident prevention.
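
    As a rough illustration of how an LCU-style screen might be applied, the sketch below sums per-event weights against a threshold. The event names, weights, and threshold are placeholders, NOT the KLCU values estimated in the paper.

```python
# Illustrative LCU-style screen. Event names, weights, and the threshold
# are placeholders, not the KLCU values estimated in the paper.
LCU_WEIGHTS = {
    "death of parent": 100,
    "death of spouse or loved one": 95,
    "death of close family member": 90,
    "death of close friend": 80,
    "family member health change": 70,
    "unemployment": 65,
    "jail term": 60,
}

def lcu_score(events, weights=LCU_WEIGHTS):
    """Sum the life-change units for the events a worker reports."""
    return sum(weights[e] for e in events)

def accident_prone(events, threshold=150):
    """Flag workers whose accumulated score crosses the screening threshold."""
    return lcu_score(events) >= threshold
```

    The ranking in the dictionary mirrors the ordering reported in the abstract; only the numeric weights are invented for the sketch.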

  14. A lower baseline glomerular filtration rate predicts high mortality and newly cerebrovascular accidents in acute ischemic stroke patients

    PubMed Central

    Dong, Kai; Huang, Xiaoqin; Zhang, Qian; Yu, Zhipeng; Ding, Jianping; Song, Haiqing

    2017-01-01

    Chronic kidney disease (CKD) is gradually recognized as an independent risk factor for cardiovascular and cardio-/cerebrovascular disease. This study aimed to examine the association of the estimated glomerular filtration rate (eGFR) with clinical outcomes at 3 months after the onset of ischemic stroke in a hospitalized Chinese population. In total, 972 patients with acute ischemic stroke were enrolled into this study. Modification of Diet in Renal Disease (MDRD) equations were used to calculate eGFR and define CKD. The site and degree of stenosis were examined. Patients were followed up for 3 months. Endpoint events included all-cause death and new ischemic events. A multivariate logistic model was used to determine the association between renal dysfunction and patients’ outcomes. Of all patients, 130 (13.4%) had a reduced eGFR (<60 mL/min/1.73 m2) and 556 had a normal eGFR (≥90 mL/min/1.73 m2). A total of 694 patients suffered from cerebral artery stenosis, of whom 293 had only intracranial artery stenosis (ICAS), 110 only extracranial carotid atherosclerotic stenosis (ECAS), and 301 both ICAS and ECAS. Patients with an eGFR <60 mL/min/1.73 m2 had a higher proportion of death and new ischemic events than those with a relatively normal eGFR. Multivariate analysis revealed that a baseline eGFR <60 mL/min/1.73 m2 increased the risk of mortality by 3.089-fold and of new ischemic events by 4.067-fold. In further analysis, a reduced eGFR was associated with increased rates of mortality and new events in both ICAS and ECAS patients. However, only an increased risk of new events was found as renal function deteriorated in ICAS patients (odds ratio = 8.169, 95% confidence interval = 2.445–14.127). A low baseline eGFR predicted high mortality and new ischemic events at 3 months in ischemic stroke patients. A low baseline eGFR was also a strong independent
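
    The eGFR cut-offs above come from the MDRD equations. As a sketch, the widely used 4-variable IDMS-traceable form is shown below; the choice of this particular MDRD variant is an assumption, since the abstract does not say which one the authors applied.

```python
def egfr_mdrd(scr_mg_dl, age_years, female, black=False):
    """4-variable MDRD estimate of GFR in mL/min/1.73 m2.

    Uses the IDMS-traceable coefficient of 175; serum creatinine is in
    mg/dL. (Assumed variant -- the paper does not state which MDRD
    equation was applied.)
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_group(egfr):
    """The study's grouping: reduced (<60) vs. normal (>=90) eGFR."""
    if egfr < 60:
        return "reduced"
    if egfr >= 90:
        return "normal"
    return "intermediate"
```

    For example, a 60-year-old man with serum creatinine 1.0 mg/dL lands in the 60-89 band, while the same creatinine in an older patient can fall below the study's 60 mL/min/1.73 m2 threshold.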

  15. A lower baseline glomerular filtration rate predicts high mortality and newly cerebrovascular accidents in acute ischemic stroke patients.

    PubMed

    Dong, Kai; Huang, Xiaoqin; Zhang, Qian; Yu, Zhipeng; Ding, Jianping; Song, Haiqing

    2017-02-01

    Chronic kidney disease (CKD) is gradually recognized as an independent risk factor for cardiovascular and cardio-/cerebrovascular disease. This study aimed to examine the association of the estimated glomerular filtration rate (eGFR) with clinical outcomes at 3 months after the onset of ischemic stroke in a hospitalized Chinese population. In total, 972 patients with acute ischemic stroke were enrolled into this study. Modification of Diet in Renal Disease (MDRD) equations were used to calculate eGFR and define CKD. The site and degree of stenosis were examined. Patients were followed up for 3 months. Endpoint events included all-cause death and new ischemic events. A multivariate logistic model was used to determine the association between renal dysfunction and patients' outcomes. Of all patients, 130 (13.4%) had a reduced eGFR (<60 mL/min/1.73 m2) and 556 had a normal eGFR (≥90 mL/min/1.73 m2). A total of 694 patients suffered from cerebral artery stenosis, of whom 293 had only intracranial artery stenosis (ICAS), 110 only extracranial carotid atherosclerotic stenosis (ECAS), and 301 both ICAS and ECAS. Patients with an eGFR <60 mL/min/1.73 m2 had a higher proportion of death and new ischemic events than those with a relatively normal eGFR. Multivariate analysis revealed that a baseline eGFR <60 mL/min/1.73 m2 increased the risk of mortality by 3.089-fold and of new ischemic events by 4.067-fold. In further analysis, a reduced eGFR was associated with increased rates of mortality and new events in both ICAS and ECAS patients. However, only an increased risk of new events was found as renal function deteriorated in ICAS patients (odds ratio = 8.169, 95% confidence interval = 2.445-14.127). A low baseline eGFR predicted high mortality and new ischemic events at 3 months in ischemic stroke patients. A low baseline eGFR was also a strong independent predictor for new

  16. Atmospheric dispersion modeling for the worst-case detonation accident at the proposed Advanced Hydrotest Facility

    SciTech Connect

    Bowen, B.M., LLNL

    1996-10-01

    The Atmospheric Release Advisory Capability (ARAC) was requested to estimate credible worst-case dose, air concentration, and deposition of airborne hazardous materials that would result from a worst-case detonation accident at the proposed Advanced Hydrotest Facility (AHT) at the Nevada Test Site (NTS). Consequences were estimated at the closest onsite facility, the Device Assembly Facility (DAF), and at the nearest offsite location (intersection of Highway and U.S. 95). The materials considered in this analysis were weapon-grade plutonium, beryllium, and hydrogen fluoride, which is a combustion product whose concentration depends on the quantity of high explosives. The analysis compares the calculated results with action guidelines published by the Department of Defense in DoD 5100.52-M (Nuclear Weapon Accident Response Procedures). Results indicate that, based on a one kg release of plutonium, the whole body radiation dose could be as high as 3 Rem at the DAF. This level approaches the 5 Rem level at which the Department of Defense requires respiratory protection and recommends sheltering and the consideration of evacuation. Deposition levels at the DAF could approach 6 uCi/m{sup 2}, for which the Department of Defense recommends access on a need-only basis and suggests that a controlled evacuation might be required. For a one kg release of plutonium, the dose at the nearest off-site location could reach 0.5 Rem. At this level, the Department of Defense suggests that sheltering be considered. For a one kg release of beryllium, the peak 5-minute concentration at the DAF could be as high as 20% of 6x10{sup -3} mg/m{sup 3}, which is the applicable Emergency Response Planning Guideline (ERPG-1). At the nearest offsite location, the beryllium concentrations from a one kg release would be two orders of magnitude less than the same guideline. For the detonation of 100 kg of the explosive LX-17, the concentration of hydrogen fluoride at both the DAF and the nearest offsite location

  17. Failure behavior of internally pressurized flawed and unflawed steam generator tubing at high temperatures -- Experiments and comparison with model predictions

    SciTech Connect

    Majumdar, S.; Shack, W.J.; Diercks, D.R.; Mruk, K.; Franklin, J.; Knoblich, L.

    1998-03-01

    This report summarizes experimental work performed at Argonne National Laboratory on the failure of internally pressurized steam generator tubing at high temperatures (≤700°C). A model was developed for predicting failure of flawed and unflawed steam generator tubes under internal pressure and temperature histories postulated to occur during severe accidents. The model was validated by failure tests on specimens with part-through-wall axial and circumferential flaws of various lengths and depths, conducted under various constant and ramped internal pressure and temperature conditions. The failure temperatures predicted by the model for two temperature and pressure histories, calculated for severe accidents initiated by a station blackout, agree very well with tests performed on both flawed and unflawed specimens.

  18. Estimation of the duration after methamphetamine injection using a pharmacokinetic model in suspects who caused fatal traffic accidents.

    PubMed

    Matsubara, Kazuo; Asari, Masaru; Suno, Manabu; Awaya, Toshio; Sugawara, Mitsuru; Omura, Tomohiro; Yamamoto, Joe; Maseda, Chikatoshi; Tasaki, Yoshikazu; Shiono, Hiroshi; Shimizu, Keiko

    2012-07-01

    When the population parameters of a drug's pharmacokinetics in the human body are known, the time-course of that drug in an individual can generally be estimated pharmacokinetically. In the present two cases, in which methamphetamine abusers were suspected of causing fatal traffic accidents, the time elapsed between methamphetamine injection and the moment the accidents occurred became a point of contention. In each case, we estimated the time-course of blood methamphetamine after self-administration in the suspects using a 2-compartment pharmacokinetic model with known pharmacokinetic parameters from the literature. If the injected amount can be determined to a certain extent, it is easy to calculate the average elapsed time after injection by referring to reference values. However, there is considerable individual variability in the elimination rate due to genetic polymorphism, and a considerably large error range in the estimated elapsed time. To minimize estimation errors in such cases, we also analyzed the CYP2D6 genotype, which influences methamphetamine metabolism. Estimation based on two time-point blood samples would benefit legal authorities in ruling on cases involving circumstances similar to those in the present study.
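
    The back-calculation described above can be sketched with a classic two-compartment IV-bolus model: concentration follows a sum of two exponentials, which declines monotonically after a bolus, so elapsed time can be recovered by inverting C(t). The parameter values in the usage below are illustrative, not the methamphetamine population parameters from the literature.

```python
import math

def conc_2cpt(t, dose, v1, k10, k12, k21):
    """Plasma concentration at time t after an IV bolus in the classic
    two-compartment model (central volume v1, micro rate constants)."""
    s = k10 + k12 + k21
    disc = math.sqrt(s * s - 4.0 * k10 * k21)
    alpha, beta = (s + disc) / 2.0, (s - disc) / 2.0   # hybrid rate constants
    a = dose / v1 * (alpha - k21) / (alpha - beta)
    b = dose / v1 * (k21 - beta) / (alpha - beta)
    return a * math.exp(-alpha * t) + b * math.exp(-beta * t)

def time_since_injection(c_obs, dose, v1, k10, k12, k21, t_max=48.0):
    """Invert C(t) = c_obs by bisection; after a bolus, C(t) declines
    monotonically from t = 0, so the inverse is well defined."""
    lo, hi = 0.0, t_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if conc_2cpt(mid, dose, v1, k10, k12, k21) > c_obs:
            lo = mid      # concentration still too high: more time has elapsed
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    In practice, as the abstract notes, the result is only as good as the assumed dose and elimination parameters, which is why the authors also genotyped CYP2D6 and used two blood samples.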

  19. The "AQUASCOPE" simplified model for predicting 89,90Sr, 131I, and 134,137Cs in surface waters after a large-scale radioactive fallout.

    PubMed

    Smith, J T; Belova, N V; Bulgakov, A A; Comans, R N J; Konoplev, A V; Kudelsky, A V; Madruga, M J; Voitsekhovitch, O V; Zibold, G

    2005-12-01

    Simplified dynamic models have been developed for predicting the concentrations of radiocesium, radiostrontium, and radioiodine in surface waters and freshwater fish following a large-scale radioactive fallout. The models are intended to give averaged estimates for radionuclides in water bodies and in fish for all times after a radioactive fallout event. The models are parameterized using empirical data collected for many lakes and rivers in Belarus, Russia, Ukraine, the UK, Finland, Italy, The Netherlands, and Germany. These measurements span a long time period after fallout from atmospheric nuclear weapons testing and the Chernobyl accident. The models thus developed were tested against independent measurements from the Kiev Reservoir and Chernobyl Cooling Pond (Ukraine) and the Sozh River (Belarus) after the Chernobyl accident, from Lake Uruskul (Russia) following the Kyshtym accident in 1957, and from Haweswater Reservoir (UK) following atmospheric nuclear weapons testing. The AQUASCOPE models (implemented in EXCEL spreadsheets) and model documentation are available free of charge from the corresponding author.
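
    The actual AQUASCOPE parameterization lives in the authors' spreadsheets, but the general shape of such simplified dynamic models (deposition multiplied by a sum of exponential wash-off terms and by radioactive decay) can be sketched as follows. The component fractions and rate constants here are assumptions, not fitted AQUASCOPE values.

```python
import math

# Illustrative two-component wash-off: a fast and a slow exponential.
# The fractions and rate constants are placeholders, not AQUASCOPE's
# fitted parameters.
TRANSFER = [(0.8, 0.05), (0.2, 0.002)]   # (fraction, rate in 1/day)

def water_activity(t_days, deposit_bq_m2, half_life_days, transfer=TRANSFER):
    """C_w(t) ~ deposition x sum_i a_i exp(-k_i t) x exp(-lambda t),
    where lambda is the physical decay constant of the radionuclide."""
    lam = math.log(2.0) / half_life_days
    washoff = sum(a * math.exp(-k * t_days) for a, k in transfer)
    return deposit_bq_m2 * washoff * math.exp(-lam * t_days)
```

    With a short-lived nuclide such as 131I (half-life about 8 days), physical decay dominates the decline; for 137Cs the slow wash-off term governs the long tail.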

  20. Posterior predictive checking of multiple imputation models.

    PubMed

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
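
    The posterior predictive p-value described above is straightforward to compute once replicated data sets have been drawn from the model under scrutiny; a minimal sketch, with the discrepancy function left pluggable:

```python
import numpy as np

def ppp_value(observed, replicates, discrepancy):
    """Posterior predictive p-value: the fraction of replicated data sets
    whose discrepancy is at least as extreme as the observed one."""
    t_obs = discrepancy(observed)
    t_rep = np.array([discrepancy(r) for r in replicates])
    return float(np.mean(t_rep >= t_obs))

# Toy check: when the replicates come from the same model as the data,
# the p-value should land well away from 0 and 1.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 200)
reps = [rng.normal(0.0, 1.0, 200) for _ in range(1000)]
p = ppp_value(data, reps, np.mean)
```

    As the study emphasizes, values near 0 or 1 flag a model-data misfit, but the usual classical p-value thresholds do not carry over to this setting.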

  1. Reconstruction of (131)I radioactive contamination in Ukraine caused by the Chernobyl accident using atmospheric transport modelling.

    PubMed

    Talerko, Nikolai

    2005-01-01

    The evaluation of (131)I air and ground contamination field formation in the territory of Ukraine was made using the model of atmospheric transport LEDI (Lagrangian-Eulerian DIffusion model). The (131)I atmospheric transport over the territory of Ukraine was simulated during the first 12 days after the accident (from 26 April to 7 May 1986) using real aerological information and rain measurement network data. The airborne (131)I concentration and ground deposition fields were calculated as the database for subsequent thyroid dose reconstruction for inhabitants of radioactive contaminated regions. The small-scale deposition field variability is assessed using data of (137)Cs detailed measurements in the territory of Ukraine. The obtained results are compared with available data of radioiodine daily deposition measurements made at the network of meteorological stations in Ukraine and data of the assessments of (131)I soil contamination obtained from the (129)I measurements.

  2. Mathematical model for predicting human vertebral fracture

    NASA Technical Reports Server (NTRS)

    Benedict, J. V.

    1973-01-01

    A mathematical model has been constructed to predict the dynamic response of tapered, curved beam columns, inasmuch as the human spine closely resembles this form. The model takes into consideration the effects of impact force, mass distribution, and material properties. Solutions were verified by dynamic tests on a curved, tapered, elastic polyethylene beam.

  3. Predictive risk models for proximal aortic surgery

    PubMed Central

    Díaz, Rocío; Pascual, Isaac; Álvarez, Rubén; Alperi, Alberto; Rozado, Jose; Morales, Carlos; Silva, Jacobo; Morís, César

    2017-01-01

    Predictive risk models help improve decision making, the information we give our patients, and quality control by comparing results between surgeons and between institutions. The use of these models promotes competitiveness and leads to increasingly better results. All these virtues are of utmost importance when the surgical operation entails high risk. Although proximal aortic surgery is less frequent than other cardiac surgery operations, it is more challenging and technically demanding than other common cardiac surgery techniques. The aim of this study is to review the current status of predictive risk models for patients who undergo proximal aortic surgery, meaning aortic root replacement, supracoronary ascending aortic replacement, or aortic arch surgery. PMID:28616348

  4. Predictive risk models for proximal aortic surgery.

    PubMed

    Hernandez-Vaquero, Daniel; Díaz, Rocío; Pascual, Isaac; Álvarez, Rubén; Alperi, Alberto; Rozado, Jose; Morales, Carlos; Silva, Jacobo; Morís, César

    2017-05-01

    Predictive risk models help improve decision making, the information we give our patients, and quality control by comparing results between surgeons and between institutions. The use of these models promotes competitiveness and leads to increasingly better results. All these virtues are of utmost importance when the surgical operation entails high risk. Although proximal aortic surgery is less frequent than other cardiac surgery operations, it is more challenging and technically demanding than other common cardiac surgery techniques. The aim of this study is to review the current status of predictive risk models for patients who undergo proximal aortic surgery, meaning aortic root replacement, supracoronary ascending aortic replacement, or aortic arch surgery.

  5. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Meier, Susan M.; Nissley, David M.; Sheffler, Keith D.; Cruse, Thomas A.

    1991-01-01

    A thermal barrier coated (TBC) turbine component design system, including an accurate TBC life prediction model, is needed to realize the full potential of available TBC engine performance and/or durability benefits. The objective of this work, which was sponsored in part by NASA, was to generate a life prediction model for electron beam - physical vapor deposited (EB-PVD) zirconia TBC. Specific results include EB-PVD zirconia mechanical and physical properties, coating adherence strength measurements, interfacial oxide growth characteristics, quantitative cyclic thermal spallation life data, and a spallation life model.

  6. Model Predictive Control of Fractional Order Systems.

    PubMed

    Rhouma, Aymen; Bouani, Faouzi; Bouzouita, Badreddine; Ksouri, Mekki

    2014-07-01

    This paper provides the model predictive control (MPC) of fractional order systems. The direct method will be used as internal model to predict the future dynamic behavior of the process, which is used to achieve the control law. This method is based on the Grünwald-Letnikov's definition that consists of replacing the noninteger derivation operator of the adopted system representation by a discrete approximation. The performances and the efficiency of this approach are illustrated with practical results on a thermal system and compared to the MPC based on the integer ARX model.
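
    The Grünwald-Letnikov approximation mentioned above replaces the noninteger derivative by a weighted sum over past samples; a minimal sketch of the discrete operator (the signal and step size in the usage are illustrative):

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_j = (-1)^j * C(alpha, j), computed via
    the stable recurrence w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(y, alpha, h):
    """Discrete GL approximation of the order-alpha derivative of the
    uniformly sampled signal y (step h), evaluated at every sample."""
    w = gl_weights(alpha, len(y) - 1)
    scale = h ** (-alpha)
    return [scale * sum(w[j] * y[k - j] for j in range(k + 1))
            for k in range(len(y))]
```

    For alpha = 1 the weights reduce to (1, -1, 0, ...), recovering the ordinary backward difference, which is a handy sanity check on the recurrence.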

  7. The R-γ transition prediction model

    NASA Astrophysics Data System (ADS)

    Goldberg, Uriel C.; Batten, Paul; Peroomian, Oshin; Chakravarthy, Sukumar

    2015-01-01

    The Rt turbulence closure (Goldberg 2003) is coupled with an intermittency transport equation, γ, to enable prediction of laminar-to-turbulent flow by-pass transition. The model is not correlation-based and is completely topography-parameter-free, thus ready for use in parallelized Computational Fluid Dynamics (CFD) solvers based on unstructured book-keeping. Several examples compare the R-γ model's performance with experimental data and with predictions by the Langtry-Menter γ-Reθ transition closure (2009). Like the latter, the R-γ model is very sensitive to freestream turbulence levels, limiting its utility for engineering purposes.

  8. GASFLOW: The theoretical model to analyze accidents in nuclear containments, confinements, and facility buildings

    SciTech Connect

    Travis, J.R.; Wilson, T.L.

    1993-05-01

    This report documents the governing physical equations for GASFLOW, a finite-volume computer code for solving transient, three-dimensional, compressible, Navier-Stokes equations for multiple gas species. The code is designed to be a best-estimate tool for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments, confinements, and other facility buildings. An analysis with GASFLOW will result in time-dependent gas-species concentrations throughout the structure analyzed, and, in the event of combustion, the pressure and temperature loadings on the walls and internal structures. GASFLOW can model geometrically complex containment systems with multiple compartments and internal structures. It can calculate gas behavior of low-speed buoyancy-driven flows, of diffusion-dominated flows, and during deflagrations. The code can model condensation heat transfer to walls and internal structures by natural and forced convection; chemical kinetics of combustion of hydrogen or hydrocarbon fuels; and fluid turbulence. Heat conduction within walls and structures is considered one-dimensional.

  9. Model review and evaluation for application in DOE safety basis documentation of chemical accidents - modeling guidance for atmospheric dispersion and consequence assessment

    SciTech Connect

    Lazaro, M. A.; Woodarad, K.; Hanna, S. R.; Hesse, D. J.; Huang, J. -C.; Lewis, J.; Mazzola, C. A.

    1997-09-01

    The U.S. Department of Energy (DOE), through its Defense Programs (DP), Office of Engineering and Operations Support, established the Accident Phenomenology and Consequence (APAC) Methodology Evaluation Program to identify and evaluate methodologies and computer codes to support accident phenomenology and consequence calculations for both radiological and nonradiological materials at DOE facilities, and to identify development needs. The program is also intended to define and recommend "best or good engineering/safety analysis practices" to be followed in preparing "design or beyond design basis" assessments to be included in DOE nuclear and nonnuclear facility safety documents. The APAC effort is intended to provide scientifically sound and more consistent analytical approaches, by identifying model selection procedures and application methodologies, in order to enhance safety analysis activities throughout the DOE complex.

  10. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…

  11. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…

  12. Prediction of PARP Inhibition with Proteochemometric Modelling and Conformal Prediction.

    PubMed

    Cortés-Ciriano, Isidro; Bender, Andreas; Malliavin, Thérèse

    2015-06-01

    Poly(ADP-ribose) polymerases (PARPs) play a key role in DNA damage repair. PARP inhibitors act as chemo- and radio-sensitizers and thus potentiate the cytotoxicity of DNA damaging agents. Although PARP inhibitors are currently investigated as chemotherapeutic agents, their cross-reactivity with other members of the PARP family remains unclear. Here, we apply Proteochemometric Modelling (PCM) to model the activity of 181 compounds on 12 human PARPs. We demonstrate that PCM (R0(2)test = 0.65-0.69; RMSEtest = 0.95-1.01 °C) displays higher performance on the test set (interpolation) than Family QSAR and Family QSAM (Tukey's HSD, α = 0.05), and outperforms Inductive Transfer of knowledge among targets (Tukey's HSD, α = 0.05). We benchmark the predictive signal of 8 amino acid and 11 full-protein sequence descriptors, finding that all of them (except for SOCN) perform at the same level of statistical significance (Tukey's HSD, α = 0.05). The extrapolation power of PCM to new compounds (RMSE=1.02±0.80 °C) and targets (RMSE=1.03±0.50 °C) is comparable to interpolation, although the extrapolation ability is not uniform across the chemical and the target space. For this reason, we also provide confidence intervals calculated with conformal prediction. In addition, we present the R package conformal, which permits the calculation of confidence intervals for regression and classification caret models.
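
    The confidence intervals mentioned above come from conformal prediction. A minimal sketch of split (inductive) conformal regression is shown below; this is the generic form of the technique, not necessarily the exact procedure implemented in the authors' conformal R package.

```python
import numpy as np

def split_conformal_interval(predict, x_cal, y_cal, x_new, alpha=0.1):
    """Split (inductive) conformal regression: calibrate absolute residuals
    on held-out data, then pad point predictions with the conformal
    quantile to obtain ~(1 - alpha) marginal coverage."""
    scores = np.abs(np.asarray(y_cal) - predict(x_cal))   # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1.0 - alpha)))             # conformal quantile rank
    q = np.sort(scores)[min(k, n) - 1]
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q
```

    The interval half-width q is the same for every new point, which is why (as the abstract notes) conformal validity is marginal: coverage holds on average, not uniformly across chemical or target space.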

  13. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.

    1988-01-01

    This report describes work performed to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low pressure plasma sprayed NiCrAlY bond coat, an air plasma sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into 3 technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.

  14. Are animal models predictive for humans?

    PubMed Central

    2009-01-01

    It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics. PMID:19146696

  15. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    PubMed Central

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  16. Prediction oriented QSAR modelling of EGFR inhibition.

    PubMed

    Szántai-Kis, C; Kövesdi, I; Eros, D; Bánhegyi, P; Ullrich, A; Kéri, G; Orfi, L

    2006-01-01

    Epidermal Growth Factor Receptor (EGFR) is a high-priority target in anticancer drug research. Thousands of very effective EGFR inhibitors have been developed in the last decade. The known inhibitors originate from a very diverse chemical space but--without exception--all of them act at the Adenosine TriPhosphate (ATP) binding site of the enzyme. We have collected all of the diverse inhibitor structures and the relevant biological data obtained from comparable assays and built a prediction-oriented Quantitative Structure-Activity Relationship (QSAR) model which models the ATP binding pocket's interactive surface from the ligand side. We describe a QSAR method with automatic Variable Subset Selection (VSS) by Genetic Algorithm (GA) and goodness-of-prediction-driven QSAR model building, resulting in an externally validated EGFR inhibitory model built from pIC50 values of a structurally diverse set of 623 EGFR inhibitors. Repeated Trainings/Evaluations (RTE) were used to obtain model fitness values, and the effectiveness of VSS is amplified by using predictive ability scores of descriptors. Numerous models were generated by different methods and viable models were collected. Then, intensive RTE were applied to identify ultimate models for external validations. Finally, suitable models were validated by statistical tests. Since we use calculated molecular descriptors in the modeling, these models are suitable for virtual screening to obtain novel potential EGFR inhibitors.

  17. Mesoscale Wind Predictions for Wave Model Evaluation

    DTIC Science & Technology

    2016-06-07

    contains the following components for atmospheric analysis and prediction: complex data quality control; a multivariate optimum interpolation analysis...

  18. Predictive capability of chlorination disinfection byproducts models.

    PubMed

    Ged, Evan C; Chadik, Paul A; Boyer, Treavor H

    2015-02-01

    Over 100 models have been developed for predicting trihalomethanes (THMs), haloacetic acids (HAAs), bromate, and unregulated disinfection byproducts (DBPs), yet until now no publication has evaluated the variability of previous THM and HAA models using a common data set. In this article, the standard error (SE), Marquardt's percent standard deviation (MPSD), and the linear coefficient of determination (R²) were used to analyze the variability of 87 models from 23 different publications. The most robust models were capable of predicting THM4 with an SE of 48 μg L⁻¹ and HAA6 with an SE of 15 μg L⁻¹, both achieving R² > 0.90. The majority of models were formulated for THM4; there is a lack of published models evaluating total HAAs, individual THM and HAA species, bromate, and unregulated DBPs.
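
    The three variability measures named above can be computed as below. The concentrations are hypothetical, and the MPSD formula shown is the commonly used definition; the paper may parameterize it differently.

```python
import numpy as np

# Hypothetical observed vs. model-predicted THM4 concentrations (ug/L).
obs  = np.array([60.0, 85.0, 120.0, 45.0, 150.0, 95.0])
pred = np.array([55.0, 90.0, 110.0, 50.0, 160.0, 88.0])

n = len(obs)
# Standard error of prediction.
se = np.sqrt(np.sum((obs - pred) ** 2) / (n - 1))
# Linear coefficient of determination.
r2 = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
# Marquardt's percent standard deviation (relative-error based).
mpsd = 100 * np.sqrt(np.sum(((obs - pred) / obs) ** 2) / (n - 1))

print(f"SE = {se:.1f} ug/L, R2 = {r2:.3f}, MPSD = {mpsd:.1f}%")
```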

  19. Predictive models of large neutrino mixing angles

    SciTech Connect

    Barr, S.M.

    1997-02-01

    Several experimental results could be interpreted as evidence that certain neutrino mixing angles are large, of order unity. However, in the context of grand unified models the neutrino angles characteristically come out small, like the KM angles. It is shown how to construct simple grand unified models in which neutrino angles are not only large but completely predicted with some precision. Six models are presented for illustration. © 1997 The American Physical Society.

  20. Plasma Stabilization Based on Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Sotnikova, Margarita

    Nonlinear model predictive control algorithms for plasma current and shape stabilization are proposed. Such algorithms are well suited to situations where the plant to be controlled has essentially nonlinear dynamics. Moreover, predictive model-based control algorithms make it possible to take into account many requirements and constraints on both the controlled and manipulated variables. Their significant drawback is that they require substantial time to compute the control input at each sampling instant. In this paper the model predictive control algorithms are demonstrated on the example of plasma vertical stabilization for the ITER-FEAT tokamak, and the parameters of the algorithms are tuned to decrease the computational load.
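
    The receding-horizon scheme described above can be sketched on a toy linear plant. The double-integrator dynamics, horizon length, and clipping-based constraint handling below are illustrative assumptions only, not the ITER-FEAT controller design.

```python
import numpy as np

# Toy discrete-time plant (a double integrator standing in for the
# linearized vertical-position dynamics; illustrative only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 10          # prediction horizon
u_max = 1.0     # actuator constraint

def mpc_input(x):
    """Solve the finite-horizon problem; apply only the first input."""
    # Stacked predictions: x_k = A^k x0 + sum_j A^(k-1-j) B u_j.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2*k:2*k+2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    # Quadratic cost ||x||^2 + rho ||u||^2 -> regularized least squares.
    rho = 0.01
    H = G.T @ G + rho * np.eye(N)
    u = np.linalg.solve(H, -G.T @ (Phi @ x))
    return float(np.clip(u[0], -u_max, u_max))  # crude constraint handling

x = np.array([1.0, 0.0])   # initial vertical offset
for _ in range(50):
    u = mpc_input(x)
    x = A @ x + (B * u).ravel()
print("final state:", x)
```

    Re-solving the optimization at every sampling instant is exactly the computational burden the abstract identifies as the method's drawback.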

  1. Radiation accidents

    SciTech Connect

    Saenger, E.L.

    1986-09-01

    It is essential that emergency physicians understand ways to manage patients contaminated by radioactive materials and/or exposed to external radiation sources. Contamination accidents require careful surveys to identify the metabolic pathway of the radionuclides to guide prognosis and treatment. The level of treatment required will depend on careful surveys and meticulous decontamination. There is no specific therapy for the acute radiation syndrome; prophylactic antibiotics are desirable, and for severely exposed patients treatment is similar to the supportive care given to patients undergoing organ transplantation. For high-dose extremity injury, no methods have been developed to reverse the fibrosing endarteritis that eventually leads to the tissue death so frequently found with this type of injury. Although the Three Mile Island episode of March 1979 created tremendous public concern, there were no radiation injuries; the contamination outside the reactor building and the release of radioiodine were negligible. The accidental fuel element meltdown at Chernobyl, USSR, resulted in many cases of acute radiation syndrome, and more than 100,000 people were exposed to high levels of radioactive fallout. The general principles outlined here are applicable to accidents of that degree of severity.

  2. Predictive model for segmented poly(urea)

    NASA Astrophysics Data System (ADS)

    Gould, P. J.; Cornish, R.; Frankl, P.; Lewtas, I.

    2012-08-01

    Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) - a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data and the equation of state and constitutive model predicts response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  3. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Ahmad, Nash'at N.; Holzaepfel, Frank; VanValkenburg, Randal L.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
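
    One simple flavor of skill-weighted multi-model averaging, in the spirit of Reliability Ensemble Averaging, can be sketched as follows. The three wake models, their predictions, and their past errors are entirely hypothetical numbers.

```python
import numpy as np

# Hypothetical vortex-descent predictions (m) from three wake models at
# three lead times, plus each model's rms error on past observations.
preds = np.array([
    [10.0, 22.0, 35.0],   # model A
    [12.0, 25.0, 40.0],   # model B
    [ 8.0, 18.0, 30.0],   # model C
])
past_errors = np.array([2.0, 1.0, 4.0])

# Inverse-error weights, normalized to sum to one: historically more
# reliable models contribute more to the ensemble prediction.
w = (1.0 / past_errors) / np.sum(1.0 / past_errors)
ensemble = w @ preds
print("weights:", w)
print("ensemble prediction:", ensemble)
```

    Bayesian Model Averaging replaces these heuristic weights with posterior model probabilities; Monte Carlo approaches instead sample over model and input uncertainty.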

  5. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid with 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid with 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories despite the large uncertainties on the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for

  6. Development of an in vitro porcine aorta model to study the stability of stent grafts in motor vehicle accidents.

    PubMed

    Darvish, Kurosh; Shafieian, Mehdi; Romanov, Vasily; Rotella, Vittorio; Salvatore, Michael D; Blebea, John

    2009-04-01

    Endovascular stent grafts for the treatment of thoracic aortic aneurysms have become increasingly utilized and yet their locational stability in moderate chest trauma is unknown. A high speed impact system was developed to study the stability of aortic endovascular stent grafts in vitro. A straight segment of porcine descending aorta with stent graft was constrained in a custom-made transparent urethane casing. The specimen was tested in a novel impact system at an anterior inclination of 45 deg and an average deceleration of 55 G, which represented a frontal automobile crash. Due to the shock of the impact, which was shown to be below the threshold of aortic injury, the stent graft moved 0.6 mm longitudinally. This result was repeatable. The presented experimental model may be helpful in developing future grafts to withstand moderate shocks experienced in motor vehicle accidents or other dynamic loadings of the chest.

  7. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    PubMed

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. The QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving Q² = 0.66-0.78 for regression models and total accuracies Ac = 0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method proved to be a potential tool for estimating the IC₅₀ of new drug-like candidates at early stages of drug development.

  8. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Stewart, S. E.; Ortiz, M.

    1988-01-01

    A life prediction model for correlating the spallation life of ceramic thermal barrier coatings is developed which includes both cyclic and time-dependent damage. The cyclic damage is related to the calculated cyclic inelastic strain range, while the time-dependent damage is related to the oxidation kinetics at the bond-ceramic interface. The cyclic inelastic strain range is calculated using a modified form of the Walker viscoplastic material model; calculation of the oxidation kinetics is based on traditional oxidation algorithms using experimentally determined parameters. The correlation between the actual and predicted spallation lives is within a factor of 3.

  9. Clustering-based classification of road traffic accidents using hierarchical clustering and artificial neural networks.

    PubMed

    Taamneh, Madhar; Taamneh, Salah; Alkheder, Sharaf

    2017-09-01

    Artificial neural networks (ANNs) have been widely used in predicting the severity of road traffic crashes. All available information about previously occurred accidents is typically used for building a single prediction model (i.e., classifier). Too little attention has been paid to the differences between these accidents, leading, in most cases, to less accurate predictors. Hierarchical clustering is a well-known clustering method that seeks to group data by creating a hierarchy of clusters. Using hierarchical clustering and ANNs, a clustering-based classification approach for predicting the injury severity of road traffic accidents was proposed. About 6000 road accidents that occurred in Abu Dhabi over a six-year period from 2008 to 2013 were used throughout this study. In order to reduce the amount of variation in the data, hierarchical clustering was applied to organize the data set into six different forms, each with a different number of clusters (from 1 to 6). Two ANN models were subsequently built for each cluster of accidents in each generated form. The first model was built and validated using all accidents (training set), whereas only 66% of the accidents were used to build the second model, and the remaining 34% were used to test it (percentage split). Finally, the weighted average accuracy was computed for each type of model in each form of the data. The results show that when testing the models using the training set, clustering prior to classification achieves 11%-16% more accuracy than without clustering, while under the percentage split it achieves 2%-5% more accuracy. The results also suggest that partitioning the accidents into six clusters achieves the best accuracy if both types of models are taken into account.
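
    The cluster-then-classify pipeline can be sketched as below. The accident data are synthetic, a naive centroid-linkage agglomeration stands in for the paper's hierarchical clustering, and a per-cluster majority vote stands in for the per-cluster ANNs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy accident records: 2 features (e.g. speed, vehicle count), binary severity.
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
y = np.concatenate([rng.integers(0, 2, 30), np.ones(30, dtype=int)])

def agglomerate(X, n_clusters):
    """Naive centroid-linkage agglomerative clustering (O(n^3); demo only)."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(X[clusters[a]].mean(0) - X[clusters[b]].mean(0))
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge the closest pair
    labels = np.empty(len(X), dtype=int)
    for c, members in enumerate(clusters):
        labels[members] = c
    return labels

labels = agglomerate(X, 2)

# One classifier per cluster; a majority-vote stub stands in for an ANN.
cluster_models = {c: int(round(y[labels == c].mean())) for c in np.unique(labels)}
pred = np.array([cluster_models[c] for c in labels])
print("accuracy with per-cluster models:", (pred == y).mean())
```

    The point of the design is that each downstream classifier sees a more homogeneous subset of accidents, which is what the paper credits for the accuracy gain.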

  10. Verification and Validation of Neutronic/Thermalhydraulic 3D-Time Dependent Model for Treatment of Super-critical States of Light water Research Reactors Accidents

    SciTech Connect

    Khaled, S.M.

    2015-07-01

    This work presents the verification and testing of both the neutronic and thermal-hydraulic response to positive reactivity-initiated power excursion accidents in small light-water research reactors. Some research reactors have to build their own severe-accident code systems. In this sense, a 3D space-time-dependent neutron diffusion model with thermal-hydraulic feedback has been introduced, compared and tested, both experimentally at a criticality of 14 cents and theoretically up to 1.5 $, against a number of similar codes. The results show that no core failure or moderator boiling is expected. (author)

  11. Calibrated predictions for multivariate competing risks models.

    PubMed

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
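
    The paper's central caution, that treating competing events as independent censoring overestimates risk, can be verified with a small Monte Carlo sketch. The exponential rates and horizon below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Two competing latent event times: disease onset (rate 0.02 per year)
# and death from other causes (rate 0.03 per year).
t_disease = rng.exponential(1 / 0.02, n)
t_death   = rng.exponential(1 / 0.03, n)

horizon = 10.0
# True cumulative incidence: disease occurs first AND before the horizon.
true_risk = np.mean((t_disease < t_death) & (t_disease < horizon))

# Naive estimate that treats death as independent right censoring and
# reports the marginal P(disease time < horizon), ignoring the competitor.
naive_risk = np.mean(t_disease < horizon)

print(f"true cumulative incidence: {true_risk:.3f}")
print(f"naive estimate:            {naive_risk:.3f}")
```

    The naive figure exceeds the true cumulative incidence because subjects who die first can never experience the disease, exactly the miscalibration the simulation study demonstrates.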

  12. SEC proton prediction model: verification and analysis.

    PubMed

    Balch, C C

    1999-06-01

    This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.

  13. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times. Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

  14. Constructing and Validating a Decadal Prediction Model

    NASA Astrophysics Data System (ADS)

    Foss, I.; Woolf, D. K.; Gagnon, A. S.; Merchant, C. J.

    2010-05-01

    For the purpose of identifying potential sources of predictability of Scottish mean air temperature (SMAT), a redundancy analysis (RA) was carried out to quantitatively assess the predictability of SMAT from North Atlantic SSTs as well as the temporal consistency of this predictability. The RA was performed between the main principal components of North Atlantic SST anomalies and SMAT anomalies for two time periods: 1890-1960 and 1960-2006. The RA models developed using data from the 1890-1960 period were validated using the 1960-2006 period; in a similar way the model developed based on the 1960-2006 period was validated using data from the 1890-1960 period. The results indicate the potential to forecast decadal trends in SMAT for all seasons in the 1960-2006 period, and for all seasons except winter in the 1890-1960 period, with the best predictability achieved in summer. The statistical models show the best performance when SST anomalies in the European shelf seas (45°N-65°N, 20°W-20°E) rather than those for the SSTs over the entire North Atlantic (30°N-75°N, 80°W-30°E) were used as a predictor. The results of the RA demonstrated that similar SST modes were responsible for predictions in the first and second half of the 20th century, establishing temporal consistency, though with stronger influence in the more recent half. The SST pattern responsible for explaining the largest amount of variance in SMAT was stronger in the second half of the 20th century and showed increasing influence from the area of the North Sea, possibly due to faster sea-surface warming in that region in comparison with the open North Atlantic. The Wavelet Transform (WT), Cross Wavelet Transform (XWT) and Wavelet Coherence (WTC) techniques were used to analyse RA-model-based forecasts of SMAT in the time-frequency domain. Wavelet-based techniques applied to the predicted and observed time series revealed a good performance of RA models to predict the frequency variability

  15. Combining Modeling and Gaming for Predictive Analytics

    SciTech Connect

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  16. Modelling language evolution: Examples and predictions

    NASA Astrophysics Data System (ADS)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  17. Persistence and predictability in a perfect model

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried D.; Suarez, Max J.; Schemm, Jae-Kyung

    1992-01-01

    A realistic two-level GCM is used to examine the relationship between predictability and persistence. Predictability is measured by the average divergence of ensembles of solutions starting from perturbed initial conditions, and persistence is defined in terms of the autocorrelation function based on a single long-term model integration. The average skill of the dynamical forecasts is compared with the skill of simple persistence-based statistical forecasts. For initial errors comparable in magnitude to present-day analysis errors, the statistical forecast loses all skill after about one week, reflecting the lifetime of the lowest frequency fluctuations in the model. Large ensemble mean dynamical forecasts would be expected to remain skillful for about 3 wk. The disparity between the skill of the statistical and dynamical forecasts is greater for the higher frequency modes, which have little memory beyond 1 d, yet remain predictable for about 2 wk. The results are analyzed in terms of two characteristic time scales.
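
    The relationship between persistence and forecast skill can be illustrated with an AR(1) toy series, for which the skill of a persistence forecast at lag L is simply the autocorrelation at that lag. This is a self-contained sketch, not the GCM analysis described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) series standing in for a low-frequency model mode:
# x_t = a * x_{t-1} + white noise.
a, n = 0.9, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()

def autocorr(x, lag):
    """Sample autocorrelation at the given lag."""
    x0 = x - x.mean()
    return np.dot(x0[:-lag], x0[lag:]) / np.dot(x0, x0)

# A persistence forecast x_hat(t+L) = x(t) has anomaly correlation equal
# to the autocorrelation at lag L, which decays as a**L for AR(1).
for lag in (1, 5, 10):
    print(lag, autocorr(x, lag), a ** lag)
```

    A dynamical forecast can stay skillful well after this autocorrelation has decayed, which is exactly the disparity the abstract reports for the high-frequency modes.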

  19. Model Predictive Control of Sewer Networks

    NASA Astrophysics Data System (ADS)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    Developments in solutions for the management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  20. Climate Models have Accurately Predicted Global Warming

    NASA Astrophysics Data System (ADS)

    Nuccitelli, D. A.

    2016-12-01

    Climate model projections of global temperature changes over the past five decades have proven remarkably accurate, and yet the myth that climate models are inaccurate or unreliable has formed the basis of many arguments denying anthropogenic global warming and the risks it poses to the climate system. Here we compare average global temperature predictions made by both mainstream climate scientists using climate models, and by contrarians using less physically-based methods. We also explore the basis of the myth by examining specific arguments against climate model accuracy and their common characteristics of science denial.

  1. Advancements in predictive plasma formation modeling

    NASA Astrophysics Data System (ADS)

    Purvis, Michael A.; Schafgans, Alexander; Brown, Daniel J. W.; Fomenkov, Igor; Rafac, Rob; Brown, Josh; Tao, Yezheng; Rokitski, Slava; Abraham, Mathew; Vargas, Mike; Rich, Spencer; Taylor, Ted; Brandt, David; Pirati, Alberto; Fisher, Aaron; Scott, Howard; Koniges, Alice; Eder, David; Wilks, Scott; Link, Anthony; Langer, Steven

    2016-03-01

    We present highlights from plasma simulations performed in collaboration with Lawrence Livermore National Labs. This modeling is performed to advance the rate of learning about optimal EUV generation for laser produced plasmas and to provide insights where experimental results are not currently available. The goal is to identify key physical processes necessary for an accurate and predictive model capable of simulating a wide range of conditions. This modeling will help to drive source performance scaling in support of the EUV Lithography roadmap. The model simulates pre-pulse laser interaction with the tin droplet and follows the droplet expansion into the main pulse target zone. Next, the interaction of the expanded droplet with the main laser pulse is simulated. We demonstrate the predictive nature of the code and provide comparison with experimental results.

  2. An exponential filter model predicts lightness illusions

    PubMed Central

    Zeman, Astrid; Brooks, Kevin R.; Ghebreab, Sennay

    2015-01-01

    Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. 
Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects.
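
    A single-filter toy version of the exponential-filter idea reproduces the direction of the Simultaneous Contrast Illusion. The stimulus geometry, kernel size, and spatial scale below are illustrative choices, not the paper's multi-size, multi-shape filter family.

```python
import numpy as np

# Simultaneous Contrast stimulus: identical gray patches on dark and light fields.
img = np.full((50, 100), 0.2)
img[:, 50:] = 0.8              # light background on the right half
img[20:30, 20:30] = 0.5        # target patch on the dark side
img[20:30, 70:80] = 0.5        # identical patch on the light side

def exp_kernel(size, scale):
    """Isotropic exponential-decay kernel, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-np.sqrt(xx**2 + yy**2) / scale)
    return k / k.sum()

def convolve2d(img, k):
    """Plain 2D convolution with edge padding (slow but dependency-free)."""
    ksz = k.shape[0]
    pad = ksz // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+ksz, j:j+ksz] * k)
    return out

# Signal minus its exponentially weighted surround ~ predicted lightness.
filtered = img - convolve2d(img, exp_kernel(21, 5.0))
left  = filtered[20:30, 20:30].mean()   # patch on dark background
right = filtered[20:30, 70:80].mean()   # patch on light background
print("predicts left patch lighter:", left > right)
```

    The patch on the dark surround yields the larger response, matching the perceived direction of the illusion; the full model pools many such filters and adds optional contrast normalization.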

  3. DKIST Polarization Modeling and Performance Predictions

    NASA Astrophysics Data System (ADS)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large-aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime-sky-based polarization calibrations of the 4 m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence of HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6-month HiVIS daytime-sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.

  4. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  6. Predictive analytics can support the ACO model.

    PubMed

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  7. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  8. Aspects of modeling uncertainty and prediction

    SciTech Connect

    McKay, M.D.

    1993-12-31

    Probabilistic assessment of variability in model prediction considers input uncertainty and structural uncertainty. For input uncertainty, understanding of practical origins of probabilistic treatments as well as restrictions and limitations of methodology is much more developed than for structural uncertainty. There is a simple basis for structural uncertainty that parallels that for input uncertainty. Although methodologies for assessing structural uncertainty for models in general are very limited, more options are available for submodels.
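    The input-uncertainty side of this distinction is routinely handled by Monte Carlo sampling: draw inputs from their probability distributions, run the model, and summarize the spread of outputs. A minimal sketch follows; the toy model and the input distributions are invented for illustration, not taken from the report.

```python
import random
import statistics

def model(x, y):
    # Toy deterministic model; stands in for any simulation code.
    return x ** 2 + 0.5 * y

def propagate_input_uncertainty(n_samples=10000, seed=42):
    """Monte Carlo propagation of input uncertainty through the model:
    sample uncertain inputs, evaluate, and summarize the output spread."""
    rng = random.Random(seed)
    outputs = [model(rng.gauss(1.0, 0.1), rng.gauss(2.0, 0.2))
               for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, sd = propagate_input_uncertainty()
```

    Structural uncertainty, by contrast, would require repeating this exercise over a set of alternative model forms, which is exactly where the abstract notes methodology is limited.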

  9. Nearshore Operational Model for Rip Current Predictions

    NASA Astrophysics Data System (ADS)

    Sembiring, L. E.; Van Dongeren, A. R.; Van Ormondt, M.; Winter, G.; Roelvink, J.

    2012-12-01

    A coastal operational model system can serve as a tool to monitor and predict coastal hazards and to acquire up-to-date information on coastal state indicators. The objective of this research is to develop a nearshore operational model system for the Dutch coast focusing on swimmer safety. For that purpose, an operational model system has been built which can predict conditions up to 48 hours ahead. The model system consists of three nested model domains covering the North Sea, the Dutch coastline, and a local model of the area of interest. Three process-based models are used to simulate physical processes within the system: SWAN for wave propagation, Delft3D-Flow for hydrodynamic simulation, and XBeach for the nearshore model. The SWAN model is forced by wind fields from operational HiRLAM as well as two-dimensional wave spectral data from WaveWatch 3 Global at the ocean boundaries. The Delft3D-Flow model is forced with tidal constants for several important astronomical constituents at the boundaries as well as HiRLAM wind fields. For the local XBeach model, up-to-date bathymetry will be obtained by assimilating model computations with Argus video observations. A hindcast is carried out on the Continental Shelf Model, covering the North Sea and nearby Atlantic Ocean, for the year 2009. Model skill is represented by several statistical measures such as rms error and bias. In general, the results show that the model system is in good agreement with field data. For SWAN results, integral significant wave heights are predicted well for all wave buoys considered, with rms errors ranging from 0.16 m for May (observed mean significant wave height 1.08 m) up to 0.39 m for November (observed mean significant wave height 1.91 m).
However, it is found that the wave model slightly underestimates the observation for the period of June, especially

  10. Quantifying Prediction Fidelity in Ocean Circulation Models

    DTIC Science & Technology

    2013-09-30

    Quantifying Prediction Fidelity in Ocean Circulation Models. Mohamed Iskandarani, Rosenstiel School of Marine and Atmospheric Science (RSMAS), University of Miami, 4600 Rickenbacker Causeway, Miami, FL 33149.

  11. A Predictive Model for MSSW Student Success

    ERIC Educational Resources Information Center

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  12. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
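    The receding-horizon pattern the abstract refers to can be sketched for a scalar linear system: optimize a control sequence over a finite horizon, apply only the first input, then re-solve at the next step. The dynamics, horizon, cost weights, and the simple grid search below are all illustrative stand-ins for the paper's optimization with resolvability guarantees.

```python
def mpc_step(x, horizon=5, a=1.2, b=1.0, candidates=None):
    """One receding-horizon step for x[k+1] = a*x[k] + b*u: search over
    constant-input candidates and return the input minimizing a
    finite-horizon quadratic cost."""
    if candidates is None:
        candidates = [i / 10.0 for i in range(-20, 21)]  # u in [-2, 2]
    best_u, best_cost = None, float("inf")
    for u in candidates:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = a * xk + b * u            # predict with the model
            cost += xk ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: apply only the first input, then re-solve (receding horizon).
x = 1.0
for _ in range(20):
    u = mpc_step(x)
    x = 1.2 * x + 1.0 * u
```

    Even though the open-loop plant is unstable (a = 1.2), the repeated re-optimization keeps the closed-loop state bounded near the origin.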

  13. PREDICTIVE MODELING OF CHOLERA OUTBREAKS IN BANGLADESH

    PubMed Central

    Koepke, Amanda A.; Longini, Ira M.; Halloran, M. Elizabeth; Wakefield, Jon; Minin, Vladimir N.

    2016-01-01

    Despite seasonal cholera outbreaks in Bangladesh, little is known about the relationship between environmental conditions and cholera cases. We seek to develop a predictive model for cholera outbreaks in Bangladesh based on environmental predictors. To do this, we estimate the contribution of environmental variables, such as water depth and water temperature, to cholera outbreaks in the context of a disease transmission model. We implement a method which simultaneously accounts for disease dynamics and environmental variables in a Susceptible-Infected-Recovered-Susceptible (SIRS) model. The entire system is treated as a continuous-time hidden Markov model, where the hidden Markov states are the numbers of people who are susceptible, infected, or recovered at each time point, and the observed states are the numbers of cholera cases reported. We use a Bayesian framework to fit this hidden SIRS model, implementing particle Markov chain Monte Carlo methods to sample from the posterior distribution of the environmental and transmission parameters given the observed data. We test this method using both simulation and data from Mathbaria, Bangladesh. Parameter estimates are used to make short-term predictions that capture the formation and decline of epidemic peaks. We demonstrate that our model can successfully predict an increase in the number of infected individuals in the population weeks before the observed number of cholera cases increases, which could allow for early notification of an epidemic and timely allocation of resources. PMID:27746850
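    The SIRS skeleton underlying this hidden Markov formulation can be sketched deterministically as follows; the parameter values are illustrative and not fitted to the Mathbaria data, and the particle MCMC inference layer is omitted.

```python
def simulate_sirs(beta, gamma=0.1, rho=0.01, n=100000, i0=10, steps=365):
    """Discrete-time SIRS dynamics: S -> I -> R -> S.
    beta: transmission rate (in the paper, driven by environmental
    covariates such as water depth and temperature); gamma: recovery
    rate; rho: loss-of-immunity rate."""
    s, i, r = n - i0, i0, 0
    history = []
    for _ in range(steps):
        new_inf = beta * s * i / n   # S -> I
        new_rec = gamma * i          # I -> R
        new_sus = rho * r            # R -> S (waning immunity)
        s += new_sus - new_inf
        i += new_inf - new_rec
        r += new_rec - new_sus
        history.append(i)
    return history

history = simulate_sirs(beta=0.3)
peak = max(history)
```

    With beta/gamma = 3 the epidemic rises to a peak and then declines toward an endemic level, the basic shape the hidden SIRS model tracks behind the observed case counts.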

  14. A SCOPING STUDY: Development of Probabilistic Risk Assessment Models for Reactivity Insertion Accidents During Shutdown In U.S. Commercial Light Water Reactors

    SciTech Connect

    S. Khericha

    2011-06-01

    This report documents a scoping study for developing generic simplified fuel damage risk models for quantitative analysis of inadvertent reactivity insertion events during shutdown (SD) in light water pressurized and boiling water reactors. In the past, nuclear fuel reactivity accidents have been analyzed, mainly deterministically but also probabilistically, for at-power and SD operations of nuclear power plants (NPPs). Since then, many NPPs have had power up-rates and longer refueling intervals, which resulted in fuel configurations that may potentially respond differently (in an undesirable way) to reactivity accidents. Also, as shown in a recent event, several inadvertent operator actions caused a potential nuclear fuel reactivity insertion accident during SD operations. The set of inadvertent operator actions is likely to be plant- and operation-state specific and could lead to accident sequences. This study is an outcome of the concern that arose after the inadvertent withdrawal of control rods at Dresden Unit 3 in 2008, when operator actions in the plant inadvertently withdrew three control rods from the reactor without the knowledge of the main control room operator. The purpose of this Standardized Plant Analysis Risk (SPAR) model development project is to develop simplified SPAR models that can be used by staff analysts to perform risk analyses of operating events and/or conditions occurring during SD operation. These types of accident scenarios are dominated by operator actions (e.g., misalignment of valves, failure to follow procedures, and errors of commission). Human error probabilities specific to this model were assessed using the methodology developed for SPAR model human error evaluations. The event trees, fault trees, basic event data and data sources for the model are provided in the report. The end state is defined as the reactor becoming critical.
The scoping study includes a brief literature search/review of historical events, developments of

  15. Predictive modeling in pediatric traumatic brain injury using machine learning.

    PubMed

    Chong, Shu-Ling; Liu, Nan; Barbier, Sylvaine; Ong, Marcus Eng Hock

    2015-03-17

    Pediatric traumatic brain injury (TBI) constitutes a significant burden and diagnostic challenge in the emergency department (ED). While large North American research networks have derived clinical prediction rules for the head-injured child, these may not be generalizable to practices in countries with traditionally low rates of computed tomography (CT). We aim to study predictors for moderate to severe TBI in our ED population aged < 16 years. This was a retrospective case-control study based on data from a prospective surveillance head injury database. Cases were included if patients presented from 2006 to 2014, with moderate to severe TBI. Controls were age-matched head-injured children from the registry, obtained in a 4:1 control-to-case ratio. These children remained well at diagnosis and follow-up. Demographics, history, and physical examination findings were analyzed, and patients were followed up for the clinical course and outcome measures of death and neurosurgical intervention. To predict moderate to severe TBI, we built a machine learning (ML) model and a multivariable logistic regression model and compared their performances by means of Receiver Operating Characteristic (ROC) analysis. There were 39 cases and 156 age-matched controls. The following 4 predictors remained statistically significant after multivariable analysis: involvement in a road traffic accident, a history of loss of consciousness, vomiting, and signs of base of skull fracture. The logistic regression model was created with these 4 variables, while the ML model was built with 3 extra variables, namely the presence of seizure, confusion and clinical signs of skull fracture. At the optimal cutoff scores, the ML method improved upon the logistic regression method with respect to the area under the ROC curve (0.98 vs 0.93), sensitivity (94.9% vs 82.1%), specificity (97.4% vs 92.3%), PPV (90.2% vs 72.7%), and NPV (98.7% vs 95.4%). In this study, we demonstrated the feasibility of using machine

  16. A stepwise model to predict monthly streamflow

    NASA Astrophysics Data System (ADS)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered as a function of the antecedent month's flow (t - 1), and it is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as an alternative to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
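    The core structure described here, one multiplier K per calendar month applied to the antecedent month's flow, can be sketched directly. The sketch below fits each K by closed-form least squares as a simple stand-in for the paper's GEP/NGRGO search, and the synthetic flow series is invented for a quick self-check.

```python
def fit_monthly_k(flows):
    """Fit one multiplier K per calendar month so that Q_t ~= K[m] * Q_{t-1},
    using least squares as a simple stand-in for the GEP/NGRGO search."""
    num = [0.0] * 12
    den = [0.0] * 12
    for t in range(1, len(flows)):
        m = t % 12                      # calendar month of Q_t
        num[m] += flows[t] * flows[t - 1]
        den[m] += flows[t - 1] ** 2
    return [n / d if d else 1.0 for n, d in zip(num, den)]

# Synthetic flows with a known monthly pattern for a quick check.
true_k = [0.8, 0.9, 1.2, 1.5, 1.3, 1.0, 0.7, 0.6, 0.7, 0.9, 1.1, 1.0]
flows = [100.0]
for t in range(1, 120):
    flows.append(true_k[t % 12] * flows[-1])
k = fit_monthly_k(flows)
```

    On noiseless synthetic data the twelve multipliers are recovered exactly; on real flows the optimizer choice and error measure matter, which is what the GEP/NGRGO procedure addresses.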

  17. Thermophysical properties of liquid UO2, ZrO2 and corium by molecular dynamics and predictive models

    NASA Astrophysics Data System (ADS)

    Kim, Woong Kee; Shim, Ji Hoon; Kaviany, Massoud

    2017-08-01

    Predicting the fate of accident-melted nuclear fuel-cladding requires the understanding of the thermophysical properties which are lacking or have large scatter due to high-temperature experimental challenges. Using equilibrium classical molecular dynamics (MD), we predict the properties of melted UO2 and ZrO2 and compare them with the available experimental data and the predictive models. The existing interatomic potential models have been developed mainly for the polymorphic solid phases of these oxides, so they cannot be used to predict all the properties accurately. We compare and decipher the distinctions of those MD predictions using the specific property-related autocorrelation decays. The predicted properties are density, specific heat, heat of fusion, compressibility, viscosity, surface tension, and the molecular and electronic thermal conductivities. After the comparisons, we provide readily usable temperature-dependent correlations (including UO2-ZrO2 compounds, i.e. corium melt).

  18. Disease prediction models and operational readiness.

    PubMed

    Corley, Courtney D; Pullum, Laura L; Hartley, David M; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M; Lancaster, Mary J

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with a focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  19. Disease Prediction Models and Operational Readiness

    PubMed Central

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with a focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  20. Can contaminant transport models predict breakthrough?

    USGS Publications Warehouse

    Peng, Wei-Shyuan; Hampton, Duane R.; Konikow, Leonard F.; Kambham, Kiran; Benegar, Jeffery J.

    2000-01-01

    A solute breakthrough curve measured during a two-well tracer test was successfully predicted in 1986 using specialized contaminant transport models. Water was injected into a confined, unconsolidated sand aquifer and pumped out 125 feet (38.3 m) away at the same steady rate. The injected water was spiked with bromide for over three days; the outflow concentration was monitored for a month. Based on previous tests, the horizontal hydraulic conductivity of the thick aquifer varied by a factor of seven among 12 layers. Assuming stratified flow with small dispersivities, two research groups accurately predicted breakthrough with three-dimensional (12-layer) models using curvilinear elements following the arc-shaped flowlines in this test. Can contaminant transport models commonly used in industry, which use rectangular blocks, also reproduce this breakthrough curve? The two-well test was simulated with four MODFLOW-based models: MT3D (FD and HMOC options), MODFLOWT, MOC3D, and MODFLOW-SURFACT. Using the same 12 layers and small dispersivity used in the successful 1986 simulations, these models fit almost as accurately as the models using curvilinear blocks. Subtle variations in the curves illustrate differences among the codes. Sensitivities of the results to number and size of grid blocks, number of layers, boundary conditions, and values of dispersivity and porosity are briefly presented. The fit between calculated and measured breakthrough curves degenerated as the number of layers and/or grid blocks decreased, reflecting a loss of model predictive power as the level of characterization lessened. Therefore, the breakthrough curve for most field sites can be predicted only qualitatively due to limited characterization of the hydrogeology and contaminant source strength.
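    The shape of a breakthrough curve like the one discussed here can be illustrated with the classic one-dimensional advection-dispersion solution (first term of Ogata-Banks), which is far simpler than the 3-D layered models in the paper; the velocity and dispersion coefficient below are invented for the sketch, not taken from the tracer test.

```python
import math

def breakthrough(x, t, v, d):
    """Approximate 1-D advection-dispersion breakthrough C/C0 at distance x
    and time t, for pore velocity v and dispersion coefficient d
    (leading term of the Ogata-Banks solution)."""
    if t <= 0:
        return 0.0
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(d * t)))

# Relative concentration 38.3 m downstream for a front moving at
# 1 m/day with D = 0.5 m^2/day (illustrative values only).
curve = [breakthrough(38.3, t, 1.0, 0.5) for t in range(1, 81)]
```

    The curve rises monotonically from near zero to near one as the solute front passes the observation point, which is the qualitative behavior the numerical codes are compared against.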

  1. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative
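    At its core, a reverse estimation of this kind scales a unit-release dispersion calculation so that modeled concentrations match the measurements. The single-nuclide, constant-rate least-squares sketch below is a drastic simplification of the WSPEEDI-II/SEA-GEARN-FDM procedure, and all numbers are invented for illustration.

```python
def estimate_release_rate(measured, unit_response):
    """Reverse (source-receptor) estimation: find the release rate that
    best scales a unit-release dispersion run to match observed air
    concentrations, via least squares over several monitoring points."""
    num = sum(m * r for m, r in zip(measured, unit_response))
    den = sum(r * r for r in unit_response)
    return num / den

# Hypothetical unit-release concentrations (per unit release rate) at
# three monitoring points, and the corresponding measurements.
unit = [2.0e-9, 5.0e-10, 1.2e-9]
obs = [4.0e-3, 1.0e-3, 2.4e-3]
rate = estimate_release_rate(obs, unit)
```

    Repeating this fit per nuclide and per time window yields the time-varying source term; the deposition scheme improvements in the paper matter because they change the unit-response fields being scaled.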

  2. Theoretical Modeling for Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1984-01-01

    The goal is to utilize predictability theory and numerical experimentation to identify and understand some of the dynamical processes which must be modeled more realistically if large-scale numerical weather predictions are to be improved. The emphasis is on the use of relatively simple models to explore the properties of physically comprehensive general circulation models (GCM's). A global linear quasi-geostrophic model and the Goddard Laboratory for Atmospheric Sciences (GLAS) GCM were used to investigate several mechanisms which are responsible for the decay of large-scale forecast skill in mid-latitude numerical weather predictions. Five-day forecasts for an ensemble of cases were made using First GARP Global Experiment data. It was found that forecast skill depends crucially on the specification of the stationary forcing. A lack of stationary forcing leads to spurious westward propagation of the ultralong waves. Forecasts made with stationary forcings derived from climatological data are superior to those using forcings inferred from observations immediately preceding the forecast period. Interhemispheric forecast differences were analyzed, and the model errors were compared to errors of a simple persistence-damped-to-climatology scheme and to errors of the GLAS GCM.

  3. Illuminating Flash Point: Comprehensive Prediction Models.

    PubMed

    Le, Tu C; Ballard, Mathew; Casey, Phillip; Liu, Ming S; Winkler, David A

    2015-01-01

    Flash point is an important property of chemical compounds that is widely used to evaluate flammability hazard. However, there is often a significant gap between the demand for experimental flash point data and their availability. Furthermore, the determination of flash point is difficult and costly, particularly for some toxic, explosive, or radioactive compounds. The development of a reliable and widely applicable method to predict flash point is therefore essential. In this paper, the construction of a quantitative structure-property relationship model with excellent performance and domain of applicability is reported. It uses the largest data set to date of 9399 chemically diverse compounds, with flash points spanning from less than -130 °C to over 900 °C. The model employs only computed parameters, eliminating the need for experimental data that some earlier computational models required. The model allows accurate prediction of flash point for a broad range of compounds that are unavailable or not yet synthesized. This single model, with very broad chemical and flash point applicability, allows accurate predictions of this important property for a wide range of new materials. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
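    The essence of such a quantitative structure-property relationship is a regression from computed molecular descriptors to the measured property. The one-descriptor least-squares toy below is only a stand-in for the paper's multi-descriptor model, and both the descriptor values and flash points are hypothetical.

```python
def fit_linear_qspr(x, y):
    """One-descriptor QSPR fit (flash point vs. a computed descriptor)
    by closed-form least squares; real models use many descriptors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical descriptor values and flash points (deg C) for five compounds.
desc = [80.0, 110.0, 150.0, 210.0, 300.0]
fp = [-10.0, 15.0, 45.0, 95.0, 170.0]
a, b = fit_linear_qspr(desc, fp)
```

    Once fitted, the model predicts flash points for compounds with no experimental data, which is the use case the abstract emphasizes.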

  4. A predictive model for artificial mechanical cochlea

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz U.; Adiba, Afifa; Banerjee, Sourav

    2015-03-01

    If the inner ear is damaged, cochlear implantation is essential to restore hearing. The existing implantable cochlea is an electronic device, usually placed outside the ear, that receives sound from the environment, converts it into electric impulses and sends them to the auditory nerve, bypassing the damaged cochlea. However, due to growing demand, researchers are interested in fabricating artificial mechanical cochleas to overcome the limitations of the electronic cochlea. Only a handful of studies published in the last couple of years have demonstrated fabrication of basilar membranes mimicking cochlear operation. The basilar membrane plays the most important role in the human cochlea, responding to sonic frequencies through material properties that vary from the basal to the apical end. Only a few sonic frequencies have been sensed with the proposed models, and no process was described for improving their frequency selectivity to sense the entire sonic frequency range. Thus, we argue that a predictive model is the missing link and an utmost necessity. Hence, in this study, we develop a multi-scale predictive model of the basilar membrane so that the sensing potential of the artificial cochlea can be designed and tuned predictively by altering model parameters.

  5. Robust model predictive control of Wiener systems

    NASA Astrophysics Data System (ADS)

    Biagiola, S. I.; Figueroa, J. L.

    2011-03-01

    Block-oriented models (BOMs) have been shown to be appealing and efficient as nonlinear representations for many applications. They remain both valid and simple over a more extensive operating region than time-invariant linear models. In this work, Wiener models are considered. They are one of the most diffused BOMs, and their structure consists of linear dynamics in cascade with a static nonlinear block. In particular, the problem of controlling these systems in the presence of uncertainty is treated. The proposed methodology makes use of a robust identification procedure in order to obtain a robust model to represent the uncertain system. This model is then employed to design a model predictive controller. The mathematical problem involved in the controller design is formulated in the context of existing linear matrix inequality (LMI) theory. The main feature of this approach is that it takes advantage of the static nature of the nonlinearity, which allows the control problem to be solved by focusing only on the linear dynamics. This formulation results in a simplified design procedure, because the original nonlinear model predictive control (MPC) problem turns into a linear one.
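    The Wiener structure described here, linear dynamics followed by a static nonlinearity, can be sketched in a few lines. The first-order dynamics and the cubic output map below are illustrative choices, not the systems studied in the paper.

```python
def wiener_model(u_seq, a=0.7, b=0.3, nonlinearity=lambda v: v ** 3):
    """Wiener block-oriented model: first-order linear dynamics
    v[k] = a*v[k-1] + b*u[k], followed by a static nonlinearity y = f(v).
    The cubic f is an illustrative choice."""
    v, y_seq = 0.0, []
    for u in u_seq:
        v = a * v + b * u              # linear dynamic block
        y_seq.append(nonlinearity(v))  # static nonlinear block
    return y_seq

y = wiener_model([1.0] * 50)
```

    Because the nonlinearity is static and memoryless, it can be inverted (or bounded) separately, which is exactly the property the LMI-based MPC design exploits to work only with the linear block.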

  6. Genetic models of homosexuality: generating testable predictions

    PubMed Central

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  7. Genetic models of homosexuality: generating testable predictions.

    PubMed

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  8. Characterizing Attention with Predictive Network Models.

    PubMed

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While they are among the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Explorative spatial analysis of traffic accident statistics and road mortality among the provinces of Turkey.

    PubMed

    Erdogan, Saffet

    2009-10-01

    The aim of the study is to describe the inter-province differences in traffic accidents and mortality on the roads of Turkey. Two different risk indicators were used to evaluate the road safety performance of the provinces: the ratios between the number of persons killed in road traffic accidents (1) or the number of accidents (2) (numerators) and the exposure to traffic risk (denominator). Population and the number of registered motor vehicles in each province were used as denominators individually. Spatial analyses were applied to the mean annual rate of deaths and to the number of fatal accidents calculated for the period 2001-2006. Empirical Bayes smoothing was used to remove background noise from the raw death and accident rates, because several provinces are sparsely populated and have small numbers of accidents and deaths. Global and local spatial autocorrelation analyses were performed to show whether provinces with high rates of deaths and accidents cluster or are located close together by chance. The spatial distribution of provinces with high rates of deaths and accidents was nonrandom, with clustering significant at P<0.05 in the spatial autocorrelation analyses. Concentrations of fatal accidents and deaths were located in the provinces containing the roads connecting the Istanbul, Ankara, and Antalya provinces. Accident and death rates were also modeled against independent variables such as the number of motor vehicles and length of roads using geographically weighted regression with forward step-wise elimination, taking P<0.05 as the level of statistical significance. Large differences were found between provinces in the rates of deaths and accidents according to the denominator used. The geographically weighted regression analyses gave significantly better predictions for both accident rates and death rates than ordinary least-squares regression.
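    The empirical Bayes smoothing step described above can be sketched in Python. This is a generic method-of-moments (Marshall-style) rate smoother with invented example counts, not the authors' implementation:

    ```python
    def eb_smooth(events, pops):
        """Marshall's empirical Bayes rate smoothing (method of moments).

        Each raw rate is shrunk toward the global mean rate; sparsely
        populated units (noisy rates) are shrunk the most.
        """
        rates = [e / p for e, p in zip(events, pops)]
        n = sum(pops)
        m = sum(events) / n                       # global prior mean rate
        nbar = n / len(pops)                      # mean population per unit
        # method-of-moments estimate of the prior (between-unit) variance
        s2 = sum(p * (r - m) ** 2 for p, r in zip(pops, rates)) / n - m / nbar
        s2 = max(s2, 0.0)
        smoothed = []
        for r, p in zip(rates, pops):
            w = s2 / (s2 + m / p)                 # shrinkage weight in [0, 1]
            smoothed.append(w * r + (1.0 - w) * m)
        return smoothed
    ```

    Because the weight depends on population, a province with few inhabitants is pulled strongly toward the global mean, while a large province keeps a rate close to its raw value.
    
    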

  10. Modeling Benthic Sediment Processes to Predict Water ...

    EPA Pesticide Factsheets

    The benthic sediment acts as a huge reservoir of particulate and dissolved material (within interstitial water) which can contribute to the loading of contaminants and nutrients to the water column. A benthic sediment model is presented in this report to predict spatial and temporal benthic fluxes of nutrients and chemicals into the water column in Narragansett Bay. Benthic flux is essential to properly model water quality and ecology in estuarine and coastal systems.

  11. Prediction failure of a wolf landscape model

    USGS Publications Warehouse

    Mech, L.D.

    2006-01-01

    I compared 101 wolf (Canis lupus) pack territories formed in Wisconsin during 1993-2004 to the logistic regression predictive model of Mladenoff et al. (1995, 1997, 1999). Of these, 60% were located in areas of putative habitat suitability <50%, while many areas of suitability >50% remained unoccupied by known packs after 24 years of recolonization. This model was a poor predictor of wolf recolonizing locations in Wisconsin, apparently because it failed to consider the adaptability of wolves. Such models should be used cautiously in wolf-management or restoration plans.

  12. Modeling Benthic Sediment Processes to Predict Water ...

    EPA Pesticide Factsheets

    The benthic sediment acts as a huge reservoir of particulate and dissolved material (within interstitial water) which can contribute to the loading of contaminants and nutrients to the water column. A benthic sediment model is presented in this report to predict spatial and temporal benthic fluxes of nutrients and chemicals into the water column in Narragansett Bay. Benthic flux is essential to properly model water quality and ecology in estuarine and coastal systems.

  13. A predictive model of human performance.

    NASA Technical Reports Server (NTRS)

    Walters, R. F.; Carlson, L. D.

    1971-01-01

    An attempt is made to develop a model describing the overall responses of humans to exercise and environmental stresses, for predicting exhaustion as a function of an individual's physical characteristics. The principal components of the model are a steady-state description of circulation and a dynamic description of thermal regulation. The circulatory portion of the system accepts changes in work load and oxygen pressure, while the thermal portion is influenced by the external factors of ambient temperature, humidity and air movement, which affect skin blood flow. The operation of the model is discussed and its structural details are given.

  14. STELLA experiment: Design and model predictions

    NASA Astrophysics Data System (ADS)

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1999-07-01

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ˜1-μm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  15. Internal Flow Thermal/Fluid Modeling of STS-107 Port Wing in Support of the Columbia Accident Investigation Board

    NASA Technical Reports Server (NTRS)

    Sharp, John R.; Kittredge, Ken; Schunk, Richard G.

    2003-01-01

    As part of the aero-thermodynamics team supporting the Columbia Accident Investigation Board (CAB), the Marshall Space Flight Center was asked to perform engineering analyses of internal flows in the port wing. The aero-thermodynamics team was split into internal flow and external flow teams with the support being divided between shorter timeframe engineering methods and more complex computational fluid dynamics. In order to gain a rough order of magnitude type of knowledge of the internal flow in the port wing for various breach locations and sizes (as theorized by the CAB to have caused the Columbia re-entry failure), a bulk venting model was required to input boundary flow rates and pressures to the computational fluid dynamics (CFD) analyses. This paper summarizes the modeling that was done by MSFC in Thermal Desktop. A venting model of the entire Orbiter was constructed in FloCAD based on Rockwell International's flight substantiation analyses and the STS-107 reentry trajectory. Chemical equilibrium air thermodynamic properties were generated for SINDA/FLUINT's fluid property routines from a code provided by Langley Research Center. In parallel, a simplified thermal mathematical model of the port wing, including the Thermal Protection System (TPS), was based on more detailed Shuttle re-entry modeling previously done by the Dryden Flight Research Center. Once the venting model was coupled with the thermal model of the wing structure with chemical equilibrium air properties, various breach scenarios were assessed in support of the aero-thermodynamics team. The construction of the coupled model and results are presented herein.

  16. Modeling of leachable 137Cs in throughfall and stemflow for Japanese forest canopies after Fukushima Daiichi Nuclear Power Plant accident.

    PubMed

    Loffredo, Nicolas; Onda, Yuichi; Kawamori, Ayumi; Kato, Hiroaki

    2014-09-15

    The Fukushima accident dispersed significant amounts of radioactive cesium (Cs) in the landscape. Our research investigated, from June 2011 to November 2013, the mobility of leachable Cs in forest canopies. In particular, (137)Cs and (134)Cs activity concentrations were measured in rainfall, throughfall, and stemflow in broad-leaf and cedar forests in an area located 40 km from the power plant. Leachable (137)Cs loss was modeled by a double exponential (DE) model. This model could not reproduce the variation in activity concentration observed. In order to refine the DE model, the main measurable physical parameters (rainfall intensity, wind velocity, and snowfall occurrence) were assessed, and rainfall was identified as the dominant factor controlling the observed variation. A corrective factor was then developed to incorporate rainfall intensity in an improved DE model. With the original DE model, we estimated total (137)Cs loss by leaching from canopies to be 72 ± 4%, 67 ± 4%, and 48 ± 2% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. In contrast, with the improved DE model, the total (137)Cs loss by leaching was estimated to be 34 ± 2%, 34 ± 2%, and 16 ± 1% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. The improved DE model corresponds better to observed data in the literature. Understanding (137)Cs and (134)Cs forest dynamics is important for forecasting future contamination of forest soils around the FDNPP. It also provides a basis for understanding forest transfers in future potential nuclear disasters.
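    The double exponential depletion idea described above can be sketched as follows. Both the functional form of the rainfall correction (depleting against cumulative rainfall rather than elapsed time) and all parameter values are illustrative assumptions, not the study's fitted model:

    ```python
    import math

    def leachable_cs137(t_days, a0, frac_fast, k_fast, k_slow):
        """Double exponential (DE) depletion of leachable 137Cs in a canopy:
        a fast-washout pool plus a slowly depleting pool."""
        return a0 * (frac_fast * math.exp(-k_fast * t_days)
                     + (1.0 - frac_fast) * math.exp(-k_slow * t_days))

    def leachable_cs137_rain(cum_rain_mm, a0, frac_fast, k_fast, k_slow):
        """Illustrative 'improved DE' variant: deplete the two pools against
        cumulative rainfall instead of time, reflecting the finding that
        rainfall dominates the washout (an assumed formulation, shown only
        to convey the idea of a rainfall-intensity corrective factor)."""
        return a0 * (frac_fast * math.exp(-k_fast * cum_rain_mm)
                     + (1.0 - frac_fast) * math.exp(-k_slow * cum_rain_mm))
    ```

    In the rainfall-driven form, dry periods cause no depletion at all, which is one way a corrective factor can reconcile a DE model with rainfall-controlled observations.
    
    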

  17. SAS4A: A computer model for the analysis of hypothetical core disruptive accidents in liquid metal reactors

    SciTech Connect

    Tentner, A.M.; Birgersson, G.; Cahalan, J.E.; Dunn, F.E.; Kalimullah; Miles, K.J.

    1987-01-01

    To ensure that the public health and safety are protected under any accident conditions in a Liquid Metal Fast Breeder Reactor (LMFBR), many accidents are analyzed for their potential consequences. The SAS4A code system, described in this paper, provides such an analysis capability, including the ability to analyze low probability events such as the Hypothetical Core Disruptive Accidents (HCDAs). The SAS4A code system has been designed to simulate all the events that occur in an LMFBR core during the initiating phase of a Hypothetical Core Disruptive Accident. During such postulated accident scenarios as the Loss-of-Flow and Transient Overpower events, a large number of interrelated physical phenomena occur during a relatively short time. These phenomena include transient heat transfer and hydrodynamic events, coolant boiling and fuel and cladding melting and relocation. Due to the strong neutronic feedback present in a nuclear reactor, these events can significantly influence the reactor power. The SAS4A code system is used in the safety analysis of nuclear reactors, in order to estimate the energetic potential of very low probability accidents. The results of SAS4A simulations are also used by reactor designers in order to build safer reactors and eliminate the possibility of any accident which could endanger public safety.

  18. Modelling the global atmospheric transport and deposition of radionuclides from the Fukushima Dai-ichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Christoudias, T.; Lelieveld, J.

    2013-02-01

    We modeled the global atmospheric dispersion and deposition of radionuclides released from the Fukushima Dai-ichi nuclear power plant accident. The EMAC atmospheric chemistry - general circulation model was used, with circulation dynamics nudged towards ERA-Interim reanalysis data. We applied a resolution of approximately 0.5 degrees in latitude and longitude (T255). The model accounts for emissions and transport of the radioactive isotopes 131I and 137Cs, and removal processes through precipitation, particle sedimentation and dry deposition. In addition, we simulated the release of 133Xe, a noble gas that can be regarded as a passive transport tracer of contaminated air. The source terms are based on Chino et al. (2011) and Stohl et al. (2012); especially the emission estimates of 131I are associated with a high degree of uncertainty. The calculated concentrations have been compared to station observations by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO). We calculated that about 80% of the radioactivity from Fukushima which was released to the atmosphere deposited into the Pacific Ocean. In Japan a large inhabited land area was contaminated by more than 40 kBq m-2. We also estimated the inhalation and 50-year dose by 137Cs, 134Cs and 131I to which the people in Japan are exposed.

  19. Modelling the global atmospheric transport and deposition of radionuclides from the Fukushima Dai-ichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Christoudias, T.; Lelieveld, J.

    2012-09-01

    We modeled the global atmospheric dispersion and deposition of radionuclides released from the Fukushima Dai-ichi nuclear power plant accident. The EMAC atmospheric chemistry - general circulation model was used, with circulation dynamics nudged towards ERA-Interim reanalysis data. We applied a resolution of approximately 0.5 degrees in latitude and longitude (T255). The model accounts for emissions and transport of the radioactive isotopes 131I and 137Cs, and removal processes through precipitation, particle sedimentation and dry deposition. In addition, we simulated the release of 133Xe, a noble gas that can be regarded as a passive transport tracer of contaminated air. The source terms are based on Stohl et al. (2012) and Chino et al. (2011); especially the emission estimates of 131I are associated with a high degree of uncertainty. The calculated concentrations have been compared to station observations by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO). We calculated that about 80% of the radioactivity from Fukushima which was released to the atmosphere deposited into the Pacific Ocean. In Japan a land area of 34 000 km2 around the reactors, inhabited by nearly 10 million people, was contaminated by more than 40 kBq m-2. We also estimated the inhalation and 50-yr dose by 137Cs and 131I to which the people in Japan have been exposed.

  20. Statistical assessment of predictive modeling uncertainty

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values with better statistical significance, and may help identify the best-fitting geophysical models more sharply.
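    The χ2 comparison with a combined data-plus-model covariance can be written compactly; this sketch conveys only the general testing idea, not the authors' exact implementation:

    ```python
    import numpy as np

    def chi2_combined(data, model, cov_data, cov_model):
        """Chi-square of the data-model residual using the combined
        covariance C = C_data + C_model, rather than C_data alone."""
        r = np.asarray(data, float) - np.asarray(model, float)
        c = np.asarray(cov_data, float) + np.asarray(cov_model, float)
        # solve C x = r instead of forming the explicit inverse
        return float(r @ np.linalg.solve(c, r))
    ```

    For a positive semi-definite model covariance, the combined-covariance χ2 can never exceed the data-only χ2, which is consistent with the lower observed χ2 values reported above.
    
    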

  1. Predictive Models for Carcinogenicity and Mutagenicity ...

    EPA Pesticide Factsheets

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for the development of alternative methods for screening and prediction, due to the large number of chemicals of potential concern and the tremendous cost (in time, money, and animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex cellular processes that are only partially understood. Advances in technologies and the generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the database and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-throughput approaches.

  2. Seasonal Predictability in a Model Atmosphere.

    NASA Astrophysics Data System (ADS)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  3. Regression model analysis of the decreasing trend of cesium-137 concentration in the atmosphere since the Fukushima accident.

    PubMed

    Kitayama, Kyo; Ohse, Kenji; Shima, Nagayoshi; Kawatsu, Kencho; Tsukada, Hirofumi

    2016-11-01

    The decreasing trend of the atmospheric (137)Cs concentration in two cities in Fukushima prefecture was analyzed with a regression model to clarify the relation between the model's decrease parameter and the trend, and to compare the trend with that observed after the Chernobyl accident. The (137)Cs particle concentration measurements were conducted at an urban site in Fukushima and a rural site in Date from September 2012 to June 2015. The (137)Cs particle concentrations were separated into two groups: particles with aerodynamic diameter greater than 1.1 μm (coarse particles) and particles with aerodynamic diameter below 1.1 μm (fine particles). The averages of the measured concentrations were 0.1 mBq m(-3) at the Fukushima and Date sites. The measured concentrations were fitted with a regression model that decomposed them into two components: trend and seasonal variation. The trend component included parameters for a constant term and an exponential decrease. The constant term differed slightly between the Fukushima and Date sites. The parameter for the exponential decrease was similar in all cases, and much larger than the physical radioactive decay constant except for the concentration in the fine particles at the Date site. The annual decreasing rates of the (137)Cs concentration evaluated from the trend component ranged from 44 to 53% y(-1), with an average and standard deviation of 49 ± 8% y(-1) across all cases in 2013. In the other years the decreasing rates also varied only slightly. These results indicate that the decreasing trend of the (137)Cs concentration was nearly unchanged across locations and ground contamination levels in the three years after the accident. The (137)Cs activity per aerosol particle mass also decreased with the same trend as the (137)Cs concentration in the atmosphere, indicating that the decreasing trend of the atmospheric (137)Cs concentration was related to the reduction of the (137)Cs activity per aerosol particle mass.
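    A trend-plus-seasonal regression of the kind described above can be sketched as follows. The specific decomposition (constant + exponential decrease + one annual harmonic) and all parameter values are illustrative assumptions, not the paper's exact parameterisation:

    ```python
    import math

    def cs137_concentration(t_years, c_const, c_exp0, lam, amp, phase):
        """Assumed regression form: trend = constant + exponential decrease,
        plus a single annual harmonic for the seasonal component."""
        trend = c_const + c_exp0 * math.exp(-lam * t_years)
        seasonal = amp * math.cos(2.0 * math.pi * t_years - phase)
        return trend + seasonal

    def annual_decrease_rate(lam):
        """Fractional decrease of the exponential trend component per year."""
        return 1.0 - math.exp(-lam)
    ```

    For scale, the reported ~49% y(-1) decrease corresponds to lam = -ln(1 - 0.49) ≈ 0.67 y(-1), roughly thirty times faster than the (137)Cs physical decay constant of about 0.023 y(-1) (half-life 30.2 years).
    
    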

  4. Disease Prediction Models and Operational Readiness

    SciTech Connect

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers, and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers.

  5. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.

  6. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  7. Prediction Model of Serum Lithium Concentrations.

    PubMed

    Yoshida, Kazunari; Uchida, Hiroyuki; Suzuki, Takefumi; Watanabe, Masahiro; Yoshino, Nariyasu; Houchi, Hitoshi; Mimura, Masaru; Fukuoka, Noriyasu

    2017-08-02

    Introduction: Therapeutic drug monitoring is necessary for lithium, but clinical application of several prediction strategies is still limited because of insufficient predictive accuracy. We herein propose a suitable model using creatinine clearance (CLcr)-based lithium clearance (Li-CL). Methods: Patients receiving lithium provided the following information: serum lithium and creatinine concentrations, time of blood draw, dosing regimen, concomitant medications, and demographics. Li-CL was calculated as the daily dose per trough concentration for each subject, and the mean of Li-CL/CLcr was used to estimate Li-CL for another 30 subjects. Serum lithium concentrations at the time of sampling were estimated by a 1-compartment model with Li-CL, fixed volume of distribution (0.79 L/kg), and absorption rate (1.5/hour) in the 30 subjects. Results: One hundred thirty-one samples from 82 subjects (44 men; mean ± standard deviation age: 51.4 ± 16.0 years; body weight: 64.6 ± 13.8 kg; serum creatinine: 0.78 ± 0.20 mg/dL; dose of lithium: 680.2 ± 289.1 mg/day) were used to develop the pharmacokinetic model. The mean ± standard deviation (95% confidence interval) of the absolute error was 0.13 ± 0.09 (0.10-0.16) mEq/L. Discussion: Serum concentrations of lithium can be predicted from the oral dosage with high precision using our prediction model. © Georg Thieme Verlag KG Stuttgart · New York.
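    The clearance-based trough prediction described above reduces, at its simplest, to inverting the definition Li-CL = daily dose / trough after estimating Li-CL from CLcr. This sketch illustrates only that arithmetic; the ratio value in the example is a stand-in, not the study's estimate, and the full model additionally uses a 1-compartment absorption/distribution step:

    ```python
    def predict_trough(daily_dose, clcr, mean_li_cl_over_clcr):
        """Predict a steady-state trough lithium concentration:
        Li-CL = (population mean of Li-CL/CLcr) * CLcr, then
        trough = daily dose / Li-CL (inverting the paper's definition).
        Units must be consistent (e.g. dose in mEq/day, CLcr in L/day)."""
        li_cl = mean_li_cl_over_clcr * clcr
        return daily_dose / li_cl
    ```

    The prediction is linear in dose and inversely proportional to renal function, which is the clinically important behaviour: a patient with reduced CLcr needs a proportionally lower dose for the same target trough.
    
    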

  8. Predictive Modeling in Actinide Chemistry and Catalysis

    SciTech Connect

    Yang, Ping

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  9. Review of the chronic exposure pathways models in MACCS (MELCOR Accident Consequence Code System) and several other well-known probabilistic risk assessment models

    SciTech Connect

    Tveten, U.

    1990-06-01

    The purpose of this report is to document the results of the work performed by the author in connection with the following task, performed for US Nuclear Regulatory Commission, (USNRC) Office of Nuclear Regulatory Research, Division of Systems Research: MACCS Chronic Exposure Pathway Models: Review the chronic exposure pathway models implemented in the MELCOR Accident Consequence Code System (MACCS) and compare those models to the chronic exposure pathway models implemented in similar codes developed in countries that are members of the OECD. The chronic exposures concerned are via: the terrestrial food pathways, the water pathways, the long-term groundshine pathway, and the inhalation of resuspended radionuclides pathway. The USNRC has indicated during discussions of the task that the major effort should be spent on the terrestrial food pathways. There is one chapter for each of the categories of chronic exposure pathways listed above.

  10. Estimation of radionuclide ((137)Cs) emission rates from a nuclear power plant accident using the Lagrangian Particle Dispersion Model (LPDM).

    PubMed

    Park, Soon-Ung; Lee, In-Hye; Ju, Jae-Won; Joo, Seung Jin

    2016-10-01

    A methodology has been developed for estimating the emission rate of (137)Cs with the Lagrangian Particle Dispersion Model (LPDM), using monitored (137)Cs concentrations around a nuclear power plant. The method was employed with the MM5 meteorological model in a 600 km × 600 km domain with a horizontal grid of 3 km × 3 km centered at the Fukushima nuclear power plant to estimate the (137)Cs emission rate for the accident period from 00 UTC 12 March to 00 UTC 6 April 2011. Lagrangian particles were released continuously at a rate of one particle per minute at the first model level, about 15 m above the power plant site. The method reproduced the estimated (137)Cs emission rate quite reasonably compared with other studies, suggesting its potential usefulness for estimating the emission rate from a damaged power plant without detailed inventories of reactors, fuel assemblies, and spent fuels. An advantage of the method is its simplicity: it requires only a single forward LPDM simulation together with monitored concentrations around the power plant, in contrast to inverse models. It was also found that continuously monitored radionuclide concentrations from many sites located in all directions around the power plant are required to obtain accurate continuous emission rates from the damaged plant. The methodology can also be used to verify radionuclide emission estimates used by other modeling groups for cases of intermittent or discontinuous sampling. Copyright © 2016. Published by Elsevier Ltd.

  11. The Concept of Conversion Factors and Reference Crops for the Prediction of 137Cs Root Uptake: Field Verification in Post-Chernobyl Landscape, 30 Years after Nuclear Accident

    NASA Astrophysics Data System (ADS)

    Komissarova, Olga; Paramonova, Tatiana

    2017-04-01

    One of the notable lessons from nuclear accidents is the opportunity to reveal the general features of 137Cs root uptake by agricultural crops, for predicting radionuclide accumulation in plants and its further distribution via food chains. Transfer factors (TFs) (the ratio of 137Cs activities in vegetation and in soil) have become the basis for such assessment when the characteristics of radioactive contamination, soil properties, and phylogenetic features of different plant taxa important for root uptake are known. For the sake of simplicity, the concept of the conversion factor (CF) was accepted by the IAEA (2006) to obtain an unknown TF value from the TF value of a reference crop cultivated on the same soil. Cereals were selected as the reference group of agricultural crops. Presuming the TF for cereals equals 1, CFs for tubers and fodder legumes are 4, for grasses 4.5, for leafy vegetables 9, etc. To verify TF and corresponding CF values under the environmental conditions of a post-Chernobyl agricultural landscape, a study was conducted in the area of the Plavsky radioactive hotspot (Tula region, Russia). Now, 30 years after the Chernobyl accident (the first half-life period of 137Cs), arable chernozems of the territory are still polluted at the level of 126-282 kBq/m2. The main crops of the field rotation have been studied: wheat and barley (cereals), potatoes (tubers), soybean (legumes), amaranth (non-leafy vegetables), rape ("other crops"), as well as a galega-bromegrass mixture (cultivated grasses) and pasture grasses of semi-natural dry and wet meadows. Accumulation parameters of 137Cs in aboveground biomass, belowground biomass, and edible parts of the plants were examined separately. Experimentally obtained 137Cs TFs in cereals are 0.24-0.32 for total biomass, 0.07-0.14 for aerial parts, 0.54-0.64 for roots, and 0.01-0.02 for grain. 
Thus, (i) 137Cs transfer into grain of wheat and barley is insignificant and (ii) corresponding TFs values in both crops
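The CF arithmetic described above is a simple rescaling of the reference-crop TF. A minimal sketch, using the CF values quoted in the record (the dictionary keys and function names are ours):

```python
# IAEA conversion factors relative to cereals (TF_cereal taken as 1),
# as quoted in the record above.
CF = {"cereals": 1.0, "tubers": 4.0, "fodder_legumes": 4.0,
      "grasses": 4.5, "leafy_vegetables": 9.0}

def transfer_factor(crop_group, tf_cereal):
    """TF for a crop group, given the reference-crop (cereal) TF
    measured on the same soil."""
    return CF[crop_group] * tf_cereal

def plant_activity(crop_group, tf_cereal, soil_activity):
    """Predicted 137Cs activity in the plant; units follow however
    the soil activity is expressed in the TF definition."""
    return transfer_factor(crop_group, tf_cereal) * soil_activity
```

For instance, a cereal TF of 0.3 implies a grass TF of 1.35 on the same soil under this scheme.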

  12. Probabilistic prediction models for aggregate quarry siting

    USGS Publications Warehouse

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
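The logistic regression side of the approach reduces each map cell to a probability via the usual logit link. A sketch with hypothetical evidence layers and coefficients (the actual fitted parameters are in the cited study):

```python
import math

def quarry_prospectivity(coeffs, intercept, features):
    """Posterior probability from a fitted logistic regression.
    'features' are hypothetical evidence-layer values for one cell,
    e.g. (favorable-geology flag, log population density, distance
    to highway in km); 'coeffs' are the matching coefficients."""
    z = intercept + sum(b * x for b, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))
```

With a positive population-density coefficient, denser cells score higher, which is how the modified population density variable shifts predictions toward likely new-quarry sites.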

  13. Monte Carlo modeling of beta-radiometer device used to measure milk contaminated as a result of the Chernobyl accident.

    PubMed

    Khrutchinsky, A; Kutsen, S; Minenko, V; Zhukova, O; Luckyanov, N; Bouville, A; Drozdovitch, V

    2009-06-01

    This paper presents results of Monte Carlo modeling of the beta-radiometer device with a Geiger-Mueller detector used in Belarus and Russia to measure the radioactive contamination of milk after the Chernobyl accident. This type of detector, which is not energy selective, measured the total beta-activity of the radionuclide mix. A mathematical model of the beta-radiometer device, namely the DP-100, was developed, and the calibration factors for the different radionuclides that might contribute to the milk contamination were calculated. The estimated calibration factors for (131)I, (137)Cs, (134)Cs, (90)Sr, (144)Ce, and (106)Ru reasonably agree with calibration factors determined experimentally. The calculated calibration factors for (132)Te, (132)I, (133)I, (136)Cs, (89)Sr, (103)Ru, (140)Ba, (140)La, and (141)Ce had not been previously determined experimentally. These results make it possible to derive the activity of specific radionuclides, in particular (131)I, from the results of total beta-activity measurements in milk. Results of this study are important for the purposes of retrospective dosimetry, which uses measurements of radioactivity in environmental samples performed with beta-radiometer devices.
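The unmixing step implied by the record can be sketched as follows: if the device reading is a calibration-factor-weighted sum of the activities in the mix, and the activity of every nuclide relative to (131)I is known from decay and deposition modeling, a single division recovers the (131)I activity. All numeric values below are illustrative, not the paper's:

```python
def activity_from_total_beta(reading, fractions, cal_factors):
    """Derive the target nuclide's activity from a total beta reading.
    Model: reading = sum_i cf_i * A_i, with A_i = fractions[i] * A_target
    (fractions are relative to the target nuclide, here I-131)."""
    denom = sum(cal_factors[n] * fractions[n] for n in fractions)
    return reading / denom

cf = {"I-131": 0.9, "Cs-137": 1.1, "Cs-134": 1.0}    # hypothetical factors
rel = {"I-131": 1.0, "Cs-137": 0.3, "Cs-134": 0.15}  # relative to I-131
```

Under these assumed numbers, a reading of 138 units corresponds to an I-131 activity of 100 in the same units.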

  14. MONTE CARLO MODELING OF BETA-RADIOMETER DEVICE USED TO MEASURE MILK CONTAMINATED AS A RESULT OF THE CHERNOBYL ACCIDENT

    PubMed Central

    Khrutchinsky, A.; Kutsen, S.; Minenko, V.; Zhukova, O.; Luckyanov, N.; Bouville, A.; Drozdovitch, V.

    2009-01-01

    This paper presents results of Monte Carlo modeling of the beta-radiometer device with a Geiger-Mueller detector used in Belarus and Russia to measure the radioactive contamination of milk after the Chernobyl accident. This type of detector, which is not energy selective, measured the total beta-activity of the radionuclide mix. A mathematical model of the beta-radiometer device, namely the DP-100, was developed, and the calibration factors for the different radionuclides that might contribute to the milk contamination were calculated. The estimated calibration factors for 131I, 137Cs, 134Cs, 90Sr, 144Ce, and 106Ru reasonably agree with calibration factors determined experimentally. The calculated calibration factors for 132Te, 132I, 133I, 136Cs, 103Ru, 140Ba, 140La, and 141Ce had not been previously determined experimentally. These results make it possible to derive the activity of specific radionuclides, in particular 131I, from the results of total beta-activity measurements in milk. Results of this study are important for the purposes of retrospective dosimetry, which uses measurements of radioactivity in environmental samples performed with beta-radiometer devices. PMID:19233662

  15. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  16. A predictive geologic model of radon occurrence

    SciTech Connect

    Gregg, L.T.

    1990-01-01

    Earlier work by LeGrand on predictive geologic models for radon focused on hydrogeologic aspects of radon transport from a given uranium/radium source in a fractured crystalline rock aquifer, and included submodels for bedrock lithology (uranium concentration), topographic slope, and water-table behavior and characteristics. LeGrand's basic geologic model has been modified and extended into a submodel for crystalline rocks (Blue Ridge and Piedmont Provinces) and a submodel for sedimentary rocks (Valley and Ridge and Coastal Plain Provinces). Each submodel assigns a ranking of 1 to 15 to the bedrock type, based on (a) known or supposed uranium/thorium content, (b) petrography/lithology, and (c) structural features such as faults, shear or breccia zones, diabase dikes, and jointing/fracturing. The bedrock ranking is coupled with a generalized soil/saprolite model which ranks soil/saprolite type and thickness from 1 to 10. A given site is thus assessed a ranking of 1 to 150 as a guide to its potential for high radon occurrence in the upper meter or so of soil. Field trials of the model are underway, comparing model predictions with measured soil-gas concentrations of radon.

  17. Computer Model Predicts the Movement of Dust

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A new computer model of the atmosphere can now actually pinpoint where global dust events come from, and can project where they're going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short term day-to-day variations and long term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center

  18. Constructing predictive models of human running.

    PubMed

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Constructing predictive models of human running

    PubMed Central

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-01-01

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. PMID:25505131

  20. Statistical Seasonal Sea Surface based Prediction Model

    NASA Astrophysics Data System (ADS)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a field widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric, and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.

  1. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  2. Predictive mathematical models of cancer signalling pathways.

    PubMed

    Bachmann, J; Raue, A; Schilling, M; Becker, V; Timmer, J; Klingmüller, U

    2012-02-01

    Complex intracellular signalling networks integrate extracellular signals and convert them into cellular responses. In cancer cells, the tightly regulated and fine-tuned dynamics of information processing in signalling networks is altered, leading to uncontrolled cell proliferation, survival and migration. Systems biology combines mathematical modelling with comprehensive, quantitative, time-resolved data and is most advanced in addressing dynamic properties of intracellular signalling networks. Here, we introduce different modelling approaches and their application to medical systems biology, focusing on the identifiability of parameters in ordinary differential equation models and their importance in network modelling to predict cellular decisions. Two related examples are given, which include processing of ligand-encoded information and dual feedback regulation in erythropoietin (Epo) receptor signalling. Finally, we review the current understanding of how systems biology could foster the development of new treatment strategies in the context of lung cancer and anaemia.

  3. Applications of predictive environmental strain models.

    PubMed

    Reardon, M J; Gonzalez, R R; Pandolf, K B

    1997-02-01

    Researchers at the U.S. Army Research Institute of Environmental Medicine have developed and validated numerical models capable of predicting the extent of physiologic strain and adverse terrain and weather-related medical consequences of military operations in harsh environments. A descriptive historical account is provided that details how physiologic models for hot and cold weather exposure have been integrated into portable field advisory devices, computer-based meteorologic planning software, and combat-oriented simulation systems. It is important that medical officers be aware of the existence of these types of decision support tools so that they can assure that outputs are interpreted in a balanced and medically realistic manner. Additionally, these modeling applications may facilitate timely preventive medicine planning and efficient dissemination of appropriate measures to prevent weather- and altitude-related illnesses and performance decrements. Such environmental response modeling applications may therefore be utilized to support deployment preventive medicine planning by field medical officers.

  4. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  5. Detailed source term estimation of atmospheric release during the Fukushima Dai-ichi nuclear power plant accident by coupling atmospheric and oceanic dispersion models

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo

    2014-05-01

    Temporal variations of the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion process are essential to evaluate the environmental impacts and resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog depositions of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models assuming a unit release rate (1 Bq h-1) was adopted to estimate the source term at FNPP1, using air dose rates and air and sea surface concentrations. The results suggested that the major releases of radionuclides from FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term well reproduced local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations. Our dispersion simulations also revealed that the highest radioactive contamination areas around FNPP1 were created from 15th to 16th March by complicated interactions among rainfall (wet deposition), plume movements, and phase properties (gas or particle) of I-131 and release rates
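The unit-release trick described above works because atmospheric transport is linear in the source: a simulation run at 1 Bq h-1 scales to the observations by a single factor, which is the release rate. A minimal least-squares sketch of that scaling step (variable names are ours; the study applies the same principle to WSPEEDI-II and SEA-GEARN fields):

```python
def estimate_release_rate(observed, unit_response):
    """Least-squares scale factor Q (Bq/h) such that
    Q * unit_response best matches the observed concentrations
    at the monitoring sites for one release interval."""
    num = sum(u * o for u, o in zip(unit_response, observed))
    den = sum(u * u for u in unit_response)
    return num / den
```

If the observations are exactly a scaled copy of the unit-release field, the scale factor is recovered exactly; with noisy data it is the least-squares fit over the stations.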

  6. Progress towards a PETN Lifetime Prediction Model

    SciTech Connect

    Burnham, A K; Overturf III, G E; Gee, R; Lewis, P; Qiu, R; Phillips, D; Weeks, B; Pitchimani, R; Maiti, A; Zepeda-Ruiz, L; Hrousis, C

    2006-09-11

    Dinegar (1) showed that decreases in PETN surface area cause EBW detonator function times to increase. Thermal aging causes PETN to agglomerate, shrink, and densify, indicating a ''sintering'' process. It has long been a concern that the formation of a gap between the PETN and the bridgewire may lead to EBW detonator failure. These concerns have led us to develop a model to predict the rate of coarsening that occurs with age for thermally driven PETN powder (50% TMD). To understand PETN contributions to detonator aging we need three things: (1) curves describing function time dependence on specific surface area, density, and gap; (2) a measurement of the critical gap distance for no-fire as a function of density and surface area for various wire configurations; and (3) a model describing how specific surface area, density, and gap change with time and temperature. We've had good success modeling high-temperature surface area reduction and function time increase using a phenomenological deceleratory kinetic model based on a distribution of parallel nth-order reactions having evenly spaced activation energies, where the weighting factors of the reactions follow a Gaussian distribution about the reaction with the mean activation energy (Figure 1). Unfortunately, the mean activation energy derived from this approach is high (typically {approx}75 kcal/mol), so that negligible sintering is predicted for temperatures below 40 C. To make more reliable predictions, we've established a three-part effort to understand PETN mobility. First, we've measured the rates of step movement and pit nucleation as a function of temperature from 30 to 50 C for single crystals. Second, we've measured the evaporation rate from single crystals and powders from 105 to 135 C to obtain an activation energy for evaporation. Third, we've pursued mechanistic kinetic modeling of surface mobility, evaporation, and ripening.
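The deceleratory kinetic model described above can be sketched as a Gaussian-weighted sum of parallel first-order (n = 1) decays with evenly spaced activation energies. The 75 kcal/mol mean is the value quoted in the record; the Gaussian width and pre-exponential factor below are illustrative choices, not the report's fitted parameters:

```python
import math

def fraction_remaining(t_h, temp_k, e_mean=75.0, sigma=2.0,
                       log_a=38.0, n_terms=21):
    """Surface-area-like fraction remaining after t_h hours at temp_k.
    Parallel first-order sub-reactions with activation energies
    (kcal/mol) evenly spaced over e_mean +/- 2*sigma, weighted by a
    Gaussian centered on e_mean. sigma and log_a (pre-exponential,
    10**log_a per hour) are assumed, not from the report."""
    R = 1.987e-3  # gas constant, kcal/(mol K)
    energies = [e_mean + sigma * (j - (n_terms - 1) / 2) * 4.0 / (n_terms - 1)
                for j in range(n_terms)]
    weights = [math.exp(-((e - e_mean) / sigma) ** 2 / 2) for e in energies]
    wsum = sum(weights)
    rem = 0.0
    for e, w in zip(energies, weights):
        k = 10.0 ** log_a * math.exp(-e / (R * temp_k))  # rate constant, 1/h
        rem += (w / wsum) * math.exp(-k * t_h)
    return rem
```

With these assumed parameters the model shows the behavior the record describes: appreciable coarsening at elevated temperature but essentially none near 40 C, because the high mean activation energy makes the rates collapse at low temperature.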

  7. Light-Weight Radioisotope Heater Unit Safety Analysis Report (LWRHU-SAR). Volume II. Accident model document

    SciTech Connect

    Johnson, E.W.

    1985-10-01

    The purposes of this volume (AMD) are to: identify all malfunctions, both singular and multiple, which can occur during the complete mission profile that could lead to release outside the clad of the radioisotopic material contained therein; provide estimates of occurrence probabilities associated with these various accidents; evaluate the response of the LWRHU (or its components) to the resultant accident environments; and associate the potential event history with test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  8. Overview of lower length scale model development for accident tolerant fuels regarding U3Si2 fuel and FeCrAl cladding

    SciTech Connect

    Zhang, Yongfeng

    2016-09-01

    U3Si2 and FeCrAl have been proposed as fuel and cladding concepts, respectively, for accident-tolerant fuels with higher tolerance of accident scenarios than UO2. However, many key physical phenomena and material properties governing their in-pile performance are yet to be explored. To accelerate understanding and reduce the cost of experimental studies, multiscale modeling and simulation are used to develop physics-based material models to support engineering-scale fuel performance modeling. In this report, the lower-length-scale efforts in method and material model development supported by the Accident Tolerant Fuel (ATF) high-impact problem (HIP) under the NEAMS program are summarized. Significant progress has been made on interatomic potentials, phase-field models for phase decomposition and gas bubble formation, and thermal conductivity for U3Si2 fuel, and on precipitation in FeCrAl cladding. These accomplishments provide atomistic and mesoscale tools, improve the current understanding, and deliver engineering-scale models for these two ATF concepts.

  9. Predictive modeling by the cerebellum improves proprioception.

    PubMed

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  10. Severe accident analysis using dynamic accident progression event trees

    NASA Astrophysics Data System (ADS)

    Hakobyan, Aram P.

    At present, the development and analysis of Accident Progression Event Trees (APETs) are performed in a manner that is computationally time consuming, difficult to reproduce, and potentially phenomenologically inconsistent. One of the principal deficiencies lies in the static nature of conventional APETs. In conventional event tree techniques, the sequence of events is pre-determined in a fixed order based on expert judgment. The main objective of this PhD dissertation was to develop a software tool (ADAPT) for automated APET generation using the concept of dynamic event trees. As implied by the name, in dynamic event trees the order and timing of events are determined by the progression of the accident. The tool determines the branching times from a severe accident analysis code based on user-specified criteria for branching. It assigns user-specified probabilities to every branch, tracks the total branch probability, and truncates branches based on given pruning/truncation rules to avoid an unmanageable number of scenarios. The functions of the dynamic APET developed include prediction of the conditions, timing, and location of containment failure or bypass leading to the release of radioactive material, and calculation of the probabilities of those failures. Thus, scenarios that can potentially lead to early containment failure or bypass, such as through accident-induced failure of steam generator tubes, are of particular interest. The work also focuses on the treatment of uncertainties in severe accident phenomena such as creep rupture of major RCS components, hydrogen burn, containment failure, timing of power recovery, etc. Although the ADAPT methodology (Analysis of Dynamic Accident Progression Trees) could be applied to any severe accident analysis code, in this dissertation the approach is demonstrated by applying it to the MELCOR code [1]. A case study is presented involving station blackout with the loss of the auxiliary feedwater system for a
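The probability-tracking and pruning mechanics described above can be sketched in a few lines: each branching point multiplies the scenario probability by the branch probability, and scenarios falling below a threshold are truncated while their probability mass is tallied. The branch probabilities and threshold below are illustrative, not ADAPT's defaults:

```python
def expand_tree(branch_probs, depth, prune_below=1e-3):
    """Enumerate scenario probabilities for a dynamic event tree.
    Each of 'depth' branching points splits every surviving scenario
    by 'branch_probs' (e.g. a component works/fails); branches whose
    cumulative probability falls below 'prune_below' are truncated,
    and their probability mass is tracked separately."""
    scenarios = [((), 1.0)]
    truncated_mass = 0.0
    for _ in range(depth):
        nxt = []
        for path, p in scenarios:
            for label, bp in branch_probs.items():
                q = p * bp
                if q < prune_below:
                    truncated_mass += q
                else:
                    nxt.append((path + (label,), q))
        scenarios = nxt
    return scenarios, truncated_mass
```

The retained scenarios plus the truncated mass always account for the full probability, which is how pruning keeps the scenario count manageable without losing track of total risk.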

  11. Modelling language evolution: Examples and predictions.

    PubMed

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    PubMed Central

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207
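    The deployment pattern described above (patient data in, prediction score out, both expressed as FHIR resources) can be sketched without a web server. The resource shapes below are heavily simplified and the logistic coefficients are invented; this is an illustration of the request/score/respond flow, not a conformant FHIR implementation or the authors' actual service.

    ```python
    import math

    # Hypothetical logistic model: feature name -> coefficient.
    COEFFS = {"age": 0.04, "heart_rate": 0.02}
    INTERCEPT = -4.0

    def score(bundle):
        """Extract features from a simplified FHIR-Bundle-like dict,
        apply the logistic model, and return the score wrapped in an
        Observation-shaped resource."""
        feats = {e["resource"]["code"]: e["resource"]["value"]
                 for e in bundle["entry"]}
        z = INTERCEPT + sum(COEFFS[k] * feats.get(k, 0.0) for k in COEFFS)
        p = 1.0 / (1.0 + math.exp(-z))
        return {"resourceType": "Observation",
                "code": {"text": "risk-score"},
                "valueQuantity": {"value": round(p, 3)}}

    # Incoming (simplified) bundle of patient observations.
    bundle = {"resourceType": "Bundle",
              "entry": [{"resource": {"code": "age", "value": 70}},
                        {"resource": {"code": "heart_rate", "value": 95}}]}
    print(score(bundle))
    ```

    In a real deployment the same scoring function would sit behind an HTTP endpoint that parses genuine FHIR Observation resources, which is what keeps the model interoperable across EHR systems.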

  13. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.

    1986-01-01

    A methodology is established to predict thermal barrier coating life in an environment similar to that experienced by gas turbine airfoils. Experiments were conducted to determine failure modes of the thermal barrier coating. Analytical studies were employed to derive a life prediction model. A review of experimental and flight service components as well as laboratory post evaluations indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the topologically complex metal-ceramic interface. This mechanical failure mode clearly is influenced by thermal exposure effects, as shown in experiments conducted to study thermal pre-exposure and thermal cycle-rate effects. The preliminary life prediction model developed focuses on the two major damage modes identified in the critical experiments tasks. The first of these involves a mechanical driving force, resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads. The second is an environmental driving force based on experimental results, and is believed to be related to bond coat oxidation. It is also believed that the growth of this oxide scale influences the intensity of the mechanical driving force.

  14. A generative model for predicting terrorist incidents

    NASA Astrophysics Data System (ADS)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  15. Modeling and Prediction of Krueger Device Noise

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices, which are considered as alternatives to leading-edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure-side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each noise component's spectral function, far-field directivity, Mach number dependence, amplitude, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with respect to design parameters, while the latter reveals its importance relative to other airframe noise components.
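    A component decomposition like the one above implies a standard recombination step: incoherent noise sources add on a power basis, so component sound pressure levels combine as SPL_total = 10·log10(Σ 10^(SPL_i/10)). The component levels below are invented placeholders, not Krueger-noise predictions from the paper.

    ```python
    import math

    def combine_spl(levels_db):
        """Incoherent power sum of component sound pressure levels in dB."""
        return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

    # Hypothetical component levels at one frequency and observer angle:
    components = {"cove": 78.0, "gap": 75.0, "bracket": 81.0, "cavity": 70.0}
    total = combine_spl(components.values())
    print(round(total, 1))
    ```

    Note that the total always exceeds the loudest single component, which is why suppressing only the dominant source (here the bracket noise) yields diminishing returns once the remaining components become comparable.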

  16. Prediction of Chemical Function: Model Development and ...

    EPA Pesticide Factsheets

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
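    The descriptor-to-functional-role classification idea can be sketched in miniature. Below, a tiny ensemble of bagged, randomized decision stumps stands in for a full random forest, and the two descriptors (a logP-like value and molecular weight) and the two roles are fabricated toy data, not the curated descriptor sets or functional-use data behind the actual models.

    ```python
    import random

    random.seed(0)

    # Toy training data: (logP-like value, molecular weight) -> role.
    data = [((0.5, 80.0), "solvent"), ((1.0, 95.0), "solvent"),
            ((0.2, 70.0), "solvent"), ((4.5, 390.0), "plasticizer"),
            ((5.0, 420.0), "plasticizer"), ((4.0, 360.0), "plasticizer")]

    def fit_stump(sample):
        """Pick a random descriptor and threshold; each side of the split
        votes for its majority class (ties broken alphabetically)."""
        feat = random.randrange(2)
        thresh = random.choice([x[feat] for x, _ in sample])
        left = [y for x, y in sample if x[feat] <= thresh]
        right = [y for x, y in sample if x[feat] > thresh]
        majority = lambda ys: max(sorted(set(ys)), key=ys.count) if ys else None
        return feat, thresh, majority(left), majority(right)

    def fit_forest(data, n_trees=51):
        """Bagging: fit each stump on a bootstrap resample of the data."""
        return [fit_stump([random.choice(data) for _ in data])
                for _ in range(n_trees)]

    def predict(forest, x):
        """Majority vote over all stumps that have an opinion on x."""
        votes = []
        for feat, thresh, left, right in forest:
            label = left if x[feat] <= thresh else right
            if label is not None:
                votes.append(label)
        return max(sorted(set(votes)), key=votes.count)

    forest = fit_forest(data)
    print(predict(forest, (4.8, 400.0)))  # likely "plasticizer"
    ```

    A production classifier would use full decision trees over hundreds of descriptors (e.g. scikit-learn's RandomForestClassifier), but the bagging-plus-majority-vote structure is the same.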

  18. Entity Network Prediction Using Multitype Topic Models

    NASA Astrophysics Data System (ADS)

    Shiozaki, Hitohiro; Eguchi, Koji; Ohkawa, Takenao

    Conveying information about who, what, when and where is a primary purpose of some genres of documents, typically news articles. Statistical models that capture dependencies between named entities and topics can play an important role. Although relationships between who and where should be mentioned in such a document, no statistical topic model explicitly addresses the textual interactions between a who-entity and a where-entity when handling such information. This paper presents a statistical model that directly captures the dependencies between an arbitrary number of word types, such as who-entities, where-entities and topics, mentioned in each document. We show that this multitype topic model performs better at making predictions on entity networks, in which each vertex represents an entity and each edge weight represents how closely a pair of entities at the incident vertices is related, through our experiments on predictions of who-entities and links between them. We also demonstrate the scale-free property in the weighted networks of entities extracted from written mentions.

  19. Gamma-ray Pulsars: Models and Predictions

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.; White, Nicholas E. (Technical Monitor)

    2000-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10(exp 12) - 10(exp 13) G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers of the primary curvature emission around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. Next-generation gamma-ray telescopes sensitive to GeV-TeV emission will provide critical tests of pulsar acceleration and emission mechanisms.

  20. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Cook, T. S.; Kim, K. S.

    1986-01-01

    This is the second annual report of the first 3-year phase of a 2-phase, 5-year program. The objectives of the first phase are to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consists of an air-plasma-sprayed ZrO2-Y2O3 top coat, a low-pressure plasma-sprayed NiCrAlY bond coat, and a Rene' 80 substrate. Task I was to evaluate TBC failure mechanisms. Both bond coat oxidation and bond coat creep have been identified as contributors to TBC failure. Key property determinations have also been made for the bond coat and the top coat, including tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion. Task II is to develop TBC life prediction models for the predominant failure modes. These models will be developed based on the results of thermomechanical experiments and finite element analysis. The thermomechanical experiments have been defined and testing initiated. Finite element models have also been developed to handle TBCs and are being utilized to evaluate different TBC failure regimes.

  1. Test Data for USEPR Severe Accident Code Validation

    SciTech Connect

    J. L. Rempe

    2007-05-01

    This document identifies data that can be used for assessing various models embodied in severe accident analysis codes. Phenomena considered in this document, which were limited to those anticipated to be of interest in assessing severe accidents in the USEPR developed by AREVA, include:
    • Fuel Heatup and Melt Progression
    • Reactor Coolant System (RCS) Thermal Hydraulics
    • In-Vessel Molten Pool Formation and Heat Transfer
    • Fuel/Coolant Interactions during Relocation
    • Debris Heat Loads to the Vessel
    • Vessel Failure
    • Molten Core Concrete Interaction (MCCI) and Reactor Cavity Plug Failure
    • Melt Spreading and Coolability
    • Hydrogen Control
    Each section of this report discusses one phenomenon of interest to the USEPR. Within each section, an effort is made to describe the phenomenon and identify what data are available for modeling it. As noted in this document, models in US accident analysis codes (MAAP, MELCOR, and SCDAP/RELAP5) differ. Where possible, this report identifies previous assessments that illustrate the impact of modeling differences on predicting various phenomena. Finally, recommendations regarding the status of data available for modeling USEPR severe accident phenomena are summarized.

  2. Systematic strategies for the third