Science.gov

Sample records for accident prediction model

  1. An alternative accident prediction model for highway-rail interfaces.

    PubMed

    Austin, Ross D; Carson, Jodi L

    2002-01-01

    Safety levels at highway/rail interfaces continue to be of major concern despite an ever-increasing focus on improved design and appurtenance application practices. Despite the encouraging trend towards improved safety, accident frequencies remain high, and many accidents result in fatalities. More than half of these accidents occur at public crossings where active warning devices (i.e., gates, lights, bells, etc.) are in place and functioning properly. This phenomenon speaks directly to the need to re-examine both safety evaluation (i.e., accident prediction) methods and design practices at highway-rail crossings. Among earlier accident prediction methods, the Peabody Dimmick Formula, the New Hampshire Index, and the National Cooperative Highway Research Program (NCHRP) Hazard Index all lack descriptive capability due to their limited number of explanatory variables; further, each has unique limitations that are detailed in this paper. The US Department of Transportation's (USDOT) Accident Prediction Formula, which is the most widely used, also has limitations related to the complexity of its three-stage formula and its decline in accident prediction accuracy over time. This investigation resulted in the development of an alternative highway-rail crossing accident prediction model, using negative binomial regression, that shows great promise. The benefits to be gained through the application of this alternative model are (1) a greatly simplified, one-step estimation process; (2) comparable supporting data requirements; and (3) interpretation of both the magnitude and direction of the effect of the factors found to significantly influence highway-rail crossing accident frequencies.
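
    A negative binomial model of the general kind described above can be fitted in a few lines. This is an illustration only: the data file, column names, and functional form below are hypothetical placeholders, not the paper's actual specification.

```python
# Hedged sketch of a negative binomial crash-frequency model for grade
# crossings. "crossings.csv" and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

crossings = pd.read_csv("crossings.csv")

# One-step estimation: crash counts regressed on exposure and warning devices.
model = smf.glm(
    "crashes ~ np.log(daily_trains * daily_vehicles) + gates + flashing_lights",
    data=crossings,
    family=sm.families.NegativeBinomial(alpha=1.0),  # dispersion fixed, not estimated
).fit()

# Both the magnitude and the direction of each factor's effect can be read
# directly from the fitted coefficients.
print(model.summary())
```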

  2. Estimating vehicle roadside encroachment frequency using accident prediction models

    SciTech Connect

    Miaou, S.-P.

    1996-07-01

    The existing data to support the development of roadside encroachment-based accident models are extremely limited and largely outdated. Under the sponsorship of the Federal Highway Administration and the Transportation Research Board, several roadside safety projects have attempted to address this issue by providing rather comprehensive data collection plans and conducting pilot data collection efforts. It is clear from the results of these studies that the required field data collection efforts will be expensive. Furthermore, the validity of any field-collected encroachment data may be questionable because of the technical difficulty of distinguishing intentional from unintentional encroachments. This paper proposes an alternative method for estimating basic roadside encroachment data without actually collecting them in the field. The method is developed by exploring the probabilistic relationships between a roadside encroachment event and a run-off-the-road event. With some mild assumptions, the method is capable of providing a wide range of basic encroachment data from conventional accident prediction models. To illustrate the concept and use of such a method, some basic encroachment data are estimated for rural two-lane undivided roads. In addition, the estimated encroachment data are compared with the existing collected data. The illustration shows that the method described in this paper can be a viable approach to estimating basic encroachment data without the very costly effort of actually collecting them.
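
    Under the paper's "mild assumptions", the inversion at the heart of the method can be written roughly as follows (our notation, not the paper's):

    \[
    \lambda_{\mathrm{acc}} \;=\; \lambda_{\mathrm{enc}}\; P(\text{ROR accident} \mid \text{encroachment})
    \qquad\Longrightarrow\qquad
    \hat{\lambda}_{\mathrm{enc}} \;=\; \frac{\hat{\lambda}_{\mathrm{acc}}}{P(\text{ROR accident} \mid \text{encroachment})},
    \]

    where \(\hat{\lambda}_{\mathrm{acc}}\) is the accident frequency given by a conventional accident prediction model and the conditional probability encodes the chance that an encroachment becomes a run-off-the-road accident.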

  3. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

    Considerable past research has explored relationships between vehicle accidents and the geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% involved fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad-crossing-related vehicle crashes.

  4. Accident prediction model for public highway-rail grade crossings.

    PubMed

    Lu, Pan; Tolliver, Denver

    2016-05-01

    Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic, with serious consequences. The Poisson regression model has long been employed as a starting point for analyzing vehicle accident frequency. The most commonly applied variations of Poisson are the negative binomial and zero-inflated Poisson models. These models are used to deal with common crash data issues such as over-dispersion (sample variance is larger than the sample mean) and a preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance is smaller than the sample mean), and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternative highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) application of probability models to deal with under-dispersion issues and (2) insights obtained regarding vehicle crashes at public highway-rail grade crossings.
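
    The dispersion diagnosis that motivates the study reduces to comparing the sample variance with the sample mean. A minimal sketch with synthetic counts (binomial data are under-dispersed by construction; the model suggestions in the comments follow the abstract):

```python
# Diagnose over- vs under-dispersion before choosing a count model.
import numpy as np

rng = np.random.default_rng(0)
crashes = rng.binomial(n=3, p=0.3, size=500)  # binomial counts: var/mean = 1-p < 1

mean, var = crashes.mean(), crashes.var(ddof=1)
print(f"mean={mean:.3f}  variance={var:.3f}  ratio={var / mean:.3f}")

if var / mean < 1:
    # Poisson and negative binomial handle this poorly; the paper examines
    # alternative count models for exactly this case.
    print("under-dispersed")
else:
    print("over- or equi-dispersed: negative binomial / zero-inflated Poisson")
```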

  5. Predicting System Accidents with Model Analysis During Hybrid Simulation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land D.; Throop, David R.

    2002-01-01

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in systems of flow. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

  6. Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents

    NASA Technical Reports Server (NTRS)

    Williamson, G. G.; Lewellen, W. S.; Teske, M. E.

    1977-01-01

    The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.

  7. Predictive model for motorcycle accidents at three-legged priority junctions.

    PubMed

    Harnen, S; Umar, R S Radin; Wong, S V; Wan Hashim, W I

    2003-12-01

    In conjunction with a nationwide motorcycle safety program, exclusive motorcycle lanes have been provided to address motorcycle accidents on road links along trunk roads in Malaysia. However, not much work has been done to address accidents at junctions involving motorcycles. This article presents the development of a predictive model for motorcycle accidents at three-legged major-minor priority junctions on urban roads in Malaysia. The generalized linear modeling technique was used to develop the model. The final model reveals that motorcycle accidents are proportional to a power of traffic flow. An increase in nonmotorcycle and motorcycle flows entering the junctions is associated with an increase in motorcycle accidents. Nonmotorcycle flow on major roads had the largest effect on the probability of motorcycle accidents. Approach speed, lane width, number of lanes, shoulder width, and land use were found to be significant in explaining motorcycle accidents at the three-legged major-minor priority junctions. These findings should enable traffic engineers to design appropriate junction treatment criteria for nonexclusive motorcycle lane facilities.
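
    Junction accident GLMs of this kind are conventionally written as a product of flow terms raised to powers, with an exponential link for the remaining covariates. The abstract does not print the fitted equation, so the following generic form is an assumption about its shape:

    \[
    A \;=\; k \, Q_{\mathrm{nm}}^{\alpha} \, Q_{\mathrm{mc}}^{\beta} \exp\!\Big(\sum_i \gamma_i x_i\Big),
    \]

    where \(A\) is the expected motorcycle accident frequency, \(Q_{\mathrm{nm}}\) and \(Q_{\mathrm{mc}}\) are the entering nonmotorcycle and motorcycle flows, and the \(x_i\) are junction covariates such as approach speed, lane width, and shoulder width.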

  8. Combined prediction model of death toll for road traffic accidents based on independent and dependent variables.

    PubMed

    Feng, Zhong-xiang; Lu, Shi-sheng; Zhang, Wei-hua; Zhang, Nan-nan

    2014-01-01

    In order to build a combined model that captures the variation in death-toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, a Verhulst model was built based on the road traffic accident death toll in China from 2002 to 2011, and car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted combined prediction model, with the weight coefficients calculated by the Shapley value method according to each model's contribution. Finally, the combined model was used to recalculate the death tolls from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model not only characterizes the death-toll data but also quantifies the degree of influence of each factor on the death toll, and it has high accuracy as well as strong practicability.
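
    For two component forecasts, the Shapley-value weighting described above has a closed form: each model's weight is its average marginal contribution to the error reduction of the combination. A hedged sketch with synthetic numbers (the error measure, baseline, and data are placeholders):

```python
import numpy as np

actual   = np.array([104., 107.,  99.,  94.,  90.,  86.,  82.,  79.,  76.,  73.])
verhulst = np.array([106., 104., 101.,  97.,  92.,  87.,  83.,  80.,  77.,  74.])
linreg   = np.array([103., 105., 100.,  95.,  91.,  88.,  84.,  78.,  75.,  72.])

def mse(pred):
    return np.mean((pred - actual) ** 2)

e0 = mse(np.full_like(actual, actual.mean()))  # v(empty): naive baseline
e1, e2 = mse(verhulst), mse(linreg)
e12 = mse((verhulst + linreg) / 2)             # equal-weight coalition

# Two-player Shapley values on the error reduction v(S) = e0 - e(S).
phi1 = ((e0 - e1) + (e2 - e12)) / 2
phi2 = ((e0 - e2) + (e1 - e12)) / 2
w1, w2 = phi1 / (phi1 + phi2), phi2 / (phi1 + phi2)

combined = w1 * verhulst + w2 * linreg
print(f"weights: {w1:.3f}, {w2:.3f}   combined MSE: {mse(combined):.3f}")
```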

  9. Impact of rainstorm and runoff modeling on predicted consequences of atmospheric releases from nuclear reactor accidents

    SciTech Connect

    Ritchie, L.T.; Brown, W.D.; Wayland, J.R.

    1980-05-01

    A general temperate latitude cyclonic rainstorm model is presented which describes the effects of washout and runoff on consequences of atmospheric releases of radioactive material from potential nuclear reactor accidents. The model treats the temporal and spatial variability of precipitation processes. Predicted air and ground concentrations of radioactive material and resultant health consequences for the new model are compared to those of the original WASH-1400 model under invariant meteorological conditions and for realistic weather events using observed meteorological sequences. For a specific accident under a particular set of meteorological conditions, the new model can give significantly different results from those predicted by the WASH-1400 model, but the aggregate consequences produced for a large number of meteorological conditions are similar.
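
    The abstract does not reproduce the model's equations. The conventional washout formulation on which such rainstorm models build depletes the airborne concentration with a rain-dependent washout coefficient \(\Lambda\) and deposits the scavenged material on the ground; as a reference point (not the WASH-1400 or new model's exact form):

    \[
    \chi(t) \;=\; \chi_0 \, e^{-\Lambda t},
    \qquad
    D_{\mathrm{wet}}(x,y) \;=\; \Lambda \int_0^{\infty} \chi(x,y,z)\,\mathrm{d}z .
    \]

    The temporal and spatial variability the paper emphasizes enters through \(\Lambda\), which varies with rainfall rate along the storm's track.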

  10. Sensitivity analysis of an accident prediction model by the fractional factorial method.

    PubMed

    Akgüngör, Ali P; Yildiz, Osman

    2007-01-01

    Sensitivity analysis of a model can help determine the relative effects of model parameters on model results. In this study, the sensitivity of the accident prediction model proposed by Zegeer et al. [Zegeer, C.V., Reinfurt, D., Hummer, J., Herf, L., Hunter, W., 1987. Safety Effect of Cross-section Design for Two-lane Roads, vols. 1-2. Report FHWA-RD-87/008 and 009, Federal Highway Administration, Department of Transportation, USA] to its parameters was investigated by the fractional factorial analysis method. The reason for selecting this particular model is that it incorporates both traffic and road geometry parameters besides terrain characteristics. The sensitivity analysis indicated that average daily traffic (ADT), lane width (W), width of paved shoulder (PA), median (H), and their interactions (i.e., ADT-W, ADT-PA and ADT-H) have significant effects on the number of accidents. Based on the absolute value of parameter effects at the three- and two-standard-deviation thresholds, ADT was found to be of primary importance, while the remaining identified parameters seemed to be of secondary importance. This agrees with the fact that ADT is among the most influential parameters in determining road geometry and is therefore directly related to the number of accidents. Overall, the fractional factorial method was found to be an efficient tool to examine the relative importance of the selected accident prediction model parameters.
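
    A two-level fractional factorial screen of this sort estimates each main effect as the difference between the mean response at the high and low levels. A toy sketch (a 2^(3-1) half fraction; the response function stands in for the Zegeer et al. model and is invented):

```python
import numpy as np

# 2^(3-1) half fraction with defining relation I = ABC (so C = A*B);
# each main effect is therefore aliased with a two-factor interaction.
design = np.array([[-1, -1, +1],
                   [+1, -1, -1],
                   [-1, +1, -1],
                   [+1, +1, +1]])

def accidents(adt, lane_w, shoulder_w):
    # stand-in response; a real study would evaluate the accident model here
    return 3.0 + 1.2 * adt - 0.4 * lane_w - 0.2 * shoulder_w + 0.3 * adt * lane_w

y = np.array([accidents(*row) for row in design])

for name, col in zip(["ADT", "W", "PA"], design.T):
    effect = y[col == +1].mean() - y[col == -1].mean()
    print(f"main effect of {name}: {effect:+.3f}")
```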

  11. An Evaluation of the Hazard Prediction and Assessment Capability (HPAC) Software’s Ability to Model the Chornobyl Accident

    DTIC Science & Technology

    2002-03-01

    source term. Several publications provided a thorough accounting of the accident, including “Chernobyl Record” [Mould], and the NRC technical report ... “Report on the Accident at the Chernobyl Nuclear Power Station” [NUREG-1250]. The most comprehensive study of transport models to predict the ... from the Chernobyl Accident: The ATMES Report” [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data

  12. Application of Gray Markov SCGM(1,1) c Model to Prediction of Accidents Deaths in Coal Mining.

    PubMed

    Lan, Jian-Yi; Zhou, Ying

    2014-01-01

    The prediction of mine accidents is the basis of mine safety assessment and decision making. Grey prediction is suitable for system objects with few data, short time series, and little fluctuation, while Markov chain theory is suitable for forecasting stochastically fluctuating dynamic processes. Analyzing the human-error causes of coal mine accidents and combining the advantages of grey prediction and Markov theory, an amended Grey Markov SCGM(1,1)c model is proposed. The grey SCGM(1,1)c model is applied to imitate the development tendency of mine safety accidents, the amended model is adopted to improve prediction accuracy, and Markov prediction is used to predict the fluctuation along the tendency. Finally, the new model is fitted to mine safety accident deaths in China from 1990 to 2010, and coal accident deaths for 2011-2014 are predicted. The results show that the new model not only captures the trend of the human-error accident death toll but also overcomes the effect of the random fluctuation of the data on precision, and it possesses strong engineering applicability.
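
    The grey GM(1,1) machinery underlying the SCGM(1,1)c model fits a first-order equation to the accumulated series and differences the result back. A minimal sketch (the SCGM variant and the Markov correction add further steps; the series is illustrative, not the paper's data):

```python
import numpy as np

x0 = np.array([7.0, 6.4, 6.0, 5.7, 5.3, 5.0])  # e.g. annual deaths, thousands
x1 = np.cumsum(x0)                             # accumulated series (1-AGO)
z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values

# Least squares for the grey differential equation x0[k] + a*z1[k] = b.
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

def predict(k):
    """Fitted/forecast value of the original series at index k."""
    if k == 0:
        return x0[0]
    x1_k  = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_k1 = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_k - x1_k1

print([round(predict(k), 2) for k in range(len(x0) + 2)])  # fit + 2-step forecast
```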

  13. Compartment model for long-term contamination prediction in deciduous fruit trees after a nuclear accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1990-06-01

    Radiocesium contamination from the Chernobyl accident of different parts (fruits, leaves, and shoots) of selected apricot trees in North Greece was systematically measured in 1987 and 1988. The results are presented and discussed in the framework of a simple compartment model describing the long-term contamination uptake mechanism of deciduous fruit trees after a nuclear accident.

  14. A combined M5P tree and hazard-based duration model for predicting urban freeway traffic accident durations.

    PubMed

    Lin, Lei; Wang, Qian; Sadek, Adel W

    2016-06-01

    The duration of freeway traffic accidents is an important factor, which affects traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normal, the distribution of a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling, and given this, HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction, which improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification, and avoids the need for the incorrect assumption of normality for traffic accident durations. The proposed model was then tested on two freeway accident datasets. For each dataset, the first 500 records were used to train the following three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM; the remainder of the data were used for testing. The results show that the proposed M5P-HBDM managed to identify more significant and meaningful variables than either M5P or HBDMs. Moreover, the M5P-HBDM had the lowest overall mean
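
    The core idea, leaves that hold duration distributions rather than linear regressions, can be caricatured with a single split and Weibull leaves. A hedged toy sketch (a real M5P-HBDM grows and prunes the tree and uses covariate-dependent hazards; the data here are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lanes_blocked = rng.integers(0, 3, size=400)
# synthetic incident durations (minutes): more blocked lanes, longer incidents
durations = stats.weibull_min.rvs(c=1.5, scale=30 + 15 * lanes_blocked,
                                  random_state=rng)

def fit_leaf(d):
    shape, _, scale = stats.weibull_min.fit(d, floc=0)  # location fixed at 0
    return shape, scale

left  = fit_leaf(durations[lanes_blocked == 0])   # leaf for lanes_blocked < 1
right = fit_leaf(durations[lanes_blocked >= 1])   # leaf for lanes_blocked >= 1
print(f"leaf (0 lanes):  shape={left[0]:.2f}  scale={left[1]:.1f} min")
print(f"leaf (1+ lanes): shape={right[0]:.2f}  scale={right[1]:.1f} min")
```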

  15. Predicting road accidents: Structural time series approach

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-07-01

    In this paper, a model for the occurrence of road accidents in Malaysia between 1970 and 2010 was developed, and the number of road accidents was predicted using the structural time series approach. The models were developed using a stepwise method, and the residuals at each step were analyzed. The accuracy of the models was assessed using the mean absolute percentage error (MAPE), and the best model was chosen based on the smallest Akaike information criterion (AIC) value. The structural time series approach found that the local linear trend model best represents the road accidents; this model allows the level and slope components to vary over time. In addition, the approach also provides useful information for improving on conventional time series methods.
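
    The selected specification is directly available in standard state-space tooling. A hedged sketch using a synthetic stand-in for the 1970-2010 series:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
t = np.arange(41)                                   # 1970-2010
accidents = 50_000 + 4_000 * t + rng.normal(0, 5_000, size=t.size)

# Local linear trend: both the level and the slope evolve over time.
model = sm.tsa.UnobservedComponents(accidents, level="local linear trend")
res = model.fit(disp=False)

print(res.aic)                # the selection criterion used in the paper
print(res.forecast(steps=5))  # prediction for the next five years
```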

  16. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1996-09-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation is confirmed by further tests at high temperatures as well as by finite element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation is confirmed by finite element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure is developed and validated by tests under varying temperature and pressure loading expected during severe accidents.

  17. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  18. Do cognitive models help in predicting the severity of posttraumatic stress disorder, phobia, and depression after motor vehicle accidents? A prospective longitudinal study.

    PubMed

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-04-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months later. Diagnoses were established with the Structured Clinical Interview for DSM-IV. Predictors included initial symptom severities; variables established as predictors of PTSD in E. J. Ozer, S. R. Best, T. L. Lipsey, and D. S. Weiss's (2003) meta-analysis; and variables derived from cognitive models of PTSD, phobia, and depression. Results of nonparametric multiple regression analyses showed that the cognitive variables predicted subsequent PTSD and depression severities over and above what could be predicted from initial symptom levels. They also showed greater predictive power than the established predictors, although the latter showed similar effect sizes as in the meta-analysis. In addition, the predictors derived from cognitive models of PTSD and depression were disorder-specific. The results support the role of cognitive factors in the maintenance of emotional disorders following trauma.

  19. Do Cognitive Models Help in Predicting the Severity of Posttraumatic Stress Disorder, Phobia, and Depression After Motor Vehicle Accidents? A Prospective Longitudinal Study

    PubMed Central

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-01-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months later. Diagnoses were established with the Structured Clinical Interview for DSM–IV. Predictors included initial symptom severities; variables established as predictors of PTSD in E. J. Ozer, S. R. Best, T. L. Lipsey, and D. S. Weiss's (2003) meta-analysis; and variables derived from cognitive models of PTSD, phobia, and depression. Results of nonparametric multiple regression analyses showed that the cognitive variables predicted subsequent PTSD and depression severities over and above what could be predicted from initial symptom levels. They also showed greater predictive power than the established predictors, although the latter showed similar effect sizes as in the meta-analysis. In addition, the predictors derived from cognitive models of PTSD and depression were disorder-specific. The results support the role of cognitive factors in the maintenance of emotional disorders following trauma. PMID:18377119

  20. Modelling road accidents: An approach using structural time series

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia from 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitting model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
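
    The AIC-based selection among structural specifications can be sketched as a simple loop; the monthly series below is synthetic and the candidate set is an assumption:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
months = 144
season = 10 * np.sin(2 * np.pi * np.arange(months) / 12)
y = 500 + season + rng.normal(0, 20, size=months)

candidates = {
    "local level":            dict(level="local level"),
    "local level + seasonal": dict(level="local level", seasonal=12),
    "local linear trend":     dict(level="local linear trend"),
}
for name, spec in candidates.items():
    res = sm.tsa.UnobservedComponents(y, **spec).fit(disp=False)
    print(f"{name:24s} AIC = {res.aic:.1f}")   # smallest AIC wins
```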

  1. Do Cognitive Models Help in Predicting the Severity of Posttraumatic Stress Disorder, Phobia, and Depression after Motor Vehicle Accidents? A Prospective Longitudinal Study

    ERIC Educational Resources Information Center

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-01-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months…

  2. Modelling Accident Tolerant Fuel Concepts

    SciTech Connect

    Hales, Jason Dean; Gamble, Kyle Allan Lawrence

    2016-05-01

    The catastrophic events that occurred at the Fukushima-Daiichi nuclear power plant in 2011 have led to widespread interest in research of alternative fuels and claddings that are proposed to be accident tolerant. The United States Department of Energy (DOE) through its Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has funded an Accident Tolerant Fuel (ATF) High Impact Problem (HIP). The ATF HIP is a three-year project to perform research on two accident tolerant concepts. The final outcome of the ATF HIP will be an in-depth report to the DOE Advanced Fuels Campaign (AFC) giving a recommendation on whether either of the two concepts should be included in their lead test assembly scheduled for placement into a commercial reactor in 2022. The two ATF concepts under investigation in the HIP are uranium silicide fuel and iron-chromium-aluminum (FeCrAl) alloy cladding. Utilizing the expertise of three national laboratory participants (Idaho National Laboratory, Los Alamos National Laboratory, and Argonne National Laboratory), a comprehensive multiscale approach to modeling is being used that includes atomistic modeling, molecular dynamics, rate theory, phase-field, and fuel performance simulations. Model development and fuel performance analysis are critical since a full suite of experimental studies will not be complete before AFC must prioritize concepts for focused development. In this paper, we present simulations of the two proposed accident tolerance fuel systems: U3Si2 fuel with Zircaloy-4 cladding, and UO2 fuel with FeCrAl cladding. Sensitivity analyses are completed using Sandia National Laboratories’ Dakota software to determine which input parameters (e.g., fuel specific heat) have the greatest influence on the output metrics of interest (e.g., fuel centerline temperature). We also outline the multiscale modelling approach being employed. Considerable additional work is required prior to preparing the recommendation report for the Advanced

  3. Development of a model to predict flow oscillations in low-flow sodium boiling. [Loss-of-Piping Integrity accidents]

    SciTech Connect

    Levin, A.E.; Griffith, P.

    1980-04-01

    Tests performed in a small-scale water loop showed that voiding oscillations, similar to those observed in sodium, were present in water as well. An analytical model, appropriate for either sodium or water, was developed and used to describe the water flow behavior. The experimental results indicate that water can be successfully employed as a sodium simulant and, further, that the condensation heat transfer coefficient varies significantly during the growth and collapse of vapor slugs during oscillations. It is this variation, combined with the temperature profile of the unheated zone above the heat source, that determines the oscillatory behavior of the system. The analytical program has produced a model that does a good job of qualitatively predicting the flow behavior in the water experiment. The amplitude discrepancies are attributable to experimental uncertainties and model inadequacies. Several parameters (heat transfer coefficient, unheated zone temperature profile, mixing between hot and cold fluids during oscillations) are set by the user. Criteria for the comparison of water and sodium experiments have been developed.

  4. FASTGRASS: A mechanistic model for the prediction of Xe, I, Cs, Te, Ba, and Sr release from nuclear fuel under normal and severe-accident conditions

    SciTech Connect

    Rest, J.; Zawadzki, S.A.

    1992-09-01

    The primary physical/chemical models that form the basis of the FASTGRASS mechanistic computer model for calculating fission-product release from nuclear fuel are described. Calculated results are compared with test data and the major mechanisms affecting the transport of fission products during steady-state and accident conditions are identified.

  5. [Chest modelling and automotive accidents].

    PubMed

    Trosseille, Xavier

    2011-11-01

    Automobile development is increasingly based on mathematical modeling. Accurate models of the human body are now available and serve to develop new means of protection. These models used to consist of rigid, articulated bodies but are now made of several million finite elements. They are now capable of predicting some risks of injury. To develop these models, sophisticated tests were conducted on human cadavers. For example, chest modeling started with material characterization and led to complete validation in the automobile environment. Model personalization, based on medical imaging, will permit studies of the behavior and tolerances of the entire population.

  6. An exploration of the utility of mathematical modeling predicting fatigue from sleep/wake history and circadian phase applied in accident analysis and prevention: the crash of Comair Flight 5191.

    PubMed

    Pruchnicki, Shawn A; Wu, Lora J; Belenky, Gregory

    2011-05-01

    On 27 August 2006 at 0606 eastern daylight time (EDT) at Bluegrass Airport in Lexington, KY (LEX), the flight crew of Comair Flight 5191 inadvertently attempted to take off from a general aviation runway too short for their aircraft. The aircraft crashed, killing 49 of the 50 people on board. To better understand this accident and to aid in preventing similar accidents, we applied mathematical modeling to predict fatigue-related degradation in performance for the Air Traffic Controller on duty at the time of the crash. To provide the necessary input to the model, we attempted to estimate circadian phase and sleep/wake histories for the Captain, First Officer, and Air Traffic Controller. We were able to estimate with confidence the circadian phase for each. We were able to estimate with confidence the sleep/wake history for the Air Traffic Controller, but unable to do this for the Captain and First Officer. Using the sleep/wake history estimates for the Air Traffic Controller as input, the mathematical modeling predicted moderate fatigue-related performance degradation at the time of the crash. This prediction was supported by the presence of what appeared to be fatigue-related behaviors in the Air Traffic Controller during the 30 min prior to and in the minutes after the crash. Our modeling results do not definitively establish fatigue in the Air Traffic Controller as a cause of the accident; rather, they suggest that had he been less fatigued he might have detected Comair Flight 5191's lining up on the wrong runway. We were not able to perform a similar analysis for the Captain and First Officer because we were not able to estimate with confidence their sleep/wake histories. Our estimates of sleep/wake history and circadian rhythm phase for the Air Traffic Controller might generalize to other air traffic controllers and to flight crew operating in the early morning hours at LEX. Relative to other times of day, the modeling results suggest an elevated risk of fatigue

  7. A critical review of macro models for road accidents.

    PubMed

    Hakim, S; Shefer, D; Hakkert, A S; Hocherman, I

    1991-10-01

    This paper presents a critical review of state-of-the-art macro models for road accidents. Such a review is meant to identify and establish the significance of policy and socioeconomic variables affecting the level of road accidents. The aim is to identify those variables associated with effective policies and interventions to enable decision makers to improve the level of road safety. The variables that appear to affect the number of fatalities or injuries are: vehicle miles travelled (VMT), vehicle population, income (in its various forms), percentage of young drivers, intervention policies such as speed limits, periodic vehicle inspection, and minimum alcohol-drinking age. Viewed critically, the state-of-the-art models being used to explain and predict road accidents are still deficient. One possible approach to correcting this deficiency draws from consumer utility theory, using analytical models built on a newly constructed theoretical framework. Success in estimating such models may improve predictions of road accidents, thus demonstrating the comparative cost effectiveness of alternative intervention policies.

  8. Modeling secondary accidents identified by traffic shock waves.

    PubMed

    Junhua, Wang; Boya, Liu; Lanfang, Zhang; Ragland, David R

    2016-02-01

    The high potential for occurrence and the negative consequences of secondary accidents make them an issue of great concern for freeway safety. Using accident records from a three-year period together with California interstate freeway loop data, a dynamic method for more accurate classification based on traffic shock wave detection was used to identify secondary accidents. Spatio-temporal gaps between the primary and secondary accidents were shown to be well fitted by a mixture of Weibull and normal distributions. A logistic regression model was developed to investigate major factors contributing to secondary accident occurrence. Traffic shock wave speed and volume at the occurrence of a primary accident were explicitly considered in the model, as a secondary accident is defined as an accident that occurs within the spatio-temporal impact scope of the primary accident. Results show that the shock waves originating in the wake of a primary accident have a more significant impact on the likelihood of a secondary accident than traffic volume. Primary accidents with long durations can significantly increase the possibility of secondary accidents. Unsafe speed and weather are other factors contributing to secondary crash occurrence. It is strongly suggested that when police or rescue personnel arrive at the scene of an accident, they should not suddenly block, restrict, or unblock the traffic flow, but instead endeavor to control traffic in a smooth and controlled manner. It is also important to reduce accident processing time to lower the risk of a secondary accident.
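
    A hedged sketch of the kind of logistic model described, with the three headline covariates and synthetic data (variable names mirror the abstract; coefficients are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1_000
shockwave_speed = rng.uniform(0, 20, n)       # backward wave speed, km/h
duration        = rng.uniform(10, 120, n)     # primary accident duration, min
volume          = rng.uniform(500, 2_000, n)  # flow at time of primary accident

logit = -4 + 0.10 * shockwave_speed + 0.02 * duration + 0.0004 * volume
secondary = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([shockwave_speed, duration, volume]))
res = sm.Logit(secondary, X).fit(disp=False)
print(res.params)   # shock-wave speed should dominate, as the paper found
```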

  9. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.; Hall, R.; Martinez, G.; Uryasev, S.

    1996-03-13

    This work explores the hypothesis that Service Difficulty Reports (SDR - primarily inspection reports) are related to Accident Incident Data System (AIDS - reports primarily compiled from National Transportation Safety Board (NTSB) accident investigations). This work sought and found relations between equipment operability reported in the SDR and aviation safety reported in AIDS. Equipment is not the only factor in aviation accidents, but it is the factor reported in the SDR. Two approaches to risk analysis were used: (1) The conventional method, in which reporting frequencies are taken from a data base (SDR), and used with an aircraft reliability block diagram model of the critical systems to predict aircraft failure, and (2) Shape analysis that uses the magnitude and shape of the SDR distribution compared with the AIDS distribution to predict aircraft failure.

  10. Dust mobilization and transport modeling for loss of vacuum accidents

    SciTech Connect

    P.W. Humrickhouse; J.P. Sharpe

    2007-10-01

    We develop a general continuum fluid dynamic model for dust transport in loss of vacuum accidents in fusion energy systems. The relationship between this general approach and established particle transport methods is clarified, in particular the relationship between the seemingly disparate treatments of aerosol dynamics and Lagrangian particle tracking. Constitutive equations for granular flow are found to be inadequate for prediction of mobilization, as these models essentially impose a condition of flow from the outset. Experiments confirm that at low shear, settled dust piles behave more like a continuum solid, and suitable solid models will be required to predict the onset of dust mobilization.

  11. Predicting and analyzing the trend of traffic accidents deaths in Iran in 2014 and 2015

    PubMed Central

    Mehmandar, Mohammadreza; Soori, Hamid; Mehrabi, Yadolah

    2016-01-01

    Background: Predicting the trend in traffic accident deaths and analyzing it can be a useful tool for planning and policy-making, conducting interventions appropriate to the death trend, and taking the actions required for controlling and preventing future occurrences. Objective: To predict and analyze the trend of traffic accident deaths in Iran in 2014 and 2015. Settings and Design: A cross-sectional study. Materials and Methods: All the information related to fatal traffic accidents available in the database of the Iran Legal Medicine Organization from 2004 to the end of 2013 was used to determine the change points (multivariable time series analysis). Using an autoregressive integrated moving average (ARIMA) model, traffic accident death rates were predicted for 2014 and 2015, and the predicted rates were compared with actual values in order to determine the efficiency of the model. Results: The predicted death rate for 2014 was very close to the actual rate recorded for that year, while for 2015 a decrease compared with the previous year (2014) was predicted for all the months, with a maximum of 41% for January and February 2015. Conclusion: Based on the prediction and analysis of the death trends, proper application and continuous use of the interventions conducted in previous years for road safety improvement and motor vehicle safety improvement, particularly training and culture-fostering interventions, as well as approval and enforcement of deterrent regulations for changing organizational behaviors, can significantly decrease the losses caused by traffic accidents. PMID:27308255
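
    An ARIMA forecast of this general shape takes a few lines; the order (1, 1, 1) and the series below are assumptions, as the paper selects its own specification from the 2004-2013 registry data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
deaths = 2_000 - 5 * np.arange(120) + rng.normal(0, 60, 120)  # monthly, 10 years

res = ARIMA(deaths, order=(1, 1, 1)).fit()
forecast = res.get_forecast(steps=24)   # the 2014-2015 horizon
print(forecast.predicted_mean[:6])      # point forecasts
print(forecast.conf_int()[:6])          # 95% intervals by default
```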

  12. Predicted spatio-temporal dynamics of radiocesium deposited onto forests following the Fukushima nuclear accident

    PubMed Central

    Hashimoto, Shoji; Matsuura, Toshiya; Nanko, Kazuki; Linkov, Igor; Shaw, George; Kaneko, Shinji

    2013-01-01

    The majority of the area contaminated by the Fukushima Dai-ichi nuclear power plant accident is covered by forest. To facilitate effective countermeasure strategies to mitigate forest contamination, we simulated the spatio-temporal dynamics of radiocesium deposited into Japanese forest ecosystems in 2011 using a model that was developed after the Chernobyl accident in 1986. The simulation revealed that the radiocesium inventories in tree and soil surface organic layer components drop rapidly during the first two years after the fallout. Over a period of one to two years, the radiocesium is predicted to move from the tree and surface organic soil to the mineral soil, which eventually becomes the largest radiocesium reservoir within forest ecosystems. Although the uncertainty of our simulations should be considered, the results provide a basis for understanding and anticipating the future dynamics of radiocesium in Japanese forests following the Fukushima accident. PMID:23995073
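
    The tree-to-organic-soil-to-mineral-soil transfer described above is the behavior of a first-order compartment chain. As a generic reference form (our notation; not necessarily the model's exact structure):

    \[
    \frac{\mathrm{d}A_{\mathrm{tree}}}{\mathrm{d}t} = -k_{1} A_{\mathrm{tree}},
    \qquad
    \frac{\mathrm{d}A_{\mathrm{org}}}{\mathrm{d}t} = k_{1} A_{\mathrm{tree}} - k_{2} A_{\mathrm{org}},
    \qquad
    \frac{\mathrm{d}A_{\mathrm{min}}}{\mathrm{d}t} = k_{2} A_{\mathrm{org}},
    \]

    under which \(A_{\mathrm{min}}\) grows monotonically and eventually becomes the largest reservoir, consistent with the simulated dynamics.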

  13. PREDICTIVE MODELS

    SciTech Connect

    Ray, R.M.

    1986-12-01

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; 2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; 3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; 4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and 5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  14. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.R.; Hall, R.E.; Martinez-Guridi, G.; Uryasev, S.; Sampath, S.G.

    1996-10-01

    A synthetic model of scheduled-commercial U.S. aviation fatalities was constructed from linear combinations of the time-spectra of critical systems reporting using 5.5 years of Service Difficulty Reports (SDR) and Accident Incident Data System (AIDS) records. This model, used to predict near-future trends in aviation accidents, was tested by using the first 36 months of data to construct the synthetic model which was used to predict fatalities during the following eight months. These predictions were tested by comparison with the fatality data. A reliability block diagram (RBD) and third-order extrapolations also were used as predictive models and compared with actuality. The synthetic model was the best predictor because of its use of systems data. Other results of the study are a database of service difficulties for major aviation systems, and a rank ordering of systems according to their contribution to the synthesis. 4 refs., 8 figs., 3 tabs.
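
    The synthesis itself amounts to a linear least squares fit of the fatality series to the per-system report series, trained on an early window and extrapolated. A hedged sketch with synthetic series:

```python
import numpy as np

rng = np.random.default_rng(9)
months = 44
sdr = rng.poisson(lam=[20, 35, 15], size=(months, 3))   # 3 critical systems
true_w = np.array([0.08, 0.02, 0.05])
fatalities = sdr @ true_w + rng.normal(0, 0.5, months)  # synthetic target

train = slice(0, 36)                                    # first 36 months
w, *_ = np.linalg.lstsq(sdr[train], fatalities[train], rcond=None)

pred = sdr[36:] @ w                                     # following 8 months
print("fitted weights:      ", np.round(w, 3))
print("held-out predictions:", np.round(pred, 2))
```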

  15. Characterizing the Severe Turbulence Environments Associated With Commercial Aviation Accidents: A Real-Time Turbulence Model (RTTM) Designed for the Operational Prediction of Hazardous Aviation Turbulence Environments

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.

    2004-01-01

    Real-time prediction of environments predisposed to producing moderate-to-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for this prediction, and we present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16,000 to 46,000 ft at 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.
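
    Of the listed products, the Richardson number is the standard shear-instability diagnostic; in its conventional gradient form,

    \[
    \mathrm{Ri} \;=\; \frac{N^{2}}{\left(\partial U/\partial z\right)^{2}},
    \qquad
    N^{2} \;=\; \frac{g}{\theta}\,\frac{\partial \theta}{\partial z},
    \]

    where small values (below roughly 0.25) indicate that wind shear can overcome stratification and generate turbulence.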

  16. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Behavioral accidents are a particular type of accident. They are caused by inappropriate individual behaviors and faulty reactions. Catastrophe theory is a means for mathematically modeling the dynamic processes that underlie behavioral accidents. Based on a comprehensive database of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that carry high risks of serious injury. Field tests with the model indicate that it has three important uses: it can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.

  17. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed, and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere.
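
    The analysis step being modified is the stochastic ensemble Kalman filter update. A minimal generic sketch (state layout, sizes, and noise levels are placeholders, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(11)
n_ens, n_state, n_obs = 50, 4, 2   # e.g. [release rate, rise height, wind u, v]

ensemble = rng.normal(1.0, 0.3, size=(n_ens, n_state))  # forecast ensemble
H = np.array([[1.0, 0.0, 0.0, 0.0],                     # observation operator
              [0.0, 1.0, 0.0, 0.0]])
R = np.diag([0.05, 0.05])                               # obs error covariance
y = np.array([1.4, 0.8])                                # monitoring data

X = ensemble - ensemble.mean(axis=0)                    # ensemble anomalies
P = X.T @ X / (n_ens - 1)                               # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain

# Perturbed observations keep the analysis spread statistically consistent.
y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)
analysis = ensemble + (y_pert - ensemble @ H.T) @ K.T
print("analysis mean:", analysis.mean(axis=0))
```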

  18. Usefulness of high resolution coastal models for operational oil spill forecast: the Full City accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-06-01

    Oil spill modeling is considered to be an important decision support system (DeSS) useful for remedial action in case of accidents, as well as for designing the environmental monitoring systems that are frequently set up after major accidents. Many accidents take place in coastal areas, implying that low-resolution basin-scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill connected with the Full City accident on the Norwegian south coast and compare three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but when an analysis based on a higher-resolution model (1.5 km resolution) for the area is included, the model system shows results that compare well with observations. The study also shows that an ensemble using three different models is useful when predicting/analyzing oil spills in coastal areas.

  19. Usefulness of high resolution coastal models for operational oil spill forecast: the "Full City" accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-11-01

    Oil spill modeling is considered to be an important part of a decision support system (DeSS) for oil spill combatment and is useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low resolution basin scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the "Full City" accident on the Norwegian south coast and compare operational simulations from three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but by applying ocean forcing data of higher resolution (1.5 km resolution), the model system shows results that compare well with observations. The study also shows that an ensemble of results from the three different models is useful when predicting/analyzing oil spill in coastal areas.

  20. Prediction of Severe Accident Counter Current Natural Circulation Flows in the Hot Leg of a Pressurized Water Reactor

    SciTech Connect

    Boyd, Christopher F.

    2006-07-01

    During certain phases of a severe accident in a pressurized water reactor (PWR), the core becomes uncovered and steam carries heat to the steam generators through natural circulation. For PWRs with U-tube steam generators and loop seals filled with water, a counter-current flow pattern is established in the hot leg. This flow pattern has been experimentally observed and has been predicted using computational fluid dynamics (CFD). Predictions of severe accident behavior are routinely carried out using severe accident system analysis codes such as SCDAP/RELAP5 or MELCOR. These codes, however, were not developed for predicting the three-dimensional natural circulation flow patterns during this phase of a severe accident. CFD, along with a set of experiments at 1/7th scale, has historically been used to establish the flow rates and mixing for the system analysis tools. One important aspect of these predictions is the counter-current flow rate in the nearly 30-inch-diameter hot leg between the reactor vessel and the steam generator. This flow rate is strongly related to the amount of energy that can be transported away from the reactor core, and this energy transfer plays a significant role in the prediction of core failures as well as potential failures in other reactor coolant system piping. CFD is used to determine the counter-current flow rate during a severe accident. Specific sensitivities are completed for parameters such as surge line flow rates, hydrogen content, and vessel and steam generator temperatures. The predictions are carried out for the reactor vessel upper plenum, the hot leg, a portion of the surge line, and a steam generator blocked off at the outlet plenum. All predictions utilize the FLUENT V6 CFD code. The volumetric flow in the hot leg is assumed to be proportional to the square root of the product of the normalized density difference, gravity, and the hydraulic diameter to the 5th power. CFD is used to determine the proportionality constant in the range
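
    Written out, the assumed scaling is

    \[
    Q \;=\; C \sqrt{\, g \,\frac{\Delta\rho}{\bar{\rho}}\, D^{5} \,},
    \]

    where \(Q\) is the hot-leg volumetric flow, \(\Delta\rho/\bar{\rho}\) the normalized density difference, \(D\) the hydraulic diameter, and \(C\) the proportionality constant that the CFD calculations are used to determine.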

  1. Accidents and unpleasant incidents: worry in transport and prediction of travel behavior.

    PubMed

    Backer-Grøndahl, Agathe; Fyhri, Aslak; Ulleberg, Pål; Amundsen, Astrid Helene

    2009-09-01

    Worry on nine different means of transport was measured in a Norwegian sample of 853 respondents. The main aim of the study was to investigate differences between worry about accidents and worry about unpleasant incidents, and how these two sorts of worry relate to various means of transport as well as to transport behavior. Factor analyses of worry about accidents suggested a division between rail transport, road transport, and nonmotorized transport, whereas analyses of worry about unpleasant incidents suggested a division between transport modes where one interacts with other people and "private" transport modes. Moreover, mean ratings of worry showed that respondents worried more about accidents than unpleasant incidents on private transport modes, and more about unpleasant incidents than accidents on public transport modes. Support for the distinction between worry about accidents and unpleasant incidents was also found when investigating relationships between both types of worry and behavioral adaptations: worry about accidents was more important than worry about unpleasant incidents in relation to behavioral adaptations on private means of transport, whereas the opposite was true for public means of transport. Finally, predictors of worry were investigated. The models of worry about accidents and worry about unpleasant incidents differed in which predictors turned out to be significant. Knowledge about people's worries on different means of transport is important for understanding and influencing transport and travel behavior, as well as for attending to commuters' welfare.

  2. Catastrophe model of the accident process, safety climate, and anxiety.

    PubMed

    Guastello, Stephen J; Lynn, Mark

    2014-04-01

    This study aimed (a) to address the evidence for situational specificity in the connection between safety climate and occupational accidents, (b) to resolve similar issues between anxiety and accidents, (c) to expand and develop the concept of safety climate to include a wider range of organizational constructs, and (d) to assess a cusp catastrophe model for occupational accidents in which safety climate and anxiety are treated as bifurcation variables and environmental hazards as asymmetry variables. Bifurcation, or trigger, variables can have a positive or negative effect on outcomes, depending on the levels of asymmetry, or background, variables. The participants were 1262 production employees of two steel manufacturing facilities who completed a survey that measured safety management, anxiety, subjective danger, dysregulation, stressors and hazards. Nonlinear regression analyses showed, for this industry, that the accident process was explained by a cusp catastrophe model in which safety management and anxiety were bifurcation variables, and hazards, age and experience were asymmetry variables. The accuracy of the cusp model (R2 = .72) exceeded that of the next best log-linear model (R2 = .08) composed from the same survey variables. The results are thought to generalize to any industry where serious injuries could occur, although situationally specific effects should be anticipated as well.
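    Guastello-type cusp analyses are typically run as polynomial regressions of the change in the accident criterion on a cubic term, bifurcation-by-state interactions, and asymmetry terms, with R2 compared against a competing linear model. A minimal sketch on synthetic data (all coefficients and variable names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z1 = rng.normal(size=n)      # accident criterion at time 1 (standardized)
B = rng.normal(size=n)       # bifurcation variable, e.g. safety climate
A = rng.normal(size=n)       # asymmetry variable, e.g. hazards
dz = 0.5 * z1**3 - 0.4 * B * z1 + 0.3 * A + rng.normal(scale=0.5, size=n)

# Cusp polynomial regression: dz = b0 + b1*z1^3 + b2*z1^2 + b3*B*z1 + b4*A
X = np.column_stack([np.ones(n), z1**3, z1**2, B * z1, A])
beta, *_ = np.linalg.lstsq(X, dz, rcond=None)
r2_cusp = 1 - (dz - X @ beta).var() / dz.var()

# Competing linear model built from the same variables
Xlin = np.column_stack([np.ones(n), z1, B, A])
bl, *_ = np.linalg.lstsq(Xlin, dz, rcond=None)
r2_lin = 1 - (dz - Xlin @ bl).var() / dz.var()
print(f"R2 cusp = {r2_cusp:.2f}, R2 linear = {r2_lin:.2f}")
```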

  3. The modelling of fuel volatilisation in accident conditions

    NASA Astrophysics Data System (ADS)

    Manenc, H.; Mason, P. K.; Kissane, M. P.

    2001-04-01

    For oxidising conditions, at high temperatures, the pressure of uranium vapour species at the fuel surface is predicted to be high. These vapour species can be transported away from the fuel surface, giving rise to significant amounts of volatilised fuel, as has been observed during small-scale experiments and taken into account in different models. Hence, fuel volatilisation must be taken into account in the conduct of a simulated severe accident such as the Phebus FPT-4 experiment. This large-scale in-pile test is designed to investigate the release of fission products and actinides from irradiated UO2 fuel in a debris bed and molten pool configuration. Best-estimate predictions for fuel volatilisation were performed before the test. This analysis was used to assess the maximum possible loading of filters collecting emissions and the consequences for the filter-change schedule. Following successful completion of the experiment, blind post-test analysis is being performed; boundary conditions for the calculations are based on the preliminary post-test analysis with the core degradation code ICARE2 [J.C. Crestia, G. Repetto, S. Ederli, in: Proceedings of the Fourth Technical Seminar on the PHEBUS FP Programme, Marseille, France, 20-22 March 2000]. The general modelling approach is presented here and then illustrated by the analysis of fuel volatilisation in Phebus FPT-4 (for which results are not yet available). Effort was made to reduce uncertainties in the calculations by improving the understanding of controlling physical processes and by using critically assessed thermodynamic data to determine uranium vapour pressures. The analysis presented here constitutes a preliminary, blind, post-test estimate of fuel volatilised during the test.

  4. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  5. Advanced accident sequence precursor analysis level 2 models

    SciTech Connect

    Galyean, W.J.; Brownson, D.A.; Rempe, J.L.

    1996-03-01

    The U.S. Nuclear Regulatory Commission Accident Sequence Precursor program pursues the ultimate objective of performing risk-significant evaluations of operational events (precursors) occurring in commercial nuclear power plants. To achieve this objective, the Office of Nuclear Regulatory Research is supporting the development of simple probabilistic risk assessment models for all commercial nuclear power plants (NPP) in the U.S. Presently, only simple Level 1 plant models have been developed, which estimate core damage frequencies. In order to provide a true risk perspective, the consequences associated with postulated core damage accidents also need to be considered. With the objective of performing risk evaluations in an integrated and consistent manner, a linked event tree approach was developed that propagates the front-end results to the back end. This approach utilizes simple plant models that analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude and timing of a radioactive release to the environment, and calculate the consequences for a given release. Detailed models and results from previous studies, such as the NUREG-1150 study, are used to quantify these simple models. These simple models are then linked to the existing Level 1 models, and are evaluated using the SAPHIRE code. To demonstrate the approach, prototypic models have been developed for a boiling water reactor, Peach Bottom, and a pressurized water reactor, Zion.

  6. Accident sequence precursor analysis level 2/3 model development

    SciTech Connect

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-02-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models.

  7. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Based on a comprehensive database of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that carry high risks of serious injury. Field tests with the model indicate that it has three important uses: it can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.

  8. An application of probabilistic safety assessment methods to model aircraft systems and accidents

    SciTech Connect

    Martinez-Guridi, G.; Hall, R.E.; Fullwood, R.R.

    1998-08-01

    A case study modeling the thrust reverser system (TRS) in the context of the fatal accident of a Boeing 767 is presented to illustrate the application of Probabilistic Safety Assessment methods. A simplified risk model consisting of an event tree with supporting fault trees was developed to represent the progression of the accident, taking into account the interaction between the TRS and the operating crew during the accident, and the findings of the accident investigation. A feasible sequence of events leading to the fatal accident was identified. Several insights about the TRS and the accident were obtained by applying PSA methods. Changes proposed for the TRS also are discussed.

  9. Effects of quenched randomness induced by car accidents on traffic flow in a cellular automata model.

    PubMed

    Yang, Xian-Qing; Ma, Yu-Qiang; Zhao, Yue-Min

    2004-10-01

    In this paper we numerically study the impact of quenched disorder induced by car accidents on traffic flow in the Nagel-Schreckenberg (NS) model. Car accidents occur when the necessary conditions proposed by [J. Phys. A 30, 3329 (1997)

  10. System analysis with improved thermo-mechanical fuel rod models for modeling current and advanced LWR materials in accident scenarios

    NASA Astrophysics Data System (ADS)

    Porter, Ian Edward

    A nuclear reactor systems code has the ability to model the system response in an accident scenario based on known initial conditions at the onset of the transient. However, there has been a tendency for these codes to lack the detailed thermo-mechanical fuel rod response models needed for accurate prediction of fuel rod failure. This proposed work will couple today's most widely used steady-state (FRAPCON) and transient (FRAPTRAN) fuel rod models with the systems code TRACE for best-estimate modeling of system response in accident scenarios such as a loss of coolant accident (LOCA). In doing so, code modifications will be made to model gamma heating in LWRs during steady-state and accident conditions and to improve fuel rod thermal/mechanical analysis by allowing axial nodalization of burnup-dependent phenomena such as swelling, cladding creep and oxidation. With the ability to model both burnup-dependent parameters and transient fuel rod response, a fuel dispersal study will be conducted using a hypothetical accident scenario under both PWR and BWR conditions to determine the amount of fuel dispersed under varying conditions. Because both the fuel fragmentation size and the internal rod pressure depend on burnup, this analysis will be conducted at beginning, middle and end of cycle to examine the effects that cycle time can have on fuel rod failure and dispersal. Current fuel rod and system codes used by the Nuclear Regulatory Commission (NRC) are compilations of legacy codes covering only commonly used light water reactor materials: uranium dioxide (UO2), mixed oxide (U/PuO2) and zirconium alloys. However, the events at Fukushima Daiichi and Three Mile Island have shown the need for exploration into advanced materials possessing improved accident tolerance. This work looks to further modify the NRC codes to include silicon carbide (SiC), an advanced cladding material proposed by current DOE-funded research on accident tolerant fuels (ATF). Several

  11. Modelling the oil spill track from Prestige-Nassau accident

    NASA Astrophysics Data System (ADS)

    Montero, P.; Leitao, P.; Penabad, E.; Balseiro, C. F.; Carracedo, P.; Braunschweig, F.; Fernandes, R.; Gomez, B.; Perez-Munuzuri, V.; Neves, R.

    2003-04-01

    On November 13th 2002, the tank ship Prestige-Nassau sent an SOS signal. The hull of the ship was damaged, producing an oil spill off the Galician coast (NW Spain). The damaged ship headed north, spilling more fuel and affecting the western Galician coast, and then changed course to the south. In this first stage of the accident, the ship spilled around 10,000 metric tons, sinking on the 19th at the Galician Bank, 133 NM off the Galician coast. From the very beginning, monitoring and forecasting of the first slick were carried out. Afterwards, since southwesterly winds are frequent in wintertime, the slick from the initial spill started to move towards the Galician coast. This drift movement was followed by overflights. With the aim of forecasting where and when the slick would reach the coast, simulations with two different models were performed. The first was a very simple drift model forced with the surface winds generated by the ARPS operational model (1) at MeteoGalicia (the regional weather forecast service). The second was a more complex hydrodynamic model, MOHID2000 (2,3), developed by the MARETEC group (Instituto Superior Técnico de Lisboa) in collaboration with GFNL (Grupo de Física Non Lineal, Universidade de Santiago de Compostela). On November 28th, some tarballs appeared south of the main slick. These observations could be explained by taking into account subsurface water movement following Ekman dynamics. New simulations were performed with the aim of better understanding the physics underlying these observations, and agreement between observations and simulations was achieved. We performed simulations with and without the slope current previously calculated by other authors, showing that this current introduces only subtle differences in the slick's arrival point on the coast, with wind as the primary forcing. (1) A two-dimensional particle tracking model for pollution dispersion in A Coruña and Vigo Rias (NW Spain). M. Gómez-Gesteira, P. Montero, R

  12. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    SciTech Connect

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility study purposes. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosion, and intensive radioactivity release, which had previously been analyzed and assumed as postulated accident progressions in PRA models, occurred to varying degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the "real" accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that could very reasonably describe the accident progression for a real and complicated nuclear accident. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for better representation of severe accident progression.

  13. Dynamic modelling of radionuclide uptake by marine biota: application to the Fukushima nuclear power plant accident.

    PubMed

    Vives i Batlle, Jordi

    2016-01-01

    The dynamic model D-DAT was developed to study the dynamics of radionuclide uptake and turnover in biota and sediments in the immediate aftermath of the Fukushima accident. These dynamics are determined by the interplay between the residence time of radionuclides in seawater/sediments and the biological half-lives of elimination by the biota. The model calculates time-variable activity concentrations of (131)I, (134)Cs, (137)Cs and (90)Sr in seabed sediment, fish, crustaceans, molluscs and macroalgae from surrounding activity concentrations in seawater, from which internal and external dose rates are derived. A central element of the model is the inclusion of dynamic transfer of radionuclides to/from sediments by factorising the depletion of radionuclides adsorbed onto suspended particulates, molecular diffusion, pore water mixing and bioturbation, represented by a simple set of differential equations coupled with the biological uptake/turnover processes. In this way, the model is capable of reproducing activity concentrations in sediment more realistically. The model was used to assess the radiological impact of the Fukushima accident on marine biota in the acute phase of the accident. Sediment and biota activity concentrations are within the wide range of actual monitoring data. Activity concentrations in marine biota are thus shown to be better calculated by a dynamic model than with the simpler equilibrium approach based on concentration factors, which tends to overestimate for the acute accident period. Modelled dose rates from external exposure from sediment are also significantly below equilibrium predictions. The model calculations confirm previous studies showing that radioactivity levels in marine biota have been generally below the levels necessary to cause a measurable effect on populations. The model was used in mass-balance mode to calculate total integrated releases of 103, 30 and 3 PBq for (131)I, (137)Cs and (90)Sr, reasonably in line with previous
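    The core of such a dynamic uptake model is a first-order biokinetic balance, roughly dCb/dt = k_u·Cw(t) − (k_e + λ)·Cb, where k_e = ln2/T½(biological) and λ is the radioactive decay constant. A sketch under assumed rates (these are placeholders, not D-DAT parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

k_u = 5.0                # uptake rate from seawater, L/(kg*day) (assumed)
k_e = np.log(2) / 50.0   # elimination, 50-day biological half-life (assumed)
lam = np.log(2) / 8.02   # 131I physical decay constant, 1/day

def seawater(t):
    """Seawater activity (Bq/L), assuming simple exponential flushing
    with a 10-day residence half-time (an assumption, not a D-DAT value)."""
    return 100.0 * np.exp(-np.log(2) / 10.0 * t)

def dCb(t, Cb):
    # uptake from water - biological elimination - radioactive decay
    return k_u * seawater(t) - (k_e + lam) * Cb

sol = solve_ivp(dCb, (0.0, 120.0), [0.0], max_step=0.5)
print(f"peak fish activity ~ {sol.y[0].max():.0f} Bq/kg")
```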

  14. Markov Model of Severe Accident Progression and Management

    SciTech Connect

    Bari, R.A.; Cheng, L.; Cuadra,A.; Ginsberg,T.; Lehner,J.; Martinez-Guridi,G.; Mubayi,V.; Pratt,W.T.; Yue, M.

    2012-06-25

    The earthquake and tsunami that hit the nuclear power plants at the Fukushima Daiichi site in March 2011 led to extensive fuel damage, including possible fuel melting, slumping, and relocation at the affected reactors. A so-called feed-and-bleed mode of reactor cooling was initially established to remove decay heat. The plan was to eventually switch over to a recirculation cooling system. Failure of feed and bleed was a possibility during the interim period. Furthermore, even if recirculation was established, there was a possibility of its subsequent failure. Decay heat has to be sufficiently removed to prevent further core degradation. To understand the possible evolution of the accident conditions and to have a tool for potential future hypothetical evaluations of accidents at other nuclear facilities, a Markov model of the state of the reactors was constructed in the immediate aftermath of the accident and was executed under different assumptions of potential future challenges. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accident. The work began in mid-March and continued until mid-May 2011. The analysis had the following goals: (1) to provide an overall framework for describing possible future states of the damaged reactors; (2) to permit an impact analysis of 'what-if' scenarios that could lead to more severe outcomes; (3) to determine approximate probabilities of alternative end-states under various assumptions about failure and repair times of cooling systems; (4) to infer the reliability requirements of closed-loop cooling systems needed to achieve stable core end-states; and (5) to establish the importance to the results of the various cooling system and physical phenomenological parameters via sensitivity calculations.
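    A minimal sketch of the idea: a continuous-time Markov chain over cooling states with an absorbing core-degradation state, solved with a matrix exponential. The states and all transition rates below are illustrative placeholders, not values from this study.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = feed-and-bleed, 1 = recirculation, 2 = core degradation
# (absorbing). Rates are per day and purely illustrative.
lam_sw = 0.10   # switchover from feed-and-bleed to recirculation
lam_fb = 0.02   # feed-and-bleed failure leading to degradation
lam_rc = 0.01   # recirculation failure leading to degradation
mu_bk = 0.20    # recirculation failure recovered back to feed-and-bleed

Q = np.array([
    [-(lam_sw + lam_fb), lam_sw,            lam_fb],
    [mu_bk,             -(mu_bk + lam_rc),  lam_rc],
    [0.0,                0.0,               0.0   ],
])

p0 = np.array([1.0, 0.0, 0.0])      # start in feed-and-bleed
for t in (30, 90, 365):
    p = p0 @ expm(Q * t)            # state probabilities at time t
    print(f"t = {t:3d} d   P(core degradation) = {p[2]:.3f}")
```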

  15. Car accidents in cellular automata models for one-lane traffic flow

    NASA Astrophysics Data System (ADS)

    Moussa, Najem

    2003-09-01

    Conditions for the occurrence of car accidents are introduced in the Nagel-Schreckenberg model. These conditions are based on the premise that a real accident depends on several parameters: an unexpected action of the car ahead (sudden stop or abrupt deceleration), the gap between the two cars, the velocity of the following car, and its delayed reaction time. We then discuss the effect of this delayed reaction time on the probability of traffic accidents. We find that these conditions for the occurrence of car accidents are necessary for modeling realistic accidents.
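    A loose sketch of the idea: run the standard Nagel-Schreckenberg parallel update and count time steps in which Moussa-style necessary conditions hold (the leader brakes abruptly while the follower's speed matches the available gap). The thresholds and parameters below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, vmax, p = 200, 40, 5, 0.3   # cells, cars, max speed, braking prob.
pos = np.sort(rng.choice(L, N, replace=False))
vel = rng.integers(0, vmax + 1, N)
danger, steps = 0, 1000

for _ in range(steps):
    gap = (np.roll(pos, -1) - pos - 1) % L   # empty cells to car ahead
    v_lead_old = np.roll(vel, -1).copy()     # leader speeds before update
    vel = np.minimum(vel + 1, vmax)          # 1) acceleration
    vel = np.minimum(vel, gap)               # 2) slow down to avoid crash
    vel[rng.random(N) < p] -= 1              # 3) random braking
    vel = np.maximum(vel, 0)
    # Danger count: leader decelerated abruptly (by >= 2) while the
    # follower is moving and its speed equals the available gap.
    abrupt = v_lead_old - np.roll(vel, -1) >= 2
    danger += np.count_nonzero(abrupt & (vel >= gap) & (vel > 0))
    pos = (pos + vel) % L                    # 4) movement

print(f"dangerous situations per car per step: {danger / (N * steps):.4f}")
```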

  16. Advanced accident sequence precursor analysis level 1 models

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.; Schroeder, J.A.; Siu, N.O.

    1996-03-01

    INEL has been involved in the development of plant-specific Accident Sequence Precursor (ASP) models for the past two years. These models were developed for use with the SAPHIRE suite of PRA computer codes. They contained event tree/linked fault tree Level 1 risk models for the following initiating events: general transient, loss of offsite power, steam generator tube rupture, small loss-of-coolant accident, and anticipated transient without scram. Early in 1995 the ASP models were revised based on review comments from the NRC and an independent peer review. These models were released as Revision 1. The Office of Nuclear Regulatory Research has sponsored several projects at the INEL this fiscal year to further enhance the capabilities of the ASP models. The Revision 2 models incorporate more detailed plant information concerning plant response to station blackout conditions, information on battery life, and other unique features gleaned from an Office of Nuclear Reactor Regulation quick review of the Individual Plant Examination submittals. These models are currently being delivered to the NRC as they are completed. A related project is a feasibility study and model development of low power/shutdown (LP/SD) and external event extensions to the ASP models. This project will establish criteria for the selection of LP/SD and external initiator operational events for analysis within the ASP program. Prototype models for each pertinent initiating event (loss of shutdown cooling, loss of inventory control, fire, flood, seismic, etc.) will be developed. A third project concerns development of enhancements to SAPHIRE. In relation to the ASP program, a new SAPHIRE module, GEM, was developed as a specific user interface for performing ASP evaluations. This module greatly simplifies the analysis process for determining the conditional core damage probability for a given combination of initiating events and equipment failures or degradations.

  17. Markov Model of Accident Progression at Fukushima Daiichi

    SciTech Connect

    Cuadra A.; Bari R.; Cheng, L-Y; Ginsberg, T.; Lehner, J.; Martinez-Guridi, G.; Mubayi, V.; Pratt, T.; Yue, M.

    2012-11-11

    On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trip for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of Units 1 and 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation, was likely to have occurred in the core of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: it is a probabilistic model that provides flexibility in scenario construction and incorporates time dependence of different model states, and it readily allows for sensitivity and uncertainty analyses of different failure and repair rates of cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.

  18. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  19. Uncertainty and sensitivity analysis of food pathway results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
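    The sampling-plus-regression machinery in these MACCS studies can be sketched compactly: draw a Latin hypercube sample over the imprecisely known inputs, run the consequence model, then rank-transform and regress to rank the contributors to uncertainty. Everything below (the stand-in model, the five inputs) is invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

n, d = 200, 5
X = qmc.LatinHypercube(d=d, seed=2).random(n)   # LHS on [0, 1]^d

# Stand-in consequence model: a dose-like output dominated by x0 and x3.
rng = np.random.default_rng(2)
y = 3.0 * X[:, 0] + 1.5 * X[:, 3] ** 2 + 0.2 * rng.normal(size=n)

def ranks(a):
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

# Rank transform, then standardized regression coefficients flag the
# dominant contributors to uncertainty (akin to stepwise rank regression).
Xr, yr = ranks(X), ranks(y.reshape(-1, 1)).ravel()
Xs = (Xr - Xr.mean(0)) / Xr.std(0)
ys = (yr - yr.mean()) / yr.std()
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)
for i, b in enumerate(beta[1:]):
    print(f"x{i}: standardized rank-regression coefficient = {b:+.2f}")
```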

  20. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    SciTech Connect

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Approaches to incorporate physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of nuclear power plants designed in Russia on groundwater reservoirs.

  1. ATMOSPHERIC MODELING IN SUPPORT OF A ROADWAY ACCIDENT

    SciTech Connect

    Buckley, R.; Hunter, C.

    2010-10-21

    The United States Forest Service-Savannah River (USFS) routinely performs prescribed fires at the Savannah River Site (SRS), a Department of Energy (DOE) facility located in southwest South Carolina. This facility covers approximately 800 square kilometers and is mainly wooded except for scattered industrial areas containing facilities used in managing nuclear materials for national defense and waste processing. Prescribed fires of forest undergrowth are necessary to reduce the risk of inadvertent wildfires, which have the potential to destroy large areas and threaten nuclear facility operations. This paper discusses meteorological observations and numerical model simulations of an incident in early 2002 in which an early-morning multicar accident was caused by poor visibility along a major roadway on the northern border of the SRS. At the time of the accident, it was not clear whether the limited visibility was due solely to fog or whether smoke from a prescribed burn conducted the previous day just to the northwest of the crash site had contributed to the reduced visibility. Through use of available meteorological information and detailed modeling, it was determined that the primary reason for the low visibility on this night was fog induced by meteorological conditions.

  2. Analysis of traffic accident size for Korean highway using structural equation models.

    PubMed

    Lee, Ju-Yeon; Chung, Jin-Hyuk; Son, Bongsoo

    2008-11-01

    Accident size can be expressed as the number of involved vehicles, the number of damaged vehicles, the number of deaths and/or the number of injured. Accident size is one of the important indices used to measure the level of safety of transportation facilities. Factors such as road geometric conditions, driver characteristics and vehicle type may be related to traffic accident size. However, all these factors interact in complicated ways, so the interrelationships among the variables are not easily identified. A structural equation model is adopted to capture the complex relationships among variables, because this model can handle complex relationships among endogenous and exogenous variables simultaneously and can furthermore include latent variables. In this study, we use 2649 accident records from highways in Korea and estimate the relationships between exogenous factors and traffic accident size. The model suggests that road factors, driver factors and environment factors are strongly related to accident size.
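    As an illustration of the approach (not the paper's actual specification), a structural equation model with a latent road factor and a latent accident-size factor can be written in lavaan-style syntax and fitted with the third-party semopy package; all variable names and coefficients below are invented, and the semopy calls are assumed from its documentation.

```python
import numpy as np
import pandas as pd
from semopy import Model   # third-party SEM package (pip install semopy)

rng = np.random.default_rng(3)
n = 500
road = rng.normal(size=n)   # latent road condition (unobserved)
df = pd.DataFrame({
    "curvature": road + rng.normal(scale=0.5, size=n),
    "lanes": road + rng.normal(scale=0.5, size=n),
    "driver_age": rng.normal(size=n),
})
size = 0.6 * road + 0.3 * df["driver_age"] + rng.normal(scale=0.5, size=n)
df["n_vehicles"] = size + rng.normal(scale=0.5, size=n)
df["n_injured"] = size + rng.normal(scale=0.5, size=n)

# Latent road factor and latent accident size; accident size regressed
# on the road factor and an observed driver variable.
spec = """
road_factor   =~ curvature + lanes
accident_size =~ n_vehicles + n_injured
accident_size ~ road_factor + driver_age
"""
model = Model(spec)
model.fit(df)
print(model.inspect())   # parameter estimates and significance
```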

  3. Low predictive power of peritraumatic dissociation for PTSD symptoms in accident survivors.

    PubMed

    Wittmann, Lutz; Moergeli, Hanspeter; Schnyder, Ulrich

    2006-10-01

    To test the predictive power of peritraumatic dissociation for the development of psychopathology, the authors assessed symptoms of peritraumatic dissociation (Peritraumatic Dissociative Experiences Questionnaire; PDEQ), posttraumatic stress disorder (Clinician-Administered PTSD Scale; CAPS), and anxiety and depression (Hospital Anxiety and Depression Scale; HADS) in a sample of 214 accident victims 5 days postaccident (T1). Six months later (T2), the CAPS and HADS were administered again. Acute stress disorder (ASD) and PTSD symptom levels were surprisingly low. In sequential regression analyses, initial reexperiencing and hyperarousal significantly predicted PTSD symptom level at T2 after controlling for several possibly confounding variables. Peritraumatic dissociation explained less than 3% of the variance. For PTSD scores, 38% of the overall variance was explained; the variance explained for HADS scores was low. Possible explanations for the low predictive power of peritraumatic dissociation for posttraumatic psychopathology in this sample are discussed.

  4. Simulation Study of Traffic Accidents in Bidirectional Traffic Models

    NASA Astrophysics Data System (ADS)

    Moussa, Najem

    Conditions for the occurrence of bidirectional collisions are developed based on the Simon-Gutowitz bidirectional traffic model. Three types of dangerous situations can occur in this model; we analyze those corresponding to head-on collisions, rear-end collisions and lane-changing collisions. Using Monte Carlo simulations, we compute the probability of the occurrence of these collisions for different values of the oncoming cars' density. It is found that the risk of collisions is significant when the density of cars in one lane is small and that of the other lane is high enough. The influence of different proportions of heavy vehicles is also studied. We found that heavy vehicles cause a significant reduction of traffic flow in the home lane and increase the risk of car accidents.

  5. Wind power prediction models

    NASA Technical Reports Server (NTRS)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
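    The interim simulation model described here draws uncorrelated hourly speeds that reproduce the site's statistical distribution. A sketch of that idea, using a Weibull wind-speed distribution and a generic turbine power curve (the shape, scale, and power-curve numbers are assumptions, not the Goldstone fits):

```python
import numpy as np

rng = np.random.default_rng(4)
shape, scale = 2.0, 7.0   # Weibull k and c in m/s (assumed, not fitted)
speeds = scale * rng.weibull(shape, size=24 * 365)   # a year of hourly draws

def turbine_power(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_kw=100.0):
    """Generic power curve: cubic rise between cut-in and rated speed,
    flat until cut-out. Purely illustrative parameters."""
    v = np.asarray(v)
    rising = rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3
    return np.where(v < cut_in, 0.0,
           np.where(v < rated_v, rising,
           np.where(v < cut_out, rated_kw, 0.0)))

print(f"mean speed {speeds.mean():.1f} m/s, "
      f"mean power {turbine_power(speeds).mean():.1f} kW")
```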

  6. Uncertainty and sensitivity analysis of early exposure results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.

  7. Model Valid Prediction Period

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2002-12-01

    A new concept, valid prediction period (VPP), is presented here to evaluate model predictability. VPP is defined as the time period when the prediction error first exceeds a pre-determined criterion (i.e., the tolerance level). It depends not only on the instantaneous error growth, but also on the noise level, the initial error, and the tolerance level. The model predictability skill is then represented by a single scalar, VPP. The longer the VPP, the higher the model predictability skill. A theoretical framework on the basis of the backward Fokker-Planck equation is developed to determine the probability density function (pdf) of VPP. Verification of a Gulf of Mexico nowcast/forecast model is used as an example to demonstrate the usefulness of VPP. Power-law scaling is found in the mean square error of displacement between drifting buoy and model trajectories (both at 50 m depth). The pdf of VPP is asymmetric with a long and broad tail on the higher value side, which suggests long-term predictability. The calculations demonstrate that the long-term (extremely long, such as 50-60 day) predictability is not an "outlier" and shares the same statistical properties as the short-term predictions. References: Chu, P.C., L.M. Ivanov, and C.W. Fan, Backward Fokker-Planck equation for determining model predictability with unknown initial error distribution. J. Geophys. Res., in press, 2002. Chu, P.C., L.M. Ivanov, T.M. Margolina, and O.V. Melnichenko, 2002b: On probabilistic stability of an atmospheric model to various amplitude perturbations. J. Atmos. Sci., in press. Chu, P.C., L.M. Ivanov, L. Kantha, O.V. Melnichenko and Y.A. Poberezhny, 2002c: The long-term correlations and power decay law in model prediction skill. Geophys. Res. Lett., in press.
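    Operationally, VPP reduces to finding the first time the forecast error crosses the tolerance level. A minimal sketch with a synthetic error curve (the error-growth law and tolerance are illustrative):

```python
import numpy as np

def valid_prediction_period(times, errors, tolerance):
    """First time the prediction error exceeds the tolerance level;
    if it never does, the whole verification window is valid."""
    errors = np.asarray(errors)
    idx = np.argmax(errors > tolerance)   # first True, or 0 if none
    if errors[idx] <= tolerance:
        return times[-1]
    return times[idx]

t = np.linspace(0.0, 60.0, 601)           # forecast time, days
rng = np.random.default_rng(5)
err = 0.05 * np.exp(t / 15.0) + 0.02 * rng.random(t.size)   # toy error growth

print(f"VPP = {valid_prediction_period(t, err, tolerance=1.0):.1f} days")
```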

  8. Health effects model for nuclear power plant accident consequence analysis. Part I. Introduction, integration, and summary. Part II. Scientific basis for health effects models

    SciTech Connect

    Evans, J.S.; Moeller, D.W.; Cooper, D.W.

    1985-07-01

    Analysis of the radiological health effects of nuclear power plant accidents requires models for predicting early health effects, cancers and benign thyroid nodules, and genetic effects. Since the publication of the Reactor Safety Study, additional information on radiological health effects has become available. This report summarizes the efforts of a program designed to provide revised health effects models for nuclear power plant accident consequence modeling. The new models for early effects address four causes of mortality and nine categories of morbidity. The models for early effects are based upon two-parameter Weibull functions. They permit evaluation of the influence of dose protraction and address the issue of variation in radiosensitivity among the population. The piecewise-linear dose-response models used in the Reactor Safety Study to predict cancers and thyroid nodules have been replaced by linear and linear-quadratic models. The new models reflect the most recently reported results of the follow-up of the survivors of the bombings of Hiroshima and Nagasaki and permit analysis of both morbidity and mortality. The new models for genetic effects allow prediction of genetic risks in each of the first five generations after an accident and include information on the relative severity of various classes of genetic effects. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of risks. An approach is outlined for summarizing the health consequences of nuclear power plant accidents. 298 refs., 9 figs., 49 tabs.
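    A two-parameter Weibull early-effects model of the kind described is commonly written as risk = 1 − exp(−ln2 · (D/D50)^s), so that the risk is exactly 50% at D = D50. A sketch with placeholder parameters (the report's actual D50 and shape values, and its dose-protraction adjustment, are not reproduced here):

```python
import math

def early_fatality_risk(dose_gy, d50=3.8, shape=5.0):
    """Two-parameter Weibull hazard model for an early effect:
    risk = 1 - exp(-ln2 * (D / D50)**shape). Parameters are placeholders;
    dose protraction would effectively increase D50."""
    if dose_gy <= 0.0:
        return 0.0
    hazard = math.log(2) * (dose_gy / d50) ** shape
    return 1.0 - math.exp(-hazard)

for d in (1.0, 3.8, 6.0):
    print(f"dose {d:.1f} Gy -> early fatality risk {early_fatality_risk(d):.3f}")
```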

  9. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  10. Uncertainty and sensitivity analysis of chronic exposure results with the MACCS reactor accident consequence model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.

  11. Another Look at the Relationship Between Accident- and Encroachment-Based Approaches to Run-Off-the-Road Accidents Modeling

    SciTech Connect

    Miaou, Shaw-Pin

    1997-08-01

    The purpose of this study was to look for ways to combine the strengths of both approaches in roadside safety research. The specific objectives were (1) to present the encroachment-based approach in a more systematic and coherent way so that its limitations and strengths can be better understood from both statistical and engineering standpoints, and (2) to apply the analytical and engineering strengths of the encroachment-based thinking to the formulation of mean functions in accident-based models.

  12. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to be overdispersed. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, under the assumption that the dependent variable of the generated data follows a given distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model fits were compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes yield more zero accident counts in the dataset.
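    The comparison can be reproduced in miniature with statsmodels: generate overdispersed counts from a gamma-Poisson (negative binomial) mechanism, then fit Poisson and negative binomial regressions and compare AIC. The covariate and coefficients are invented; the hurdle fit is omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200                                # sample size (vary 30..500)
x = rng.normal(size=n)                 # a covariate, e.g. traffic volume
mu = np.exp(0.5 + 0.8 * x)

alpha = 1.0                            # overdispersion
lam = rng.gamma(shape=1.0 / alpha, scale=mu * alpha)
y = rng.poisson(lam)                   # negative binomial counts

X = sm.add_constant(x)
pois = sm.Poisson(y, X).fit(disp=False)
nb = sm.NegativeBinomial(y, X).fit(disp=False)
print(f"Poisson AIC = {pois.aic:.1f}, NegBin AIC = {nb.aic:.1f}")
print(f"zero counts in sample: {(y == 0).mean():.0%}")
```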

  13. Principles of Predictive Modeling

    NASA Astrophysics Data System (ADS)

    Delignette-Muller, Marie Laure

    Mathematical models were first used in food microbiology in the early 20th century to describe the thermal destruction of pathogens in food, but the concept of predictive microbiology really emerged in the 1980s. This concept was first developed and extensively discussed by McMeekin and his colleagues at the University of Tasmania (Ratkowsky, Olley, McMeekin, & Ball, 1982; McMeekin, Olley, Ross, & Ratkowsky, 1993; McMeekin, Olley, Ratkowsky, & Ross, 2002). Now predictive microbiology or predictive modeling in foods may be considered as a subdiscipline of food microbiology, with its international meetings (5th conference on "Predictive Modelling in Foods" in 2007) gathering a scientific community from all over the world.

  14. [Integration of hospital social services in the rehabilitation of accident patients by the statutory accident insurance. Results of a one-year model project].

    PubMed

    Lukasczik, M; Geyer, S; Neuderth, S; Gerlich, C; Weis, I; Raiber, I; Weber-Falkensammer, H; Vogel, H

    2008-02-01

    In accident patient care, there is a substantial overlap between the scope of duties of hospital social services and the tasks fulfilled by the German statutory accident insurances' visiting staff, which regularly takes care of accident patients. Therefore, a project on the integration of hospital social services into the organizational structures of the German statutory accident insurance was initiated, which aimed at optimising communication and realising synergy effects. A formative evaluation of the project was conducted that provided process- and outcome-related data for a comprehensive evaluation of the strengths and potentials of the project. Report forms containing patient-related information were completed by hospital social services. The forms were evaluated in terms of their utility for case management by accident insurance administrators using a checklist. Project implementation and procedures were documented and evaluated using semi-structured interviews with social services staff and accident insurance employees. Through the model, comprehensive care for accident patients could be achieved. In one third of all cases reviewed, rehabilitation management could be improved by including hospital social services. Moreover, in one third of all cases, care-related activities initiated by accident insurance funds could be reduced by involving local hospital social services. The report form used by hospital social services was evaluated as a useful tool in the context of patient care and rehabilitation management. The model was evaluated by interview participants as a highly targeted approach to accident patients' care management. Implications of the study for improving health care are discussed.

  15. An aggregate accident model based on pooled, regional time-series data.

    PubMed

    Fridstrøm, L; Ingebrigtsen, S

    1991-10-01

    The determinants of personal injury road accidents and their severity are studied by means of generalized Poisson regression models estimated on the basis of combined cross-section/time-series data. Monthly data have been assembled for 18 Norwegian counties (every county but one), covering the period from January 1974 until December 1986. A rather wide range of potential explanatory factors are taken into account, including road use (exposure), weather, daylight, traffic density, road investment and maintenance expenditure, accident reporting routines, vehicle inspection, law enforcement, seat belt usage, proportion of inexperienced drivers, and alcohol sales. Separate probability models are estimated for the number of personal injury accidents, fatal accidents, injury victims, death victims, car occupants injured, and bicyclists and pedestrians injured. The fraction of personal injury accidents that are fatal is interpreted as an average severity measure and studied by means of a binomial logit model.

  16. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve, and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  17. Transport and fate modeling of nitrobenzene in groundwater after the Songhua River pollution accident.

    PubMed

    Zhang, Wenjing; Lin, Xueyu; Su, Xiaosi

    2010-11-01

    In 2005 a pollution accident occurred in the Songhua River, which is located next to groundwater supply plants. This caused public concern about the transport and fate of nitrobenzene (NB) in the groundwater. This paper discusses the mechanisms and effects of the transport and fate of NB in groundwater based on pilot-scale experiments conducted in the laboratory, including a simulation experiment, bench-scale batch tests and a one-dimensional numerical model. Parallel batch tests showed that the adsorption of NB to the clay and sand followed a Langmuir-type isotherm, and that clay had a greater NB adsorption capacity than sand. NB biodegradation under different conditions was well fitted by the Monod equation, and the q(max) values varied from 0.018 to 0.046 h(-1). Results indicated that NB biodegradation was not affected by the initial NB concentration. Numerical modeling results indicated a good match between computed and observed data, and the prediction model showed that NB entered the groundwater after the pollution accident. However, the highest concentration of NB was much lower than the allowable limit set by the national standard (0.017 mg/L).
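    The two laboratory findings translate directly into model components: a Langmuir isotherm for sorption and a Monod term for biodegradation. A sketch with assumed constants (only the q(max) range comes from the abstract; the isotherm constants and half-saturation constant are placeholders):

```python
import numpy as np

def langmuir(c, q_max, k_l):
    """Langmuir isotherm: sorbed NB at equilibrium, q_max*K_L*C/(1+K_L*C)."""
    return q_max * k_l * c / (1.0 + k_l * c)

c = np.linspace(0.0, 2.0, 5)                       # aqueous NB, mg/L
print("clay:", langmuir(c, q_max=0.8, k_l=2.5))    # clay sorbs more (assumed)
print("sand:", langmuir(c, q_max=0.3, k_l=1.0))

# Monod biodegradation with constant biomass folded into the maximum
# rate (an assumption): dC/dt = -q_max * C / (K_s + C)
q_max, k_s = 0.03, 0.5        # mg/(L*h) and mg/L, placeholders
dt, c_t, t = 0.1, 1.0, 0.0    # time step (h), initial NB (mg/L), clock (h)
while c_t > 0.017:            # the national limit cited above, mg/L
    c_t -= dt * q_max * c_t / (k_s + c_t)
    t += dt
print(f"NB falls below 0.017 mg/L after ~{t / 24:.1f} days")
```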

  18. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

  19. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    NASA Astrophysics Data System (ADS)

    Brandt, J.; Christensen, J. H.; Frohn, L. M.

    2002-06-01

    A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases such as the Chernobyl accident. The model is a combination of a Lagrangian model, which treats the near-source dispersion, and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using parameterizations ranging from relatively simple to comprehensive. The performance, compared to measurements, of different combinations of wet and dry deposition parameterizations has been evaluated using different statistical tests.

  20. VICTORIA: A mechanistic model of radionuclide behavior in the reactor coolant system under severe accident conditions. Revision 1

    SciTech Connect

    Heams, T J; Williams, D A; Johns, N A; Mason, A; Bixler, N E; Grimley, A J; Wheatley, C J; Dickson, L W; Osborn-Lee, I; Domagala, P; Zawadzki, S; Rest, J; Alexander, C A; Lee, R Y

    1992-12-01

    The VICTORIA model of radionuclide behavior in the reactor coolant system (RCS) of a light water reactor during a severe accident is described. It has been developed by the USNRC to define the radionuclide phenomena and processes that must be considered in systems-level models used for integrated analyses of severe accident source terms. The VICTORIA code, based upon this model, predicts fission product release from the fuel, chemical reactions involving fission products, vapor and aerosol behavior, and fission product decay heating. Also included is a detailed description of how the model is implemented in VICTORIA, the numerical algorithms used, and the correlations and thermochemical data necessary for determining a solution. A description of the code structure, input and output, and a sample problem are provided.

  1. Modeling and Prediction Overview

    SciTech Connect

    Ermak, D L

    2002-10-18

    Effective preparation for and response to the release of toxic materials into the atmosphere hinges on accurate predictions of the dispersion pathway, concentration, and ultimate fate of the chemical or biological agent. Of particular interest is the threat to civilian populations within major urban areas, which are likely targets for potential attacks. The goals of the CBNP Modeling and Prediction area are: (1) Development of a suite of validated, multi-scale, atmospheric transport and fate modeling capabilities for chemical and biological agent releases within the complex urban environment; (2) Integration of these models and related user tools into operational emergency response systems. Existing transport and fate models are being adapted to treat the complex atmospheric flows within and around structures (e.g., buildings, subway systems, urban areas) and over terrain. Relevant source terms and the chemical and physical behavior of gas- and particle-phase species (e.g., losses due to deposition, bio-agent viability, degradation) are also being developed and incorporated into the models. Model validation is performed using both laboratory and field data. CBNP is producing and testing a suite of models with differing levels of complexity and fidelity to address the full range of user needs and applications. Lumped-parameter transport models are being developed for subway systems and building interiors, supplemented by the use of computational fluid dynamics (CFD) models to describe the circulation within large, open spaces such as auditoriums. Both sophisticated CFD transport models and simpler fast-response models are under development to treat the complex flow around individual structures and arrays of buildings. Urban parameterizations are being incorporated into regional-scale weather forecast, meteorological data assimilation, and dispersion models for problems involving larger-scale urban and suburban areas. Source term and dose response models are being

  2. Evaluation models and their influence on radiological consequences of hypothetical accidents in FFTF

    SciTech Connect

    Stepnewski, D.D.; Hale, J.P.; Martin, H.C.; Peak, R.D.; Franz, G.R.

    1980-04-01

    The influence of radiological evaluation models and assumptions on the off-site consequences of hypothetical core disruptive accidents is examined. The effects of initial source term, time of containment venting, meteorology, biological dose model, and aerosol fallout have been included. The analyses were based on two postulated scenarios of a severe hypothetical reactor vessel melt-through accident for a 400 MW(t) fast reactor. Within each accident scenario, the results show that, although other variables are significant, radiological consequences are strongly affected by the amount of aerosol fallout computed to occur in the incident.

  3. Highway accident severities and the mixed logit model: an exploratory empirical analysis.

    PubMed

    Milton, John C; Shankar, Venky N; Mannering, Fred L

    2008-01-01

    Many transportation agencies use accident frequencies, and statistical models of accident frequencies, as a basis for prioritizing highway safety improvements. However, the use of accident severities in safety programming has often been limited to the locational assessment of accident fatalities, with little or no emphasis being placed on the full severity distribution of accidents (property damage only, possible injury, injury), which is needed to fully assess the benefits of competing safety-improvement projects. In this paper we demonstrate a modeling approach that can be used to better understand the injury-severity distributions of accidents on highway segments, and the effect that traffic, highway and weather characteristics have on these distributions. The approach we use allows for the possibility that estimated model parameters can vary randomly across roadway segments to account for unobserved effects potentially relating to roadway characteristics, environmental factors, and driver behavior. Using highway-injury data from Washington State, a mixed (random parameters) logit model is estimated. Estimation findings indicate that volume-related variables such as average daily traffic per lane, average daily truck traffic, truck percentage, interchanges per mile and weather effects such as snowfall are best modeled as random parameters, while roadway characteristics such as the number of horizontal curves, number of grade breaks per mile and pavement friction are best modeled as fixed parameters. Our results show that the mixed logit model has considerable promise as a methodological tool in highway safety programming.
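
    Mixed logit models of this kind are typically estimated by simulated maximum likelihood. The following Python sketch illustrates the idea on synthetic data with a single random coefficient; the data, dimensions and starting values are assumptions for illustration, not the authors' Washington State specification.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n, J, R = 2000, 3, 200           # observations, severity levels, simulation draws
        x = rng.normal(size=(n, J))      # one attribute per severity alternative
        b_i = 1.0 + 0.5 * rng.normal(size=(n, 1))               # true random coefficient
        y = (b_i * x + rng.gumbel(size=(n, J))).argmax(axis=1)  # observed severity

        draws = rng.normal(size=R)       # standard-normal simulation draws

        def neg_simulated_loglik(theta):
            beta, sigma = theta[0], abs(theta[1])
            b = beta + sigma * draws                        # (R,) coefficient draws
            v = x[:, :, None] * b[None, None, :]            # (n, J, R) utilities
            p = np.exp(v - v.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            p_chosen = p[np.arange(n), y, :].mean(axis=1)   # simulated choice probability
            return -np.log(p_chosen).sum()

        res = minimize(neg_simulated_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
        print(res.x)   # (beta, sigma); a clearly nonzero sigma signals random tastes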

  4. Innovative approach to modeling accident response of Gravel Gerties

    SciTech Connect

    Kramer, M.; McClure, P.; Sullivan, H.

    1997-08-01

    Recent safety analyses at nuclear explosive facilities have renewed interest in the accident phenomenology associated with explosions in nuclear explosive cells, which are commonly referred to as "Gravel Gerties." The cells are used for the assembly and disassembly of nuclear explosives and are located in the Device Assembly Facility (DAF) at the Nevada Test Site (NTS) and at the Pantex facility. The cells are designed to mitigate the release of special nuclear material to the environment in the event of a detonation of high explosive within the Gravel Gertie. Although there are some subtle differences between the cells of DAF and Pantex, their general design, geometry, and configuration are similar. The cells consist of a round room approximately 10.4 m in diameter and 5.2 m high enclosed by 0.3-m-thick concrete. Each cell has a wire-rope catenary roof overlain with gravel. The gravel is approximately 6.9 m deep at the center of the roof and decreases toward the outer edge of the cell. The cell is connected to a corridor and subsequent rooms through an interlocking blast door. In the event of an accidental explosion involving significant amounts of high explosive, the roof structure is lifted by the force of the explosion, the supporting cables break, the gravel is lifted by the blast (resulting in rapid venting of the cell), and the gravel roof collapses, filling the cell. The lifting and subsequent collapse of the gravel, which acts much like a piston, is very challenging to model.

  5. CFD modeling of debris melting phenomena during late phase Candu 6 severe accident

    SciTech Connect

    Nicolici, S.; Dupleac, D.; Prisecaru, I.

    2012-07-01

    The objective of this paper was to study the phase change of the debris formed on the Candu 6 calandria bottom in a postulated accident sequence. The molten pool and crust formation were studied employing the Ansys-Fluent code. The 3D model, using Large Eddy Simulation (LES), predicts the conjugate, radiative and convective heat transfer inside and from the corium pool. LES simulations require a very fine grid to capture the crust formation and the free convection flow. This fine-mesh requirement, combined with the long transient, imposed the use of a slice of the 3D calandria geometry in order not to exceed the available computing resources. The preliminary results include heat transfer coefficients, temperature profiles and heat fluxes through the calandria wall. From the safety point of view, it is very important to maintain the heat flux through the wall below the critical heat flux (CHF), thus assuring the integrity of the calandria vessel. This can be achieved by proper cooling of the water in the tank that contains the vessel. The transient duration can also be estimated, which is important for developing severe accident management guidelines. The debris physical structure and material properties have large uncertainties in the temperature range of interest. Thus, further sensitivity studies should be carried out in order to better understand the influence of these parameters on this complex phenomenon. (authors)

  6. Modeling fault among accident-involved pedestrians and motorists in Hawaii.

    PubMed

    Kim, Karl; Brunner, I Made; Yamashita, Eric

    2008-11-01

    Using a comprehensive database of police-reported accidents in Hawaii, we describe the nature of pedestrian accidents over the period 2002-2005. Approximately 36% of the accidents occur in residential areas, while another 34% occur in business areas. Only 41.7% of the pedestrian accidents occur at intersections; more occur at non-intersection locations, including midblock locations, driveways, parking lots, and other off-roadway locations. Approximately 38.2% of the crashes occur at crosswalk locations, while proportionately more (61.8%) occur at non-crosswalk locations. Using this database, the human, temporal, roadway, and environmental factors associated with being "at fault" are also examined for both pedestrians and drivers. Using techniques of logistic regression, several different explanatory models are constructed to identify the factors associated with crashes producing fatalities and serious injuries. Finally, two pedestrian models (drunk males and young boys) and one driver model (male commuters) are developed to provide further understanding of pedestrian accident causation. Drunk male pedestrians who were jaywalking were more than 10 times as likely as other groups to be at fault in pedestrian accidents. Young boys in residential areas were also more likely to be at fault. Male commuters in business areas in the morning were also found to have higher odds of being classified at fault when involved in pedestrian accidents. The results of this study indicate that a combination of enforcement and educational programs should be implemented for both pedestrians and drivers, to show those at fault the consequences of their actions and to reduce the overall number of accidents.
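
    The at-fault models described above are binary logistic regressions whose exponentiated coefficients are odds ratios, the source of figures like "more than 10 times as likely". A minimal Python sketch on simulated stand-in data (the variable names and effect sizes are invented for illustration):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 1500
        drunk_jaywalk = rng.integers(0, 2, n)   # intoxicated male crossing mid-block (0/1)
        young_boy = rng.integers(0, 2, n)       # young boy in a residential area (0/1)
        logit = -1.0 + 2.3 * drunk_jaywalk + 0.8 * young_boy
        at_fault = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([drunk_jaywalk, young_boy]))
        fit = sm.Logit(at_fault, X).fit(disp=False)
        print(np.exp(fit.params))   # odds ratios; exp(2.3) is roughly 10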

  7. Cellular automata model simulating traffic car accidents in the on-ramp system

    NASA Astrophysics Data System (ADS)

    Echab, H.; Lakouari, N.; Ez-Zahraouy, H.; Benyoussef, A.

    2015-01-01

    In this paper, using the Nagel-Schreckenberg model, we study the on-ramp system under an expanded open boundary condition. The phase diagram of the two-lane on-ramp system is computed. It is found that the expanded left-boundary insertion strategy enhances the flow in the on-ramp lane. Furthermore, we have studied the probability of the occurrence of car accidents, distinguishing two types: accidents at the on-ramp site (Prc) and rear-end accidents on the main road (Pac). It is shown that car accidents at the on-ramp site are more likely to occur when traffic is free on road A, whereas rear-end accidents begin to occur above a critical injecting rate αc1. The influence of the on-ramp length (LB) and position (xC0) on the car accident probabilities is studied. We found that a large LB or xC0 causes a marked decrease of the probability Prc, while only a large xC0 provokes an increase of the probability Pac. The effect of stochastic randomization is also examined.
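
    Several entries in this list, including this one, build on the Nagel-Schreckenberg (NS) cellular automaton. The Python sketch below implements the basic single-lane NS update on a periodic road; the lattice size, density and braking probability are illustrative, and the paper's two-lane on-ramp coupling and accident bookkeeping are omitted.

        import numpy as np

        rng = np.random.default_rng(2)
        L, density, vmax, p_brake, steps = 200, 0.2, 5, 0.3, 100
        pos = np.sort(rng.choice(L, size=int(density * L), replace=False))
        vel = rng.integers(0, vmax + 1, size=pos.size)

        for _ in range(steps):
            gaps = (np.roll(pos, -1) - pos - 1) % L       # empty cells ahead of each car
            vel = np.minimum(vel + 1, vmax)               # 1. accelerate
            vel = np.minimum(vel, gaps)                   # 2. brake to avoid the car ahead
            slow = (rng.random(vel.size) < p_brake) & (vel > 0)
            vel = np.where(slow, vel - 1, vel)            # 3. random slowdown
            pos = (pos + vel) % L                         # 4. move on the periodic road

        print("mean speed:", vel.mean())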

  8. Predictive Surface Complexation Modeling

    SciTech Connect

    Sverjensky, Dimitri A.

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  9. Modeling of the TMI-2 (Three Mile Island Unit-2) accident with MELPROG/TRAC and calculation results for Phases 1 and 2

    SciTech Connect

    Motley, F.E.; Jenks, R.P.

    1988-01-01

    Work has been performed to develop a Three Mile Island Unit-2 (TMI-2) simulation model for MELPROG/TRAC capable of predicting the plant behavior observed during the accident of March 1979. A description of the TMI-2 plant model is presented and calculation results through 174 min of the accident are discussed. Using the ICBC boundary conditions, the calculation predicts pressurizer draining and core recovering prior to fuel-rod damage. A parametric calculation (reduced makeup flow) is currently underway and is in better agreement with the observed plant behavior. Efforts are underway to resolve current discrepancies and proceed with an accurate simulation through Phases 3 and 4 of the accident (174-227 min and 227-300 min, respectively). 13 refs., 11 figs., 2 tabs.

  10. A study of factors affecting highway accident rates using the random-parameters tobit model.

    PubMed

    Anastasopoulos, Panagiotis Ch; Mannering, Fred L; Shankar, Venky N; Haddock, John E

    2012-03-01

    A large body of previous literature has used a variety of count-data modeling techniques to study factors that affect the frequency of highway accidents over some time period on roadway segments of a specified length. An alternative approach to this problem views vehicle accident rates (accidents per mile driven) directly instead of their frequencies. Viewing the problem as continuous data instead of count data creates a problem in that roadway segments that do not have any observed accidents over the identified time period create continuous data that are left-censored at zero. Past research has appropriately applied a tobit regression model to address this censoring problem, but this research has been limited in accounting for unobserved heterogeneity because it has been assumed that the parameter estimates are fixed over roadway-segment observations. Using 9-year data from urban interstates in Indiana, this paper employs a random-parameters tobit regression to account for unobserved heterogeneity in the study of motor-vehicle accident rates. The empirical results show that the random-parameters tobit model outperforms its fixed-parameters counterpart and has the potential to provide a fuller understanding of the factors determining accident rates on specific roadway segments.
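
    The tobit structure handles segments with zero observed accidents through left-censoring at zero. A fixed-parameters tobit can be estimated in a few lines; the Python sketch below uses simulated data with one covariate, and the paper's random-parameters extension would replace the fixed coefficients with segment-specific draws.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n = 1000
        x = rng.normal(size=n)
        y_star = 0.5 + 0.8 * x + rng.normal(size=n)   # latent accident rate
        y = np.maximum(y_star, 0.0)                   # observed rate, censored at zero

        def neg_loglik(theta):
            b0, b1, log_s = theta
            s = np.exp(log_s)                         # keep sigma positive
            mu = b0 + b1 * x
            ll = np.where(y <= 0,
                          norm.logcdf(-mu / s),                   # P(y* <= 0)
                          norm.logpdf((y - mu) / s) - np.log(s))  # uncensored density
            return -ll.sum()

        res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0])
        print(res.x[:2], np.exp(res.x[2]))            # coefficients and sigma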

  11. Quantifying the risk of extreme aviation accidents

    NASA Astrophysics Data System (ADS)

    Das, Kumer Pial; Dey, Asim Kumer

    2016-12-01

    Air travel is considered a safe means of transportation, but when aviation accidents do occur they often result in fatalities. Fortunately, the most extreme accidents occur rarely. However, 2014 was the deadliest year in the past decade, with 111 plane crashes; the worst four caused 298, 239, 162 and 116 deaths. In this study, we assess the risk of catastrophic aviation accidents by studying historical aviation accidents. Applying a generalized Pareto model, we predict the maximum fatalities from a future aviation accident. The fitted model is compared with some competing models. The uncertainty in the inferences is quantified using simulated aviation accident series, generated by bootstrap resampling and Monte Carlo simulations.
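
    A generalized Pareto model is usually fitted to exceedances over a high threshold (the peaks-over-threshold approach). A minimal Python sketch, with a synthetic fatality series standing in for the historical record:

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(4)
        fatalities = rng.pareto(2.0, size=500) * 30      # synthetic accident death tolls
        threshold = np.quantile(fatalities, 0.95)
        excess = fatalities[fatalities > threshold] - threshold

        c, _, scale = genpareto.fit(excess, floc=0.0)    # shape and scale of the GPD
        p_exceed = (fatalities > threshold).mean()
        # Fatality level exceeded on average once per 1000 accidents:
        ret_level = threshold + genpareto.ppf(1 - 1e-3 / p_exceed, c, 0.0, scale)
        print(c, scale, ret_level)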

  12. Development of comprehensive accident models for two-lane rural highways using exposure, geometry, consistency and context variables.

    PubMed

    Cafiso, Salvatore; Di Graziano, Alessandro; Di Silvestro, Giacomo; La Cava, Grazia; Persaud, Bhagwant

    2010-07-01

    In Europe, approximately 60% of road accident fatalities occur on two-lane rural roads. Thus, research to develop and enhance explanatory and predictive models for this road type continues to be of interest in mitigating these accidents. To this end, this paper describes a novel and extensive data collection and modeling effort to define accident models for two-lane road sections based on a unique combination of exposure, geometry, consistency and context variables directly related to safety performance. The first part of the paper documents how these were identified for the segmentation of highways into homogeneous sections. Next is a description of the extensive data collection effort that utilized differential kinematic GPS surveys to define the horizontal alignment variables, and road safety inspections (RSIs) to quantify the other road characteristics related to safety. The final part of the paper focuses on the calibration of models for estimating the expected number of accidents on homogeneous sections that can be characterized by constant values of the explanatory variables. Several candidate models were considered for calibration using the Generalized Linear Modeling (GLM) approach. After considering the statistical significance of the parameters related to exposure, geometry, consistency and context factors, and goodness-of-fit statistics, 19 models were ranked and three were selected as the recommended models. The first of the three is a base model, with length and traffic as the only predictor variables; since these variables are the only ones likely to be available network-wide, this base model can be used in an empirical Bayesian calculation to conduct network screening for ranking "sites with promise" of safety improvement. The other two models represent the best statistical fits with different combinations of significant variables related to exposure, geometry, consistency and context factors. These multiple variable models can be used, with
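
    Accident-frequency models of this type are commonly negative binomial GLMs with exposure entering through an offset. A Python sketch on simulated placeholder data (the covariates and coefficients are invented, not the paper's recommended specification):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 400
        length_km = rng.uniform(0.5, 5.0, n)          # section length (exposure)
        aadt = rng.uniform(1e3, 2e4, n)               # traffic volume
        curvature = rng.normal(size=n)                # a geometry/consistency proxy
        mu = np.exp(-7.0 + 0.9 * np.log(aadt) + 0.3 * curvature) * length_km
        y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # overdispersed counts

        X = sm.add_constant(np.column_stack([np.log(aadt), curvature]))
        nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5),
                    offset=np.log(length_km))
        print(nb.fit().summary())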

  13. Effects of a type of quenched randomness on car accidents in a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Zhang, Wei; Qiu, Kang; Zhao, Yue-Min

    2006-01-01

    In this paper we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a defect. In the deterministic NS model, numerical results show that there exists a critical value of car density below which no car accident happens. The critical density ρc1 depends not only on the maximum speed of cars, but also on the braking probability at the defect. When its value is small, the braking probability at a defect can enhance, not suppress, the occurrence of car accidents; only when the braking probability at the defect is very large can car accidents be reduced by the bottleneck. In the nondeterministic NS model, the probability Pac exhibits the same behavior as in the deterministic model except in the case of vmax=1, under which the probability Pac is only reduced by the defect. The defect also induces an inhomogeneous distribution of car accidents over the whole road. Theoretical analyses agree with numerical results in the deterministic NS model and in the nondeterministic NS model with vmax=1 in the case of large defect braking probability.

  14. A Statistical Approach to Predict the Failure Enthalpy and Reliability of Irradiated PWR Fuel Rods During Reactivity-Initiated Accidents

    SciTech Connect

    Nam, Cheol; Jeong, Yong-Hwan; Jung, Youn-Ho

    2001-11-15

    During the last decade, the failure behavior of high-burnup fuel rods under reactivity-initiated accident (RIA) conditions has been a serious concern, since fuel rod failures at low enthalpy have been observed. This has resulted in the reassessment of existing licensing criteria and failure-mode studies. To address the issue, a statistics-based methodology is suggested to predict the failure probability of irradiated fuel rods under an RIA. Based on RIA simulation results in the literature, a failure enthalpy correlation for an irradiated fuel rod is constructed as a function of oxide thickness, fuel burnup, and pulse width. Using the failure enthalpy correlation, a new concept of "equivalent enthalpy" is introduced to reflect the effects of the three primary factors as well as peak fuel enthalpy in a single damage parameter. Moreover, the failure distribution function with equivalent enthalpy is derived by applying a two-parameter Weibull statistical model. Finally, a sensitivity analysis is carried out to estimate the effects of burnup, corrosion, peak fuel enthalpy, pulse width, and the cladding materials used.
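
    With a two-parameter Weibull model, the failure probability at a given equivalent enthalpy has the closed form F(h) = 1 - exp[-(h/η)^β]. A small Python sketch (the scale and shape values are illustrative assumptions, not the paper's fitted parameters):

        import numpy as np

        def weibull_failure_prob(h_eq, scale=150.0, shape=4.0):
            """P(failure) at equivalent enthalpy h_eq (cal/g)."""
            return 1.0 - np.exp(-(np.asarray(h_eq) / scale) ** shape)

        print(weibull_failure_prob([60.0, 120.0, 180.0]))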

  15. Object-Oriented Bayesian Networks (OOBN) for Aviation Accident Modeling and Technology Portfolio Impact Assessment

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.

    2012-01-01

    The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. The paper focuses on the reasoning behind selecting Object-Oriented Bayesian Networks (OOBN) as the technique, and on the commercial software used, for the accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF), constructed as an influence diagram, is presented. An OOBN approach not only simplifies the construction and maintenance of complex causal networks for the modelers, but also offers a well-organized hierarchical network that makes it easier for decision makers to use the model to examine the effectiveness of risk mitigation strategies introduced through technology insertions.

  16. Effects of quenched randomness induced by car accidents on traffic flow in a cellular automata model

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Ma, Yu-Qiang; Zhao, Yue-Min

    2004-10-01

    In this paper we numerically study the impact of quenched disorder induced by car accidents on traffic flow in the Nagel-Schreckenberg (NS) model. Car accidents occur when the necessary conditions proposed by Boccara et al. [J. Phys. A 30, 3329 (1997)] are satisfied. Two realistic situations for cars involved in accidents have been considered. Model A assumes that the accident cars become temporarily stuck. Our studies exhibit the “inverse-λ form”, or metastable state, for traffic flow in the fundamental diagram, and wide moving jam waves in the space-time pattern. Model B takes into account that the “wrecked” cars stay in place forever and that the cars behind pass through the sites occupied by the “wrecked” cars with a transmission rate. A four-stage transition for traffic flow, from a maximum flow through a sharp-decrease phase and a density-independent phase to a high-density jamming phase, has been observed. The density profiles and the effects of the transmission rate and the probability of the occurrence of car accidents in model B are also discussed.

  17. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  18. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    SciTech Connect

    Not Available

    1988-12-15

    The Accident Model Document (AMD) is the second volume of the three-volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  19. Severe accident modeling of a PWR core with different cladding materials

    SciTech Connect

    Johnson, S. C.; Henry, R. E.; Paik, C. Y.

    2012-07-01

    The MAAP v.4 software has been used to model two severe accident scenarios in nuclear power reactors with three different materials as fuel cladding. The TMI-2 severe accident was modeled with Zircaloy-2 and SiC as clad material, and an SBO accident in a Zion-like, 4-loop, Westinghouse PWR was modeled with Zircaloy-2, SiC, and 304 stainless steel as clad material. TMI-2 modeling results indicate that lower peak core temperatures, less H2(g) produced, and a smaller mass of molten material would result if SiC was substituted for Zircaloy-2 as cladding. SBO modeling results indicate that the calculated time to RCS rupture would increase by approximately 20 minutes if SiC was substituted for Zircaloy-2. Additionally, when an extended SBO accident (RCS creep rupture failure disabled) was modeled, significantly lower peak core temperatures, less H2(g) produced, and a smaller mass of molten material would be generated by substituting SiC for Zircaloy-2 or stainless steel cladding. Because the rate of the SiC oxidation reaction with elevated-temperature H2O(g) was set to zero for this work, these results should be considered preliminary. However, the benefits of SiC as a more accident-tolerant clad material have been shown, and additional investigation of SiC as an LWR core material is warranted, specifically investigations of the oxidation kinetics of SiC in H2O(g) over the range of temperatures and pressures relevant to severe accidents in LWRs. (authors)

  20. Collective responsibility for freeway rear-ending accidents? An application of probabilistic causal models.

    PubMed

    Davis, Gary A; Swenson, Tait

    2006-07-01

    Determining whether or not an event was a cause of a road accident often involves determining the truth of a counterfactual conditional, where what happened is compared to what would have happened had the supposed cause been absent. Using structural causal models, Pearl and his associates have recently developed a rigorous method for posing and answering causal questions, and this approach is especially well suited to the reconstruction and analysis of road accidents. Here, we applied these methods to three freeway rear-end collisions. Starting with video recordings of the accidents, trajectory information for a platoon of vehicles involved in and preceding the collision was extracted from the video record, and this information was used to estimate each driver's initial speed, following distance, reaction time, and braking rate. Using Brill's model of rear-end accidents, it was then possible to simulate what would have happened, other things being equal, had certain driver actions been other than they were. In each of the three accidents we found evidence that: (1) short following headways by the colliding drivers were probable causal factors for the collisions, (2) for each collision, at least one driver ahead of the colliding vehicles probably had a reaction time that was longer than his or her following headway, and (3) had that driver's reaction time been equal to his or her following headway, the rear-end collision probably would not have happened.
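
    Under the simplifying assumption of equal braking rates for leader and follower, the structural analysis above reduces to a clean counterfactual test: the follower stops short exactly when its reaction time is below its following headway. The Python sketch below poses such a query; the speeds, headways and reaction times are invented, not values reconstructed from the video records.

        def stops_short(speed, headway_s, reaction_s):
            """True if the follower stops without striking the leader.

            Both cars brake at the same rate from the same speed; the follower
            reacts `reaction_s` seconds after the leader while trailing by
            `headway_s` seconds. With equal deceleration the spacing shrinks
            by exactly speed * reaction_s, so the criterion is simple.
            """
            gap = speed * headway_s          # initial spacing, metres
            closure = speed * reaction_s     # spacing lost before the follower reacts
            return closure < gap

        # Factual world: 1.4 s reaction against a 0.8 s headway -> collision.
        print(stops_short(speed=25.0, headway_s=0.8, reaction_s=1.4))   # False
        # Counterfactual: reaction time shortened below the headway -> no collision.
        print(stops_short(speed=25.0, headway_s=0.8, reaction_s=0.7))   # True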

  1. [Model of Analysis and Prevention of Accidents - MAPA: tool for operational health surveillance].

    PubMed

    de Almeida, Ildeberto Muniz; Vilela, Rodolfo Andrade de Gouveia; da Silva, Alessandro José Nunes; Beltran, Sandra Lorena

    2014-12-01

    The analysis of work-related accidents is important for accident surveillance and prevention. Current methods of analysis seek to overcome reductionist views that see these occurrences as simple events explained by operator error. The objective of this paper is to analyze the Model of Analysis and Prevention of Accidents (MAPA) and its use in monitoring interventions, highlighting aspects experienced in the use of the tool. The descriptive analytical method was used, introducing the steps of the model. To illustrate its contributions and difficulties, cases where the tool was used in service contexts were selected. MAPA integrates theoretical approaches that have already been tried in accident studies by providing useful conceptual support from the data collection stage through the conclusion and intervention stages. Besides revealing weaknesses of the traditional approach, it helps identify organizational determinants, such as management failings, system design and safety management, involved in the accident. The main challenges lie in users' grasp of the concepts, in exploring organizational aspects upstream in the chain of decisions or at higher levels of the hierarchy, and in intervening to change the determinants of these events.

  2. Computer program predicts thermal and flow transients experienced in a reactor loss- of-flow accident

    NASA Technical Reports Server (NTRS)

    Hale, C. J.

    1967-01-01

    The program analyzes the consequences of a loss-of-flow accident in the primary cooling system of a heterogeneous light-water moderated and cooled nuclear reactor. It produces a 36 x 41 (x, y) temperature matrix that includes fuel surface temperatures relative to the time the pump power was lost.

  3. Prediction models in cancer care.

    PubMed

    Vickers, Andrew J

    2011-01-01

    Prediction is ubiquitous across the spectrum of cancer care from screening to hospice. Indeed, oncology is often primarily a prediction problem; many of the early stage cancers cause no symptoms, and treatment is recommended because of a prediction that tumor progression would ultimately threaten a patient's quality of life or survival. Recent years have seen attempts to formalize risk prediction in cancer care. In place of qualitative and implicit prediction algorithms, such as cancer stage, researchers have developed statistical prediction tools that provide a quantitative estimate of the probability of a specific event for an individual patient. Prediction models generally have greater accuracy than reliance on stage or risk groupings, can incorporate novel predictors such as genomic data, and can be used more rationally to make treatment decisions. Several prediction models are now widely used in clinical practice, including the Gail model for breast cancer incidence or the Adjuvant! Online prediction model for breast cancer recurrence. Given the burgeoning complexity of diagnostic and prognostic information, there is simply no realistic alternative to incorporating multiple variables into a single prediction model. As such, the question should not be whether but how prediction models should be used to aid decision-making. Key issues will be integration of models into the electronic health record and more careful evaluation of models, particularly with respect to their effects on clinical outcomes.

  4. Radiological assessment by compartment model POSEIDON-R of radioactivity released in the ocean following Fukushima Daiichi accident

    NASA Astrophysics Data System (ADS)

    Bezhenar, Roman; Maderich, Vladimir; Heling, Rudie; Jung, Kyung Tae; Myoung, Jung-Goo

    2013-04-01

    The modified compartment model POSEIDON-R (Lepicard et al., 2004) was applied to the North-Western Pacific and adjacent seas. This is the first time a compartment model has been used in this region, where 25 nuclear power plants (NPPs) are operated. The aim of this study is to perform a radiological assessment of the releases of radioactivity due to the Fukushima Daiichi accident. The model predicts the dispersion of radioactivity in the water column and in the sediments, the transfer of radionuclides throughout the marine food web, and the subsequent doses to the population due to the consumption of fishery products. A generic predictive dynamical food-chain model is used instead of the concentration factor (CF) approach. The radionuclide uptake model for fish has as its central feature the accumulation of radionuclides in the target tissue. A three-layer structure of the water column makes it possible to describe deep-water transport adequately. In total, 175 boxes cover the Northwestern Pacific, the East China Sea, the Yellow Sea and the East/Japan Sea. Water fluxes between boxes were calculated by averaging three-dimensional currents obtained from the ROMS hydrodynamic model over a 10-year period. Tidal mixing between boxes was parameterized. The model was validated against observations of Cs-137 in water for the period 1945-2004. The weapons-test source terms comprise a regional term from the bomb tests on Enewetak and Bikini Atolls and global deposition from weapons tests. The correlation coefficient between predicted and observed concentrations of Cs-137 in the surface water is 0.925, with RMSE=1.43 Bq/m3. A local-scale coastal box was used, according to POSEIDON's methodology, to describe local processes of activity transport, deposition and the food web around the Fukushima Daiichi NPP. The source term to the ocean from the Fukushima accident includes a 10-day release of Cs-134 (5 PBq) and Cs-137 (4 PBq) directly into the ocean and 6 and 5 PBq of Cs-134 and
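
    Compartment (box) models of this type reduce to a linear system of ODEs for the activity inventory of each box. A toy three-box Python sketch (surface water, deep water, sediment) with invented transfer rates, first-order sedimentation as described, and Cs-137 decay:

        import numpy as np
        from scipy.integrate import solve_ivp

        k = {"s2d": 0.5, "d2s": 0.1, "sed": 0.2,        # transfer rates, 1/yr (invented)
             "decay": np.log(2) / 30.17}                # Cs-137 decay constant, 1/yr

        def rhs(t, y):
            surf, deep, sed = y
            return [-(k["s2d"] + k["decay"]) * surf + k["d2s"] * deep,
                    k["s2d"] * surf - (k["d2s"] + k["sed"] + k["decay"]) * deep,
                    k["sed"] * deep - k["decay"] * sed]

        sol = solve_ivp(rhs, (0.0, 20.0), y0=[4.0e15, 0.0, 0.0],   # ~4 PBq pulse release
                        t_eval=np.linspace(0.0, 20.0, 5))
        print(sol.y[:, -1])   # activity (Bq) in each compartment after 20 years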

  5. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    NASA Astrophysics Data System (ADS)

    Brandt, J.; Christensen, J. H.; Frohn, L. M.

    2002-12-01

    A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases, such as the Chernobyl accident. The model is a combination of a Lagrangian model, which describes the near-source dispersion, and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the total deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using a range of relatively simple and more comprehensive parameterizations for dry and wet deposition. The performance of different combinations of two wet deposition parameterizations and three dry deposition parameterizations has been evaluated against measurements, using different statistical tests. The best model performance is obtained when the total deposition is parameterized as the combination of a simple method for dry deposition and a subgrid-scale averaging scheme for wet deposition based on relative humidity. The same major conclusion is obtained for all three radioactive isotopes and with both deposition measurement databases. Large differences are seen between the results obtained with the two wet deposition parameterizations, based on precipitation rates and relative humidities, respectively. The parameterization based on subgrid-scale averaging performs better in all cases than the parameterization based on precipitation rates. This indicates that the in-cloud scavenging process is more important than the below-cloud scavenging process for the submicron particles and that the precipitation rates are relatively uncertain in the

  6. A graph model for preventing railway accidents based on the maximal information coefficient

    NASA Astrophysics Data System (ADS)

    Shao, Fubo; Li, Keping

    2017-01-01

    A number of factors influence railway safety. An important task is to identify the important influencing factors and to model the relationship between railway accidents and those factors. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships which can capture a wide range of associations. Employing MIC, a graph model is proposed for preventing railway accidents which avoids complex mathematical computation. In the graph, nodes denote influencing factors of railway accidents and edges represent dependence between the two linked factors. As the dependence threshold increases, the graph changes from a globally coupled graph to a set of isolated points. Moreover, the important influencing factors, which are the key quantities to monitor, are identified from among the many factors. Then the relationship between railway accidents and the important influencing factors is obtained by employing artificial neural networks. With this relationship, a warning mechanism is built by defining a danger zone: if the relevant factors fall into the danger zone during railway operations, the warning level is raised. The warning mechanism can help prevent railway accidents and promote railway safety.
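
    The construction lends itself to a short sketch: compute a dependence score for every factor pair and keep the edges above a threshold. MIC itself is available in the separate minepy package; the Python sketch below substitutes scikit-learn's mutual information estimate as a stand-in, and the factor names and data are invented.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(6)
        n = 500
        factors = {"track_defects": rng.normal(size=n),
                   "signal_failures": rng.normal(size=n),
                   "staff_workload": rng.normal(size=n)}
        factors["accidents"] = 0.7 * factors["track_defects"] + 0.3 * rng.normal(size=n)

        names = list(factors)
        edges = []
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                mi = mutual_info_regression(factors[a].reshape(-1, 1), factors[b],
                                            random_state=0)[0]
                if mi > 0.05:              # dependence threshold; raising it prunes edges
                    edges.append((a, b, round(mi, 3)))
        print(edges)                       # the graph thins out as the threshold grows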

  7. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
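
    The AR-pole feature itself is straightforward to reproduce: fit a 5th-order AR model to an SEMG window and average the magnitudes of the poles of the fitted polynomial. A Python sketch on a synthetic signal (the per-repetition windowing of the study is omitted):

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(7)
        t = np.arange(2000)
        semg = np.sin(0.3 * t) + 0.5 * rng.normal(size=t.size)   # stand-in SEMG window

        fit = AutoReg(semg, lags=5).fit()
        ar_coeffs = np.asarray(fit.params)[1:]        # drop the intercept
        poles = np.roots(np.r_[1.0, -ar_coeffs])      # roots of the AR characteristic polynomial
        print(np.abs(poles).mean())                   # the proposed performance predictor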

  8. Using meteorological ensembles for atmospheric dispersion modelling of the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Korsakissok, Irène; Mallet, Vivien; Mathieu, Anne; Sekiyama, Thomas; Didier, Damien; Kajino, Mizuo; Igarashi, Yasuhito; Adachi, Kouji

    2016-04-01

    Dispersion models are used in response to an accidental release of radionuclides into the atmosphere, to inform mitigation actions and to complement field measurements in the assessment of short- and long-term environmental and sanitary impacts. However, the predictions of these models are subject to important uncertainties, especially due to input data such as meteorological fields or the source term. This is still the case more than four years after the Fukushima disaster (Korsakissok et al., 2012; Girard et al., 2014). In the framework of the SAKURA project, an MRI-IRSN collaboration, a meteorological ensemble of 20 members designed by MRI (Sekiyama et al., 2013) was used with IRSN's atmospheric dispersion models. Another ensemble, retrieved from ECMWF and comprising 50 members, was also used for comparison. The MRI ensemble is 3-hour assimilated, with a 3-kilometer resolution, designed to reduce the meteorological uncertainty in the Fukushima case. The ECMWF ensemble is a 24-hour forecast on a coarser grid, representative of the uncertainty of the data available in a crisis context. First, it was necessary to assess the quality of the ensembles for our purpose, to ensure that their spread was representative of the uncertainty of the meteorological fields. Meteorological observations allowed us to characterize the ensembles' spread with tools such as Talagrand diagrams. Then, the uncertainty was propagated through atmospheric dispersion models. The underlying question is whether the output spread is larger than the input spread, that is, whether small uncertainties in meteorological fields can produce large differences in atmospheric dispersion results. Here again, the use of field observations was crucial in order to characterize the spread of the ensemble of atmospheric dispersion simulations. In the case of the Fukushima accident, gamma dose rates, air activities and deposition data were available. Based on these data, selection criteria for the ensemble members were
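
    A Talagrand (rank) histogram, the spread-verification tool mentioned above, takes only a few lines: for each observation, count how many ensemble members fall below it, then histogram the ranks. A flat histogram suggests a well-calibrated spread; a U-shape indicates under-dispersion. A sketch with synthetic data:

        import numpy as np

        rng = np.random.default_rng(8)
        n_obs, n_members = 1000, 20
        ensemble = rng.normal(size=(n_obs, n_members))   # e.g. forecast wind speeds
        obs = rng.normal(size=n_obs)                     # matching observations

        ranks = (ensemble < obs[:, None]).sum(axis=1)    # rank of each obs, 0..n_members
        print(np.bincount(ranks, minlength=n_members + 1))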

  9. MELCOR analysis of the TMI-2 accident

    SciTech Connect

    Boucheron, E.A.

    1990-01-01

    This paper describes the analysis of the Three Mile Island-2 (TMI-2) standard problem that was performed with MELCOR. The MELCOR computer code is being developed by Sandia National Laboratories for the Nuclear Regulatory Commission for the purpose of analyzing severe accidents in nuclear power plants. The primary role of MELCOR is to provide realistic predictions of severe accident phenomena and the radiological source term. The analysis of the TMI-2 standard problem allowed for comparison of the model predictions in MELCOR to plant data and to the results of more mechanistic analyses. This exercise was, therefore, valuable for verifying and assessing the models in the code. The major trends in the TMI-2 accident are reasonably well predicted with MELCOR, even with its simplified modeling. A comparison of the calculated and measured results is presented and, based on this comparison, conclusions are drawn concerning the applicability of MELCOR to severe accident analysis. 5 refs., 10 figs., 3 tabs.

  10. Traffic accidents in a cellular automaton model with a speed limit zone

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Yang, Xian-qing; Sun, Da-peng; Qiu, Kang; Xia, Hui

    2006-07-01

    In this paper, we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a speed limit zone. Numerical results show that in the deterministic NS model the probability Pac is determined by the maximum speed v'max of the speed limit zone but is independent of the zone's length Lv. In the nondeterministic NS model, however, the probability Pac is determined not only by the maximum speed v'max but also by the length Lv. In the low-density region, the probability Pac increases with the maximum speed of the speed limit zone but decreases as the zone becomes longer. In the case of v'max = 1, however, the probability Pac increases with the length in the low-density region but decreases in the interval between the low-density and high-density regions. The speed limit zone also causes an inhomogeneous distribution of car accidents over the whole road. Theoretical analyses agree with numerical results in the nondeterministic NS model with v'max = 1 and vmax = 5.

  11. A dynamic model to estimate the activity concentration and whole body dose rate of marine biota as consequences of a nuclear accident.

    PubMed

    Keum, Dong-Kwon; Jun, In; Kim, Byeong-Ho; Lim, Kwang-Muk; Choi, Yong-Ho

    2015-02-01

    This paper describes a dynamic compartment model (K-BIOTA-DYN-M) to assess the activity concentration and whole body dose rate of marine biota as a result of a nuclear accident. The model considers the transport of radioactivity between the marine biota through the food chain, and applies a first-order kinetic model for the sedimentation of radionuclides from seawater onto sediment. A set of ordinary differential equations representing the model is solved simultaneously to calculate the activity concentrations of the biota and the sediment, and subsequently the dose rates, given the seawater activity concentration. The model was applied to investigate the long-term effect of the Fukushima nuclear accident on the marine biota, using (131)I, (134)Cs, and (137)Cs activity concentrations of seawater measured for up to about 2.5 years after the accident at two locations in the port of the Fukushima Daiichi Nuclear Power Station (FDNPS), which was the most highly contaminated area. The predicted results showed that the accumulated dose for 3 months after the accident was about 4-4.5 Gy, indicating the possibility of an acute radiation effect in the early phase after the Fukushima accident; however, the total dose rate for most organisms studied was usually below the UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation) benchmark level for chronic exposure, except in the initial phase of the accident, suggesting a very limited radiological effect on the marine biota at the population level. The Cs sediment activity predicted by the first-order kinetic sedimentation model was in good agreement with the measured activity concentration. By varying the ecological parameter values, the present model was able to predict the widely scattered (137)Cs activity concentrations of fish measured in the port of FDNPS. In conclusion, the present dynamic model can be usefully applied to estimate the activity concentration and whole

  12. Input-output model for MACCS nuclear accident impacts estimation

    SciTech Connect

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
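
    The accounting step at the core of an input-output estimate is the Leontief inverse, which propagates a direct output loss through inter-industry linkages. A sketch with an invented three-sector technical-coefficient matrix (not REAcct's actual data):

        import numpy as np

        A = np.array([[0.10, 0.30, 0.05],    # technical coefficients: input from
                      [0.20, 0.05, 0.10],    # sector i needed per unit of output
                      [0.05, 0.15, 0.10]])   # of sector j (toy 3-sector economy)
        leontief_inv = np.linalg.inv(np.eye(3) - A)

        direct_loss = np.array([120.0, 0.0, 35.0])   # $M of output halted by the accident
        total_loss = leontief_inv @ direct_loss      # direct plus ripple (indirect) losses
        print(total_loss, total_loss.sum())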

  13. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  14. Modeling of 2LiBH4 + MgH2 hydrogen storage system accident scenarios using empirical and theoretical thermodynamics

    SciTech Connect

    James, C.; Tamburello, D.; Gray, J.; Brinkman, K.; Hardy, B.; Anton, D.

    2009-04-01

    It is important to understand and quantify the potential risk resulting from accidental environmental exposure of condensed-phase hydrogen storage materials under differing environmental exposure scenarios. This paper describes a modeling and experimental study with the aim of predicting the consequences of the accidental release of 2LiBH4+MgH2 from hydrogen storage systems. The methodology and results developed in this work are directly applicable to any solid hydride material and/or accident scenario using appropriate boundary conditions and empirical data. The ability to predict hydride behavior for hypothesized accident scenarios facilitates an assessment of the risk associated with the utilization of a particular hydride. To this end, an idealized finite volume model was developed to represent the behavior of dispersed hydride from a breached system. Semiempirical thermodynamic calculations and substantiating calorimetric experiments were performed in order to quantify the energy released, the energy release rates, and the reaction products resulting from water and air exposure of a lithium borohydride and magnesium hydride combination. The hydrides, LiBH4 and MgH2, were studied individually in the as-received form and in the 2:1 'destabilized' mixture. Liquid water hydrolysis reactions were performed in a Calvet calorimeter equipped with a mixing cell using neutral water. Water vapor and oxygen gas-phase reactivity measurements were performed at varying relative humidities and temperatures by modifying the calorimeter and utilizing a gas-circulating flow cell apparatus. The results of these calorimetric measurements were compared with standardized United Nations (UN) based test results for air and water reactivity and used to develop quantitative kinetic expressions for hydrolysis and air oxidation in these systems. Thermodynamic parameters obtained from these tests were then input into a computational fluid dynamics model to

  15. Accident Sequence Precursor Program Large Early Release Frequency Model Development

    SciTech Connect

    Brown, T.D.; Brownson, D.A.; Duran, F.A.; Gregory, J.J.; Rodrick, E.G.

    1999-01-04

    The objective of the ASP large early release frequency (LERF) model development work is to build a Level 2 containment response model that captures all of the events necessary to define LERF as outlined in Regulatory Guide 1.174, can be directly interfaced with the existing Level 1 models, is technically correct, can be readily modified to incorporate new information or to represent another plant, and can be executed in SAPHIRE. The ASP LERF models being developed will meet these objectives while providing the NRC with the capability to independently assess the risk impact of plant-specific changes proposed by the utilities that change the nuclear power plants' licensing basis. Together with the ASP Level 1 models, the ASP LERF models provide the NRC with the capability of performing equipment and event assessments to determine their impact on a plant's LERF for internal events during power operation. In addition, the ASP LERF models are capable of being updated to reflect changes in information regarding system operations and phenomenological events, and of being updated to assess the potential for early fatalities for each LERF sequence. As the ASP Level 1 models evolve to include more analysis capabilities, the LERF models will also be refined to reflect the appropriate level of detail needed to demonstrate the new capabilities. An approach was formulated for the development of detailed LERF models using the NUREG-1150 APET models as a guide. Modifications to the SAPHIRE computer code have allowed the development of these detailed models and the ability to analyze them in a reasonable time. Ten reference LERF plant models, including six PWR models and four BWR models, covering a wide variety of containment and nuclear steam supply system designs, will be completed in 1999. These reference models will be used as the starting point for developing the LERF models for the remaining nuclear power plants.

  16. Underreporting in traffic accident data, bias in parameters and the structure of injury severity models.

    PubMed

    Yamamoto, Toshiyuki; Hashiji, Junpei; Shankar, Venkataraman N

    2008-07-01

    Injury severities in traffic accidents are usually recorded on ordinal scales, and statistical models have been applied to investigate the effects of driver factors, vehicle characteristics, road geometrics and environmental conditions on injury severity. The unknown parameters in the models are in general estimated assuming random sampling from the population. Traffic accident data, however, suffer from underreporting effects, especially at lower injury severities. As a result, traffic accident data can be regarded as outcome-based samples with unknown population shares of the injury severities, overrepresented by accidents of higher severities. Such samples yield biased parameter estimates, which skew our inferences on the effect of key safety variables such as safety belt usage. The pseudo-likelihood function for the case with unknown population shares, which is the same as the conditional maximum likelihood for the case with known population shares, is applied in this study to examine the effects of severity underreporting on the parameter estimates. Sequential binary probit models and ordered-response probit models of injury severity are developed and compared. Sequential binary probit models assume that the factors determining severity change according to the level of the severity itself, while ordered-response probit models assume that the same factors operate across all levels of severity. Estimation results suggest that the sequential binary probit models outperform the ordered-response probit models, and that the coefficient estimates for lap and shoulder belt use are biased if underreporting is not considered. Mean parameter bias due to underreporting can be significant. The findings show that underreporting on the outcome dimension may induce bias in inferences on a variety of factors. In particular, if underreporting is not accounted for, the marginal impacts of a variety of factors appear

  17. Real-time EEG-based detection of fatigue driving danger for accident prediction.

    PubMed

    Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun

    2015-03-01

    This paper proposes a real-time electroencephalogram (EEG)-based method for detecting potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect the fatigue danger, the neural mechanism of driver fatigue was analyzed, and functional brain networks were employed to track the impact of fatigue on the processing capacity of the brain. The results show that the overall functional connectivity of the subjects is weakened after long driving tasks; this regularity is summarized as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined the input and global synchronizations of the brain to calculate the residual information-processing capacity of the brain and thereby obtain the dangerous points in real time. Finally, the danger detection system based on this neural mechanism was validated using accident EEG recordings. The time distributions of the danger points output by the system agree well with those of the real accident points.

  18. A flammability and combustion model for integrated accident analysis. [Advanced light water reactors

    SciTech Connect

    Plys, M.G.; Astleford, R.D.; Epstein, M. )

    1988-01-01

    A model for the flammability characteristics and combustion of hydrogen and carbon monoxide mixtures is presented for application to severe accident analysis of Advanced Light Water Reactors (ALWRs). Flammability of general mixtures under thermodynamic conditions anticipated during a severe accident is quantified with a new correlation technique applied to data for several fuel and inertant mixtures, using accepted methods for combining these data. Combustion behavior is quantified by a mechanistic model consisting of continuity and momentum balances for the burned gases, with an uncertainty parameter to match the idealized process to experiment. Benchmarks against experiment demonstrate the validity of this approach for a single recommended value of the flame flux multiplier parameter. The models presented here are equally applicable to analysis of current LWRs. 21 refs., 16 figs., 6 tabs.

  19. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    SciTech Connect

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance in safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document

  20. A Time Series Model for Assessing the Trend and Forecasting the Road Traffic Accident Mortality

    PubMed Central

    Yousefzadeh-Chabok, Shahrokh; Ranjbar-Taklimie, Fatemeh; Malekpouri, Reza; Razzaghi, Alireza

    2016-01-01

    Background: Road traffic accidents (RTAs) are one of the main causes of trauma and a growing public health concern worldwide, especially in developing countries. Assessing the trend of fatalities in past years and forecasting it enables appropriate planning for prevention and control. Objectives: This study aimed to assess the trend of RTA mortality and forecast it for the coming years using time series modeling. Materials and Methods: In this historical analytical study, RTA mortalities in Zanjan Province, Iran, were evaluated for 2007 - 2013. Time series analyses, including Box-Jenkins models, were used to assess the trend of accident fatalities in previous years and forecast it for the next 4 years. Results: The mean age of the victims was 37.22 years (SD = 20.01). Of a total of 2571 deaths, 77.5% (n = 1992) were males and 22.5% (n = 579) were females. The fitted models showed a descending trend of fatalities over the study years. The SARIMA(1,1,3)(0,1,0)12 model was recognized as the best-fit model for forecasting the trend of fatalities, and the forecasting model also showed a descending trend of traffic accident mortalities over the next 4 years. Conclusions: There was a decreasing trend in both the study years and the forecast years. It seems that interventions implemented in the recent decade have had a positive effect on the decline of RTA fatalities. Nevertheless, more attention is still needed to prevent the occurrence of traffic accidents and the related mortalities. PMID:27800467
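
    A sketch of the reported model class in Python: fitting a SARIMA(1,1,3)(0,1,0)12 to a synthetic monthly fatality series and forecasting 4 years ahead with statsmodels; the data here are simulated stand-ins, not the Zanjan records.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        idx = pd.date_range("2007-01", periods=84, freq="MS")   # 2007-2013
        deaths = pd.Series(30 - 0.1 * np.arange(84)
                           + 5 * np.sin(2 * np.pi * np.arange(84) / 12)
                           + rng.normal(0, 2, 84), index=idx)

        fit = SARIMAX(deaths, order=(1, 1, 3),
                      seasonal_order=(0, 1, 0, 12)).fit(disp=False)
        forecast = fit.get_forecast(steps=48)                   # next 4 years
        print(forecast.predicted_mean.tail(12))
        print(forecast.conf_int().tail(12))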

  1. Time series count data models: an empirical application to traffic accidents.

    PubMed

    Quddus, Mohammed A

    2008-09-01

    Count data are primarily categorised as cross-sectional, time series, and panel. Over the past decade, Poisson and Negative Binomial (NB) models have been used widely to analyse cross-sectional and time series count data, and random effect and fixed effect Poisson and NB models have been used to analyse panel count data. However, recent literature suggests that although the underlying distributional assumptions of these models are appropriate for cross-sectional count data, they are not capable of taking into account the effect of serial correlation often found in pure time series count data. Real-valued time series models, such as the autoregressive integrated moving average (ARIMA) model introduced by Box and Jenkins, have been used in many applications over the last few decades. However, when modelling non-negative integer-valued data such as traffic accidents at a junction over time, Box and Jenkins models may be inappropriate, mainly due to the normality assumption of errors in the ARIMA model. Over the last few years, a new class of time series models, known as integer-valued autoregressive (INAR) Poisson models, has been studied by many authors. This class of models is particularly applicable to the analysis of time series count data as it holds the properties of Poisson regression while being able to deal with serial correlation, and therefore offers an alternative to the real-valued time series models. The primary objective of this paper is to introduce the class of INAR models for the time series analysis of traffic accidents in Great Britain. Different types of time series count data are considered: aggregated time series data where both the spatial and temporal units of observation are relatively large (e.g., Great Britain and years) and disaggregated time series data where both the spatial and temporal units are relatively small (e.g., congestion charging zone and months). The performance of the INAR models is compared with the class of Box and
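
    The core of the INAR(1) model is binomial thinning: y_t = a∘y_{t-1} + e_t, where each of the y_{t-1} previous counts "survives" with probability a and e_t is a Poisson innovation. A small simulation and moment estimate (our sketch, with invented parameters):

        import numpy as np

        rng = np.random.default_rng(1)
        alpha, lam, T = 0.6, 3.0, 200
        y = np.empty(T, dtype=int)
        y[0] = rng.poisson(lam / (1 - alpha))          # near the stationary mean
        for t in range(1, T):
            survivors = rng.binomial(y[t - 1], alpha)  # binomial thinning
            y[t] = survivors + rng.poisson(lam)        # new Poisson "arrivals"

        # For INAR(1) the lag-1 autocorrelation equals alpha, giving a
        # simple moment estimator:
        alpha_hat = np.corrcoef(y[:-1], y[1:])[0, 1]
        print(alpha_hat)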

  2. Melanoma Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Chernobyl and Fukushima nuclear accidents: what has changed in the use of atmospheric dispersion modeling?

    PubMed

    Benamrane, Y; Wybo, J-L; Armand, P

    2013-12-01

    The threat of a major accidental or deliberate event that would lead to the emission of hazardous materials into the atmosphere is a great cause of concern to societies, owing to the potentially large scale of casualties and damage that could result from the release of explosive, flammable or toxic gases from industrial plants or transport accidents, radioactive material from nuclear power plants (NPPs), and chemical, biological, radiological or nuclear (CBRN) terrorist attacks. In order to respond efficiently to such events, emergency services and authorities resort to appropriate planning and organizational patterns. This paper focuses on the use of atmospheric dispersion modeling (ADM) as a support tool for emergency planning and response, to assess the propagation of the hazardous cloud and thereby take adequate countermeasures. The paper illustrates the noticeable evolution in the operational use of ADM tools over 25 y, especially in emergency situations. The study is based on data available in scientific publications and is exemplified using the two most severe nuclear accidents: Chernobyl (1986) and Fukushima (2011). It appears that during the Chernobyl accident, ADM was used a few days after the beginning of the accident, mainly in a diagnostic approach trying to reconstruct what had happened, whereas 25 y later, ADM was used during the first days and weeks of the Fukushima accident to anticipate the potentially threatened areas. We argue that recent developments in ADM tools play an increasing role in emergency and crisis management, by supporting stakeholders in anticipating, monitoring and assessing post-event damage. However, despite technological evolutions, the prognostic and diagnostic use of ADM in emergency situations still raises many issues.

  4. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  5. Proton Fluence Prediction Models

    NASA Technical Reports Server (NTRS)

    Feynman, Joan

    1996-01-01

    Many spacecraft anomalies are caused by positively charged high energy particles impinging on the vehicle and its component parts. Here we review the current knowledge of the interplanetary particle environment in the energy ranges that are most important for these effects, 10 to 100 MeV/amu. The emphasis is on the particle environment at 1 AU. State-of-the-art engineering models are briefly described along with comments on the future work required in this field.

  6. Predictive Modeling in Race Walking

    PubMed Central

    Wiktorowicz, Krzysztof; Przednowek, Krzysztof; Lassota, Lesław; Krzeszowski, Tomasz

    2015-01-01

    This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, achieved by eliminating some of the predictors. PMID:26339230
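
    A sketch of the winning approach as described, LASSO on quadratic-augmented inputs scored by leave-one-out cross-validation (scikit-learn); the training-load matrix and regularization strength are invented stand-ins.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures, StandardScaler
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(122, 10))           # 122 training plans, 10 loads
        y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=122)

        model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                              StandardScaler(),
                              Lasso(alpha=0.05, max_iter=50_000))
        scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                                 scoring="neg_mean_absolute_error")
        print(-scores.mean())                    # leave-one-out prediction error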

  7. Predictive models of battle dynamics

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2001-09-01

    The application of control and game theories to improve battle planning and execution requires models that allow military strategists and commanders to reliably predict the expected outcomes of various alternatives over a long horizon into the future. We have developed probabilistic battle dynamics models, whose building blocks in the form of Markov chains are derived from first principles, and applied them successfully in the design of the Model Predictive Task Commander package. This paper introduces the basic concepts of our modeling approach and explains the probability distributions needed to compute the transition probabilities of the Markov chains.
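
    A toy version of the underlying idea: once engagement outcomes are encoded as a Markov chain, long-horizon prediction is repeated application of the transition matrix. The states and probabilities below are invented for illustration.

        import numpy as np

        # States: 0 = advancing, 1 = stalled, 2 = withdrawn
        P = np.array([[0.7, 0.2, 0.1],
                      [0.3, 0.5, 0.2],
                      [0.0, 0.1, 0.9]])
        state = np.array([1.0, 0.0, 0.0])     # start: force is advancing

        for _ in range(24):                   # 24 decision cycles ahead
            state = state @ P
        print(state)                          # long-run outcome probabilities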

  8. Initial VHTR accident scenario classification: models and data.

    SciTech Connect

    Vilim, R. B.; Feldman, E. E.; Pointer, W. D.; Wei, T. Y. C.; Nuclear Engineering Division

    2005-09-30

    Nuclear systems codes are being prepared for use as computational tools for conducting performance/safety analyses of the Very High Temperature Reactor. The thermal-hydraulic codes are RELAP5/ATHENA for one-dimensional systems modeling and FLUENT and/or Star-CD for three-dimensional modeling. We describe a formal qualification framework, the development of Phenomena Identification and Ranking Tables (PIRTs), the initial filtering of the experiment databases, and a preliminary screening of these codes for use in the performance/safety analyses. In the second year of this project we focused on the development of PIRTs. Two events that result in maximum fuel and vessel temperatures, the Pressurized Conduction Cooldown (PCC) event and the Depressurized Conduction Cooldown (DCC) event, were selected for PIRT generation. A third event that may result in significant thermal stresses, the Load Change event, was also selected for PIRT generation. Gas reactor design experience and engineering judgment were used to identify the important phenomena in the primary system for these events. Sensitivity calculations performed with the RELAP5 code were used as an aid to rank the phenomena in order of importance with respect to the approach of plant response to safety limits. The overall code qualification methodology was illustrated by focusing on the Reactor Cavity Cooling System (RCCS). The mixed convection mode of heat transfer and pressure drop was identified as an important phenomenon for RCCS operation. Scaling studies showed that the mixed convection mode is likely to occur in the RCCS air duct during normal operation and during conduction cooldown events. The RELAP5/ATHENA code was found not to treat the mixed convection regime adequately. Readying the code will require adding models for the turbulent mixed convection regime while possibly performing new experiments for the laminar mixed convection regime. Candidate correlations for the turbulent

  9. Phase-Change Modelling in Severe Nuclear Accidents

    NASA Astrophysics Data System (ADS)

    Pain, Christopher; Pavlidis, Dimitrios; Xie, Zhihua; Percival, James; Gomes, Jefferson; Matar, Omar; Moatamedi, Moji; Tehrani, Ali; Jones, Alan; Smith, Paul

    2014-11-01

    This paper describes progress on a consistent approach for multi-phase flow modelling with phase-change. Although the developed methods are general purpose, the applications presented here cover core melt phenomena at the lower vessel head. These include corium pool formation, coolability and solidification. With respect to external cooling, a comparison with the LIVE experiments (from Karlsruhe) is undertaken. Preliminary re-flooding simulation results are also presented, including water injection into porous media (debris beds) and boiling. The numerical simulations follow IRSN's PEARL experimental programme on quenching/re-flooding. The authors wish to thank Prof. Timothy Haste of IRSN. Dr. D. Pavlidis is funded by EPSRC Consortium "Computational Modelling for Advanced Nuclear Plants," Grant Number EP/I003010/1.

  10. Application of a predictive Bayesian model to environmental accounting.

    PubMed

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. The use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than for parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB-containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporating environmental risk management into a company's overall risk management strategy is discussed.
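
    A minimal Monte Carlo stand-in for the predictive assessment described: uncertainty about the accident rate and severity is propagated into a probability distribution for the total contingent cost itself, not for its parameters. All distributions and numbers are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sims, years, fleet = 20_000, 10, 40

        # Epistemic uncertainty: accident rate per transformer-year.
        rate = rng.gamma(shape=2.0, scale=0.005, size=n_sims)

        # For each simulated rate, draw accident counts and lognormal costs.
        counts = rng.poisson(rate * fleet * years)
        total_cost = np.array([rng.lognormal(13.0, 1.0, k).sum()
                               for k in counts])

        print(np.percentile(total_cost, [50, 95, 99]))   # predictive quantiles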

  11. Simulation Modeling Requirements for Loss-of-Control Accident Prevention of Turboprop Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Crider, Dennis; Foster, John V.

    2012-01-01

    In-flight loss of control remains the leading contributor to aviation accident fatalities, with stall upsets being the leading causal factor. The February 12, 2009, Colgan Air, Inc., Continental Connection flight 3407 accident outside Buffalo, New York, brought this issue to the forefront of public consciousness and resulted in recommendations from the National Transportation Safety Board to conduct training that incorporates fully developed stalls and to develop simulator standards to support such training. In 2010, Congress responded to this accident with Public Law 111-216 (Section 208), which mandates full stall training for Part 121 flight operations. Efforts are currently in progress to develop recommendations on the implementation of stall training for airline pilots. The International Committee on Aviation Training in Extended Envelopes (ICATEE) is currently defining simulator fidelity standards that will be necessary for effective stall training. These recommendations will apply to all civil transport aircraft, including straight-wing turboprop aircraft. Government-funded research over the previous decade provides a strong foundation for stall/post-stall simulation of swept-wing, conventional-tail jets to respond to this mandate, but turboprops present additional and unique modeling challenges. First among these challenges is the effect of power, which can provide enhanced flow attachment behind the propellers. Furthermore, turboprops tend to operate for longer periods in an environment more susceptible to ice. As a result, there have been a significant number of turboprop accidents caused by early (lower angle of attack) stalls in icing. The vulnerability of turboprop configurations to icing has led to studies on ice accumulation and the resulting effects on flight behavior. Piloted simulations of these effects have highlighted the important training needs for recognition and mitigation of icing effects, including the reduction of stall margins

  12. Model aids cuttings transport prediction

    SciTech Connect

    Gavignet, A.A.; Sobey, I.J.

    1989-09-01

    Drilling of highly deviated wells can be complicated by the formation of a thick bed of cuttings at low flow rates. The model proposed in this paper shows what mechanisms control the thickness of such a bed, and the model predictions are compared with experimental results.

  13. Light-Weight Radioisotope Heater Unit final safety analysis report (LWRHU-FSAR): Volume 2: Accident Model Document (AMD)

    SciTech Connect

    Johnson, E.W.

    1988-10-01

    The purposes of this volume of the LWRHU SAR, the Accident Model Document (AMD), are to: identify all malfunctions, both singular and multiple, which can occur during the complete mission profile and could lead to release outside the clad of the radioisotopic material contained therein; provide estimates of occurrence probabilities associated with these various accidents; evaluate the response of the LWRHU (or its components) to the resultant accident environments; and associate the potential event history with test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  14. A Complex Network Model for Analyzing Railway Accidents Based on the Maximal Information Coefficient

    NASA Astrophysics Data System (ADS)

    Shao, Fu-Bo; Li, Ke-Ping

    2016-10-01

    Identifying important influencing factors is a key issue in railway accident analysis. In this paper, employing the maximal information coefficient (MIC), a good measure of dependence for two-variable relationships that can capture a wide range of associations, a complex network model for railway accident analysis is designed, in which nodes denote factors of railway accidents and edges are generated between two factors whose MIC values are larger than or equal to the dependent criterion. The variation of the network structure is studied: as the dependent criterion increases, the network approaches a scale-free network. Moreover, employing the proposed network, important influencing factors are identified. We find that the annual track density-gross tonnage factor is an important factor, being a cut vertex when the dependent criterion is equal to 0.3. The network also reveals that railway development is unbalanced across different states, which is consistent with the facts. Supported by the Fundamental Research Funds for the Central Universities under Grant No. 2016YJS087, the National Natural Science Foundation of China under Grant No. U1434209, and the Research Foundation of State Key Laboratory of Railway Traffic Control and Safety, Beijing Jiaotong University under Grant No. RCS2016ZJ001
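
    A sketch of the construction as described: compute the MIC between every pair of factors and connect those meeting the dependent criterion; cut vertices then fall out of standard graph analysis. The factor data are synthetic, and minepy is our choice of MIC implementation.

        import numpy as np
        import networkx as nx
        from minepy import MINE                  # MIC implementation

        rng = np.random.default_rng(4)
        n, m = 500, 8                            # 500 accident records, 8 factors
        factors = rng.normal(size=(n, m))
        factors[:, 1] = factors[:, 0] + 0.3 * rng.normal(size=n)        # linear
        factors[:, 2] = factors[:, 0] ** 2 + 0.3 * rng.normal(size=n)   # nonlinear

        criterion = 0.3                          # the dependent criterion
        G, mine = nx.Graph(), MINE(alpha=0.6, c=15)
        for i in range(m):
            for j in range(i + 1, m):
                mine.compute_score(factors[:, i], factors[:, j])
                if mine.mic() >= criterion:
                    G.add_edge(i, j, weight=round(mine.mic(), 2))

        print(G.edges(data=True))
        print(list(nx.articulation_points(G)))   # cut vertices, cf. the paper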

  15. Hydrometeorological model for streamflow prediction

    USGS Publications Warehouse

    Tangborn, Wendell V.

    1979-01-01

    The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominantly as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model, a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
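
    The storage bookkeeping at the heart of the model reduces to a simple relation (storage ≈ basin precipitation − observed runoff) feeding a prediction equation. A schematic sketch with invented numbers, not the manual's calibrated procedure:

        import numpy as np

        rng = np.random.default_rng(5)
        years = 30
        precip = rng.gamma(8.0, 20.0, years)     # winter gauge precipitation
        k = 1.8                                  # gauge-to-basin scaling, assumed
        runoff_winter = 0.5 * k * precip + rng.normal(0, 20, years)
        storage = k * precip - runoff_winter     # water still stored as snow

        runoff_spring = 0.9 * storage + rng.normal(0, 15, years)
        slope, intercept = np.polyfit(storage, runoff_spring, 1)
        pred = intercept + slope * storage
        cp = 1 - np.var(runoff_spring - pred) / np.var(runoff_spring)
        print(round(cp, 2))                      # coefficient of prediction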

  16. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    SciTech Connect

    Kao, S.P.; Chang, S.K.; Huang, H.C.

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity in simulating adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons of the phenomena of reflux condensation, phase separation, and two-phase natural circulation are discussed.

  17. Predictive models of forest dynamics.

    PubMed

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  18. Analysis of 320 coal mine accidents using structural equation modeling with unsafe conditions of the rules and regulations as exogenous variables.

    PubMed

    Zhang, Yingyu; Shao, Wei; Zhang, Mengjia; Li, Hejun; Yin, Shijiu; Xu, Yingjun

    2016-07-01

    Mining has historically been considered a naturally high-risk industry worldwide. Deaths caused by coal mine accidents in China exceed the sum of those caused by all other kinds of accidents. Statistics on 320 coal mine accidents in Shandong province show that all accidents contain indicators of "unsafe conditions of the rules and regulations," with a frequency of 1590, accounting for 74.3% of the total frequency of 2140. "Unsafe behaviors of the operator" is another important contributory factor, which mainly includes "operator error" and "venturing into dangerous places." A systems analysis approach was applied by using structural equation modeling (SEM) to examine the interactions between the contributory factors of coal mine accidents. The analysis of the results leads to three conclusions. (i) "Unsafe conditions of the rules and regulations" affect the "unsafe behaviors of the operator," "unsafe conditions of the equipment," and "unsafe conditions of the environment." (ii) The three influencing factors of coal mine accidents (with the frequency of effect relations in descending order) are "lack of safety education and training," "rules and regulations of safety production responsibility," and "rules and regulations of supervision and inspection." (iii) The three influenced factors (with the frequency in descending order) of coal mine accidents are "venturing into dangerous places," "poor workplace environment," and "operator error."
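
    A hedged sketch of this kind of SEM in Python (the paper does not state its software; semopy is our choice, and the data are synthetic stand-ins for coded accident indicators):

        import numpy as np
        import pandas as pd
        import semopy

        rng = np.random.default_rng(6)
        n = 320
        rules = rng.normal(size=n)       # "unsafe conditions of the rules
                                         #  and regulations", coded per accident
        data = pd.DataFrame({
            "unsafe_rules": rules,
            "unsafe_behavior": 0.6 * rules + rng.normal(size=n),
            "unsafe_equipment": 0.4 * rules + rng.normal(size=n),
            "unsafe_environment": 0.3 * rules + rng.normal(size=n),
        })

        desc = """
        unsafe_behavior ~ unsafe_rules
        unsafe_equipment ~ unsafe_rules
        unsafe_environment ~ unsafe_rules
        """
        model = semopy.Model(desc)
        model.fit(data)
        print(model.inspect())           # path estimates and p-values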

  19. What do saliency models predict?

    PubMed Central

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). However, a single image set had not previously been used to systematically compare free viewing with other tasks. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task in which 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, a saliency search task, or a cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and the eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as by the clutter content of the images. Eye movement variability in saliency search and free viewing might also be limited by inherent variation in what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationships across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  20. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; (2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons, making the oil easier to displace; (3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; (4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and (5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs which have been previously waterflooded to residual oil saturation; thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating-gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer flooding relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  1. A model for the release, dispersion and environmental impact of a postulated reactor accident from a submerged commercial nuclear power plant

    NASA Astrophysics Data System (ADS)

    Bertch, Timothy Creston

    1998-12-01

    Nuclear power plants are inherently suitable for submerged applications and could provide power to the shore power grid or support future underwater applications. The technology exists today, and the construction of a submerged commercial nuclear power plant may become desirable. A submerged reactor is safer for humans because the effectively infinite supply of water for heat removal, particulate retention in the water column, sedimentation to the ocean floor and the inherent shielding provided by the aquatic environment would significantly mitigate the effects of a reactor accident. A better understanding of reactor operation in this new environment is required to quantify the radioecological impact and to determine the suitability of this concept. The impact of a release to the environment from a severe reactor accident is a new aspect of the field of marine radioecology. Current efforts have centered on the radioecological impacts of nuclear waste disposal, nuclear weapons testing fallout and shore nuclear plant discharges. This dissertation examines the environmental impact of a severe reactor accident in a submerged commercial nuclear power plant, modeling a postulated site on the Atlantic continental shelf adjacent to the United States. This effort models the effects of geography, decay, particle transport/dispersion, bioaccumulation and elimination, with associated dose commitment. The use of a source term equivalent to the release from Chernobyl allows comparison between the impacts of that accident and the postulated submerged commercial reactor plant accident. All input parameters are evaluated using sensitivity analysis. The effect of the release on marine biota is determined. The pathways to humans from gaseous radionuclides, consumption of contaminated marine biota and direct exposure as contaminated water reaches the shoreline are studied. The model developed by this effort predicts a significant mitigation of the radioecological impact of the reactor accident release

  2. Predictive Models of Liver Cancer

    EPA Science Inventory

    Predictive models of chemical-induced liver cancer face the challenge of bridging causative molecular mechanisms to adverse clinical outcomes. The latent sequence of intervening events from chemical insult to toxicity are poorly understood because they span multiple levels of bio...

  3. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation, it has not been possible to directly determine the emissions of radioactive material. However, during the following days and weeks, radionuclides such as 137Cs and 131I (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of the University of Roma Tre (Rome, Italy) and subsequently also to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, it has been possible to estimate the time-dependent source-term for the fourteen days following the accident using the atmospheric transport model. Reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
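
    Conceptually, the inversion works because transport runs give a source-receptor sensitivity matrix M (the response of each measurement to a unit release in each time slot), so a non-negative release vector q is recoverable from concentrations c ≈ Mq. A synthetic sketch of that step (not the authors' setup):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(7)
        n_obs, n_slots = 120, 14                   # 14 daily release slots
        M = rng.random((n_obs, n_slots)) * 1e-12   # s/m^3, from transport runs
        q_true = rng.gamma(2.0, 5e14, n_slots)     # Bq/day, unknown in reality
        c = M @ q_true                             # noise-free "measurements"
        c += rng.normal(0, 0.01 * c.std(), n_obs)

        q_est, _ = nnls(M, c)                      # enforce non-negative release
        print(q_est / q_true)                      # per-day recovery ratio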

  4. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    SciTech Connect

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  5. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    PubMed

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis.

  6. Bayesian spatial and ecological models for small-area accident and injury analysis.

    PubMed

    MacNab, Ying C

    2004-11-01

    In this article, recently developed Bayesian spatial and ecological regression models are applied to analyse small-area variation in accident and injury. This study serves to demonstrate how Bayesian modelling techniques can be implemented to assess potential risk factors measured at group (e.g. area) level. Presented here is a unified modelling framework that enables thorough investigations into associations between injury rates and regional characteristics, residual variation and spatial autocorrelation. Using hospital separation data for 83 local health areas in British Columbia (BC), Canada, in 1990-1999, we explore and examine ecological/contextual determinants of motor vehicle accident injury (MVAI) among male children and youth aged 0-24 and for those of six age groups (<1, 1-4, 5-9, 10-14, 15-19 and 20-24). Eighteen local health area characteristics are studied. They include a broad spectrum of socio-economic indicators, residential environment indicators (roads and parks), medical services availability and utilisation, population health, proportion of recent immigrants, crime rates, rates of speeding charge and rates of seatbelt violation. Our study indicates a large regional variation in MVAI in males aged 0-24 in British Columbia, Canada, in 1990-1999, and that adjusting for appropriate risk factors eliminates nearly all the variation observed. Socio-economic influence on MVAI was profoundly apparent in young males of all ages with the injury being more common in communities of lower socio-economic status. High adult male crime rates were significantly associated with high injury rates of boys aged 1-14. Seatbelt violations and excess speeding charges were found to be positively associated with the injury rates of young men aged 20-24. This and similar ecological studies shed light on reasons for regional variations in accident occurrence as well as in the resulting injuries and hospital utilisation. Thereby they are potentially useful in identifying
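
    A hedged sketch of a simplified (non-spatial) version of such a model in PyMC: area-level injury counts are Poisson with a log-rate driven by an area covariate plus exchangeable random effects; a spatial CAR prior would replace the exchangeable effects in a full analysis. The data are synthetic.

        import numpy as np
        import pymc as pm
        import arviz as az

        rng = np.random.default_rng(8)
        n_areas = 83
        ses = rng.normal(size=n_areas)               # socio-economic indicator
        expected = rng.uniform(50, 500, n_areas)     # exposure-based offset
        y = rng.poisson(expected * np.exp(-0.3 * ses))

        with pm.Model():
            beta0 = pm.Normal("beta0", 0, 5)
            beta_ses = pm.Normal("beta_ses", 0, 1)
            sigma = pm.Exponential("sigma", 1.0)
            u = pm.Normal("u", 0, sigma, shape=n_areas)   # area random effects
            mu = expected * pm.math.exp(beta0 + beta_ses * ses + u)
            pm.Poisson("y", mu=mu, observed=y)
            idata = pm.sample(1000, tune=1000, chains=2)

        print(az.summary(idata, var_names=["beta0", "beta_ses", "sigma"]))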

  7. COMPARING SAFE VS. AT-RISK BEHAVIORAL DATA TO PREDICT ACCIDENTS

    SciTech Connect

    Jeffrey C. Joe

    2001-11-01

    The Safety Observations Achieve Results (SOAR) program at the Idaho National Laboratory (INL) encourages employees to perform in-field observations of each other's behaviors. One purpose of performing these observations is that they give observers the opportunity to correct, if needed, their co-workers' at-risk work practices and habits (i.e., behaviors). The underlying premise is that major injuries (e.g., OSHA-recordable events) are prevented from occurring because lower-level at-risk behaviors are identified and corrected before they can propagate into culturally accepted unsafe behaviors that result in injuries or fatalities. However, unlike other observation programs, SOAR also emphasizes positive reinforcement for safe behaviors observed, on the premise that positive reinforcement of safe behaviors helps establish a strong positive safety culture. Since the SOAR program collects both safe and at-risk leading-indicator data, it provides a unique opportunity to assess and compare the two kinds of data in terms of their ability to predict future adverse safety events. This paper describes the results of analyses performed on SOAR data to assess their relative predictive ability. Implications are discussed.

  8. Low-power and shutdown models for the accident sequence precursor (ASP) program

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.

    1997-02-01

    The US Nuclear Regulatory Commission (NRC) has been using full-power, Level 1, limited-scope risk models for the Accident Sequence Precursor (ASP) program for over fifteen years. These models have evolved and matured over the years, as have probabilistic risk assessment (PRA) and computer technologies. Significant upgrading activities have been undertaken over the past three years, with involvement from the Offices of Nuclear Reactor Regulation (NRR), Analysis and Evaluation of Operational Data (AEOD), and Nuclear Regulatory Research (RES), and several national laboratories. Part of these activities was an RES-sponsored feasibility study investigating the ability to extend the ASP models to include contributors to core damage from events initiated with the reactor at low power or shutdown (LP/SD), covering both internal and external events. This paper presents only the LP/SD internal event modeling efforts.

  9. Assessment and prediction of nearshore wave propagation in the case of the MV Prestige accident

    NASA Astrophysics Data System (ADS)

    Rusu, E.; Silva, R.; Pinto, J.; Rusu, L.; Soares, C.; Vitorino, J.

    2003-04-01

    As part of the process of implementing the SWAN spectral model on the Portuguese coast of the Atlantic Ocean, an interface with both pre- and post-processing capabilities was devised. This tool, developed in the Matlab environment, allows both rapid implementation of the model in a specific area and better evaluation of its output. The Prestige breakdown in November 2002, close to the NW Spanish coast, provided an unfortunate opportunity to test the viability and effectiveness of such a system, based on numerical wave models able to provide nearshore wave forecasts, as well as the utility of the computational environment developed. In this case SWAN was implemented in spherical coordinates, first on a coarse grid covering the entire coastal environment of the NW Iberian Peninsula. Inside this domain, two high-resolution areas were nested, covering respectively the western coast (between Figueira da Foz and Santiago de Compostela) and the NW side (in the vicinity of A Coruña). For the coarse area the SWAN model was nested in WW3, with the NOGAPS wind field used as forcing. In the high-resolution simulations, the boundary conditions were generated by the previous SWAN runs, and the high-resolution Aladdin wind field (provided by the Portuguese Institute of Meteorology) was given as input for the wind forcing. The current field resulting from simulations with the HOPS model was also given as input. The SWAN simulations were performed in non-stationary mode, and some results will be made available on the web page of the Instituto Hidrografico, http://www.ih.marinha.pt/hidrografico/ (related to the project MOCASSIM). Finally, a systematic comparison was made with measurements from two buoys (Silleiro and Leixões, both located on the western coast of the Iberian Peninsula), which showed generally good agreement.

  10. A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.

    PubMed

    Yu, Hongyang; Khan, Faisal; Veitch, Brian

    2017-02-28

    Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault trees and event trees (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, the lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling-based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry.
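
    On a single level, the hierarchical idea reduces to a conjugate Gamma-Poisson update that supports both backward (parameter updating) and forward (predictive) reasoning; a minimal stand-in with invented numbers, not the paper's multi-level model:

        import numpy as np

        alpha0, beta0 = 1.0, 10.0            # prior: mean 0.1 events/facility-year
        counts = np.array([0, 2, 1, 0, 3])   # precursor counts, 5 data sources
        exposure = np.array([8., 12., 6., 9., 15.])   # facility-years per source

        alpha_n = alpha0 + counts.sum()      # conjugate Gamma-Poisson update
        beta_n = beta0 + exposure.sum()
        print("posterior mean rate:", alpha_n / beta_n)

        # Forward analysis: posterior predictive for the next 10 facility-years.
        rng = np.random.default_rng(9)
        rates = rng.gamma(alpha_n, 1 / beta_n, 100_000)
        pred = rng.poisson(rates * 10)
        print("P(at least one event):", (pred >= 1).mean())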

  11. Influence of the meteorological input on the atmospheric transport modelling with FLEXPART of radionuclides from the Fukushima Daiichi nuclear accident.

    PubMed

    Arnold, D; Maurer, C; Wotawa, G; Draxler, R; Saito, K; Seibert, P

    2015-01-01

    In the present paper the role of precipitation as FLEXPART model input is investigated for one possible release scenario of the Fukushima Daiichi accident. Precipitation data from the European Centre for Medium-Range Weather Forecasts (ECMWF), NOAA's National Centers for Environmental Prediction (NCEP), the Japan Meteorological Agency's (JMA) mesoscale analysis and a JMA radar-rain gauge precipitation analysis product were utilized. The accident of Fukushima in March 2011 and the following observations enable us to assess the impact of these precipitation products, at least for this single case. As expected, the differences in the statistical scores are visible but not large. Increasing the ECMWF resolution of all the fields from 0.5° to 0.2° raises the correlation from 0.71 to 0.80 and the overall rank from 3.38 to 3.44. Substituting ECMWF precipitation, while the rest of the variables remain unmodified, with the JMA mesoscale precipitation analysis or the JMA radar-rain gauge precipitation data yields the best results on a regional scale, especially when a new and more robust wet deposition scheme is introduced. The best results are obtained with a combination of ECMWF 0.2° data and precipitation from the JMA mesoscale analyses together with the modified wet deposition, giving a correlation of 0.83 and an overall rank of 3.58. NCEP-based results with the same source term are generally poorer, giving correlations around 0.66, comparatively large negative biases, and an overall rank of 3.05 that worsens when regional precipitation data are introduced.

  12. Mars solar conjunction prediction modeling

    NASA Astrophysics Data System (ADS)

    Srivastava, Vineet K.; Kumar, Jai; Kulshrestha, Shivali; Kushvah, Badam Singh

    2016-01-01

    During the Mars solar conjunction, telecommunication and tracking between the spacecraft and the Earth degrades significantly. The radio signal degradation depends on the angular separation between the Sun, Earth and probe (SEP), the signal frequency band and the solar activity. All radiometric tracking data types display increased noise and signatures for smaller SEP angles. Due to scintillation, telemetry frame errors increase significantly when solar elongation becomes small enough. This degradation in telemetry data return starts at solar elongation angles of around 5° at S-band, around 2° at X-band and about 1° at Ka-band. This paper presents a mathematical model for predicting Mars superior solar conjunction for any Mars orbiting spacecraft. The described model is simulated for the Mars Orbiter Mission which experienced Mars solar conjunction during May-July 2015. Such a model may be useful to flight projects and design engineers in the planning of Mars solar conjunction operational scenarios.
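
    The geometric core of such a model is the SEP angle itself, computed at Earth from heliocentric position vectors; a small sketch (our illustration, with made-up positions in AU):

        import numpy as np

        def sep_angle_deg(r_earth, r_probe):
            """Sun-Earth-probe angle (degrees) from heliocentric vectors."""
            to_sun = -np.asarray(r_earth)                  # Earth -> Sun
            to_probe = np.asarray(r_probe) - np.asarray(r_earth)
            cosang = to_sun @ to_probe / (np.linalg.norm(to_sun)
                                          * np.linalg.norm(to_probe))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Earth at 1 AU, Mars nearly opposite the Sun: SEP ~1 deg, i.e. inside
        # the X-band degradation threshold quoted above.
        print(sep_angle_deg([1.0, 0.0, 0.0], [-1.52, 0.05, 0.0]))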

  13. Applying STAMP in Accident Analysis

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Daouk, Mirna; Dulac, Nicolas; Marais, Karen

    2003-01-01

    Accident models play a critical role in accident investigation and analysis. Most traditional models are based on an underlying chain of events. These models, however, have serious limitations when used for complex, socio-technical systems. Previously, Leveson proposed a new accident model (STAMP) based on system theory. In STAMP, the basic concept is not an event but a constraint. This paper shows how STAMP can be applied to accident analysis using three different views or models of the accident process and proposes a notation for describing this process.

  14. A model for nonvolatile fission product release during reactor accident conditions

    SciTech Connect

    Lewis, B.J.; Andre, B.; Ducros, G.; Maro, D.

    1996-10-01

    An analytical model has been developed to describe the release kinetics of nonvolatile fission products (e.g., molybdenum, cerium, ruthenium, and barium) from uranium dioxide fuel under severe reactor accident conditions. This treatment considers the rate-controlling process of release in accordance with diffusional transport in the fuel matrix and fission product vaporization from the fuel surface into the surrounding gas atmosphere. The effect of the oxygen potential in the gas atmosphere on the chemical form and volatility of the fission product is considered. A correlation is also developed to account for the trapping effects of antimony and tellurium in the Zircaloy cladding. This model interprets the release behavior of fission products observed in Commissariat à l'Énergie Atomique experiments conducted in the HEVA/VERCORS facility at high temperature in a hydrogen and steam atmosphere.
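
    For the diffusional part of such models, the classical Booth equivalent-sphere approximation gives the fractional release in terms of the reduced diffusivity D' = D/a^2; this is a textbook result offered for orientation, not the authors' full diffusion-plus-vaporization treatment.

        import math

        def booth_release_fraction(d_prime, t):
            """Fractional release after time t (s), reduced diffusivity in 1/s."""
            tau = d_prime * t
            if tau < 0.1:                    # short-time expansion
                return 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau
            # long-time limit (first term of the series solution)
            return 1.0 - 6.0 / math.pi ** 2 * math.exp(-math.pi ** 2 * tau)

        print(booth_release_fraction(1e-8, 3600.0))   # ~2% after one hour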

  15. Regional long-term model of radioactivity dispersion and fate in the Northwestern Pacific and adjacent seas: application to the Fukushima Dai-ichi accident.

    PubMed

    Maderich, V; Bezhenar, R; Heling, R; de With, G; Jung, K T; Myoung, J G; Cho, Y-K; Qiao, F; Robertson, L

    2014-05-01

    The compartment model POSEIDON-R was modified and applied to the Northwestern Pacific and adjacent seas to simulate the transport and fate of radioactivity in the period 1945-2010, and to perform a radiological assessment on the releases of radioactivity due to the Fukushima Dai-ichi accident for the period 2011-2040. The model predicts the dispersion of radioactivity in the water column and in sediments, the transfer of radionuclides throughout the marine food web, and subsequent doses to humans due to the consumption of marine products. A generic predictive dynamic food-chain model is used instead of the biological concentration factor (BCF) approach. The radionuclide uptake model for fish has as a central feature the accumulation of radionuclides in the target tissue. The three layer structure of the water column makes it possible to describe the vertical structure of radioactivity in deep waters. In total 175 compartments cover the Northwestern Pacific, the East China and Yellow Seas and the East/Japan Sea. The model was validated from (137)Cs data for the period 1945-2010. Calculated concentrations of (137)Cs in water, bottom sediments and marine organisms in the coastal compartment, before and after the accident, are in close agreement with measurements from the Japanese agencies. The agreement for water is achieved when an additional continuous flux of 3.6 TBq y(-1) is used for underground leakage of contaminated water from the Fukushima Dai-ichi NPP, during the three years following the accident. The dynamic food web model predicts that due to the delay of the transfer throughout the food web, the concentration of (137)Cs for piscivorous fishes returns to background level only in 2016. For the year 2011, the calculated individual dose rate for Fukushima Prefecture due to consumption of fishery products is 3.6 μSv y(-1). Following the Fukushima Dai-ichi accident the collective dose due to ingestion of marine products for Japan increased in 2011 by a
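
    A toy two-box version of the compartment idea, 137Cs exchanging between a water column and bottom sediment with radioactive decay and the continuous leakage flux mentioned above; the exchange rates and inventories are illustrative, not POSEIDON-R's.

        import numpy as np
        from scipy.integrate import solve_ivp

        LAMBDA = np.log(2) / (30.17 * 365.25 * 86400)   # 137Cs decay (1/s)
        k_ws, k_sw = 5e-8, 1e-9                         # water<->sediment (1/s)
        source = 3.6e12 / (365.25 * 86400)              # 3.6 TBq/y leak (Bq/s)

        def rhs(t, y):
            water, sediment = y
            return [source - (LAMBDA + k_ws) * water + k_sw * sediment,
                    k_ws * water - (LAMBDA + k_sw) * sediment]

        t_end = 30 * 365.25 * 86400                     # 30 years
        sol = solve_ivp(rhs, (0, t_end), [1e15, 0.0], method="LSODA")
        print(sol.y[:, -1])                             # Bq per compartment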

  16. Nuclear fuel in a reactor accident.

    PubMed

    Burns, Peter C; Ewing, Rodney C; Navrotsky, Alexandra

    2012-03-09

    Nuclear accidents that lead to melting of a reactor core create heterogeneous materials containing hundreds of radionuclides, many with short half-lives. The long-lived fission products and transuranium elements within damaged fuel remain a concern for millennia. Currently, accurate fundamental models for the prediction of release rates of radionuclides from fuel, especially in contact with water, after an accident remain limited. Relatively little is known about fuel corrosion and radionuclide release under the extreme chemical, radiation, and thermal conditions during and subsequent to a nuclear accident. We review the current understanding of nuclear fuel interactions with the environment, including studies over the relatively narrow range of geochemical, hydrological, and radiation environments relevant to geological repository performance, and discuss priorities for research needed to develop future predictive models.

  17. Climate Modeling and Prediction at NSIPP

    NASA Technical Reports Server (NTRS)

    Suarez, Max; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The talk will review modeling and prediction efforts undertaken as part of NASA's Seasonal to Interannual Prediction Project (NSIPP). The focus will be on atmospheric model results, including its use for experimental seasonal prediction and the diagnostic analysis of climate anomalies. The model's performance in coupled experiments with land and atmosphere models will also be discussed.

  18. The five-factor model and driving behavior: personality and involvement in vehicular accidents.

    PubMed

    Cellar, D F; Nelson, Z C; Yorke, C M

    2000-04-01

    Participants completed both the NEO-PI-R personality measure and measures of prior involvement in driving accidents. Significant negative correlations were found between the factor of Agreeableness and the total number of driving tickets received as well as the sum of combined at-fault accidents, not-at-fault accidents, and driving tickets received by participants. Implications and potential future directions for research are discussed.

  19. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid with 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid with 19 vertical levels but realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (the Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to the Atlas), as well as the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-layer run due to the increase of

  20. Modeling operator actions during a small break loss-of-coolant accident in a Babcock and Wilcox nuclear power plant

    SciTech Connect

    Ghan, L.S.; Ortiz, M.G.

    1991-01-01

    A small break loss-of-coolant accident (SBLOCA) in a typical Babcock and Wilcox (B&W) nuclear power plant was modeled using RELAP5/MOD3. This work was performed as part of the United States Nuclear Regulatory Commission's (USNRC) Code, Scaling, Applicability and Uncertainty (CSAU) study. The break was initiated by severing one high pressure injection (HPI) line at the cold leg; thus, the small break was further aggravated by reduced HPI flow. Comparisons between scoping runs with minimal operator action and with full operator action clearly showed that the operator plays a key role in recovering the plant. Operator actions were modeled based on the emergency operating procedures (EOPs) and the Technical Bases Document for the EOPs. The sequence of operator actions modeled here is only one of several possibilities. Different sequences of operator actions are possible for a given accident because of the subjective decisions the operator must make when determining the status of the plant and, hence, which branch of the EOP to follow. To assess the credibility of the modeled operator actions, these actions and the results of the simulated accident scenario were presented to operator examiners who are familiar with B&W nuclear power plants. They agreed that, in general, the modeled operator actions conform to the requirements set forth in the EOPs and are therefore plausible. This paper presents the method for modeling the operator actions and discusses the simulated accident scenario from the viewpoint of operator actions.

  2. Progresses in tritium accident modelling in the frame of IAEA EMRAS II

    SciTech Connect

    Galeriu, D.; Melintescu, A.

    2015-03-15

    The assessment of the environmental impact of tritium released from nuclear facilities is a topic of interest in many countries. In the IAEA's Environmental Modelling for Radiation Safety (EMRAS I) programme, progress was made on routine releases, and in the EMRAS II programme a dedicated working group (WG 7 - Tritium Accidents) focused on potential accidental releases (liquid and atmospheric pathways). The progress achieved in WG 7 was included in a comprehensive report - an IAEA technical document covering the consequences of both liquid and atmospheric accidental releases. A brief description of the progress achieved in the frame of EMRAS II WG 7 is presented. Important results have been obtained concerning the washout rate, the deposition of HTO and HT on soil, the HTO uptake by leaves, and the subsequent conversion to OBT (organically bound tritium) during daylight. Further needs for process understanding and experimental effort are emphasised.

  3. United States Department of Energy severe accident research following the Fukushima Daiichi accidents

    SciTech Connect

    Farmer, M. T.; Corradini, M.; Rempe, J.; Reister, R.; Peko, D.

    2016-11-02

    The U.S. Department of Energy (DOE) has played a major role in the U.S. response to the events at Fukushima Daiichi. During the first several weeks following the accident, U.S. assistance efforts were guided by results from a significant and diverse set of analyses. In the months that followed, a coordinated analysis activity aimed at gaining a more thorough understanding of the accident sequence was completed using laboratory-developed, system-level best-estimate accident analysis codes, while a parallel analysis was conducted by U.S. industry. A comparison of predictions for Unit 1 from these two studies indicated significant differences between MAAP and MELCOR results for key plant parameters, such as in-core hydrogen production. On that basis, a crosswalk was completed to determine the key modeling variations that led to these differences. In parallel with these activities, it became clear that there was a need to perform a technology gap evaluation on accident-tolerant components and severe accident analysis methodologies with the goal of identifying any data and/or knowledge gaps that may exist given the current state of light water reactor (LWR) severe accident research and augmented by insights from Fukushima. In addition, there is growing international recognition that data from Fukushima could significantly reduce uncertainties related to severe accident progression, particularly for boiling water reactors. On these bases, a group of U.S. experts in LWR safety and plant operations was convened by the DOE Office of Nuclear Energy (DOE-NE) to complete technology gap analysis and Fukushima forensics data needs identification activities. The results from these activities were used as the basis for refining DOE-NE's severe accident research and development (R&D) plan. Finally, this paper provides a high-level review of DOE-sponsored R&D efforts in these areas, including planned activities on accident-tolerant components and accident analysis methods.

  5. Bus accident analysis of routes with/without bus priority.

    PubMed

    Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David

    2014-04-01

    This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed a significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests that bus priority helps address manoeuvrability issues for buses. Mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents, considering wider influences on accident rates at a route-section level, also revealed significant safety benefits when bus priority is provided. Sensitivity analyses performed on the BPNN model showed general agreement in the predicted accident frequency between the two models. The slightly better performance of the MENB model suggests merit in adopting a mixed-effects modelling approach for accident count prediction in practice, given its capability to account for unobserved location- and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes.
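
    As a concrete illustration of the count-modelling approach described above, a minimal sketch follows. The column names and data file are hypothetical, and the study's mixed-effects structure is replaced here by a plain negative binomial GLM:

        # Negative binomial regression for route-section accident counts.
        # Column names (accidents, priority, bus_volume, section_km) are
        # illustrative placeholders, not the study's variables.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("route_sections.csv")  # hypothetical data set

        # Offset by log section length so coefficients act on accident rates.
        model = smf.glm(
            "accidents ~ priority + bus_volume",
            data=df,
            family=sm.families.NegativeBinomial(),
            offset=np.log(df["section_km"]),
        ).fit()
        print(model.summary())

    A negative, significant coefficient on the priority indicator would be consistent with the safety benefit reported above.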

  6. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation

    SciTech Connect

    Evans, J.S. (School of Public Health)

    1990-01-01

    This report describes dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. In addition, models are included for assessing the risks of several nonlethal early and continuing effects -- including prodromal vomiting and diarrhea, hypothyroidism and radiation thyroiditis, skin burns, reproductive effects, and pregnancy losses. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other." The "other" cancers category is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also developed. For most cancers, both incidence and mortality are addressed. The models of cancer risk are derived largely from information summarized in BEIR III -- with some adjustment to reflect more recent studies. 64 refs., 18 figs., 46 tabs.
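
    A Weibull ("hazard function") dose-response of the kind recommended for early effects can be sketched as follows. The general form is risk = 1 - exp(-ln 2 · (dose/D50)^V); the D50 and shape values below are placeholders, not the report's fitted parameters:

        # Weibull hazard-function dose-response for an early health effect.
        # d50_gy and shape are illustrative placeholders.
        import math

        def weibull_risk(dose_gy: float, d50_gy: float, shape: float) -> float:
            """Probability of the effect at a given acute dose (Gy)."""
            if dose_gy <= 0:
                return 0.0
            return 1.0 - math.exp(-math.log(2.0) * (dose_gy / d50_gy) ** shape)

        # At dose = D50 the risk is 0.5 by construction.
        for dose in (1.0, 2.0, 3.0, 4.0, 5.0):
            print(f"{dose:.1f} Gy -> {weibull_risk(dose, d50_gy=3.0, shape=5.0):.3f}")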

  7. Predictive Modeling of Cardiac Ischemia

    NASA Technical Reports Server (NTRS)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  8. Predictive Modeling of Tokamak Configurations*

    NASA Astrophysics Data System (ADS)

    Casper, T. A.; Lodestro, L. L.; Pearlstein, L. D.; Bulmer, R. H.; Jong, R. A.; Kaiser, T. B.; Moller, J. M.

    2001-10-01

    The Corsica code provides comprehensive toroidal plasma simulation and design capabilities with current applications [1] to tokamak, reversed field pinch (RFP) and spheromak configurations. It calculates fixed and free boundary equilibria coupled to Ohm's law, sources, transport models and MHD stability modules. We are exploring operations scenarios for both the DIII-D and KSTAR tokamaks. We will present simulations of the effects of electron cyclotron heating (ECH) and current drive (ECCD) relevant to the Quiescent Double Barrier (QDB) regime on DIII-D, exploring long-pulse operation issues. KSTAR simulations using ECH/ECCD in negative central shear configurations explore evolution to steady state, while shape evolution studies during current ramp-up using a hyper-resistivity model investigate startup scenarios and limitations. Studies of high bootstrap fraction operation stimulated by recent ECH/ECCD experiments on DIII-D will also be presented. [1] Pearlstein, L.D., et al, Predictive Modeling of Axisymmetric Toroidal Configurations, 28th EPS Conference on Controlled Fusion and Plasma Physics, Madeira, Portugal, June 18-22, 2001. * Work performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  9. TRACE/PARCS Core Modeling of a BWR/5 for Accident Analysis of ATWS Events

    SciTech Connect

    Cuadra A.; Baek J.; Cheng, L.; Aronson, A.; Diamond, D.; Yarsky, P.

    2013-11-10

    The TRACE/PARCS computational package [1, 2] is designed to be applicable to the analysis of light water reactor operational transients and accidents where the coupling between the neutron kinetics (PARCS) and the thermal-hydraulics and thermal-mechanics (TRACE) is important. TRACE/PARCS has been assessed for its applicability to anticipated transients without scram (ATWS) [3]. The challenge, addressed in this study, is to develop a sufficiently rigorous input model that would be acceptable for use in ATWS analysis. Two types of ATWS events were of interest: a turbine trip and a closure of the main steam isolation valves (MSIVs). In the first type, initiated by turbine trip, the concern is that the core will become unstable and large power oscillations will occur. In the second type, initiated by MSIV closure, the concern is the amount of energy being placed into containment and the resulting emergency depressurization. Two separate TRACE/PARCS models of a BWR/5 were developed to analyze these ATWS events at MELLLA+ (maximum extended load line limit plus) operating conditions. One model [4] was used for analysis of ATWS events leading to instability (ATWS-I); the other [5] for ATWS events leading to emergency depressurization (ATWS-ED). Both models included a large portion of the nuclear steam supply system and controls, and a detailed core model, presented henceforth.

  10. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    SciTech Connect

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data.

  11. Sensitivity study of the wet deposition schemes in the modelling of the Fukushima accident.

    NASA Astrophysics Data System (ADS)

    Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne; Kajino, Mizuo; Sekiyama, Thomas; Adachi, Kouji; Didier, Damien; Igarashi, Yasuhito

    2016-04-01

    The Fukushima-Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. The atmospheric deposition onto the ground may be studied through the map of measured Cs-137 established after the accident. The limits of detection were low enough to make measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions under multiple weather conditions make it a real challenge. Moreover, these measurements are the accumulated deposition of Cs-137 over the whole period and do not indicate which deposition mechanisms were involved: in-cloud scavenging, below-cloud scavenging, or dry deposition. A comprehensive sensitivity analysis was performed in order to understand the wet deposition mechanisms. A previous study (Quérel et al., 2016) showed that the choice of the wet deposition scheme has a strong impact on the assessment of the deposition patterns. Nevertheless, a "best" scheme could not be identified, as the ranking depends on the selected criteria: it differs according to the statistical indicators considered (correlation, figure of merit in space, and factor 2). One possible explanation for the difficulty in discriminating between schemes is the uncertainty in the modelling, resulting for instance from the meteorological data. If the movement of the plume is not properly modelled, the deposition processes are applied with an inaccurate activity in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, allowing the previous study to be reconsidered. An updated study including these new meteorological data is presented. In addition, a focus was placed on several releases causing deposition in localized areas during known periods. This helps to better understand the mechanisms of deposition involved following the

  12. Oil Spill Detection and Modelling: Preliminary Results for the Cercal Accident

    NASA Astrophysics Data System (ADS)

    da Costa, R. T.; Azevedo, A.; da Silva, J. C. B.; Oliveira, A.

    2013-03-01

    Oil spill research has increased significantly, mainly as a result of the severe consequences of industry accidents. Oil spill models are currently able to simulate the processes that determine the fate of oil slicks, playing an important role in disaster prevention, control and mitigation, and generating valuable information for decision makers and the population in general. On the other hand, satellite Synthetic Aperture Radar (SAR) imagery has demonstrated significant potential in accidental oil spill detection, provided spills are accurately differentiated from look-alikes. The combination of both tools can lead to breakthroughs, particularly in the development of Early Warning Systems (EWS). This paper presents a hindcast simulation of the oil slick resulting from the Motor Tanker (MT) Cercal oil spill, listed by the Portuguese Navy as one of the major oil spills on the Portuguese Atlantic Coast. The accident took place near Leixões Harbour, north of the Douro River, Porto (Portugal), on 2 October 1994. The oil slick was segmented from available European Remote Sensing (ERS) satellite SAR images, using an algorithm based on a simplified version of the K-means clustering formulation. The image-acquired information, added to the initial conditions and forcings, provided the necessary inputs for the oil spill model. Simulations considered the three-dimensional hydrodynamics in a cross-scale domain, from the interior of the Douro River Estuary to the open ocean on the Iberian Atlantic shelf. Atmospheric forcings (from ECMWF - the European Centre for Medium-Range Weather Forecasts - and NOAA - the National Oceanic and Atmospheric Administration), river forcings (from SNIRH - the Portuguese National Information System of the Hydric Resources) and tidal forcings (from LNEC - the National Laboratory for Civil Engineering), including baroclinic gradients (NOAA), were considered. The lack of data for validation purposes only allowed the use of the

  13. Predictive Model Assessment for Count Data

    DTIC Science & Technology

    2007-09-05

    This report critiques count regression models for patent data and assesses the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998-2002, considering a recent suggestion by Baker and ... [Figure 5: boxplots of various scores for the patent data count regressions. Table 1: four predictive models for larynx cancer counts in Germany, 1998-2002.]

  14. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  15. World Meteorological Organization's model simulations of the radionuclide dispersion and deposition from the Fukushima Daiichi nuclear power plant accident.

    PubMed

    Draxler, Roland; Arnold, Dèlia; Chino, Masamichi; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, René; Shimbori, Toshiki; Solazzo, Efisio; Wotawa, Gerhard

    2015-01-01

    Five different atmospheric transport and dispersion models' (ATDM) deposition and air concentration results for atmospheric releases from the Fukushima Daiichi nuclear power plant accident were evaluated over Japan using regional (137)Cs deposition measurements and (137)Cs and (131)I air concentration time series at one location about 110 km from the plant. Some of the ATDMs used the same meteorological data and others different data, consistent with their normal operating practices. There were four global meteorological analysis data sets available and two regional high-resolution analyses. Not all of the ATDMs were able to use all of the meteorological data combinations. The ATDMs were configured as identically as possible with respect to the release duration, release height, concentration grid size, and averaging time. However, each ATDM retained its unique treatment of the vertical velocity field and of wet and dry deposition, one of the largest uncertainties in these calculations. There were 18 ATDM-meteorology combinations available for evaluation. The deposition results showed that even when using the same meteorological analysis, each ATDM can produce quite different deposition patterns. The better calculations, in terms of both deposition and air concentration, were associated with the smoother ATDM deposition patterns. The best model with respect to deposition was not always the best model with respect to air concentrations. The use of high-resolution mesoscale analyses improved ATDM performance; however, high-resolution precipitation analyses did not improve ATDM predictions. Although some ATDMs could be identified as better performers for either deposition or air concentration calculations, overall, the ensemble mean of a subset of better-performing members provided more consistent results for both types of calculations.
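
    The closing point, that an ensemble mean of a subset of better-performing members is more consistent than any single model, is straightforward to realize in practice. A minimal sketch follows; the member names, file names, and subset selection are hypothetical:

        # Ensemble mean of deposition fields from better-performing ATDM members.
        # Member names and .npy files are hypothetical placeholders.
        import numpy as np

        members = {name: np.load(f"{name}_deposition.npy")
                   for name in ("m1", "m2", "m3", "m4", "m5")}
        better = ("m2", "m3", "m5")  # subset chosen by rank statistics

        ensemble_mean = np.mean([members[m] for m in better], axis=0)
        np.save("ensemble_deposition.npy", ensemble_mean)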

  16. Estimating Loss-of-Coolant Accident Frequencies for the Standardized Plant Analysis Risk Models

    SciTech Connect

    S. A. Eide; D. M. Rasmuson; C. L. Atwood

    2008-09-01

    The U.S. Nuclear Regulatory Commission maintains a set of risk models covering the U.S. commercial nuclear power plants. These standardized plant analysis risk (SPAR) models include several loss-of-coolant accident (LOCA) initiating events such as small (SLOCA), medium (MLOCA), and large (LLOCA). All of these events involve a loss of coolant inventory from the reactor coolant system. In order to maintain a level of consistency across these models, initiating event frequencies generally are based on plant-type average performance, where the plant types are boiling water reactors and pressurized water reactors. For certain risk analyses, these plant-type initiating event frequencies may be replaced by plant-specific estimates. Frequencies for SPAR LOCA initiating events previously were based on results presented in NUREG/CR-5750, but the newest models use results documented in NUREG/CR-6928. The estimates in NUREG/CR-6928 are based on historical data from the initiating events database for pressurized water reactor SLOCA or an interpretation of results presented in the draft version of NUREG-1829. The information in NUREG-1829 can be used several ways, resulting in different estimates for the various LOCA frequencies. Various ways NUREG-1829 information can be used to estimate LOCA frequencies were investigated and this paper presents two methods for the SPAR model standard inputs, which differ from the method used in NUREG/CR-6928. In addition, results obtained from NUREG-1829 are compared with actual operating experience as contained in the initiating events database.

  17. Prediction of groundwater contamination with 137Cs and 131I from the Fukushima nuclear accident in the Kanto district.

    PubMed

    Ohta, Tomoko; Mahara, Yasunori; Kubota, Takumi; Fukutani, Satoshi; Fujiwara, Keiko; Takamiya, Koichi; Yoshinaga, Hisao; Mizuochi, Hiroyuki; Igarashi, Toshifumi

    2012-09-01

    We measured the concentrations of (131)I, (134)Cs, and (137)Cs released from the Fukushima nuclear accident in soil and rainwater samples collected March 30-31, 2011, in Ibaraki Prefecture, Kanto district, bordering Fukushima Prefecture to the south. Column experiments revealed that all (131)I in rainwater samples was adsorbed onto an anion-exchange resin. However, 30% of (131)I was not retained by the resin after it passed through a soil layer, suggesting that a portion of (131)I became bound to organic matter from the soil. The (137)Cs migration rate was estimated to be approximately 0.6 mm/y in the Kanto area, which indicates that contamination of groundwater by (137)Cs is not likely to occur in rainwater infiltrating into the surface soil after the Fukushima accident.
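
    A rough worked check of that conclusion (the 1 m depth is an illustrative assumption, not a value from the paper): at the estimated migration rate, the travel time to an aquifer 1 m below the surface is

        t \approx \frac{1000\ \text{mm}}{0.6\ \text{mm/y}} \approx 1.7 \times 10^{3}\ \text{y},

    i.e. roughly 55 half-lives of (137)Cs (half-life about 30 y), by which time the inventory would have decayed to a negligible fraction.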

  18. Using Numerical Models in the Development of Software Tools for Risk Management of Accidents with Oil and Inert Spills

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.

    2012-04-01

    The increasing ship traffic and maritime transport of dangerous substances make it difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though security rules are becoming more restrictive (double-hull ships, etc.) and surveillance systems more developed (VTS, AIS). In fact, the problems associated with spills are, and will remain, a central topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, while a much smaller number become genuine media phenomena in this information era, owing to their large dimensions, their environmental and socio-economic impacts on ecosystems and local communities, and the spectacular or shocking pictures they generate. Hence, the adverse consequences of this type of accident increase the motivation to avoid them in the future, or to minimize their impacts, using not only surveillance and monitoring tools but also an increased capacity to predict the fate and behaviour of bodies, objects or substances in the hours following an accident - numerical models can now play a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis for oil, inert (ship debris or floating containers) and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency or planning issues associated with pollution risks and with contingency and mitigation measures. Before a spill, in the planning stage, modelling simulations are used in environmental impact studies or risk maps, using historical data, reference situations and typical scenarios. After a spill, fast and simple modelling applications make it possible to understand the fate and behaviour of the spilt

  19. Comparing the Hematopoietic Syndrome Time Course in the NHP Animal Model to Radiation Accident Cases From the Database Search.

    PubMed

    Graessle, Dieter H; Dörr, Harald; Bennett, Alexander; Shapiro, Alla; Farese, Ann M; MacVittie, Thomas J; Meineke, Viktor

    2015-11-01

    Since controlled clinical studies on drug administration for the acute radiation syndrome are lacking, clinical data of human radiation accident victims as well as experimental animal models are the main sources of information. This leads to the question of how to compare and link clinical observations collected after human radiation accidents with experimental observations in non-human primate (NHP) models. Using the example of granulocyte counts in the peripheral blood following radiation exposure, approaches for adaptation between NHP and patient databases on data comparison and transformation are introduced. As a substitute for studying the effects of administration of granulocyte-colony stimulating factor (G-CSF) in human clinical trials, the method of mathematical modeling is suggested using the example of G-CSF administration to NHP after total body irradiation.

  20. Curve Estimation of Number of People Killed in Traffic Accidents in Turkey

    NASA Astrophysics Data System (ADS)

    Berkhan Akalin, Kadir; Karacasu, Murat; Altin, Arzu Yavuz; Ergül, Bariş

    2016-10-01

    Accidents are events involving one or more vehicles in motion on the highway that result in death, injury or property loss. As a result of increasing population and traffic density, traffic accidents continue to increase, leading to human losses, harm to the economy and social problems. Millions of people die in traffic accidents every year, and a great majority of these accidents occur in developing countries. One of the most important tasks of transportation engineers is to reduce traffic accidents by designing appropriate systems, and for that reason statistical information about traffic accidents in past years should be organized by qualified analysts. Factors affecting traffic accidents can be analyzed in various ways. In this study, the number of people killed in traffic accidents in Turkey is modelled using the curve-fitting method on the data set of traffic fatalities in Turkey between 1990 and 2014. Various models were also used to predict the number of fatalities in future years. A linear model was found to be the most suitable for these estimates.
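
    A minimal sketch of the curve-fitting comparison described above; the fatality series here is synthetic, and the 1990-2014 observations would be substituted in practice:

        # Fit linear and quadratic trend curves to annual fatality counts,
        # compare residual error, then extrapolate the preferred model.
        import numpy as np

        years = np.arange(1990, 2015)
        rng = np.random.default_rng(0)
        deaths = 9000.0 - 120.0 * (years - 1990) + rng.normal(0.0, 300.0, years.size)

        for degree, label in ((1, "linear"), (2, "quadratic")):
            coeffs = np.polyfit(years, deaths, degree)
            sse = float(np.sum((deaths - np.polyval(coeffs, years)) ** 2))
            print(f"{label}: SSE = {sse:.0f}")

        lin = np.polyfit(years, deaths, 1)  # the study favoured a linear model
        print(f"2020 projection: {np.polyval(lin, 2020):.0f}")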

  1. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The Chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.
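
    The "horizon" mentioned above can be made quantitative with the standard predictability estimate from nonlinear dynamics (a textbook formula, not taken from this report): for a chaotic system with largest Lyapunov exponent \lambda, an initial error \delta_0 grows to a tolerance \Delta after approximately

        t_{\text{horizon}} \approx \frac{1}{\lambda} \ln\!\left(\frac{\Delta}{\delta_0}\right),

    beyond which a deterministic chaotic forecast is, in theory, no better than a statistical one.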

  2. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  3. Mesoscale Wind Predictions for Wave Model Evaluation

    DTIC Science & Technology

    2016-06-07

    This report (September 1999) addresses the application of high-resolution (< 10 km) atmospheric wind and surface stress fields, produced by an atmospheric mesoscale data assimilation system, to the numerical prediction of waves.

  4. Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors

    NASA Astrophysics Data System (ADS)

    Seguin, Nicolas

    2014-05-01

    In view of the simulation of water flows in pressurized water reactors (PWR), many models are available in the literature, and their complexity depends strongly on the required accuracy; see for instance [1]. A loss-of-coolant accident (LOCA) may occur when a pipe breaks. The coolant is light water in liquid form at very high temperature and pressure (around 300 °C and 155 bar); in case of a LOCA it flashes and becomes vapor almost instantaneously. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose models that are accurate for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Due to the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes each phase separately by its own density, velocity and pressure. The two phases are coupled by non-conservative terms, due to gradients of the void fractions, and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still; see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical

  5. Childhood asthma prediction models: a systematic review.

    PubMed

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  6. Nuclear accidents

    SciTech Connect

    Mobley, J.A.

    1982-05-01

    A nuclear accident with radioactive contamination can happen anywhere in the world. Because expert nuclear emergency teams may take several hours to arrive at the scene, local authorities must have a plan of action for the hours immediately following an accident. The site should be left untouched except to remove casualties. Treatment of victims includes decontamination and meticulous wound debridement. Acute radiation syndrome may be an overwhelming sequela.

  7. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  8. Hybrid approaches to physiologic modeling and prediction

    NASA Astrophysics Data System (ADS)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
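
    A minimal sketch of the hybrid idea, pairing a first-principles model with an autoregressive correction fitted to its residuals. The physics model, data, and AR(1) structure below are illustrative stand-ins, not the paper's models:

        # Hybrid prediction = first-principles output + autoregressive residual
        # correction. physics_model() and the "measurements" are synthetic.
        import numpy as np

        def physics_model(t_min):
            # Hypothetical first-principles core-temperature curve (deg C).
            return 37.0 + 0.5 * (1.0 - np.exp(-t_min / 20.0))

        t = np.arange(120.0)  # minutes
        observed = physics_model(t) + 0.1 * np.sin(t / 7.0)  # synthetic data
        resid = observed - physics_model(t)

        # Fit AR(1) on residuals: resid[k] ~ a * resid[k-1].
        a = float(np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1]))

        # 20-minute-ahead hybrid forecast from the end of the record.
        horizon = 20.0
        hybrid = physics_model(t[-1] + horizon) + (a ** horizon) * resid[-1]
        print(f"hybrid 20-min forecast: {hybrid:.3f} deg C")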

  9. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  10. A two-stage optimization model for emergency material reserve layout planning under uncertainty in response to environmental accidents.

    PubMed

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Liu, Rentao; Wang, Peng

    2016-06-05

    In emergency management relevant to pollution accidents, the efficiency of emergency rescues can be deeply influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, a two-stage optimization framework is developed for emergency material reserve layout planning under uncertainty, to identify material warehouse locations and emergency material reserve schemes in the pre-accident phase for coping with potential environmental accidents. This framework is based on an integration of a hierarchical clustering analysis - improved center of gravity (HCA-ICG) model and a material warehouse location - emergency material allocation (MWL-EMA) model. First, decision alternatives are generated using HCA-ICG to identify, in a time-effective manner, newly built emergency material warehouses for risk sources that cannot be served by existing ones. Second, an emergency material reserve plan is obtained using MWL-EMA so that emergency materials are prepared in advance in a cost-effective manner. The optimization framework is then applied to emergency management system planning in Jiangsu province, China. The results demonstrate that the developed framework not only facilitates material warehouse selection but also effectively provides emergency materials for emergency operations with a quick response.
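
    The center-of-gravity step inside the HCA-ICG stage can be sketched as a demand-weighted mean of risk-source coordinates. This is the classical single-facility formula; the coordinates and weights below are hypothetical, and the "improved" refinements of ICG are omitted:

        # Classical center-of-gravity candidate location for a new warehouse.
        # Risk-source coordinates and material demands are placeholders.
        import numpy as np

        sources = np.array([[12.0, 40.0],
                            [15.0, 42.0],
                            [11.0, 45.0]])      # (x, y) of risk sources
        demand = np.array([30.0, 50.0, 20.0])   # required material (tonnes)

        warehouse_xy = (sources * demand[:, None]).sum(axis=0) / demand.sum()
        print(warehouse_xy)  # demand-weighted centroid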

  11. Model predictions and trend analysis

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Individual perturbations in atmospheric models are discussed. These are hypothetical perturbations determined by model computation in which it is assumed that one particular input or set of inputs to the model is changed while all others are held constant. The best estimates of past time dependent variations of globally averaged total ozone, and upper tropospheric and stratospheric ozone were determined along with geographical differences in the variations.

  12. Predicting the long-term (137)Cs distribution in Fukushima after the Fukushima Dai-ichi nuclear power plant accident: a parameter sensitivity analysis.

    PubMed

    Yamaguchi, Masaaki; Kitamura, Akihiro; Oda, Yoshihiro; Onishi, Yasuo

    2014-09-01

    than those of the other rivers. Annual sediment outflows from the Abukuma River and the total from the other 13 river basins were calculated as 3.2 × 10(4)-3.1 × 10(5) and 3.4 × 10(4)-2.1 × 10(5) t y(-1), respectively. The values vary between calculation cases because of differences in the critical shear stress, the rainfall factor, and other parameters. On the other hand, the contributions of those parameters were relatively small for the (137)Cs concentration within the transported soil. This indicates that the total amount of (137)Cs outflow into the ocean would mainly be controlled by the amount of soil erosion and transport and by the total (137)Cs inventory remaining within the basin. Outflows of (137)Cs from the Abukuma River and the total from the other 13 river basins during the first year after the accident were calculated to be 2.3 × 10(11)-3.7 × 10(12) and 4.6 × 10(11)-6.5 × 10(12) Bq y(-1), respectively. The former results were compared with field investigation results; the orders of magnitude matched, but the value from the investigation was beyond the upper limit of the model prediction.

  13. Development of Models for Predicting the Predominant Taste and Odor Compounds in Taihu Lake, China

    PubMed Central

    Sun, Xiaoxue; Deng, Xuwei; Niu, Yuan; Xie, Ping

    2012-01-01

    Taste and odor (T&O) problems, which have adversely affected the quality of water supplied to millions of residents, have repeatedly occurred in Taihu Lake (e.g., a serious odor accident occurred in 2007). Because these accidents are difficult for water resource managers to forecast in a timely manner, there is an urgent need to develop optimum models to predict these T&O problems. For this purpose, various biotic and abiotic environmental parameters were monitored monthly for one year at 30 sites across Taihu Lake. This is the first investigation of this huge lake to sample T&O compounds at the whole-lake level. Certain phytoplankton taxa were important variables in the models; for instance, the concentrations of particle-bound 2-methylisoborneol (p-MIB) were correlated with the presence of Oscillatoria, whereas those of p-β-cyclocitral and p-β-ionone were correlated with Microcystis levels. Abiotic factors such as nitrogen (TN, TDN, NO3-N, and NO2-N), pH, DO, COND, COD and Chl-a also contributed significantly to the T&O predictive models. The dissolved (d) T&O compounds were related both to the algal biomass and to certain abiotic environmental factors, whereas the particle-bound (p) T&O compounds were more strongly related to the algal presence. We also tested the validity of these models using an independent data set previously collected from Taihu Lake in 2008. In comparing the concentrations of the T&O compounds observed in 2008 with those predicted from our models, we found that most of the predicted data points fell within the 90% confidence intervals of the observed values. This result supported the validity of these models in the studied system. These models, based on easily collected environmental data, will be of practical value to the water resource managers of Taihu Lake for evaluating the probability of T&O accidents. PMID:23284835

  14. Predicting freakish sea state with an operational third-generation wave model

    NASA Astrophysics Data System (ADS)

    Waseda, T.; In, K.; Kiyomatsu, K.; Tamura, H.; Miyazawa, Y.; Iyama, K.

    2014-04-01

    The understanding of freak wave generation mechanisms has advanced and the community has reached a consensus that spectral geometry plays an important role. Numerous marine accident cases were studied and revealed that the narrowing of the directional spectrum is a good indicator of dangerous sea. However, the estimation of the directional spectrum depends on the performance of the third-generation wave model. In this work, a well-studied marine accident case in Japan in 1980 (Onomichi-Maru incident) is revisited and the sea states are hindcasted using both the DIA (discrete interaction approximation) and SRIAM (Simplified Research Institute of Applied Mechanics) nonlinear source terms. The result indicates that the temporal evolution of the basic parameters (directional spreading and frequency bandwidth) agree reasonably well between the two schemes and therefore the most commonly used DIA method is qualitatively sufficient to predict freakish sea state. The analyses revealed that in the case of Onomichi-Maru, a moving gale system caused the spectrum to grow in energy with limited downshifting at the accident's site. This conclusion contradicts the marine inquiry report speculating that the two swell systems crossed at the accident's site. The unimodal wave system grew under strong influence of local wind with a peculiar energy transfer.

  15. A Global Model for Bankruptcy Prediction

    PubMed Central

    Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. PMID:27880810
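
    A minimal logistic-regression sketch in the spirit of the regional and global models; the features and data are synthetic, whereas real models would use financial ratios such as liquidity, leverage and profitability:

        # Logistic regression for bankruptcy classification on synthetic data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))  # stand-ins for three financial ratios
        logit = X @ np.array([-1.0, 1.5, -2.0]) + rng.normal(0.0, 1.0, 500)
        y = (logit > 0).astype(int)    # 1 = bankrupt, 0 = healthy (synthetic)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = LogisticRegression().fit(X_tr, y_tr)
        print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")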

  16. Radiation risk models for all solid cancers other than those types of cancer requiring individual assessments after a nuclear accident.

    PubMed

    Walsh, Linda; Zhang, Wei

    2016-03-01

    In the assessment of health risks after nuclear accidents, some health consequences require special attention. For example, in their 2013 report on health risk assessment after the Fukushima nuclear accident, the World Health Organisation (WHO) panel of experts considered risks of breast cancer, thyroid cancer and leukaemia. For these specific cancer types, use was made of already published excess relative risk (ERR) and excess absolute risk (EAR) models for radiation-related cancer incidence fitted to the epidemiological data from the Japanese A-bomb Life Span Study (LSS). However, it was also considered important to assess all other types of solid cancer together and the WHO, in their above-mentioned report, stated "No model to calculate the risk for all other solid cancer excluding breast and thyroid cancer risks is available from the LSS data". Applying the LSS models for all solid cancers along with the models for the specific sites means that some cancers have an overlap in the risk evaluations. Thus, calculating the total solid cancer risk plus the breast cancer risk plus the thyroid cancer risk can overestimate the total risk by several per cent. Therefore, the purpose of this paper was to publish the required models for all other solid cancers, i.e. all solid cancers other than those types of cancer requiring special attention after a nuclear accident. The new models presented here have been fitted to the same LSS data set from which the risks provided by the WHO were derived. Although it is known already that the EAR and ERR effect modifications by sex are statistically significant for the outcome "all solid cancer", it is shown here that sex modification is not statistically significant for the outcome "all solid cancer other than thyroid and breast cancer". It is also shown here that the sex-averaged solid cancer risks with and without the sex modification are very similar once breast and thyroid cancers are factored out. Some other notable model
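
    For context, the general form of an LSS-style excess relative risk model is sketched below; this is the standard parameterisation from the radiation epidemiology literature, not the specific model fitted in this paper:

        \lambda(c, s, a, D) = \lambda_0(c, s, a)\left[1 + \beta_s\, D\, e^{\gamma e^{*}} \left(\frac{a}{70}\right)^{\eta}\right]

    where \lambda_0 is the baseline cancer rate for city c, sex s and attained age a; D is the radiation dose; e^{*} is a centred and scaled age at exposure; and \beta_s, \gamma and \eta are fitted parameters. An EAR model adds the corresponding dose term to \lambda_0 instead of multiplying it. The "sex modification" discussed above refers to allowing \beta_s to differ between males and females.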

  17. Breast Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Esophageal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Colorectal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Prostate Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Pancreatic Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Lung Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Testicular Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Ovarian Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Cervical Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Bladder Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Predictive Modeling in Adult Education

    ERIC Educational Resources Information Center

    Lindner, Charles L.

    2011-01-01

    The current economic crisis, a growing workforce, the increasing lifespan of workers, and demanding, complex jobs have made organizations highly selective in employee recruitment and retention. It is therefore important, to the adult educator, to develop models of learning that better prepare adult learners for the workplace. The purpose of…

  8. Liver Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Posterior Predictive Model Checking in Bayesian Networks

    ERIC Educational Resources Information Center

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
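
    In general terms, PPMC compares a discrepancy measure evaluated on the observed data against its distribution over replicated data sets drawn from the posterior predictive. A minimal, generic sketch of that logic follows (a toy conjugate normal model stands in for the Bayesian network; all names and values are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      y_obs = rng.normal(1.0, 2.0, size=50)      # "observed" data

      # Posterior draws for (mu, sigma); a crude normal approximation stands
      # in for whatever sampler the real BN model would use.
      n_draws = 2000
      mu = rng.normal(y_obs.mean(), y_obs.std() / np.sqrt(len(y_obs)), n_draws)
      sigma = np.full(n_draws, y_obs.std())

      def discrepancy(y):
          # Example discrepancy measure: sample skewness.
          z = (y - y.mean()) / y.std()
          return np.mean(z ** 3)

      t_obs = discrepancy(y_obs)
      t_rep = np.array([discrepancy(rng.normal(m, s, size=len(y_obs)))
                        for m, s in zip(mu, sigma)])

      # Posterior predictive p-value; values near 0 or 1 flag data-model misfit.
      print("posterior predictive p-value:", np.mean(t_rep >= t_obs))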

  10. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1990-02-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport and adhesion mechanisms, understanding and subsequently improved controlling of fouling achieved via appropriate manipulation of the various coupled, nonlinear processes in a complex fluid mechanics environment will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal burning industry.

  11. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1991-01-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport and adhesion mechanisms, understanding and subsequently improved controlling of fouling achieved via appropriate manipulation of the various coupled, nonlinear processes in a complex fluid mechanics environment will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context, the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal burning industry.

  12. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  13. A Predictive Model of Group Panic Behavior.

    ERIC Educational Resources Information Center

    Weinberg, Sanford B.

    1978-01-01

    Reports results of a study which tested the following model to predict group panic behavior: that panic reactions are characterized by the exercise of inappropriate leadership behaviors in situations of high stress. (PD)

  14. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Flynn, David S.; Wellfare, Michael R.; Richards, Mike; Prestwood, Lee

    1995-06-01

    The Irma synthetic signature model was one of the first high resolution synthetic infrared (IR) target and background signature models to be developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory (WL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. This and other improvements were released in Irma 3.1. Recently, Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed. Currently, upgrades are underway to include an active MMW channel. Designated Irma 4.0, this code will serve as a cornerstone of sensor fusion research in the laboratory from 6.1 concept development to 6.3 technology demonstration programs for precision guided munitions. Several significant milestones have been reached in this development process and are demonstrated. The Irma 4.0 software design has been developed and interim results are available. Irma is being developed to facilitate multi-sensor smart weapons research and development. It is currently in distribution to over 80 agencies within the U.S. Air Force, U.S. Army, U.S. Navy, ARPA, NASA, Department of Transportation, academia, and industry.

  15. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  16. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Wellfare, Michael R.; Chenault, David B.; Talele, Sunjay E.; Blume, Bradley T.; Richards, Mike; Prestwood, Lee

    1997-06-01

    Development of target acquisition and target recognition algorithms in highly cluttered backgrounds in a variety of battlefield conditions demands a flexible, high fidelity capability for synthetic image generation. Cost effective smart weapons research and testing also requires extensive scene generation capability. The Irma software package addresses this need through a first principles, phenomenology based scene generator that enhances research into new algorithms, novel sensors, and sensor fusion approaches. Irma was one of the first high resolution synthetic infrared target and background signature models developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory, the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1987, Nichols Research Corporation took over the maintenance of Irma and has since added substantial capabilities. The development of Irma has culminated in a program that includes not only passive visible, IR, and millimeter wave (MMW) channels but also active MMW and ladar channels. Each of these channels is co-registered, providing the capability to develop algorithms for multi-band sensor fusion concepts and associated algorithms. In this paper, the capabilities of the latest release of Irma, Irma 4.0, will be described. A brief description of the elements of the software that are common to all channels will be provided. Each channel will be described briefly, including a summary of the phenomenological effects and the sensor effects modeled in the software. Examples of Irma multi-channel imagery will be presented.

  17. Risk terrain modeling predicts child maltreatment.

    PubMed

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
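
    The core computation behind risk terrain modeling is simple to sketch: rasterise each environmental risk factor onto a common grid and combine the standardized layers into a composite risk surface. The following toy sketch assumes hypothetical factors and weights (none of the study's actual layers or coefficients):

      import numpy as np

      rng = np.random.default_rng(3)
      grid = (100, 100)

      def proximity_layer(n_points, reach=2):
          # Binary raster: cells within `reach` cells of a risk feature.
          layer = np.zeros(grid, dtype=bool)
          for r, c in zip(rng.integers(0, grid[0], n_points),
                          rng.integers(0, grid[1], n_points)):
              layer[max(r - reach, 0):r + reach + 1,
                    max(c - reach, 0):c + reach + 1] = True
          return layer

      layers = {                                   # hypothetical factors/weights
          "liquor_outlets": (proximity_layer(15), 1.8),
          "vacant_housing": (proximity_layer(40), 1.4),
          "bus_stops":      (proximity_layer(25), 1.2),
      }

      risk = np.zeros(grid)
      for layer, weight in layers.values():
          risk += weight * layer                   # composite risk surface

      flat = np.argsort(risk, axis=None)[-5:]      # five highest-risk cells
      print([np.unravel_index(i, grid) for i in flat])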

  18. A Computerized Prediction Model of Hazardous Inflammatory Platelet Transfusion Outcomes

    PubMed Central

    Nguyen, Kim Anh; Hamzeh-Cognasse, Hind; Sebban, Marc; Fromont, Elisa; Chavarin, Patricia; Absi, Lena; Pozzetto, Bruno; Cognasse, Fabrice; Garraud, Olivier

    2014-01-01

    Background Platelet component (PC) transfusion occasionally leads to inflammatory hazards. Certain biological response modifiers (BRMs) that are secreted by the platelets themselves during storage may bear some responsibility. Methodology/Principal Findings First, we identified non-stochastic arrangements of platelet-secreted BRMs in platelet components that led to acute transfusion reactions (ATRs). These data provide formal clinical evidence that platelets generate secretion profiles under both sterile activation and pathological conditions. We next aimed to predict the risk of hazardous outcomes by establishing statistical models based on the associations of BRMs within the incriminated platelet components and using decision trees. We investigated a large (n = 65) series of ATRs after platelet component transfusions reported through a very homogenous system at one university hospital. Herein, we used a combination of clinical observations, ex vivo and in vitro investigations, and mathematical modeling systems. We calculated the statistical association of a large variety (n = 17) of cytokines, chemokines, and physiologically likely factors with acute inflammatory potential in patients presenting with severe hazards. We then generated an accident prediction model that proved to be dependent on the level (amount) of a given cytokine-like platelet product within the indicated component, e.g., soluble CD40-ligand (>289.5 pg/109 platelets), or the presence of another secreted factor (IL-13, >0). We further modeled the risk of the patient presenting either a febrile non-hemolytic transfusion reaction or an atypical allergic transfusion reaction, depending on the amount of the chemokine MIP-1α (<20.4 or >20.4 pg/109 platelets, respectively). Conclusions/Significance This allows the modeling of a policy of risk prevention for severe inflammatory outcomes in PC transfusion. PMID:24830754
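
    The models described are decision trees over BRM concentrations. A minimal re-creation of the idea with scikit-learn is sketched below; the data are simulated, and only the sCD40L and IL-13 threshold values echo the abstract:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(1)
      n = 200
      # Simulated per-component BRM levels (pg/1e9 platelets) -- made up.
      scd40l = rng.lognormal(5.5, 0.7, n)
      il13 = rng.binomial(1, 0.3, n) * rng.lognormal(1.0, 0.5, n)
      mip1a = rng.lognormal(3.0, 0.5, n)
      X = np.column_stack([scd40l, il13, mip1a])
      y = ((scd40l > 289.5) & (il13 > 0)).astype(int)   # toy ATR outcome

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=["sCD40L", "IL-13", "MIP-1a"]))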

  19. Health effects models for nuclear power plant accident consequence analysis. Part 1, Introduction, integration, and summary: Revision 2

    SciTech Connect

    Evans, J.S.; Abrahmson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-10-01

    This report is a revision of NUREG/CR-4214, Rev. 1, Part 1 (1990), Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. This revision has been made to incorporate changes to the health effects models recommended in two addenda to the NUREG/CR-4214, Rev. 1, Part II, 1989 report. The first of these addenda provided recommended changes to the health effects models for low-LET radiations based on recent reports from UNSCEAR, ICRP and NAS/NRC (BEIR V). The second addendum presented changes needed to incorporate alpha-emitting radionuclides into the accident exposure source term. As in the earlier version of this report, models are provided for early and continuing effects, cancers and thyroid nodules, and genetic effects. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other". For most cancers, both incidence and mortality are addressed. Five classes of genetic diseases -- dominant, x-linked, aneuploidy, unbalanced translocations, and multifactorial diseases -- are also considered. Data are provided that should enable analysts to consider the timing and severity of each type of health risk.
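
    For reference, Weibull dose-response functions of the kind recommended here for early and continuing effects are commonly written in the form (a sketch of the generic shape; the report's tabulated parameter values are not reproduced here)

      R(D) = 1 - \exp\!\left[-\ln 2\,\left(\frac{D}{D_{50}}\right)^{V}\right],

    where D is the dose, D_{50} the dose producing the effect in half of the exposed population, and V a shape factor controlling the steepness of the response.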

  20. Model predictive control: A new approach

    NASA Astrophysics Data System (ADS)

    Nagy, Endre

    2017-01-01

    New methods are proposed in this paper for solving the model predictive control problem. Nonlinear state space design techniques are also treated. For nonlinear state prediction (state evolution computation), a new operator-based predictor is introduced and tested. The model predictive control problem may then be settled by applying the principle of "direct stochastic optimum tracking" with a simple algorithm, which can be derived from a previously developed optimization procedure. The final result is obtained through iterations. Two examples show the applicability and advantages of the method.

  1. Survival Regression Modeling Strategies in CVD Prediction

    PubMed Central

    Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Background A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors into prediction models can be directly translated into added predictive value remains an area of dispute. The difference between the predictive power of a predictive model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested when comparing the predictive performances of predictive models with and without novel biomarkers. Objectives User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking. This shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the aforementioned statistical indices. Materials and Methods We have written Stata commands that are intended to help researchers obtain the following: (1) the Nam-D’Agostino χ2 goodness-of-fit test; (2) cut point-free and cut point-based net reclassification improvement index (NRI), relative absolute integrated discriminatory improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine whether information on a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve the predictive performance of Framingham’s general CVD risk algorithm. Results The command is adpredsurv for survival models. Conclusions Herein we have described the Stata package “adpredsurv” for calculation of the Nam-D’Agostino χ2 goodness-of-fit test as well as cut point-free and cut point-based NRI, relative
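
    As a rough illustration of one of the quantities the package computes, the cut point-based NRI for a binary outcome reduces to a few lines; the survival versions implemented in adpredsurv additionally account for censoring (the risk cut-offs and data below are illustrative):

      import numpy as np

      def categorical_nri(p_base, p_new, events, cutoffs=(0.1, 0.2)):
          # Net reclassification improvement across risk categories.
          cat_base = np.digitize(p_base, cutoffs)
          cat_new = np.digitize(p_new, cutoffs)
          up, down = cat_new > cat_base, cat_new < cat_base
          e, ne = events == 1, events == 0
          return (up[e].mean() - down[e].mean()) + (down[ne].mean() - up[ne].mean())

      rng = np.random.default_rng(2)
      events = rng.binomial(1, 0.15, 500)
      p_base = np.clip(rng.beta(2, 10, 500) + 0.10 * events, 0, 1)
      p_new = np.clip(p_base + 0.05 * events - 0.01, 0, 1)   # "enhanced" model
      print("NRI =", round(categorical_nri(p_base, p_new, events), 3))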

  2. Status report of advanced cladding modeling work to assess cladding performance under accident conditions

    SciTech Connect

    B.J. Merrill; Shannon M. Bragg-Sitton

    2013-09-01

    Scoping simulations performed using a severe accident code can be applied to investigate the influence of advanced materials on beyond design basis accident progression and to identify any existing code limitations. In 2012 an effort was initiated to develop a numerical capability for understanding the potential safety advantages that might be realized during severe accident conditions by replacing Zircaloy components in light water reactors (LWRs) with silicon carbide (SiC) components. To this end, a version of the MELCOR code, under development at the Sandia National Laboratories in New Mexico (SNL/NM), was modified by substituting SiC for Zircaloy in the MELCOR reactor core oxidation and material properties routines. The modified version of MELCOR was benchmarked against available experimental data to ensure that present SiC oxidation theory in air and steam was correctly implemented in the code. Additional modifications were implemented in the code in 2013 to improve the specificity in defining components fabricated from non-standard materials. An overview of these modifications and the status of their implementation is summarized below.

  3. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Pilsner, B. H.; Hillery, R. V.; Mcknight, R. L.; Cook, T. S.; Kim, K. S.; Duderstadt, E. C.

    1986-01-01

    The objectives of this program are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The program is divided into two phases, each consisting of several tasks. The work in Phase 1 is aimed at identifying the relative importance of the various failure modes, and at developing and verifying life prediction model(s) for the predominant mode for a thermal barrier coating system. Two possible predominant failure mechanisms being evaluated are bond coat oxidation and bond coat creep. The work in Phase 2 will develop design-capable, causal life prediction models for thermomechanical and thermochemical failure modes, and for the exceptional conditions of foreign object damage and erosion.

  4. Interpretable Deep Models for ICU Outcome Prediction

    PubMed Central

    Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan

    2016-01-01

    The exponential surge in health care data, such as longitudinal data from electronic health records (EHR) and sensor data from intensive care units (ICU), is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack the interpretability that is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as deep learning models. Experiment results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches on the mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
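
    The distillation step described here is easy to sketch: train the black-box model, then fit gradient boosting trees to its soft predictions rather than to the raw labels. A minimal sketch on synthetic data (an MLP stands in for the deep model):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # 1) Train the black-box model and extract its soft labels.
      deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                           random_state=0).fit(X_tr, y_tr)
      soft = deep.predict_proba(X_tr)[:, 1]

      # 2) Distill: fit interpretable gradient boosting trees on the soft labels.
      mimic = GradientBoostingRegressor(random_state=0).fit(X_tr, soft)

      acc = ((mimic.predict(X_te) > 0.5) == y_te).mean()
      print("mimic model test accuracy:", round(acc, 3))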

  5. Mapping and modelling of radionuclide distribution on the ground due to the Fukushima accident.

    PubMed

    Saito, Kimiaki

    2014-08-01

    A large-scale environmental monitoring effort, construction of detailed contamination maps based on the monitoring data, studies on radiocaesium migration in natural environments, construction of a prediction model for the air dose rate distribution in the 80 km zone, and construction of a database to preserve the obtained data and keep them openly available have been implemented as national projects. Temporal changes in contamination conditions were analysed. It was found that air dose rates above roads have decreased much faster than those above undisturbed flat fields. Further, the decreasing tendency was found to depend on land use, the magnitude of the initial dose rate and some other factors.

  6. Prediction of PARP Inhibition with Proteochemometric Modelling and Conformal Prediction.

    PubMed

    Cortés-Ciriano, Isidro; Bender, Andreas; Malliavin, Thérèse

    2015-06-01

    Poly(ADP-ribose) polymerases (PARPs) play a key role in DNA damage repair. PARP inhibitors act as chemo- and radio-sensitizers and thus potentiate the cytotoxicity of DNA damaging agents. Although PARP inhibitors are currently investigated as chemotherapeutic agents, their cross-reactivity with other members of the PARP family remains unclear. Here, we apply Proteochemometric Modelling (PCM) to model the activity of 181 compounds on 12 human PARPs. We demonstrate that PCM (R0^2 test = 0.65-0.69; RMSEtest = 0.95-1.01 °C) displays higher performance on the test set (interpolation) than Family QSAR and Family QSAM (Tukey's HSD, α = 0.05), and outperforms Inductive Transfer of knowledge among targets (Tukey's HSD, α = 0.05). We benchmark the predictive signal of 8 amino acid and 11 full-protein sequence descriptors, finding that all of them (except for SOCN) perform at the same level of statistical significance (Tukey's HSD, α = 0.05). The extrapolation power of PCM to new compounds (RMSE = 1.02±0.80 °C) and targets (RMSE = 1.03±0.50 °C) is comparable to interpolation, although the extrapolation ability is not uniform across the chemical and the target space. For this reason, we also provide confidence intervals calculated with conformal prediction. In addition, we present the R package conformal, which permits the calculation of confidence intervals for regression and classification caret models.
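
    The conformal machinery referenced here can be illustrated with a split-conformal regression sketch (Python rather than the R package, with illustrative data; the guarantee is marginal coverage of roughly 1 - alpha under exchangeability):

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      X, y = make_regression(n_samples=600, n_features=10, noise=10.0,
                             random_state=0)
      X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
      model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

      # Nonconformity scores on the held-out calibration set.
      alpha = 0.1                                   # target 90% coverage
      scores = np.abs(y_cal - model.predict(X_cal))
      level = np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores)
      q = np.quantile(scores, min(level, 1.0))

      preds = model.predict(X[:5])
      for lo, hi in zip(preds - q, preds + q):
          print(f"[{lo:9.1f}, {hi:9.1f}]")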

  7. Mathematical model for predicting human vertebral fracture

    NASA Technical Reports Server (NTRS)

    Benedict, J. V.

    1973-01-01

    A mathematical model has been constructed to predict the dynamic response of tapered, curved beam columns, inasmuch as the human spine closely resembles this form. The model takes into consideration the effects of impact force, mass distribution, and material properties. Solutions were verified by dynamic tests on a curved, tapered, elastic polyethylene beam.

  8. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Meier, Susan M.; Nissley, David M.; Sheffler, Keith D.; Cruse, Thomas A.

    1991-01-01

    A thermal barrier coated (TBC) turbine component design system, including an accurate TBC life prediction model, is needed to realize the full potential of available TBC engine performance and/or durability benefits. The objective of this work, which was sponsored in part by NASA, was to generate a life prediction model for electron beam - physical vapor deposited (EB-PVD) zirconia TBC. Specific results include EB-PVD zirconia mechanical and physical properties, coating adherence strength measurements, interfacial oxide growth characteristics, quantitative cyclic thermal spallation life data, and a spallation life model.

  9. The R-γ transition prediction model

    NASA Astrophysics Data System (ADS)

    Goldberg, Uriel C.; Batten, Paul; Peroomian, Oshin; Chakravarthy, Sukumar

    2015-01-01

    The Rt turbulence closure (Goldberg 2003) is coupled with an intermittency transport equation, γ, to enable prediction of laminar-to-turbulent flow by-pass transition. The model is not correlation-based and is completely topography-parameter-free, thus ready for use in parallelized Computational Fluid Dynamics (CFD) solvers based on unstructured book-keeping. Several examples compare the R-γ model's performance with experimental data and with predictions by the Langtry-Menter γ-Reθ transition closure (2009). Like the latter, the R-γ model is very sensitive to freestream turbulence levels, limiting its utility for engineering purposes.

  10. TRENDS (Transport and Retention of Nuclides in Dominant Sequences): A code for modeling iodine behavior in containment during severe accidents

    SciTech Connect

    Weber, C.F.; Beahm, E.C.; Kress, T.S.; Daish, S.R.; Shockley, W.E.

    1989-01-01

    The ultimate aim of a description of iodine behavior in severe LWR accidents is a time-dependent accounting of iodine species released into containment and to the environment. Factors involved in the behavior of iodine can be conveniently divided into four general categories: (1) initial release into containment, (2) interaction of iodine species in containment not directly involving water pools, (3) interaction of iodine species in, or with, water pools, and (4) interaction with special systems such as ice condensers or gas treatment systems. To fill the large gaps in knowledge and to provide a means for assaying the iodine source term, this program has proceeded along two paths: (1) Experimental studies of the chemical behavior of iodine under containment conditions. (2) Development of TRENDS (Transport and Retention of Nuclides in Dominant Sequences), a computer code for modeling the behavior of iodine in containment and its release from containment. The main body of this report consists of a description of TRENDS. These two parts to the program are complementary in that models within TRENDS use data that were produced in the experimental program; therefore, these models are supported by experimental evidence that was obtained under conditions expected in severe accidents. 7 refs., 1 fig., 2 tabs.

  11. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.

    1988-01-01

    This report describes work performed to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low pressure plasma sprayed NiCrAlY bond coat, an air plasma sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into 3 technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.

  12. Failure behavior of internally pressurized flawed and unflawed steam generator tubing at high temperatures -- Experiments and comparison with model predictions

    SciTech Connect

    Majumdar, S.; Shack, W.J.; Diercks, D.R.; Mruk, K.; Franklin, J.; Knoblich, L.

    1998-03-01

    This report summarizes experimental work performed at Argonne National Laboratory on the failure of internally pressurized steam generator tubing at high temperatures (≤700°C). A model was developed for predicting failure of flawed and unflawed steam generator tubes under internal pressure and temperature histories postulated to occur during severe accidents. The model was validated by failure tests on specimens with part-through-wall axial and circumferential flaws of various lengths and depths, conducted under various constant and ramped internal pressure and temperature conditions. The failure temperatures predicted by the model for two temperature and pressure histories, calculated for severe accidents initiated by a station blackout, agree very well with tests performed on both flawed and unflawed specimens.

  13. Are animal models predictive for humans?

    PubMed Central

    2009-01-01

    It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics. PMID:19146696

  14. Prediction oriented QSAR modelling of EGFR inhibition.

    PubMed

    Szántai-Kis, C; Kövesdi, I; Eros, D; Bánhegyi, P; Ullrich, A; Kéri, G; Orfi, L

    2006-01-01

    Epidermal Growth Factor Receptor (EGFR) is a high priority target in anticancer drug research. Thousands of very effective EGFR inhibitors have been developed in the last decade. The known inhibitors originate from a very diverse chemical space, but--without exception--all of them act at the Adenosine TriPhosphate (ATP) binding site of the enzyme. We have collected all of the diverse inhibitor structures and the relevant biological data obtained from comparable assays and built a prediction-oriented Quantitative Structure-Activity Relationship (QSAR) model of the ATP binding pocket's interactive surface from the ligand side. We describe a QSAR method with automatic Variable Subset Selection (VSS) by Genetic Algorithm (GA) and goodness-of-prediction driven QSAR model building, resulting in an externally validated EGFR inhibitory model built from pIC50 values of a diverse structural set of 623 EGFR inhibitors. Repeated Trainings/Evaluations (RTE) were used to obtain model fitness values, and the effectiveness of VSS is amplified by using predictive ability scores of descriptors. Numerous models were generated by different methods and viable models were collected. Then, intensive RTE were applied to identify ultimate models for external validations. Finally, suitable models were validated by statistical tests. Since we use calculated molecular descriptors in the modeling, these models are suitable for virtual screening to obtain novel potential EGFR inhibitors.

  15. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
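
    For reference, the plain GM(1,1) component of such a hybrid can be written in a few lines: accumulate the series, fit the grey differential equation by least squares, and difference the fitted accumulation back (the ARIMA part of the hybrid would then model the residuals; the data below are illustrative):

      import numpy as np

      def gm11_forecast(x, horizon=3):
          # First-order one-variable grey model GM(1,1) point forecasts.
          x = np.asarray(x, dtype=float)
          x1 = np.cumsum(x)                            # accumulated series (AGO)
          z = 0.5 * (x1[1:] + x1[:-1])                 # background values
          B = np.column_stack([-z, np.ones(len(z))])
          a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
          k = np.arange(1, len(x) + horizon)
          x1_hat = np.concatenate([[x[0]], (x[0] - b / a) * np.exp(-a * k) + b / a])
          return np.diff(x1_hat)[-horizon:]            # inverse AGO

      print(gm11_forecast([10.2, 10.9, 11.8, 12.4, 13.3], horizon=2))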

  16. The "AQUASCOPE" simplified model for predicting 89,90Sr, 131I, and 134,137Cs in surface waters after a large-scale radioactive fallout.

    PubMed

    Smith, J T; Belova, N V; Bulgakov, A A; Comans, R N J; Konoplev, A V; Kudelsky, A V; Madruga, M J; Voitsekhovitch, O V; Zibold, G

    2005-12-01

    Simplified dynamic models have been developed for predicting the concentrations of radiocesium, radiostrontium, and radioiodine in surface waters and freshwater fish following a large-scale radioactive fallout. The models are intended to give averaged estimates for radionuclides in water bodies and in fish for all times after a radioactive fallout event. The models are parameterized using empirical data collected for many lakes and rivers in Belarus, Russia, Ukraine, UK, Finland, Italy, The Netherlands, and Germany. These measurements span a long time period after fallout from atmospheric nuclear weapons testing and following the Chernobyl accident. The models thus developed were tested against independent measurements from the Kiev Reservoir and Chernobyl Cooling Pond (Ukraine) and the Sozh River (Belarus) after the Chernobyl accident, from Lake Uruskul (Russia), following the Kyshtym accident in 1957, and from Haweswater Reservoir (UK), following atmospheric nuclear weapons testing. The AQUASCOPE models (implemented in EXCEL spreadsheets) and model documentation are available free of charge from the corresponding author.

  17. A lower baseline glomerular filtration rate predicts high mortality and newly cerebrovascular accidents in acute ischemic stroke patients

    PubMed Central

    Dong, Kai; Huang, Xiaoqin; Zhang, Qian; Yu, Zhipeng; Ding, Jianping; Song, Haiqing

    2017-01-01

    Chronic kidney disease (CKD) is gradually being recognized as an independent risk factor for cardiovascular and cerebrovascular disease. This study aimed to examine the association of the estimated glomerular filtration rate (eGFR) with clinical outcomes at 3 months after the onset of ischemic stroke in a hospitalized Chinese population. In total, 972 patients with acute ischemic stroke were enrolled in this study. Modification of Diet in Renal Disease (MDRD) equations were used to calculate eGFR and define CKD. The site and degree of stenosis were examined. Patients were followed up for 3 months. Endpoint events included all-cause death and new ischemic events. A multivariate logistic model was used to determine the association between renal dysfunction and patients’ outcomes. Of all patients, 130 (13.4%) had reduced eGFR (<60 mL/min/1.73 m2), and 556 had a normal eGFR (≥90 mL/min/1.73 m2). A total of 694 patients suffered from cerebral artery stenosis, of whom 293 had only intracranial artery stenosis (ICAS), 110 only extracranial carotid atherosclerotic stenosis (ECAS), and 301 both ICAS and ECAS. Patients with eGFR <60 mL/min/1.73 m2 had a higher proportion of death and new ischemic events compared with those with a relatively normal eGFR. Multivariate analysis revealed that a baseline eGFR <60 mL/min/1.73 m2 increased the risk of mortality by 3.089-fold and of new ischemic events by 4.067-fold. In further analysis, a reduced eGFR was associated with increased rates of mortality and new ischemic events in both ICAS and ECAS patients. However, only an increased risk of new events was found as the degree of renal function deteriorated in ICAS patients (odds ratio = 8.169, 95% confidence interval = 2.445-14.127). A low baseline eGFR predicted high mortality and new ischemic events at 3 months in ischemic stroke patients. A low baseline eGFR was also a strong independent
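
    The 4-variable MDRD study equation used for eGFR here is easy to state; a sketch follows, assuming the IDMS-traceable 175 coefficient (the study may instead have used the original 186 version):

      def egfr_mdrd4(scr_mg_dl, age, female, black):
          # 4-variable MDRD study equation, eGFR in mL/min/1.73 m^2.
          egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
          if female:
              egfr *= 0.742
          if black:
              egfr *= 1.212
          return egfr

      # The study's "reduced eGFR" threshold (<60 mL/min/1.73 m^2):
      print(egfr_mdrd4(1.8, 70, female=True, black=False) < 60.0)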

  18. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-01

    The public, stakeholders, and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the underlying ergonomic risk factors contributing to human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers should lead to the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analysed to determine which of them most strongly lead to accidents. The research was performed in the east coast of peninsular Malaysia using variance-based structural equation modelling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and among 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns in the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, ergonomic workplace, physiological factors, stress at the workplace, physical fatigue, and near-miss accidents. The correlations and significance values between latent constructs (near-miss accident) were analysed using the SmartPLS 3 SEM software. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at the workplace, t = 2.81; physiological factor, t = 2.08) are significant for physical fatigue, and that physical fatigue mediates near-miss accidents at t = 2.14, at p < 0.05 and t-statistics t > 1.96. The results show that physical fatigue arising from ergonomic risk factors contributes to the human error behind express bus accidents.

  19. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    SciTech Connect

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-03

    The public, stakeholders, and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the underlying ergonomic risk factors contributing to human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers should lead to the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analysed to determine which of them most strongly lead to accidents. The research was performed in the east coast of peninsular Malaysia using variance-based structural equation modelling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and among 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns in the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, ergonomic workplace, physiological factors, stress at the workplace, physical fatigue, and near-miss accidents. The correlations and significance values between latent constructs (near-miss accident) were analysed using the SmartPLS 3 SEM software. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at the workplace, t = 2.81; physiological factor, t = 2.08) are significant for physical fatigue, and that physical fatigue mediates near-miss accidents at t = 2.14, at p < 0.05 and t-statistics t > 1.96. The results show that physical fatigue arising from ergonomic risk factors contributes to the human error behind express bus accidents.
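
    Note that SmartPLS implements PLS path modelling over latent constructs, which is richer than plain PLS regression; still, the underlying regression idea can be sketched with scikit-learn on hypothetical standardized questionnaire scores (none of the survey data):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)
      n = 114          # matches the combined number of surveyed drivers

      # Columns ~ occupational info, safety climate, workplace stress,
      # physiological factors (hypothetical standardized scores).
      X = rng.normal(size=(n, 4))
      fatigue = X @ np.array([0.30, 0.10, 0.45, 0.35]) + rng.normal(0, 0.5, n)
      near_miss = 0.6 * fatigue + rng.normal(0, 0.5, n)

      pls = PLSRegression(n_components=2).fit(X, near_miss)
      print("R^2 (train):", round(pls.score(X, near_miss), 3))
      print("X weights:\n", np.round(pls.x_weights_, 3))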

  20. Plasma Stabilization Based on Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Sotnikova, Margarita

    The nonlinear model predictive control algorithms for plasma current and shape stabilization are proposed. Such algorithms are well suited to situations where the plant to be controlled has essentially nonlinear dynamics. Besides that, predictive model-based control algorithms allow one to take into account many requirements and constraints on both the controlled and manipulated variables. A significant drawback of these algorithms is that they require considerable time to compute the control input at each sampling instant. In this paper the model predictive control algorithms are demonstrated on the example of plasma vertical stabilization for the ITER-FEAT tokamak. The parameters of the algorithms are tuned to decrease the computational load.
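
    The receding-horizon idea behind such algorithms can be sketched for the unconstrained linear case: stack the predicted dynamics over the horizon, solve a least-squares problem, and apply only the first input. Everything below (plant, weights, horizon) is illustrative, not a tokamak model:

      import numpy as np

      A = np.array([[1.1, 0.2], [0.0, 0.9]])      # toy unstable plant
      B = np.array([[0.0], [0.5]])
      Q, R, N = np.eye(2), np.array([[0.1]]), 10  # weights and horizon

      def mpc_step(x0):
          # X = F x0 + G U; minimize ||Q^0.5 X||^2 + ||R^0.5 U||^2 over U
          # (diagonal weights assumed), then apply only the first input.
          n, m = A.shape[0], B.shape[1]
          F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
          G = np.zeros((N * n, N * m))
          for k in range(N):
              for j in range(k + 1):
                  G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
          sq = np.kron(np.eye(N), np.sqrt(Q))
          sr = np.kron(np.eye(N), np.sqrt(R))
          M = np.vstack([sq @ G, sr])
          rhs = np.concatenate([-sq @ F @ x0, np.zeros(N * m)])
          return np.linalg.lstsq(M, rhs, rcond=None)[0][:m]

      x = np.array([1.0, -0.5])
      for _ in range(20):
          x = A @ x + B @ mpc_step(x)              # receding-horizon loop
      print(np.round(x, 4))                        # regulated toward the origin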

  1. Predictive model for segmented poly(urea)

    NASA Astrophysics Data System (ADS)

    Gould, P. J.; Cornish, R.; Frankl, P.; Lewtas, I.

    2012-08-01

    Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) - a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  2. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  3. Long term radiocesium contamination of fruit trees following the Chernobyl accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1996-12-01

    Radiocesium contamination of fruits and leaves from various fruit trees following the Chernobyl accident was systematically studied from 1990-1995 on two agricultural experimentation farms in Northern Greece. The results are discussed in the framework of a previously published model describing the long-term radiocesium contamination mechanism of deciduous fruit trees after a nuclear accident. The results of the present work qualitatively verify the model predictions. 11 refs., 5 figs., 1 tab.

  4. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Stewart, S. E.; Ortiz, M.

    1988-01-01

    A life prediction model for correlating the spallation life of ceramic thermal barrier coatings is developed which includes both cyclic and time-dependent damage. The cyclic damage is related to the calculated cyclic inelastic strain range, while the time-dependent damage is related to the oxidation kinetics at the bond-ceramic interface. The cyclic inelastic strain range is calculated using a modified form of the Walker viscoplastic material model; calculation of the oxidation kinetics is based on traditional oxidation algorithms using experimentally determined parameters. The correlation between the actual and predicted spallation lives is within a factor of 3.

  5. Modelling the global atmospheric transport and deposition of radionuclides from the Fukushima Dai-ichi nuclear accident.

    NASA Astrophysics Data System (ADS)

    Christoudias, Theodoros; Lelieveld, Jos

    2013-04-01

    We modeled the global atmospheric dispersion and deposition of radionuclides released from the Fukushima Dai-ichi nuclear power plant accident. The EMAC atmospheric chemistry - general circulation model was used, with circulation dynamics nudged towards ERA-Interim reanalysis data. We applied a resolution of approximately 0.5 degrees in latitude and longitude (T255). The model accounts for emissions and transport of the radioactive isotopes 131I and 137Cs, and removal processes through precipitation, particle sedimentation and dry deposition. In addition, we simulated the release of 133Xe, a noble gas that can be regarded as a passive transport tracer of contaminated air. The source terms are based on Chino et al. (2011) and Stohl et al. (2012); especially the emission estimates of 131I are associated with a high degree of uncertainty. The calculated concentrations have been compared to station observations by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO).

  6. Calibrated predictions for multivariate competing risks models.

    PubMed

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  7. Constructing and Validating a Decadal Prediction Model

    NASA Astrophysics Data System (ADS)

    Foss, I.; Woolf, D. K.; Gagnon, A. S.; Merchant, C. J.

    2010-05-01

    For the purpose of identifying potential sources of predictability of Scottish mean air temperature (SMAT), a redundancy analysis (RA) was accomplished to quantitatively assess the predictability of SMAT from North Atlantic SSTs as well as the temporal consistency of this predictability. The RA was performed between the main principal components of North Atlantic SST anomalies and SMAT anomalies for two time periods: 1890-1960 and 1960-2006. The RA models developed using data from the 1890-1960 period were validated using the 1960-2006 period; in a similar way the model developed based on the 1960-2006 period was validated using data from the 1890-1960 period. The results indicate the potential to forecast decadal trends in SMAT for all seasons in the 1960-2006 time period and all seasons with the exception of winter for the period 1890-1960, with the best predictability achieved in summer. The statistical models show the best performance when SST anomalies in the European shelf seas (45°N-65°N, 20°W-20°E) rather than those for the SSTs over the entire North Atlantic (30°N-75°N, 80°W-30°E) were used as a predictor. The results of the RA demonstrated that similar SST modes were responsible for predictions in the first and second half of the 20th century, establishing temporal consistency, though with stronger influence in the more recent half. The SST pattern responsible for explaining the largest amount of variance in SMAT was stronger in the second half of the 20th century and showed increasing influence from the area of the North Sea, possibly due to faster sea-surface warming in that region in comparison with the open North Atlantic. The Wavelet Transform (WT), Cross Wavelet Transform (XWT) and Wavelet Coherence (WTC) techniques were used to analyse RA-model-based forecasts of SMAT in the time-frequency domain. Wavelet-based techniques applied to the predicted and observed time series revealed a good performance of RA models to predict the frequency variability

  8. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times. Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

  9. Combining Modeling and Gaming for Predictive Analytics

    SciTech Connect

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  10. Advancements in predictive plasma formation modeling

    NASA Astrophysics Data System (ADS)

    Purvis, Michael A.; Schafgans, Alexander; Brown, Daniel J. W.; Fomenkov, Igor; Rafac, Rob; Brown, Josh; Tao, Yezheng; Rokitski, Slava; Abraham, Mathew; Vargas, Mike; Rich, Spencer; Taylor, Ted; Brandt, David; Pirati, Alberto; Fisher, Aaron; Scott, Howard; Koniges, Alice; Eder, David; Wilks, Scott; Link, Anthony; Langer, Steven

    2016-03-01

    We present highlights from plasma simulations performed in collaboration with Lawrence Livermore National Labs. This modeling is performed to advance the rate of learning about optimal EUV generation for laser produced plasmas and to provide insights where experimental results are not currently available. The goal is to identify key physical processes necessary for an accurate and predictive model capable of simulating a wide range of conditions. This modeling will help to drive source performance scaling in support of the EUV Lithography roadmap. The model simulates pre-pulse laser interaction with the tin droplet and follows the droplet expansion into the main pulse target zone. Next, the interaction of the expanded droplet with the main laser pulse is simulated. We demonstrate the predictive nature of the code and provide comparison with experimental results.

  11. Model Predictive Control of Sewer Networks

    NASA Astrophysics Data System (ADS)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The development of solutions for the management of urban drainage is of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Because of the inherent constraints, the applied approach is based on Model Predictive Control.
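
    As a rough illustration of why the storage and pump constraints push the design toward MPC, the following Python sketch solves one receding-horizon step for a single-tank sewer model with cvxpy. The tank capacity, pump limit, horizon and inflow forecast are invented for the example and are unrelated to the Barcelona benchmark model.

        import cvxpy as cp
        import numpy as np

        # One-tank sewer model: storage s[k+1] = s[k] + dt*(inflow[k] - u[k]),
        # where u is the controlled outflow to the treatment plant.
        dt, N = 600.0, 12            # 10-min control interval, 2-h horizon (invented)
        s_max, u_max = 5000.0, 1.5   # storage capacity [m^3], pump capacity [m^3/s]
        inflow = np.full(N, 1.2)     # hypothetical rain-driven inflow forecast [m^3/s]
        s0 = 2000.0                  # current stored volume [m^3]

        s, u = cp.Variable(N + 1), cp.Variable(N)
        constraints = [s[0] == s0]
        for k in range(N):
            constraints += [s[k + 1] == s[k] + dt * (inflow[k] - u[k]),
                            0 <= u[k], u[k] <= u_max,          # pump limits
                            0 <= s[k + 1], s[k + 1] <= s_max]  # no overflow
        # Penalize stored volume (overflow risk) and pumping effort.
        cost = cp.sum_squares(s[1:]) + 100.0 * cp.sum_squares(u)
        cp.Problem(cp.Minimize(cost), constraints).solve()
        print("first control move:", u.value[0])  # apply u0, then the horizon recedes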

  12. Estimation of the duration after methamphetamine injection using a pharmacokinetic model in suspects who caused fatal traffic accidents.

    PubMed

    Matsubara, Kazuo; Asari, Masaru; Suno, Manabu; Awaya, Toshio; Sugawara, Mitsuru; Omura, Tomohiro; Yamamoto, Joe; Maseda, Chikatoshi; Tasaki, Yoshikazu; Shiono, Hiroshi; Shimizu, Keiko

    2012-07-01

    When the population pharmacokinetic parameters of a drug are known, the time course of that drug in an individual can generally be estimated. In the present two cases, in which methamphetamine abusers were suspected of having caused fatal traffic accidents, the time elapsed between methamphetamine injection and the accident became a point of contention. In each case, we estimated the time course of blood methamphetamine after self-administration in the suspects using a 2-compartment pharmacokinetic model with pharmacokinetic parameters taken from the literature. If the injected amount can be determined to a certain extent, it is easy to calculate the average elapsed time after injection by referring to reference values. However, there is considerable individual variability in the elimination rate, based on genetic polymorphism, and hence a considerably large error range in the estimated elapsed time. To minimize estimation errors in such cases, we also analyzed the CYP2D6 genotype, which influences methamphetamine metabolism. Estimation based on two time-point blood samples would be useful to legal authorities in passing sentence in cases involving personalities and circumstances similar to those in the present study.
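
    A minimal sketch of the underlying calculation, assuming an intravenous bolus: a 2-compartment model gives a biexponential blood concentration curve, and the elapsed time is recovered as the root of C(t) = C_measured. All parameter values are placeholders, not the methamphetamine literature values used by the authors.

        import numpy as np
        from scipy.optimize import brentq

        # Biexponential (2-compartment, IV bolus) curve: C(t) = A*exp(-a*t) + B*exp(-b*t).
        # A, B, a, b are hypothetical hybrid constants, not literature values.
        A, B, a, b = 0.90, 0.10, 1.2, 0.12   # [mg/L], [mg/L], [1/h], [1/h]

        def conc(t):
            return A * np.exp(-a * t) + B * np.exp(-b * t)

        def elapsed_time(c_measured, t_max=48.0):
            # C(t) decreases monotonically after a bolus, so the elapsed time is
            # the unique root of conc(t) - c_measured on (0, t_max].
            return brentq(lambda t: conc(t) - c_measured, 1e-6, t_max)

        print(f"estimated time since injection: {elapsed_time(0.05):.1f} h")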

  13. DKIST Polarization Modeling and Performance Predictions

    NASA Astrophysics Data System (ADS)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime-sky polarization calibrations of the 4 m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6-month HiVIS daytime-sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  14. Atmospheric dispersion modeling for the worst-case detonation accident at the proposed Advanced Hydrotest Facility

    SciTech Connect

    Bowen, B.M., LLNL

    1996-10-01

    The Atmospheric Release Advisory Capability (ARAC) was requested to estimate credible worst-case dose, air concentration, and deposition of airborne hazardous materials that would result from a worst-case detonation accident at the proposed Advanced Hydrotest Facility (AHT) at the Nevada Test Site (NTS). Consequences were estimated at the closest onsite facility, the Device Assembly Facility (DAF), and the nearest offsite location (intersection of Highway and U.S. 95). The materials considered in this analysis were weapon-grade plutonium, beryllium, and hydrogen fluoride, a combustion product whose concentration depends on the quantity of high explosives. The analysis compares the calculated results with action guidelines published by the Department of Defense in DoD 5100.52-M (Nuclear Weapon Accident Response Procedures). Results indicate that, based on a one kg release of plutonium, the whole body radiation dose could be as high as 3 Rem at the DAF. This level approaches the 5 Rem level at which the Department of Defense requires respiratory protection and recommends sheltering and the consideration of evacuation. Deposition levels at the DAF could approach 6 uCi/m{sup 2}, for which the Department of Defense recommends access on a need-only basis and suggests that a controlled evacuation might be required. For a one kg release of plutonium, the dose at the nearest offsite location could reach 0.5 Rem. At this level, the Department of Defense suggests that sheltering be considered. For a one kg release of beryllium, the peak 5-minute concentration at the DAF could be as high as 20% of 6x10{sup -3} mg/m{sup 3}, the applicable Emergency Response Planning Guideline (ERPG-1) value. At the nearest offsite location, the beryllium concentrations from a one kg release would be two orders of magnitude less than the same guideline. For the detonation of 100 kg of the explosive LX-17, the concentration of hydrogen fluoride at both the DAF and the nearest offsite location

  15. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  16. Predictive analytics can support the ACO model.

    PubMed

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  17. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  18. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  19. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.; Sheffler, K. D.

    1985-01-01

    The objective is to develop an integrated life prediction model accounting for all potential life-limiting thermal barrier coating (TBC) degradation and failure modes, including spallation resulting from cyclic thermal stress, oxidation degradation, hot corrosion, erosion and foreign object damage.

  20. Nearshore Operational Model for Rip Current Predictions

    NASA Astrophysics Data System (ADS)

    Sembiring, L. E.; Van Dongeren, A. R.; Van Ormondt, M.; Winter, G.; Roelvink, J.

    2012-12-01

    A coastal operational model system can serve as a tool to monitor and predict coastal hazards, and to acquire up-to-date information on coastal state indicators. The objective of this research is to develop a nearshore operational model system for the Dutch coast focusing on swimmer safety. For that purpose, an operational model system has been built which can predict conditions up to 48 hours ahead. The model system consists of three nested model domains covering the North Sea, the Dutch coastline, and a local model of the area of interest. Three process-based models are used to simulate the physical processes within the system: SWAN for wave propagation, Delft3D-Flow for hydrodynamic flow simulation, and XBeach for the nearshore model. The SWAN model is forced with wind fields from operational HiRLAM, as well as two-dimensional wave spectra from WaveWatch 3 Global at the ocean boundaries. The Delft3D-Flow model is forced at its boundaries with tidal constants for several important astronomical components, as well as HiRLAM wind fields. For the local XBeach model, up-to-date bathymetry is obtained by assimilating model computations and Argus video observations. A hindcast was carried out on the Continental Shelf Model, covering the North Sea and the nearby Atlantic Ocean, for the year 2009. Model skill is represented by several statistical measures, such as rms error and bias. In general, the results show that the model system exhibits good agreement with field data. For the SWAN results, integral significant wave heights are predicted well by the model for all wave buoys considered, with rms errors ranging from 0.16 m for the month of May, with an observed mean significant wave height of 1.08 m, up to 0.39 m for the month of November, with an observed mean significant wave height of 1.91 m. However, it is found that the wave model slightly underestimates the observations for the period of June, especially
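
    The two skill measures named above are straightforward to compute; a small Python sketch with hypothetical buoy data:

        import numpy as np

        # Hypothetical buoy observations vs. model predictions of Hs [m].
        hs_obs = np.array([1.10, 0.95, 1.30, 1.75, 2.05, 1.60])
        hs_mod = np.array([1.02, 1.00, 1.18, 1.70, 1.85, 1.55])

        bias = np.mean(hs_mod - hs_obs)                   # systematic over/under-prediction
        rmse = np.sqrt(np.mean((hs_mod - hs_obs) ** 2))   # overall error magnitude
        print(f"bias = {bias:+.2f} m, rms error = {rmse:.2f} m")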

  1. PREDICTIVE MODELING OF CHOLERA OUTBREAKS IN BANGLADESH

    PubMed Central

    Koepke, Amanda A.; Longini, Ira M.; Halloran, M. Elizabeth; Wakefield, Jon; Minin, Vladimir N.

    2016-01-01

    Despite seasonal cholera outbreaks in Bangladesh, little is known about the relationship between environmental conditions and cholera cases. We seek to develop a predictive model for cholera outbreaks in Bangladesh based on environmental predictors. To do this, we estimate the contribution of environmental variables, such as water depth and water temperature, to cholera outbreaks in the context of a disease transmission model. We implement a method which simultaneously accounts for disease dynamics and environmental variables in a Susceptible-Infected-Recovered-Susceptible (SIRS) model. The entire system is treated as a continuous-time hidden Markov model, where the hidden Markov states are the numbers of people who are susceptible, infected, or recovered at each time point, and the observed states are the numbers of cholera cases reported. We use a Bayesian framework to fit this hidden SIRS model, implementing particle Markov chain Monte Carlo methods to sample from the posterior distribution of the environmental and transmission parameters given the observed data. We test this method using both simulation and data from Mathbaria, Bangladesh. Parameter estimates are used to make short-term predictions that capture the formation and decline of epidemic peaks. We demonstrate that our model can successfully predict an increase in the number of infected individuals in the population weeks before the observed number of cholera cases increases, which could allow for early notification of an epidemic and timely allocation of resources. PMID:27746850
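
    To make the model structure concrete, here is a minimal deterministic SIRS skeleton with a sinusoidal environmental forcing term standing in for the water-depth and temperature predictors. All rates are invented, and the paper itself embeds a stochastic version of this structure in a hidden Markov model fitted by particle MCMC.

        import numpy as np
        from scipy.integrate import solve_ivp

        N = 10000.0                 # population size (illustrative)
        beta0, amp = 0.5, 0.3       # baseline transmission and forcing amplitude (invented)
        gamma, rho = 0.2, 0.01      # recovery and loss-of-immunity rates [1/day] (invented)

        def sirs(t, y):
            S, I, R = y
            # Sinusoidal stand-in for environmental predictors (water depth, temperature).
            beta = beta0 * (1.0 + amp * np.sin(2.0 * np.pi * t / 365.0))
            return [-beta * S * I / N + rho * R,
                    beta * S * I / N - gamma * I,
                    gamma * I - rho * R]

        sol = solve_ivp(sirs, (0.0, 730.0), [N - 10.0, 10.0, 0.0], max_step=1.0)
        print("peak infected:", round(sol.y[1].max()))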

  2. A stepwise model to predict monthly streamflow

    NASA Astrophysics Data System (ADS)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with those of the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
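
    A sketch of the paper's core recursion, with the monthly constants fitted by ordinary least squares on synthetic data rather than by GEP/NGRGO as the authors do:

        import numpy as np

        # Core idea: Q[t] = K_m * Q[t-1], with one constant K per calendar month.
        rng = np.random.default_rng(0)
        months = np.tile(np.arange(12), 10)                       # 10 synthetic years
        q = (100.0 + 50.0 * np.sin(2.0 * np.pi * months / 12.0)
             + rng.normal(0.0, 5.0, months.size))                 # synthetic monthly flows

        K = np.zeros(12)
        for m in range(12):
            t = np.where(months[1:] == m)[0] + 1                  # times t falling in month m
            K[m] = np.sum(q[t] * q[t - 1]) / np.sum(q[t - 1] ** 2)  # least-squares K_m

        q_hat = K[months[1:]] * q[:-1]                            # one-step-ahead predictions
        r2 = 1.0 - np.sum((q[1:] - q_hat) ** 2) / np.sum((q[1:] - q[1:].mean()) ** 2)
        print("R^2 =", round(r2, 3))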

  3. Disease prediction models and operational readiness.

    PubMed

    Corley, Courtney D; Pullum, Laura L; Hartley, David M; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M; Lancaster, Mary J

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  4. Disease Prediction Models and Operational Readiness

    PubMed Central

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  5. Can contaminant transport models predict breakthrough?

    USGS Publications Warehouse

    Peng, Wei-Shyuan; Hampton, Duane R.; Konikow, Leonard F.; Kambham, Kiran; Benegar, Jeffery J.

    2000-01-01

    A solute breakthrough curve measured during a two-well tracer test was successfully predicted in 1986 using specialized contaminant transport models. Water was injected into a confined, unconsolidated sand aquifer and pumped out 125 feet (38.3 m) away at the same steady rate. The injected water was spiked with bromide for over three days; the outflow concentration was monitored for a month. Based on previous tests, the horizontal hydraulic conductivity of the thick aquifer varied by a factor of seven among 12 layers. Assuming stratified flow with small dispersivities, two research groups accurately predicted breakthrough with three-dimensional (12-layer) models using curvilinear elements following the arc-shaped flowlines in this test. Can the contaminant transport models commonly used in industry, which use rectangular blocks, also reproduce this breakthrough curve? The two-well test was simulated with four MODFLOW-based models: MT3D (FD and HMOC options), MODFLOWT, MOC3D, and MODFLOW-SURFACT. Using the same 12 layers and small dispersivity used in the successful 1986 simulations, these models fit almost as accurately as the models using curvilinear blocks. Subtle variations in the curves illustrate differences among the codes. Sensitivities of the results to the number and size of grid blocks, number of layers, boundary conditions, and values of dispersivity and porosity are briefly presented. The fit between calculated and measured breakthrough curves degenerated as the number of layers and/or grid blocks decreased, reflecting a loss of model predictive power as the level of characterization lessened. Therefore, the breakthrough curve for most field sites can be predicted only qualitatively due to limited characterization of the hydrogeology and contaminant source strength.
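
    For readers who want the shape of such a breakthrough curve, the classical Ogata-Banks solution for 1-D advection-dispersion with a step input is a single-layer idealization of this kind of test; the velocity and dispersivity below are invented, not values from the 1986 study.

        import numpy as np
        from scipy.special import erfc

        v = 1.5                      # average linear velocity [m/d], hypothetical
        D = 0.1 * v                  # dispersion coeff. [m^2/d] from a 0.1 m dispersivity
        x = 38.3                     # travel distance [m] (the 125-ft well spacing)

        def breakthrough(t):
            """Relative concentration C/C0 at distance x for a step input at x = 0."""
            t = np.asarray(t, dtype=float)
            c = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
            if v * x / D < 100.0:    # second Ogata-Banks term; overflow-prone and negligible here
                c = c + np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
            return 0.5 * c

        print(np.round(breakthrough(np.linspace(20.0, 32.0, 7)), 3))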

  6. A predictive model for artificial mechanical cochlea

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz U.; Adiba, Afifa; Banerjee, Sourav

    2015-03-01

    To restore hearing when the inner ear is damaged, cochlear implantation is essential. The existing implantable cochlea is an electronic device, with components usually placed outside the ear, that receives sound from the environment, converts it into electrical impulses, and sends them to the auditory nerve, bypassing the damaged cochlea. However, due to growing demand, researchers are interested in fabricating an artificial mechanical cochlea to overcome the limitations of the electronic cochlea. Only a handful of studies published in the last few years have demonstrated fabrication of a basilar membrane mimicking cochlear operation. The basilar membrane plays the most important role in the human cochlea, responding to sonic frequencies by means of material properties that vary from the basal to the apical end. Only a few sonic frequencies have been sensed with the proposed models, and no process has been described for improving their frequency selectivity to sense the entire sonic frequency range. Thus, we argue that a predictive model is the missing link and an utmost necessity. Hence, in this study, we intend to develop a multi-scale predictive model of the basilar membrane, so that the sensing potential of an artificial cochlea can be designed and tuned predictively by altering the model parameters.

  7. Genetic models of homosexuality: generating testable predictions.

    PubMed

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  8. Genetic models of homosexuality: generating testable predictions

    PubMed Central

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  9. Robust model predictive control of Wiener systems

    NASA Astrophysics Data System (ADS)

    Biagiola, S. I.; Figueroa, J. L.

    2011-03-01

    Block-oriented models (BOMs) have been shown to be appealing and efficient as nonlinear representations for many applications. They are simple models that remain valid over a more extensive operating region than time-invariant linear models. In this work, Wiener models are considered. They are one of the most diffused BOMs, and their structure consists of linear dynamics in cascade with a static nonlinear block. In particular, the problem of controlling these systems in the presence of uncertainty is treated. The proposed methodology makes use of a robust identification procedure to obtain a robust model representing the uncertain system. This model is then employed to design a model predictive controller. The mathematical problem involved in the controller design is formulated in the context of the existing linear matrix inequality (LMI) theory. The main feature of this approach is that it takes advantage of the static nature of the nonlinearity, which makes it possible to solve the control problem by focusing only on the linear dynamics. This formulation results in a simplified design procedure, because the original nonlinear model predictive control (MPC) problem turns into a linear one.
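
    A minimal sketch of the Wiener structure, and of why the static nonlinearity simplifies control: if the output block is invertible, measurements can be mapped back into the linear domain. The first-order dynamics and tanh nonlinearity are illustrative choices, not the paper's identified model.

        import numpy as np

        # Wiener structure: linear dynamics x[k+1] = a*x[k] + b*u[k] in cascade
        # with a static output nonlinearity y = f(x). Values are illustrative.
        a, b = 0.9, 0.1
        f, f_inv = np.tanh, np.arctanh          # invertible static block

        def simulate(u, x0=0.0):
            x, y = x0, []
            for uk in u:
                x = a * x + b * uk
                y.append(f(x))
            return np.array(y)

        y = simulate(np.ones(50))
        # Mapping measurements through f_inv recovers the linear block's output,
        # so linear MPC machinery (e.g., the LMI design in the paper) can be applied.
        x_rec = f_inv(np.clip(y, -0.999, 0.999))
        print(round(y[-1], 3), round(x_rec[-1], 3))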

  10. Prediction failure of a wolf landscape model

    USGS Publications Warehouse

    Mech, L.D.

    2006-01-01

    I compared 101 wolf (Canis lupus) pack territories formed in Wisconsin during 1993-2004 to the logistic regression predictive model of Mladenoff et al. (1995, 1997, 1999). Of these, 60% were located in areas of putative habitat suitability <50%, while areas of suitability >50% remained unoccupied by known packs after 24 years of recolonization. This model was a poor predictor of wolf recolonizing locations in Wisconsin, apparently because it failed to consider the adaptability of wolves. Such models should be used cautiously in wolf-management or restoration plans.

  11. Modeling Benthic Sediment Processes to Predict Water ...

    EPA Pesticide Factsheets

    The benthic sediment acts as a huge reservoir of particulate and dissolved material (within interstitial water) which can contribute to the loading of contaminants and nutrients to the water column. A benthic sediment model is presented in this report to predict spatial and temporal benthic fluxes of nutrients and chemicals into the water column in Narragansett Bay. Quantifying benthic flux is essential to properly model water quality and ecology in estuarine and coastal systems.

  12. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories despite the large uncertainties in the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (the Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), as well as the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for

  13. Radiation accidents

    SciTech Connect

    Saenger, E.L.

    1986-09-01

    It is essential that emergency physicians understand how to manage patients contaminated by radioactive materials and/or exposed to external radiation sources. Contamination accidents require careful surveys to identify the metabolic pathway of the radionuclides involved, to guide prognosis and treatment. The level of treatment required will depend on careful surveys and meticulous decontamination. There is no specific therapy for the acute radiation syndrome. Prophylactic antibiotics are desirable. For severely exposed patients, treatment is similar to the supportive care given to patients undergoing organ transplantation. For high-dose extremity injury, no methods have been developed to reverse the fibrosing endarteritis that eventually leads to the tissue death so frequently found with this type of injury. Although the Three Mile Island episode of March 1979 created tremendous public concern, there were no radiation injuries; the contamination outside the reactor building and the release of radioiodine were negligible. The accidental fuel element meltdown at Chernobyl, USSR, resulted in many cases of acute radiation syndrome. More than 100,000 people were exposed to high levels of radioactive fallout. The general principles outlined here are applicable to accidents of that degree of severity.

  14. Predictive Models for Carcinogenicity and Mutagenicity ...

    EPA Pesticide Factsheets

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the database and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-throughput

  15. Seasonal Predictability in a Model Atmosphere.

    NASA Astrophysics Data System (ADS)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
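
    A crude numerical illustration of the climate-noise test (a large-sample AR(1) shortcut, not Madden's exact method): the variance of 90-day means of a synthetic series with a hidden slow signal exceeds what its day-to-day autocorrelation alone would predict.

        import numpy as np

        rng = np.random.default_rng(2)
        phi, n_days, n_seas = 0.7, 90, 200
        x = np.zeros(n_days * n_seas)
        for t in range(1, x.size):                             # AR(1) "weather"
            x[t] = phi * x[t - 1] + rng.normal()
        x += np.repeat(rng.normal(0.0, 0.3, n_seas), n_days)   # slow "signal"

        means = x.reshape(n_seas, n_days).mean(axis=1)         # 90-day means
        r1 = np.corrcoef(x[:-1], x[1:])[0, 1]                  # lag-1 autocorrelation
        noise_var = np.var(x) / n_days * (1 + r1) / (1 - r1)   # large-n AR(1) estimate
        print(f"seasonal-mean variance {np.var(means):.3f} vs climate noise {noise_var:.3f}")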

  16. Development of an in vitro porcine aorta model to study the stability of stent grafts in motor vehicle accidents.

    PubMed

    Darvish, Kurosh; Shafieian, Mehdi; Romanov, Vasily; Rotella, Vittorio; Salvatore, Michael D; Blebea, John

    2009-04-01

    Endovascular stent grafts for the treatment of thoracic aortic aneurysms have become increasingly utilized and yet their locational stability in moderate chest trauma is unknown. A high speed impact system was developed to study the stability of aortic endovascular stent grafts in vitro. A straight segment of porcine descending aorta with stent graft was constrained in a custom-made transparent urethane casing. The specimen was tested in a novel impact system at an anterior inclination of 45 deg and an average deceleration of 55 G, which represented a frontal automobile crash. Due to the shock of the impact, which was shown to be below the threshold of aortic injury, the stent graft moved 0.6 mm longitudinally. This result was repeatable. The presented experimental model may be helpful in developing future grafts to withstand moderate shocks experienced in motor vehicle accidents or other dynamic loadings of the chest.

  17. Model atmospheres, predicted spectra, and colors

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Theoretical models of stellar atmospheres and the process of forming a spectrum are reviewed with particular reference to the spectra of B stars. In the case of classical models the stellar atmosphere is thought to consist of plane parallel layers of gas in which radiative and hydrostatic equilibrium exist. No radiative energy is lost or gained in the model atmosphere, but the detailed shape of the spectrum is changed as a result of the interactions with the ionized gas. Predicted line spectra using statistical equilibrium, local thermodynamic equilibrium (LTE), and non-LTE physics are compared, and the determination of abundances is discussed. The limitations of classical modeling are examined. Models developed to demonstrate what motions in the upper atmosphere will do to the spectrum, and to explore the effects of using geometries different from plane parallel layers, are reviewed. In particular the problem of radiative transfer is addressed.

  18. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  19. Disease Prediction Models and Operational Readiness

    SciTech Connect

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  20. Predictive Modeling in Actinide Chemistry and Catalysis

    SciTech Connect

    Yang, Ping

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  1. Verification and Validation of Neutronic/Thermalhydraulic 3D-Time Dependent Model for Treatment of Super-critical States of Light water Research Reactors Accidents

    SciTech Connect

    Khaled, S.M.

    2015-07-01

    This work presents the verification and testing of both the neutronic and thermal-hydraulic response to positive reactivity-initiated power excursion accidents in small light water research reactors. Some research reactors have to build their own severe accident code systems. In this sense, a 3D space-time-dependent neutron diffusion model with thermal-hydraulic feedback has been introduced, compared and tested, both experimentally at a criticality of 14 cents and theoretically up to 1.5 $, against a number of similar codes. The results show that no core failure or moderator boiling is expected. (author)

  2. Probabilistic prediction models for aggregate quarry siting

    USGS Publications Warehouse

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of the parameter estimates derived for the Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated for predicting the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models so that they apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, the Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
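
    For one binary evidence pattern, the weights-of-evidence calculation reduces to log ratios of conditional probabilities; a short sketch with hypothetical cell counts:

        import numpy as np

        # Weights of evidence for one binary predictor pattern (hypothetical counts
        # of grid cells with/without quarries, inside/outside the pattern).
        d_in, d_out = 40.0, 10.0        # quarry cells inside / outside the pattern
        nd_in, nd_out = 2000.0, 8000.0  # non-quarry cells inside / outside

        w_plus = np.log((d_in / (d_in + d_out)) / (nd_in / (nd_in + nd_out)))
        w_minus = np.log((d_out / (d_in + d_out)) / (nd_out / (nd_in + nd_out)))
        print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast C = {w_plus - w_minus:.2f}")
        # Under conditional independence, a cell's posterior log odds of hosting a
        # quarry = prior log odds + W+ (or W-) for each pattern present (or absent).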

  3. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  4. Constructing predictive models of human running

    PubMed Central

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-01-01

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. PMID:25505131
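
    For readers unfamiliar with SLIP, the sketch below integrates a single stance phase: a point mass on a massless spring leg pivoting about a fixed foot, with stance ending when the leg returns to its rest length. The mass, stiffness and touchdown state are illustrative values, not parameters fitted by the authors.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Stance phase of the SLIP: point mass m on a massless spring leg
        # (stiffness k, rest length L0) pivoting about a foot at the origin.
        m, k, L0, g = 80.0, 20000.0, 1.0, 9.81

        def stance(t, s):
            x, y, vx, vy = s                       # CoM position and velocity
            L = np.hypot(x, y)                     # current leg length
            F = k * (L0 - L)                       # spring force along the leg
            return [vx, vy, F * x / (L * m), F * y / (L * m) - g]

        def liftoff(t, s):                         # stance ends at leg rest length
            return np.hypot(s[0], s[1]) - L0
        liftoff.terminal, liftoff.direction = True, 1.0

        s0 = [-L0 * np.sin(0.3), L0 * np.cos(0.3), 4.0, -1.0]   # touchdown state
        sol = solve_ivp(stance, (0.0, 1.0), s0, events=liftoff, max_step=1e-3)
        print(f"stance duration {sol.t[-1]:.3f} s, takeoff v = {sol.y[2:, -1]}")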

  5. A predictive geologic model of radon occurrence

    SciTech Connect

    Gregg, L.T.

    1990-01-01

    Earlier work by LeGrand on predictive geologic models for radon focused on hydrogeologic aspects of radon transport from a given uranium/radium source in a fractured crystalline rock aquifer, and included submodels for bedrock lithology (uranium concentration), topographic slope, and water-table behavior and characteristics. LeGrand's basic geologic model has been modified and extended into a submodel for crystalline rocks (Blue Ridge and Piedmont Provinces) and a submodel for sedimentary rocks (Valley and Ridge and Coastal Plain Provinces). Each submodel assigns a ranking of 1 to 15 to the bedrock type, based on (a) known or supposed uranium/thorium content, (b) petrography/lithology, and (c) structural features such as faults, shear or breccia zones, diabase dikes, and jointing/fracturing. The bedrock ranking is coupled with a generalized soil/saprolite model which ranks soil/saprolite type and thickness from 1 to 10. A given site is thus assessed a ranking of 1 to 150 as a guide to its potential for high radon occurrence in the upper meter or so of soil. Field trials of the model are underway, comparing model predictions with measured soil-gas concentrations of radon.

  6. Statistical Seasonal Sea Surface based Prediction Model

    NASA Astrophysics Data System (ADS)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a topic widely discussed by the scientific community, with results that remain unsatisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM), as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric and health-related parameters influenced by sea surface temperature as a defining factor of variability.
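
    The MCA step at the heart of such a model is compact: a singular value decomposition of the cross-covariance matrix between two anomaly fields yields paired patterns of maximum covariance. A sketch on synthetic data (not the S4CAST code):

        import numpy as np

        # Maximum Covariance Analysis: SVD of the cross-covariance between two
        # anomaly fields. The synthetic SST/rainfall data share one mode by design.
        rng = np.random.default_rng(1)
        nt, nx, ny = 40, 100, 60                   # seasons, SST points, rain points
        sig = rng.normal(size=nt)                  # shared mode of covariability
        sst = np.outer(sig, rng.normal(size=nx)) + 0.5 * rng.normal(size=(nt, nx))
        rain = np.outer(sig, rng.normal(size=ny)) + 0.5 * rng.normal(size=(nt, ny))

        sst -= sst.mean(axis=0)                    # anomalies w.r.t. climatology
        rain -= rain.mean(axis=0)
        C = sst.T @ rain / (nt - 1)                # cross-covariance matrix
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # Columns of U / rows of Vt are paired spatial patterns whose expansion
        # coefficients have maximum covariance.
        print("squared covariance fraction of mode 1:", s[0] ** 2 / np.sum(s ** 2))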

  7. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  8. Applications of predictive environmental strain models.

    PubMed

    Reardon, M J; Gonzalez, R R; Pandolf, K B

    1997-02-01

    Researchers at the U.S. Army Research Institute of Environmental Medicine have developed and validated numerical models capable of predicting the extent of physiologic strain and adverse terrain and weather-related medical consequences of military operations in harsh environments. A descriptive historical account is provided that details how physiologic models for hot and cold weather exposure have been integrated into portable field advisory devices, computer-based meteorologic planning software, and combat-oriented simulation systems. It is important that medical officers be aware of the existence of these types of decision support tools so that they can assure that outputs are interpreted in a balanced and medically realistic manner. Additionally, these modeling applications may facilitate timely preventive medicine planning and efficient dissemination of appropriate measures to prevent weather- and altitude-related illnesses and performance decrements. Such environmental response modeling applications may therefore be utilized to support deployment preventive medicine planning by field medical officers.

  9. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  10. Predictive mathematical models of cancer signalling pathways.

    PubMed

    Bachmann, J; Raue, A; Schilling, M; Becker, V; Timmer, J; Klingmüller, U

    2012-02-01

    Complex intracellular signalling networks integrate extracellular signals and convert them into cellular responses. In cancer cells, the tightly regulated and fine-tuned dynamics of information processing in signalling networks is altered, leading to uncontrolled cell proliferation, survival and migration. Systems biology combines mathematical modelling with comprehensive, quantitative, time-resolved data and is most advanced in addressing dynamic properties of intracellular signalling networks. Here, we introduce different modelling approaches and their application to medical systems biology, focusing on the identifiability of parameters in ordinary differential equation models and their importance in network modelling to predict cellular decisions. Two related examples are given, which include processing of ligand-encoded information and dual feedback regulation in erythropoietin (Epo) receptor signalling. Finally, we review the current understanding of how systems biology could foster the development of new treatment strategies in the context of lung cancer and anaemia.
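
    As a generic illustration of the kind of ODE model discussed (not the published Epo receptor model), here is a two-state pathway with negative feedback, which produces the transient-then-adapting response typical of such systems; all rate constants are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two-ODE pathway with negative feedback: ligand L activates X, and X
        # induces an inhibitor F that damps further activation. Rates invented.
        k_act, k_deg, k_fb, k_fdeg = 1.0, 0.5, 0.4, 0.1

        def pathway(t, y, L):
            X, F = y
            return [k_act * L / (1.0 + F) - k_deg * X,   # activation damped by F
                    k_fb * X - k_fdeg * F]               # feedback production/decay

        sol = solve_ivp(pathway, (0.0, 60.0), [0.0, 0.0], args=(1.0,), max_step=0.1)
        print(f"transient peak {sol.y[0].max():.2f}, steady level {sol.y[0, -1]:.2f}")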

  11. Progress towards a PETN Lifetime Prediction Model

    SciTech Connect

    Burnham, A K; Overturf III, G E; Gee, R; Lewis, P; Qiu, R; Phillips, D; Weeks, B; Pitchimani, R; Maiti, A; Zepeda-Ruiz, L; Hrousis, C

    2006-09-11

    Dinegar (1) showed that decreases in PETN surface area cause EBW detonator function times to increase. Thermal aging causes PETN to agglomerate, shrink, and densify, indicating a ''sintering'' process. It has long been a concern that the formation of a gap between the PETN and the bridgewire may lead to EBW detonator failure. These concerns have led us to develop a model to predict the rate of coarsening that occurs with age for thermally driven PETN powder (50% TMD). To understand PETN contributions to detonator aging we need three things: (1) curves describing the dependence of function time on specific surface area, density, and gap; (2) a measurement of the critical gap distance for no-fire as a function of density and surface area for various wire configurations; and (3) a model describing how specific surface area, density and gap change with time and temperature. We've had good success modeling high-temperature surface area reduction and function time increase using a phenomenological deceleratory kinetic model based on a distribution of parallel nth-order reactions having evenly spaced activation energies, where the weighting factors of the reactions follow a Gaussian distribution about the reaction with the mean activation energy (Figure 1). Unfortunately, the mean activation energy derived from this approach is high (typically {approx}75 kcal/mol), so that negligible sintering is predicted for temperatures below 40 C. To make more reliable predictions, we've established a three-part effort to understand PETN mobility. First, we've measured the rates of step movement and pit nucleation as a function of temperature from 30 to 50 C for single crystals. Second, we've measured the evaporation rate from single crystals and powders from 105 to 135 C to obtain an activation energy for evaporation. Third, we've pursued mechanistic kinetic modeling of surface mobility, evaporation, and ripening.
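
    The phenomenological model class described here is easy to state in code: parallel nth-order channels with evenly spaced activation energies and Gaussian weights. The pre-exponential factor, mean energy, width and order below are invented for illustration, not the fitted PETN values.

        import numpy as np

        # Parallel nth-order reactions with evenly spaced activation energies and
        # Gaussian weights. A, E and n are invented, not the fitted PETN values.
        R = 1.987e-3                           # kcal/(mol K)
        A, E_mean, E_sig, n = 1e37, 75.0, 2.0, 2.0
        E = np.linspace(E_mean - 4 * E_sig, E_mean + 4 * E_sig, 41)
        w = np.exp(-0.5 * ((E - E_mean) / E_sig) ** 2)
        w /= w.sum()

        def remaining(t, T):
            k = A * np.exp(-E / (R * T))       # rate constant of each channel
            x = (1.0 + (n - 1.0) * np.outer(t, k)) ** (-1.0 / (n - 1.0))
            return x @ w                       # Gaussian-weighted sum over channels

        t = np.array([3.15e7, 3.15e8])         # 1 and 10 years, in seconds
        for T, label in ((313.0, " 40 C"), (373.0, "100 C")):
            print(label, np.round(remaining(t, T), 3))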

  12. A SCOPING STUDY: Development of Probabilistic Risk Assessment Models for Reactivity Insertion Accidents During Shutdown In U.S. Commercial Light Water Reactors

    SciTech Connect

    S. Khericha

    2011-06-01

    This report documents a scoping study to develop generic simplified fuel damage risk models for quantitative analysis of inadvertent reactivity insertion events during shutdown (SD) in light water pressurized and boiling water reactors. In the past, nuclear fuel reactivity accidents have been analyzed both deterministically and probabilistically for at-power and SD operations of nuclear power plants (NPPs). Since then, many NPPs have had power up-rates and longer refueling intervals, which resulted in fuel configurations that may potentially respond differently (in an undesirable way) to reactivity accidents. Also, as shown in a recent event, several inadvertent operator actions caused a potential nuclear fuel reactivity insertion accident during SD operations. Such inadvertent operator actions are likely to be plant- and operation-state specific and could lead to accident sequences. This study is an outcome of the concern that arose after the inadvertent withdrawal of control rods at Dresden Unit 3 in 2008, when operator actions in the plant caused three control rods to be withdrawn from the reactor without the knowledge of the main control room operator. The purpose of this Standardized Plant Analysis Risk (SPAR) model development project is to develop simplified SPAR models that can be used by staff analysts to perform risk analyses of operating events and/or conditions occurring during SD operation. These types of accident scenarios are dominated by operator actions (e.g., misalignment of valves, failure to follow procedures, and errors of commission). Human error probabilities specific to this model were assessed using the methodology developed for SPAR model human error evaluations. The event trees, fault trees, basic event data, and data sources for the model are provided in the report. The end state is defined as the reactor becoming critical. The scoping study includes a brief literature search/review of historical events, developments of

  13. Predictive modeling by the cerebellum improves proprioception.

    PubMed

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  14. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amounts of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and the resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method that calculates the release rates of radionuclides by comparing measurements of the air concentration of a radionuclide, or its dose rate in the environment, with those calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March, due to the wet venting and hydrogen explosion at Unit 1; midnight of 14 March, when the SRV (safety relief valve) was opened three times at Unit 2; the morning and night of 15 March; and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative

  15. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.

    1986-01-01

    A methodology is established to predict thermal barrier coating life in an environment similar to that experienced by gas turbine airfoils. Experiments were conducted to determine failure modes of the thermal barrier coating. Analytical studies were employed to derive a life prediction model. A review of experimental and flight service components, as well as laboratory post evaluations, indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the topologically complex metal-ceramic interface. This mechanical failure mode is clearly influenced by thermal exposure effects, as shown in experiments conducted to study thermal pre-exposure and thermal cycle-rate effects. The preliminary life prediction model developed focuses on the two major damage modes identified in the critical experiments tasks. The first of these involves a mechanical driving force, resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads. The second is an environmental driving force based on experimental results, and is believed to be related to bond coat oxidation. It is also believed that the growth of this oxide scale influences the intensity of the mechanical driving force.

  16. Entity Network Prediction Using Multitype Topic Models

    NASA Astrophysics Data System (ADS)

    Shiozaki, Hitohiro; Eguchi, Koji; Ohkawa, Takenao

    Conveying information about who, what, when and where is a primary purpose of some genres of documents, typically news articles. Statistical models that capture dependencies between named entities and topics can play an important role. Although some relationships between who and where should be mentioned in such a document, no statistical topic models explicitly address the textual interactions between a who-entity and a where-entity when handling such information. This paper presents a statistical model that directly captures the dependencies between an arbitrary number of word types, such as who-entities, where-entities and topics, mentioned in each document. We show that this multitype topic model performs better at making predictions on entity networks, in which each vertex represents an entity and each edge weight represents how closely a pair of entities at the incident vertices is related, through our experiments on predictions of who-entities and links between them. We also demonstrate the scale-free property in the weighted networks of entities extracted from written mentions.

  17. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Cook, T. S.; Kim, K. S.

    1986-01-01

    This is the second annual report of the first 3-year phase of a 2-phase, 5-year program. The objectives of the first phase are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low pressure plasma sprayed NiCrAlY bond coat, and a Rene' 80 substrate. Task I was to evaluate TBC failure mechanisms. Both bond coat oxidation and bond coat creep have been identified as contributors to TBC failure. Key property determinations have also been made for the bond coat and the top coat, including tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion. Task II is to develop TBC life prediction models for the predominant failure modes. These models will be developed based on the results of thermomechanical experiments and finite element analysis. The thermomechanical experiments have been defined and testing initiated. Finite element models have also been developed to handle TBCs and are being utilized to evaluate different TBC failure regimes.

  18. Prediction of Chemical Function: Model Development and ...

    EPA Pesticide Factsheets

    The United States Environmental Protection Agency's Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
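
    As a concrete illustration of the random-forest classification step, the sketch below trains a cross-validated classifier mapping descriptors to functional roles. The descriptor matrix and labels are random placeholders, not ExpoCast data, and scikit-learn is an assumed tool choice; the point is the shape of the workflow, not the specific results.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder data: rows are chemicals, columns are physicochemical and
    # structural descriptors; labels are harmonized functional roles.
    X = rng.normal(size=(500, 20))
    y = rng.choice(["solvent", "plasticizer", "fragrance"], size=500)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)

    # Cross-validated accuracy, mirroring the cross-validated model building.
    print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    # Fit on all data, then predict the likely functional role of a new chemical.
    clf.fit(X, y)
    print(clf.predict(rng.normal(size=(1, 20))))
    ```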

  19. Modeling and Prediction of Krueger Device Noise

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each of the four noise components' respective spectral functions, far-field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with design parameters, while the latter reveals its importance in relation to other airframe noise components.

  20. Gamma-ray Pulsars: Models and Predictions

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.; White, Nicholas E. (Technical Monitor)

    2000-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^12 - 10^13 G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers of the primary curvature emission around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. Next-generation gamma-ray telescopes sensitive to GeV-TeV emission will provide critical tests of pulsar acceleration and emission mechanisms.

  1. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
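
    The quadratic meta-model idea can be sketched compactly: run the model at a modest number of parameter settings, regress a forecast-error score on the linear, squared, and cross terms of the parameters, then minimize the cheap fitted surface instead of the expensive model. The snippet below is a minimal illustration with a synthetic cost function standing in for NWP verification scores; the parameter count, ranges, and values are placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)

    # Hypothetical calibration set: each row is one trial setting of the
    # three free turbulence parameters; "cost" stands in for a forecast
    # verification score (synthetic quadratic bowl plus noise here).
    params = rng.uniform(-1.0, 1.0, size=(40, 3))
    true_opt = np.array([0.2, -0.3, 0.1])
    cost = ((params - true_opt) ** 2).sum(axis=1) + 0.01 * rng.normal(size=40)

    # Quadratic meta-model: regress cost on linear, cross and squared terms.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    mm = LinearRegression().fit(poly.fit_transform(params), cost)

    # Minimize the fitted surface over a coarse grid (cheap: no NWP runs).
    grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 41)] * 3), -1).reshape(-1, 3)
    best = grid[np.argmin(mm.predict(poly.transform(grid)))]
    print("meta-model optimum:", best)  # should land near true_opt
    ```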

  2. Lagrangian predictability characteristics of an Ocean Model

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia

    2014-11-01

    The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. In this regard, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to pair particle dispersion, at mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to the observed data, indicating that KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
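
    For readers unfamiliar with the FSLE diagnostic, the sketch below shows one common way to estimate it from an ensemble of trajectory-pair separations: for each scale delta, measure the mean time tau for the separation to grow by a fixed factor r, and set lambda(delta) = ln(r)/<tau>. The synthetic, exponentially separating pairs are placeholders for drifter or model trajectories.

    ```python
    import numpy as np

    def fsle(separations, dt, deltas, r=np.sqrt(2.0)):
        """Finite-Scale Lyapunov Exponent from pair-separation time series.
        separations: (n_pairs, n_times) distances; dt: sampling interval;
        deltas: scales to probe; r: amplification factor.
        Returns lambda(delta) = ln(r) / <tau(delta -> r*delta)>."""
        lam = np.full(len(deltas), np.nan)
        for i, d in enumerate(deltas):
            taus = []
            for s in separations:
                hit_d = np.nonzero(s >= d)[0]       # first time scale d reached
                if hit_d.size == 0:
                    continue
                hit_rd = np.nonzero(s[hit_d[0]:] >= r * d)[0]
                if hit_rd.size:
                    taus.append(hit_rd[0] * dt)     # growth time d -> r*d
            if taus and np.mean(taus) > 0:
                lam[i] = np.log(r) / np.mean(taus)
        return lam

    # Synthetic chaotic regime: exponentially separating pairs, for which
    # the FSLE should plateau at the Lyapunov rate (0.5 here).
    rng = np.random.default_rng(0)
    t = np.arange(0, 30, 0.1)
    seps = rng.uniform(0.5, 2.0, size=(200, 1)) * np.exp(0.5 * t)
    print(fsle(seps, dt=0.1, deltas=[2.0, 4.0, 8.0, 16.0]))
    ```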

  3. Model predictive control of MSMPR crystallizers

    NASA Astrophysics Data System (ADS)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for the model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.
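
    To make the MPC machinery concrete, here is a minimal unconstrained linear receding-horizon controller of the kind that could act on a reduced third-order model. The system matrices, state ordering, and weights are hypothetical placeholders; a practical implementation would add constraints and a disturbance model.

    ```python
    import numpy as np

    def mpc_step(A, B, x0, x_ref, horizon=10, u_weight=0.1):
        """One step of unconstrained linear MPC: compute the input sequence
        minimizing a quadratic tracking cost over the horizon, and return
        only the first input (receding-horizon principle)."""
        n, m = B.shape
        # Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
        F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
        G = np.zeros((horizon * n, horizon * m))
        for k in range(horizon):
            for j in range(k + 1):
                Ak = np.linalg.matrix_power(A, k - j)
                G[k * n:(k + 1) * n, j * m:(j + 1) * m] = Ak @ B
        # Quadratic cost ||x - x_ref||^2 + u_weight*||u||^2 via least squares
        lhs = np.vstack([G, np.sqrt(u_weight) * np.eye(horizon * m)])
        rhs = np.concatenate([np.tile(x_ref, horizon) - F @ x0,
                              np.zeros(horizon * m)])
        u = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
        return u[:m]

    # Hypothetical discrete-time third-order reduced model (states: mean
    # size, size variance, solute concentration; inputs: residence time,
    # inlet solute concentration) -- the numbers are placeholders.
    A = np.array([[0.90, 0.05, 0.10],
                  [0.00, 0.80, 0.05],
                  [0.00, 0.00, 0.95]])
    B = np.array([[0.10, 0.00],
                  [0.05, 0.00],
                  [0.00, 0.20]])
    print(mpc_step(A, B, x0=np.zeros(3), x_ref=np.array([1.0, 0.2, 0.5])))
    ```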

  4. An Anisotropic Hardening Model for Springback Prediction

    SciTech Connect

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-05

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  5. Prediction models from CAD models of 3D objects

    NASA Astrophysics Data System (ADS)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  6. Visual Performance Prediction Using Schematic Eye Models

    NASA Astrophysics Data System (ADS)

    Schwiegerling, James Theodore

    The goal of visual modeling is to predict the visual performance, or a change in performance, of an individual from a model of the human visual system. In designing a model of the human visual system, two distinct functions are considered. The first is the production of an image incident on the retina by the optical system of the eye, and the second is the conversion of this image into a perceived image by the retina and brain. The eye optics are evaluated using raytracing techniques familiar to the optical engineer. The effects of retinal and brain function are combined with the raytracing results by analyzing the modulation of the retinal image. Each of these processes is important for evaluating the performance of the entire visual system. Techniques for converting the abstract system performance measures used by optical engineers into clinically applicable measures such as visual acuity and contrast sensitivity are developed in this dissertation. Furthermore, a methodology for applying videokeratoscopic height data to the visual model is outlined. These tools are useful in modeling the visual effects of corrective lenses, ocular maladies and refractive surgeries. The modeling techniques are applied to examples of soft contact lenses, keratoconus, radial keratotomy, photorefractive keratectomy and automated lamellar keratoplasty. The modeling tools developed in this dissertation are meant to be general and modular. As improvements to the measurements of the properties and functionality of the various visual components are made, the new information can be incorporated into the visual system model. Furthermore, the examples discussed here represent only a small subset of the applications of the visual model. Additional ocular maladies and emerging refractive surgeries can be modeled as well.

  7. Predictive models of malignant transudative pleural effusions

    PubMed Central

    Gude, Francisco; Toubes, María E.; Lama, Adriana; Suárez-Antelo, Juan; San-José, Esther; González-Barcala, Francisco Javier; Golpe, Antonio; Álvarez-Dobaño, José M.; Rábade, Carlos; Rodríguez-Núñez, Nuria; Díaz-Louzao, Carla; Valdés, Luis

    2017-01-01

    Background There are no firm recommendations on when cytology should be performed in pleural transudates, since some malignant pleural effusions (MPEs) behave biochemically as transudates. The objective was to assess when it would be justified to perform cytology on pleural transudates. Methods Consecutive patients with transudative pleural effusion (PE) were enrolled and divided into two groups: malignant and non-MPE. Logistic regression analysis was used to estimate the probability of malignancy. Two prognostic models were considered: (I) clinical-radiological variables; and (II) a combination of clinical-radiological and analytical variables. Calibration and discrimination [receiver operating characteristics (ROC) curves and area under the curve (AUC)] were performed. Results A total of 281 pleural transudates were included: 26 malignant and 255 non-malignant. The AUCs obtained with Model 1 (left PE, radiological images compatible with malignancy, absence of dyspnea, and serosanguinous appearance of the fluid) and Model 2 (the variables of Model 1 plus CEA) were 0.973 and 0.995, respectively. Although no false negatives are found with Models 1 and 2 at probabilities of 11% and 14%, respectively, bootstrapping techniques show that to find no false negatives in 95% of other possible samples would require lowering the cut-off points for the aforementioned probabilities to 3% (Model 1) and 4% (Model 2), respectively. The false positive results are 32 (Model 1) and 18 (Model 2), with no false negatives. Conclusions The applied models have a high discriminative ability to predict when a transudative PE may be of neoplastic origin, with the model that adds an analytical variable to the clinical-radiological variables being superior. PMID:28203412
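
    The modeling workflow reported here (logistic regression, predicted probability of malignancy, AUC-based discrimination, and a false-negative-driven cut-off) can be sketched in a few lines. The snippet below uses randomly generated stand-ins for the study's predictors and outcomes, with scikit-learn as an assumed tool choice, so the printed AUCs are meaningless; it only illustrates the structure of Models 1 and 2.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)

    # Placeholder predictors for Model 1 (all binary: left-sided PE,
    # radiology compatible with malignancy, absence of dyspnea,
    # serosanguinous fluid) plus CEA for Model 2 (continuous).
    n = 281
    X1 = rng.integers(0, 2, size=(n, 4)).astype(float)
    cea = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 1))
    X2 = np.hstack([X1, cea])
    y = rng.binomial(1, 0.1, size=n)  # ~10% malignant, as a placeholder

    for name, X in [("Model 1", X1), ("Model 2", X2)]:
        model = LogisticRegression().fit(X, y)
        p = model.predict_proba(X)[:, 1]  # estimated probability of malignancy
        print(name, "AUC:", round(roc_auc_score(y, p), 3))

    # A clinical cut-off would then be chosen on p so that the false-negative
    # rate is acceptable, e.g. request cytology whenever p exceeds ~3-4%.
    ```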

  8. Modeling of leachable 137Cs in throughfall and stemflow for Japanese forest canopies after Fukushima Daiichi Nuclear Power Plant accident.

    PubMed

    Loffredo, Nicolas; Onda, Yuichi; Kawamori, Ayumi; Kato, Hiroaki

    2014-09-15

    The Fukushima accident dispersed significant amounts of radioactive cesium (Cs) in the landscape. Our research investigated, from June 2011 to November 2013, the mobility of leachable Cs in forest canopies. In particular, (137)Cs and (134)Cs activity concentrations were measured in rainfall, throughfall, and stemflow in broad-leaf and cedar forests in an area located 40 km from the power plant. Leachable (137)Cs loss was modeled by a double exponential (DE) model. This model could not reproduce the variation in activity concentration observed. In order to refine the DE model, the main measurable physical parameters (rainfall intensity, wind velocity, and snowfall occurrence) were assessed, and rainfall was identified as the dominant factor controlling the observed variation. A corrective factor was then developed to incorporate rainfall intensity into an improved DE model. With the original DE model, we estimated the total (137)Cs loss by leaching from canopies to be 72 ± 4%, 67 ± 4%, and 48 ± 2% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. In contrast, with the improved DE model, the total (137)Cs loss by leaching was estimated to be 34 ± 2%, 34 ± 2%, and 16 ± 1% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. The improved DE model corresponds better to the observed data in the literature. Understanding (137)Cs and (134)Cs forest dynamics is important for forecasting future contamination of forest soils around the FDNPP. It also provides a basis for understanding forest transfers in future potential nuclear disasters.
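
    The two-pool structure of the DE model, and the idea of a rainfall-dependent corrective factor, can be illustrated with a short curve-fitting sketch. The functional form of the corrective factor below (a multiplicative linear term in rainfall intensity) is a guess for illustration, as are all parameter values and the synthetic data; the paper's actual formulation may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def de_model(t, a1, k1, a2, k2):
        """Double-exponential (DE) model: fast and slow leaching pools."""
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    def de_model_rain(X, a1, k1, a2, k2, b):
        """Improved DE model with a hypothetical multiplicative corrective
        factor in rainfall intensity R; the paper's exact form may differ."""
        t, R = X
        return de_model(t, a1, k1, a2, k2) * (1.0 + b * R)

    rng = np.random.default_rng(0)

    # Synthetic observations (placeholders, not the study's data): time in
    # weeks since deposition, rainfall intensity, measured concentration.
    t = np.linspace(0.0, 120.0, 60)
    R = rng.uniform(0.0, 5.0, size=t.size)
    obs = de_model_rain((t, R), 5.0, 0.08, 1.0, 0.005, 0.1)
    obs += 0.05 * rng.normal(size=t.size)

    popt, _ = curve_fit(de_model_rain, (t, R), obs,
                        p0=[4.0, 0.1, 1.0, 0.01, 0.0], maxfev=10000)
    print("fitted a1, k1, a2, k2, b:", np.round(popt, 3))
    ```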

  9. Internal Flow Thermal/Fluid Modeling of STS-107 Port Wing in Support of the Columbia Accident Investigation Board

    NASA Technical Reports Server (NTRS)

    Sharp, John R.; Kittredge, Ken; Schunk, Richard G.

    2003-01-01

    As part of the aero-thermodynamics team supporting the Columbia Accident Investigation Board (CAB), the Marshall Space Flight Center was asked to perform engineering analyses of internal flows in the port wing. The aero-thermodynamics team was split into internal flow and external flow teams with the support being divided between shorter timeframe engineering methods and more complex computational fluid dynamics. In order to gain a rough order of magnitude type of knowledge of the internal flow in the port wing for various breach locations and sizes (as theorized by the CAB to have caused the Columbia re-entry failure), a bulk venting model was required to input boundary flow rates and pressures to the computational fluid dynamics (CFD) analyses. This paper summarizes the modeling that was done by MSFC in Thermal Desktop. A venting model of the entire Orbiter was constructed in FloCAD based on Rockwell International's flight substantiation analyses and the STS-107 reentry trajectory. Chemical equilibrium air thermodynamic properties were generated for SINDA/FLUINT's fluid property routines from a code provided by Langley Research Center. In parallel, a simplified thermal mathematical model of the port wing, including the Thermal Protection System (TPS), was based on more detailed Shuttle re-entry modeling previously done by the Dryden Flight Research Center. Once the venting model was coupled with the thermal model of the wing structure with chemical equilibrium air properties, various breach scenarios were assessed in support of the aero-thermodynamics team. The construction of the coupled model and results are presented herein.

  10. Predictive modelling of ferroelectric tunnel junctions

    NASA Astrophysics Data System (ADS)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  11. Modelling the global atmospheric transport and deposition of radionuclides from the Fukushima Dai-ichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Christoudias, T.; Lelieveld, J.

    2013-02-01

    We modeled the global atmospheric dispersion and deposition of radionuclides released from the Fukushima Dai-ichi nuclear power plant accident. The EMAC atmospheric chemistry - general circulation model was used, with circulation dynamics nudged towards ERA-Interim reanalysis data. We applied a resolution of approximately 0.5 degrees in latitude and longitude (T255). The model accounts for emissions and transport of the radioactive isotopes 131I and 137Cs, and removal processes through precipitation, particle sedimentation and dry deposition. In addition, we simulated the release of 133Xe, a noble gas that can be regarded as a passive transport tracer of contaminated air. The source terms are based on Chino et al. (2011) and Stohl et al. (2012); especially the emission estimates of 131I are associated with a high degree of uncertainty. The calculated concentrations have been compared to station observations by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO). We calculated that about 80% of the radioactivity from Fukushima which was released to the atmosphere deposited into the Pacific Ocean. In Japan a large inhabited land area was contaminated by more than 40 kBq m-2. We also estimated the inhalation and 50-year dose by 137Cs, 134Cs and 131I to which the people in Japan are exposed.

  12. Modelling the global atmospheric transport and deposition of radionuclides from the Fukushima Dai-ichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Christoudias, T.; Lelieveld, J.

    2012-09-01

    We modeled the global atmospheric dispersion and deposition of radionuclides released from the Fukushima Dai-ichi nuclear power plant accident. The EMAC atmospheric chemistry - general circulation model was used, with circulation dynamics nudged towards ERA-Interim reanalysis data. We applied a resolution of approximately 0.5 degrees in latitude and longitude (T255). The model accounts for emissions and transport of the radioactive isotopes 131I and 137Cs, and removal processes through precipitation, particle sedimentation and dry deposition. In addition, we simulated the release of 133Xe, a noble gas that can be regarded as a passive transport tracer of contaminated air. The source terms are based on Stohl et al. (2012) and Chino et al. (2011); especially the emission estimates of 131I are associated with a high degree of uncertainty. The calculated concentrations have been compared to station observations by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO). We calculated that about 80% of the radioactivity from Fukushima which was released to the atmosphere deposited into the Pacific Ocean. In Japan a land area of 34 000 km2 around the reactors, inhabited by nearly 10 million people, was contaminated by more than 40 kBq m-2. We also estimated the inhalation and 50-yr dose by 137Cs and 131I to which the people in Japan have been exposed.

  13. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.; Sheffler, K. D.

    1986-01-01

    The objective of this program is to establish a methodology to predict Thermal Barrier Coating (TBC) life on gas turbine engine components. The approach involves experimental life measurement coupled with analytical modeling of relevant degradation modes. The coating being studied is a flight qualified two layer system, designated PWA 264, consisting of a nominal ten mil layer of seven percent yttria partially stabilized zirconia plasma deposited over a nominal five mil layer of low pressure plasma deposited NiCoCrAlY. Thermal barrier coating degradation modes being investigated include: thermomechanical fatigue, oxidation, erosion, hot corrosion, and foreign object damage.

  14. SAS4A: A computer model for the analysis of hypothetical core disruptive accidents in liquid metal reactors

    SciTech Connect

    Tentner, A.M.; Birgersson, G.; Cahalan, J.E.; Dunn, F.E.; Kalimullah; Miles, K.J.

    1987-01-01

    To ensure that the public health and safety are protected under any accident conditions in a Liquid Metal Fast Breeder Reactor (LMFBR), many accidents are analyzed for their potential consequences. The SAS4A code system, described in this paper, provides such an analysis capability, including the ability to analyze low probability events such as Hypothetical Core Disruptive Accidents (HCDAs). The SAS4A code system has been designed to simulate all the events that occur in an LMFBR core during the initiating phase of a Hypothetical Core Disruptive Accident. During such postulated accident scenarios as the Loss-of-Flow and Transient Overpower events, a large number of interrelated physical phenomena occur during a relatively short time. These phenomena include transient heat transfer and hydrodynamic events, coolant boiling, and fuel and cladding melting and relocation. Due to the strong neutronic feedback present in a nuclear reactor, these events can significantly influence the reactor power. The SAS4A code system is used in the safety analysis of nuclear reactors in order to estimate the energetic potential of very low probability accidents. The results of SAS4A simulations are also used by reactor designers in order to build safer reactors and eliminate the possibility of any accident which could endanger the public safety.

  15. Commuting accidents in the German chemical industry.

    PubMed

    Zepf, Kirsten Isabel; Letzel, Stephan; Voelter-Mahlknecht, Susanne; Wriede, Ulrich; Husemann, Britta; Escobar Pinzón, Luis Carlos

    2010-01-01

    Due to accident severity and the extent of claim payments, commuting accidents are a significant expense factor in German industry. The aim of the present study was therefore the identification of risk factors for commuting accidents in a German chemical company. A retrospective analysis of commuting accidents recorded between 1990 and 2003 was conducted in a major chemical company in Germany. A logistic regression model was calculated in order to determine factors influencing the duration of work inability resulting from commuting accidents. The analysed data included 5,484 employees with commuting accidents. Cars (33.1%) and bicycles (30.5%) were the most common types of vehicles used by commuters who had an accident. The highest number of commuting accidents was observed in the age group under 26 yr. Accidents on the route from the work site to the worker's residence were less frequently observed, but they caused longer periods of work inability than accidents on the way to the work site. The longest periods of work inability were found in the groups of motorcyclists and older employees. The present study identifies specific groups at risk for commuting accidents. The data of the present investigation also underline the need for developing group-specific prevention strategies.

  16. Critical conceptualism in environmental modeling and prediction.

    PubMed

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  17. Predictive modelling of boiler fouling. Final report.

    SciTech Connect

    Chatwani, A

    1990-12-31

    A spectral element method embodying Large Eddy Simulation based on Re-Normalization Group theory for simulating sub-grid-scale viscosity was chosen for this work. This method is embodied in a computer code called NEKTON. NEKTON solves the unsteady, 2D or 3D, incompressible Navier-Stokes equations by a spectral element method. The code was later extended to include variable density and multiple reactive species effects at low Mach numbers, and to compute the transport of large particles governed by inertia. Transport of small particles is computed by treating them as trace species. Code computations were performed for a number of test conditions typical of flow past a deep tube bank in a boiler. Results indicate qualitatively correct behavior. Predictions of deposition rates and deposit shape evolution also show correct qualitative behavior. These simulations are the first attempts to compute flow field results at realistic flow Reynolds numbers of the order of 10^4. Code validation was not done; comparison with experiment also could not be made, as many phenomenological model parameters, e.g., sticking or erosion probabilities and their dependence on experimental conditions, were not known. The predictions however demonstrate the capability to predict fouling from first principles. Further work is needed: use of large or massively parallel machines; code validation; parametric studies, etc.

  18. Regression model analysis of the decreasing trend of cesium-137 concentration in the atmosphere since the Fukushima accident.

    PubMed

    Kitayama, Kyo; Ohse, Kenji; Shima, Nagayoshi; Kawatsu, Kencho; Tsukada, Hirofumi

    2016-11-01

    The decreasing trend of the atmospheric (137)Cs concentration in two cities in Fukushima prefecture was analyzed with a regression model, to clarify the relation between the decrease parameter in the model and the trend, and to compare the trend with that observed after the Chernobyl accident. The (137)Cs particle concentration measurements were conducted at an urban Fukushima site and a rural Date site from September 2012 to June 2015. The (137)Cs particle concentrations were separated into two groups: particles with aerodynamic diameter greater than 1.1 μm (coarse particles) and particles with aerodynamic diameter below 1.1 μm (fine particles). The averages of the measured concentrations were 0.1 mBq m(-3) at the Fukushima and Date sites. The measured concentrations were fitted with the regression model, which decomposed them into two components: trend and seasonal variation. The trend concentration included parameters for a constant and for an exponential decrease. The parameter for the constant differed slightly between the Fukushima and Date sites. The parameter for the exponential decrease was similar in all cases, and much higher than the physical radioactive decay rate, except for the concentration in the fine particles at the Date site. The annual decreasing rates of the (137)Cs concentration evaluated from the trend concentration ranged from 44 to 53% y(-1), with an average and standard deviation of 49 ± 8% y(-1) for all cases in 2013. In the other years, the decreasing rates also varied only slightly across cases. This indicated that the decreasing trend of the (137)Cs concentration was nearly unchanged with location and ground contamination level in the three years after the accident. The (137)Cs activity per aerosol particle mass also decreased with the same trend as the (137)Cs concentration in the atmosphere. The results indicated that the decreasing trend of the atmospheric (137)Cs concentration was related with the reduction of the (137)Cs
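
    The trend-plus-seasonal decomposition lends itself to a compact regression sketch: a constant, an exponential decrease, and an annual harmonic fitted simultaneously, with the annual decreasing rate recovered as 1 - exp(-lambda). The series below is synthetic, and the functional form is a plausible reading of the description above rather than the authors' exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cs_model(t, const, amp, lam, s1, s2):
        """Atmospheric 137Cs concentration: constant + exponential decrease
        (the trend) plus a first-harmonic annual seasonal variation.
        t is in years; lam is the exponential decrease parameter (1/y)."""
        seasonal = s1 * np.sin(2 * np.pi * t) + s2 * np.cos(2 * np.pi * t)
        return const + amp * np.exp(-lam * t) + seasonal

    # Synthetic monthly series (mBq/m3) built with a ~50%/y decrease, as a
    # placeholder for the Fukushima/Date measurements.
    t = np.arange(0.0, 3.0, 1.0 / 12.0)
    obs = cs_model(t, 0.02, 0.30, np.log(2.0), 0.02, 0.01)
    obs = obs + 0.005 * np.random.default_rng(0).normal(size=t.size)

    popt, _ = curve_fit(cs_model, t, obs, p0=[0.01, 0.2, 0.5, 0.0, 0.0])
    lam = popt[2]
    print("annual decreasing rate: %.0f%% per year" % (100 * (1 - np.exp(-lam))))
    ```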

  19. Model predictive formation control of helicopter systems

    NASA Astrophysics Data System (ADS)

    Saffarian, Mehdi

    In this thesis, a robust framework for formation control of a group of helicopters is proposed and designed. The dynamic model of the helicopter has been developed and verified through simulations. The control framework is constructed using two main control schemes for navigation of a helicopter group in three-dimensional (3D) environments. The two schemes are designed to maintain the position of one helicopter with respect to one or two other neighboring members, respectively. The developed parameters can uniquely define the position of the helicopters with respect to each other and can be used for other aerial and underwater vehicles such as airplanes, spacecraft and submarines. Also, since the approach is modular, it is possible to use it for any desired number and formation of helicopters in the group. Using the defined control parameters, two decentralized controllers are designed based on the Nonlinear Model Predictive Control (NMPC) technique. The framework's performance has been tested through simulation of different formation scenarios.

  20. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  1. Review of the chronic exposure pathways models in MACCS (MELCOR Accident Consequence Code System) and several other well-known probabilistic risk assessment models

    SciTech Connect

    Tveten, U.

    1990-06-01

    The purpose of this report is to document the results of the work performed by the author in connection with the following task, performed for US Nuclear Regulatory Commission, (USNRC) Office of Nuclear Regulatory Research, Division of Systems Research: MACCS Chronic Exposure Pathway Models: Review the chronic exposure pathway models implemented in the MELCOR Accident Consequence Code System (MACCS) and compare those models to the chronic exposure pathway models implemented in similar codes developed in countries that are members of the OECD. The chronic exposures concerned are via: the terrestrial food pathways, the water pathways, the long-term groundshine pathway, and the inhalation of resuspended radionuclides pathway. The USNRC has indicated during discussions of the task that the major effort should be spent on the terrestrial food pathways. There is one chapter for each of the categories of chronic exposure pathways listed above.

  2. A predictive fitness model for influenza

    NASA Astrophysics Data System (ADS)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
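
    The core prediction step of such a fitness model is simple to state: each strain's frequency is propagated by exponential growth at its inferred fitness and then renormalized. The sketch below assumes a toy linear fitness with hypothetical selection coefficients alpha and beta; the published model infers the fitness components from population-genetic data of all previous strains rather than fixing them.

    ```python
    import numpy as np

    def predict_frequencies(freqs, epitope_muts, nonepitope_muts,
                            alpha=0.5, beta=0.25):
        """Propagate strain frequencies one season ahead under a minimal
        fitness model: f_i = alpha * (adaptive epitope changes)
                           - beta  * (deleterious non-epitope mutations).
        alpha and beta are hypothetical selection coefficients."""
        f = alpha * np.asarray(epitope_muts) - beta * np.asarray(nonepitope_muts)
        w = np.asarray(freqs) * np.exp(f)  # exponential growth by fitness
        return w / w.sum()                 # renormalize to frequencies

    # Three circulating strains: current frequency, epitope and
    # non-epitope mutation counts relative to the previous season.
    print(predict_frequencies([0.6, 0.3, 0.1], [0, 2, 3], [1, 1, 4]))
    ```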

  3. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Sheffler, K. D.; Demasi, J. T.

    1985-01-01

    A methodology was established to predict thermal barrier coating life in an environment simulative of that experienced by gas turbine airfoils. Specifically, work is being conducted to determine the failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction model(s) for the important failure mode(s). An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of various driving forces, such as temperature, thermal cycle frequency, environment, and coating thickness, on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low cycle rate furnace testing in air and in argon clearly shows a dramatic increase of spalling life in the non-oxidizing environments.

  4. A predictive fitness model for influenza.

    PubMed

    Luksza, Marta; Lässig, Michael

    2014-03-06

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.

  5. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J.; Sheffler, K.

    1984-01-01

    The objective of this program is to develop an integrated life prediction model accounting for all potential life-limiting Thermal Barrier Coating (TBC) degradation and failure modes including spallation resulting from cyclic thermal stress, oxidative degradation, hot corrosion, erosion, and foreign object damage (FOD). The mechanisms and relative importance of the various degradation and failure modes will be determined, and the methodology to predict predominant mode failure life in turbine airfoil application will be developed and verified. An empirically based correlative model relating coating life to parametrically expressed driving forces such as temperature and stress will be employed. The two-layer TBC system being investigated, designated PWA264, currently is in commercial aircraft revenue service. It consists of an inner low pressure chamber plasma-sprayed NiCoCrAlY metallic bond coat underlayer (4 to 6 mils) and an outer air plasma-sprayed 7 w/o Y2O3-ZrO2 (8 to 12 mils) ceramic top layer.

  6. Estimation of radionuclide ((137)Cs) emission rates from a nuclear power plant accident using the Lagrangian Particle Dispersion Model (LPDM).

    PubMed

    Park, Soon-Ung; Lee, In-Hye; Ju, Jae-Won; Joo, Seung Jin

    2016-10-01

    A methodology for estimating the emission rate of (137)Cs with the Lagrangian Particle Dispersion Model (LPDM), using monitored (137)Cs concentrations around a nuclear power plant, has been developed. The method was employed with the MM5 meteorological model in a 600 km × 600 km model domain, with a horizontal grid scale of 3 km × 3 km centered at the Fukushima nuclear power plant, to estimate the (137)Cs emission rate for the accident period from 00 UTC 12 March to 00 UTC 6 April 2011. The Lagrangian particles are released continuously at a rate of one particle per minute at the first model level, about 15 m above the power plant site. The method was able to produce (137)Cs emission rate estimates that compare quite reasonably with those of other studies, suggesting its potential usefulness for estimating the emission rate from an accident-affected power plant without detailed inventories of reactors, fuel assemblies, and spent fuels. The advantage of this method is that it is not complicated: it can be applied on the basis of a single forward LPDM simulation together with monitored concentrations around the power plant, in contrast to other, inverse models. It was also found that continuously monitored radionuclide concentrations from as many sites as possible, located in all directions around the power plant, are required to obtain accurate continuous emission rates from the accident-affected power plant. The current methodology can also be used to verify the radionuclide emission estimates previously used by other modeling groups for cases of intermittent or discontinuous sampling.
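
    One way to realize the unit-release idea is as a linear inverse problem: a single forward LPDM run with a 1 Bq/h source yields, for each release period, the simulated concentration at each monitoring site per unit emission, and the emission rates are then the non-negative coefficients that best reproduce the monitored concentrations. The sketch below uses non-negative least squares on synthetic numbers; the station count, period count, and magnitudes are placeholders, and this is an illustrative simplification rather than the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def estimate_emission_rates(sim_unit, obs):
        """Estimate period-wise emission rates from one forward LPDM run
        with a unit release rate. sim_unit[i, j] is the simulated
        concentration at station i per unit emission (1 Bq/h) in release
        period j; obs[i] is the monitored concentration at station i.
        Returns non-negative least-squares emission rates per period."""
        q, _ = nnls(sim_unit, obs)
        return q

    # Hypothetical example: 6 monitoring sites, 3 release periods.
    rng = np.random.default_rng(0)
    M = rng.uniform(0.0, 1e-12, size=(6, 3))     # unit-release sensitivities
    true_q = np.array([1e14, 5e13, 2e14])        # emission rates, Bq/h
    obs = M @ true_q + rng.normal(0.0, 0.1, 6)   # noisy monitored values
    print(estimate_emission_rates(M, obs))       # should be close to true_q
    ```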

  7. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    PubMed

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  8. Detailed source term estimation of atmospheric release during the Fukushima Dai-ichi nuclear power plant accident by coupling atmospheric and oceanic dispersion models

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo

    2014-05-01

    Temporal variations of the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion process are essential for evaluating the environmental impacts and resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog deposition of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for the ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models, assuming a unit release rate (1 Bq h-1), was adopted to estimate the source term at the FNPP1 using air dose rates and air and sea-surface concentrations. The results suggested that the major releases of radionuclides from the FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term reproduced well the local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations. Our dispersion simulations also revealed that the most highly contaminated areas around FNPP1 were created from 15 to 16 March by complicated interactions among rainfall (wet deposition), plume movements, and phase properties (gas or particle) of I-131 and release rates

  9. Severe accident analysis using dynamic accident progression event trees

    NASA Astrophysics Data System (ADS)

    Hakobyan, Aram P.

    At present, the development and analysis of Accident Progression Event Trees (APETs) are performed in a manner that is computationally time consuming, difficult to reproduce and can also be phenomenologically inconsistent. One of the principal deficiencies lies in the static nature of conventional APETs. In conventional event tree techniques, the sequence of events is pre-determined in a fixed order based on expert judgment. The main objective of this PhD dissertation was to develop a software tool (ADAPT) for automated APET generation using the concept of dynamic event trees. As implied by the name, in dynamic event trees the order and timing of events are determined by the progression of the accident. The tool determines the branching times from a severe accident analysis code based on user-specified criteria for branching. It assigns user-specified probabilities to every branch, tracks the total branch probability, and truncates branches based on the given pruning/truncation rules to avoid an unmanageable number of scenarios. The functions of the dynamic APETs developed include prediction of the conditions, timing, and location of containment failure or bypass leading to the release of radioactive material, and calculation of the probabilities of those failures. Thus, scenarios that can potentially lead to early containment failure or bypass, such as through accident-induced failure of steam generator tubes, are of particular interest. The work also focuses on the treatment of uncertainties in severe accident phenomena such as creep rupture of major RCS components, hydrogen burn, containment failure, timing of power recovery, etc. Although the ADAPT methodology (Analysis of Dynamic Accident Progression Trees) could be applied to any severe accident analysis code, in this dissertation the approach is demonstrated by applying it to the MELCOR code [1]. A case study is presented involving station blackout with the loss of auxiliary feedwater system for a
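
    A toy sketch of the dynamic-event-tree bookkeeping described above: branches spawn at code-determined points with user-specified conditional probabilities, total branch probabilities are tracked, and branches below a truncation threshold are pruned. The branching alternatives and probabilities are invented, and a trivial stub stands in for the severe accident code.

    ```python
    # Toy dynamic event tree: branches are spawned when a simulated plant
    # variable crosses a user-defined criterion; branch probabilities are
    # tracked and low-probability branches are pruned.
    from dataclasses import dataclass, field

    PRUNE = 1e-4          # truncation rule: drop branches below this probability

    @dataclass
    class Branch:
        prob: float
        history: list = field(default_factory=list)

    def advance(branch, step):
        """Stand-in for a severe-accident-code run segment; returns the
        branching alternatives as (outcome label, conditional probability)."""
        # e.g. at each step a safety valve either recloses or sticks open
        return [("recloses", 0.95), ("sticks_open", 0.05)]

    def expand(n_steps):
        frontier = [Branch(1.0)]
        for step in range(n_steps):
            nxt = []
            for br in frontier:
                for outcome, p in advance(br, step):
                    child = Branch(br.prob * p, br.history + [outcome])
                    if child.prob >= PRUNE:   # pruning keeps the tree manageable
                        nxt.append(child)
            frontier = nxt
        return frontier

    leaves = expand(4)
    print(len(leaves), sum(b.prob for b in leaves))  # retained scenarios, coverage
    ```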

  10. Two criteria for evaluating risk prediction models.

    PubMed

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
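
    The two criteria are straightforward to compute from predicted risks and outcomes in validation data; a minimal sketch (ignoring the influence-function inference developed in the paper):

    ```python
    # Sketch of PCF(q) and PNF(p) computed from risk scores and case status.
    import numpy as np

    def pcf(risk, case, q):
        """Proportion of cases found in the fraction q at highest risk."""
        order = np.argsort(-risk)                 # highest risk first
        n_top = int(np.ceil(q * len(risk)))
        return case[order][:n_top].sum() / case.sum()

    def pnf(risk, case, p):
        """Smallest population fraction (highest risk first) that covers
        a proportion p of all cases."""
        order = np.argsort(-risk)
        cum_cases = np.cumsum(case[order]) / case.sum()
        n_needed = np.searchsorted(cum_cases, p) + 1
        return n_needed / len(risk)

    rng = np.random.default_rng(1)
    risk = rng.beta(2, 8, 10_000)        # hypothetical predicted risks
    case = rng.binomial(1, risk)         # disease outcomes consistent with them
    print(pcf(risk, case, 0.20), pnf(risk, case, 0.80))
    ```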

  11. Light-Weight Radioisotope Heater Unit Safety Analysis Report (LWRHU-SAR). Volume II. Accident model document

    SciTech Connect

    Johnson, E.W.

    1985-10-01

    The purposes of this volume (AMD) are to: identify all malfunctions, both singular and multiple, which can occur during the complete mission profile that could lead to release outside the clad of the radioisotopic material contained therein; provide estimates of occurrence probabilities associated with these various accidents; evaluate the response of the LWRHU (or its components) to the resultant accident environments; and associate the potential event history with test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  12. Test Data for USEPR Severe Accident Code Validation

    SciTech Connect

    J. L. Rempe

    2007-05-01

    This document identifies data that can be used for assessing various models embodied in severe accident analysis codes. Phenomena considered in this document, which were limited to those anticipated to be of interest in assessing severe accidents in the USEPR developed by AREVA, include:

    • Fuel Heatup and Melt Progression
    • Reactor Coolant System (RCS) Thermal Hydraulics
    • In-Vessel Molten Pool Formation and Heat Transfer
    • Fuel/Coolant Interactions during Relocation
    • Debris Heat Loads to the Vessel
    • Vessel Failure
    • Molten Core Concrete Interaction (MCCI) and Reactor Cavity Plug Failure
    • Melt Spreading and Coolability
    • Hydrogen Control

    Each section of this report discusses one phenomenon of interest to the USEPR. Within each section, an effort is made to describe the phenomenon and identify what data are available for modeling it. As noted in this document, models in US accident analysis codes (MAAP, MELCOR, and SCDAP/RELAP5) differ. Where possible, this report identifies previous assessments that illustrate the impact of modeling differences on predicting various phenomena. Finally, recommendations regarding the status of data available for modeling USEPR severe accident phenomena are summarized.

  13. Developing Models for Predictive Climate Science

    SciTech Connect

    Drake, John B; Jones, Philip W

    2007-01-01

    The Community Climate System Model results from a multi-agency collaboration designed to construct cutting-edge climate science simulation models for a broad research community. Predictive climate simulations are currently being prepared for the petascale computers of the near future. Modeling capabilities are continuously being improved in order to provide better answers to critical questions about Earth's climate. Climate change and its implications are front page news in today's world. Could global warming be responsible for the July 2006 heat waves in Europe and the United States? Should more resources be devoted to preparing for an increase in the frequency of strong tropical storms and hurricanes like Katrina? Will coastal cities be flooded due to a rise in sea level? The National Climatic Data Center (NCDC), which archives all weather data for the nation, reports that global surface temperatures have increased over the last century, and that the rate of increase is three times greater since 1976. Will temperatures continue to climb at this rate, will they decline again, or will the rate of increase become even steeper? To address such a flurry of questions, scientists must adopt a systematic approach and develop a predictive framework. With responsibility for advising on energy and technology strategies, the DOE is dedicated to advancing climate research in order to elucidate the causes of climate change, including the role of carbon loading from fossil fuel use. Thus, climate science--which by nature involves advanced computing technology and methods--has been the focus of a number of DOE's SciDAC research projects. Dr. John Drake (ORNL) and Dr. Philip Jones (LANL) served as principal investigators on the SciDAC project, 'Collaborative Design and Development of the Community Climate System Model for Terascale Computers.' The Community Climate System Model (CCSM) is a fully-coupled global system that provides state-of-the-art computer simulations of the

  14. Radiation protection issues on preparedness and response for a severe nuclear accident: experiences of the Fukushima accident.

    PubMed

    Homma, T; Takahara, S; Kimura, M; Kinase, S

    2015-06-01

    Radiation protection issues on preparedness and response for a severe nuclear accident are discussed in this paper, based on the experiences following the accident at the Fukushima Daiichi nuclear power plant. The criteria for use in nuclear emergencies in the Japanese emergency preparedness guide were based on the recommendations of International Commission on Radiological Protection (ICRP) Publications 60 and 63. Although the decision-making process for implementing protective actions relied heavily on computer-based predictive models prior to the accident, urgent protective actions, such as evacuation and sheltering, were implemented effectively based on the plant conditions. As there were no recommendations and criteria for long-term protective actions in the emergency preparedness guide, the recommendations of ICRP Publications 103, 109, and 111 were taken into consideration in determining the temporary relocation of inhabitants of heavily contaminated areas. These recommendations were very useful in deciding the emergency protective actions to take in the early stages of the Fukushima accident. However, some suggestions have been made for improving emergency preparedness and response in the early stages of a severe nuclear accident.

  15. Estimating the magnitude of prediction uncertainties for the APLE model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  16. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
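
    A schematic of the heuristic in code rather than a spreadsheet: interpolate maneuvers per month from a small lookup table in the solar flux index, then drain a fuel budget with a simple engine model. The table values, engine parameters, and flux forecast below are invented for illustration.

    ```python
    # Heuristic lifetime sketch: lookup-table maneuver frequency plus a
    # simple engine model; every number here is an invented placeholder.
    import numpy as np

    flux_grid = np.array([70, 100, 150, 200, 250])          # F10.7 index
    maneuvers_per_month = np.array([0.5, 1.2, 3.0, 6.5, 11.0])

    def predict_lifetime(fuel_kg, flux_forecast, dv_per_maneuver=0.4,
                         isp=220.0, mass_kg=3500.0, g0=9.80665):
        """Months until fuel exhaustion given a monthly flux forecast."""
        for month, flux in enumerate(flux_forecast):
            n = np.interp(flux, flux_grid, maneuvers_per_month)
            # rocket equation per maneuver (dv in m/s, isp in s)
            fuel_kg -= n * mass_kg * (1 - np.exp(-dv_per_maneuver / (isp * g0)))
            if fuel_kg <= 0:
                return month
        return len(flux_forecast)

    print(predict_lifetime(120.0, np.full(120, 140.0)))   # months of life
    ```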

  17. Prediction of catastrophes: an experimental model.

    PubMed

    Peters, Randall D; Le Berre, Martine; Pomeau, Yves

    2012-08-01

    Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following and followed by long periods of "ripening." Major earthquakes surely belong to the class of "catastrophic" events. Because of the space-time scales involved, an experimental approach is often difficult, if not impossible, however desirable it may be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in two separate dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for unstable creep). Hope thus exists that similar changes in the response to noise could forewarn of catastrophes in other situations, where such precursor effects should manifest early enough.

  18. Predictive modeling of low solubility semiconductor alloys

    NASA Astrophysics Data System (ADS)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
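
    A minimal Gillespie-style kinetic Monte Carlo step for competing surface events, schematic of the simulation approach named above; the event list, attempt frequencies, and barriers are invented placeholders, and no lattice is modeled.

    ```python
    # Minimal Gillespie-style kinetic Monte Carlo step; all species, attempt
    # frequencies (nu) and barriers (Eb) are invented placeholders.
    import numpy as np

    kT = 0.05  # eV, roughly a growth temperature near 580 K (illustrative)
    events = {                       # event: (nu in 1/s, Eb in eV)
        "Ga_adatom_hop":  (1e13, 1.2),
        "Bi_adatom_hop":  (1e13, 0.7),   # mobile Bi favours droplet formation
        "Bi_incorporate": (1e13, 1.0),
        "Bi_desorb":      (1e13, 1.1),
    }

    def kmc_step(rng):
        names = list(events)
        rates = np.array([nu * np.exp(-eb / kT) for nu, eb in events.values()])
        total = rates.sum()
        chosen = rng.choice(names, p=rates / total)   # pick event with prob ~ rate
        dt = rng.exponential(1.0 / total)             # advance the clock
        return chosen, dt

    rng = np.random.default_rng(2)
    t, counts = 0.0, {name: 0 for name in events}
    for _ in range(10_000):
        ev, dt = kmc_step(rng)
        counts[ev] += 1
        t += dt
    print(t, counts)   # simulated time (s) and event statistics
    ```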

  19. Implementing a predictive modeling program, part II: Use of motivational interviewing in a predictive modeling program.

    PubMed

    Calhoun, Jean; Admire, Kaye S

    2005-01-01

    This is the second article of a two-part series about issues encountered in implementing a predictive modeling program. Part I looked at how to effectively implement a program and discussed helpful hints and lessons learned for case managers who are required to change their approach to patients. In Part II, we discuss the readiness to change model, examine the spirit of motivational interviewing and related techniques, and explore how motivational interviewing is different from more traditional interviewing and assessment methods.

  20. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    SciTech Connect

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  1. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. The analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, owing to its low Bayesian Information Criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise within the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
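
    A covariate-free simplification of the idea: a two-component Poisson mixture fitted by EM, clustering individuals into low- and high-rate groups. The mixture regression models of the paper additionally make the rates (and, in the concomitant-variable class, the weights) depend on covariates; the data here are simulated.

    ```python
    # EM sketch for a two-component Poisson mixture (covariate-free
    # simplification of the mixture regression models discussed above).
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(8)
    y = np.concatenate([rng.poisson(1.0, 700), rng.poisson(6.0, 300)])

    pi, lam = 0.5, np.array([0.5, 5.0])          # initial guesses
    for _ in range(200):
        # E-step: responsibility of the low-rate component for each y
        p0 = pi * poisson.pmf(y, lam[0])
        p1 = (1 - pi) * poisson.pmf(y, lam[1])
        r0 = p0 / (p0 + p1)
        # M-step: update the mixing weight and the component rates
        pi = r0.mean()
        lam[0] = (r0 * y).sum() / r0.sum()
        lam[1] = ((1 - r0) * y).sum() / (1 - r0).sum()

    print(round(pi, 2), np.round(lam, 2))   # ~0.7 and rates near (1, 6)
    ```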

  2. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems

    DTIC Science & Technology

    2008-01-01

    The integrated model has been demonstrated by analysing the failure in the Therac-25 sociotechnical system. Therac-25 was an X-ray treatment machine... Baxter (2003) developed a three-layer model for the Therac-25 system: the regulation authorities, the company who developed the system, and the... [Figure 7: Integrating the fault-error-failure chains of the programmer, company, and regulation authorities in the Therac-25 failure (design and certification)]

  3. [Study of prediction models for oil thickness based on spectral curve].

    PubMed

    Sun, Peng; Song, Mei-Ping; An, Ju-Bai

    2013-07-01

    Nowadays, oil spill accidents at sea occur frequently. Estimating the amount of spilled oil is a practical problem, as it supports subsequent response and loss assessment. With the rapid development of hyperspectral remote sensing technology, estimating the oil thickness has become possible. First, a series of oil thicknesses were tested with the AvaSpec spectrometer to obtain their corresponding spectral curves. Then, the characteristics of the spectral curves were extracted to analyze their relationship with oil thickness. The study shows that oil thickness is strongly correlated with variables based on hyperspectral positions, such as R(g) and R(o), and with vegetation indexes such as RDVI, TVI and Haboudane. Curve fitting, a BP neural network and an SVD iteration method were chosen to build the prediction models for oil thickness. Finally, an analysis and evaluation of each estimation model are provided.

  4. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
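
    A sketch of the variance-based analysis, using the SALib package with a cheap toy function standing in for the Gaussian-process emulator of Polyphemus/Polair3D; the input names and ranges are illustrative, not the study's actual uncertain inputs.

    ```python
    # Sobol' sensitivity sketch with SALib; a toy function stands in for the
    # emulator. (The saltelli sampler is used here; newer SALib releases also
    # offer SALib.sample.sobol.)
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["emission_factor", "wind_perturbation", "release_height"],
        "bounds": [[0.3, 3.0], [-20.0, 20.0], [20.0, 120.0]],
    }

    X = saltelli.sample(problem, 1024)      # N*(2D+2) model evaluations

    def emulator(x):                        # stand-in for the GP emulator
        e, w, h = x
        return e * np.exp(-abs(w) / 30.0) + 0.01 * h * abs(w)

    Y = np.apply_along_axis(emulator, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], np.round(Si["S1"], 2))))  # first-order
    print(dict(zip(problem["names"], np.round(Si["ST"], 2))))  # total-order
    ```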

  5. Model predictive control of a wind turbine modelled in Simpack

    NASA Astrophysics Data System (ADS)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are increasingly extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control which has recently received more attention in the wind industry is Model Predictive Control (MPC). In an MPC framework, a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine to
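
    A minimal linear MPC sketch in the spirit described above: a simplified model predicts the outputs over a horizon, and a quadratic program selects the input sequence subject to actuator constraints, of which only the first input is applied. The two-state surrogate model and limits are invented, not the IME6.0 model.

    ```python
    # Minimal linear MPC: predict over a horizon with a simplified model,
    # solve a QP, apply the first input. Matrices are illustrative.
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.95, 0.10], [0.0, 0.90]])   # toy two-state turbine surrogate
    B = np.array([[0.0], [0.05]])
    N = 20                                      # prediction horizon
    x0 = np.array([1.0, 0.0])

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 1.0]      # actuator (e.g. pitch-rate) limit
    cp.Problem(cp.Minimize(cost), constr).solve()
    print(u.value[:, 0])   # only the first input of the sequence is applied
    ```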

  6. Woodpecker cavity aeration: a predictive model.

    PubMed

    Ar, Amos; Barnea, Anat; Yom-Tov, Yoram; Mersten-Katz, Cynthia

    2004-12-15

    We studied characteristics of the Syrian woodpecker (Dendrocopos syriacus) cavities in the field and a laboratory model, and rates of gas exchange in the laboratory. Night temperature of occupied cavities is 4.3 degrees C higher than empty ones, representing energy savings of approximately 24%. Oxygen conductance (GNO2) of an empty cavity is 7.1 ml[STPD] (Torr h)(-1), and is affected by winds at velocities up to 0.8 m/s. Day and night body temperatures were 42.0 and 40.1 degrees C, respectively. Steady-state O2 consumption rates (MO2) were 3.49 +/- 0.49 and 2.53 +/- 0.26 ml[STPD] (g h)(-1) during day and night respectively -- higher than predicted by allometry. A mathematical model describing PO2 in a cavity, taking into consideration MO2, GNO2, heat convection and wind speed, from the moment birds inhabit it, was developed. It shows that on the average, one woodpecker staying in its cavity at night does not encounter hypoxic conditions. However, in nest cavities with below the average GNO2, with more inhabitants (e.g. during the breeding season), hypoxia may become a problem.

  7. Predicting expressway crash frequency using a random effect negative binomial model: A case study in China.

    PubMed

    Ma, Zhuanglin; Zhang, Honglu; Chien, Steven I-Jy; Wang, Jin; Dong, Chunjiao

    2017-01-01

    To investigate the relationship between crash frequency and potential influencing factors, accident data for events occurring on a 50 km long expressway in China, comprising 567 crash records (2006-2008), were collected and analyzed. Both the fixed-length and the homogeneous longitudinal-grade methods were applied to divide the study expressway section into segments. A negative binomial (NB) model and a random effect negative binomial (RENB) model were developed to predict crash frequency. The parameters of both models were determined using the maximum likelihood (ML) method, and a mixed stepwise procedure was applied to examine the significance of explanatory variables. Three explanatory variables, including longitudinal grade, road width, and ratio of longitudinal grade to curve radius (RGR), were found to significantly affect crash frequency. The marginal effects of the significant explanatory variables on crash frequency were analyzed. Model performance was assessed by the relative prediction error and the cumulative standardized residual. The results show that the RENB model outperforms the NB model. It was also found that model performance with the fixed-length segment method is superior to that with the homogeneous longitudinal-grade segment method.
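
    A hedged sketch of the (fixed-effect) NB crash-frequency model on simulated segments; the covariate names mirror the significant factors reported above, but the data and coefficients are invented, and the random-effect extension is not shown.

    ```python
    # NB crash-frequency sketch on synthetic segments (all data simulated).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 200
    df = pd.DataFrame({
        "grade": rng.uniform(0.0, 4.0, n),          # longitudinal grade (%)
        "road_width": rng.uniform(10.0, 16.0, n),   # m
        "rgr": rng.uniform(0.0, 0.02, n),           # grade / curve radius
    })
    mu = np.exp(-1.0 + 0.25 * df["grade"] - 0.05 * df["road_width"]
                + 60 * df["rgr"])
    # NB draws with mean mu (numpy's n/p parametrisation)
    df["crashes"] = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

    X = sm.add_constant(df[["grade", "road_width", "rgr"]])
    nb = sm.GLM(df["crashes"], X,
                family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print(nb.summary())
    ```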

  8. A cross-scale numerical modeling system for management support of oil spill accidents.

    PubMed

    Azevedo, Alberto; Oliveira, Anabela; Fortunato, André B; Zhang, Joseph; Baptista, António M

    2014-03-15

    A flexible 2D/3D oil spill modeling system addressing the distinct nature of the surface and water column fluids, major oil weathering, and improved retention/reposition processes in coastal zones is presented. The system integrates hydrodynamic, transport and oil weathering modules, which can be combined to offer descriptions of different complexity as required by applications across the river-to-ocean continuum. Features include accounting for different composition and rheology in the surface and water column mixtures, as well as spreading, evaporation, water-in-oil emulsification, shoreline retention, dispersion and dissolution. The use of unstructured grids provides flexibility and efficiency in handling spills in complex geometries and across scales. The use of high-order Eulerian-Lagrangian methods allows for computational efficiency and for handling key processes in ways consistent with their distinct mathematical nature and time scales. The modeling system is tested through a suite of synthetic, laboratory and realistic-domain benchmarks, which demonstrate robust handling of key processes and of 2D/3D couplings. The application of the modeling system to a spill scenario at the entrance of a port in a coastal lagoon illustrates the power of the approach to represent spills that occur in coastal regions with complex boundaries and bathymetry.
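
    A schematic Lagrangian step of the kind such systems build on: advect surface parcels with a placeholder velocity field, add random-walk diffusion, and decay parcel mass by first-order evaporation. Real systems replace each piece with hydrodynamic and weathering modules; every number below is illustrative.

    ```python
    # Schematic Lagrangian oil-parcel transport with toy weathering.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 5_000
    xy = np.zeros((n, 2))            # parcel positions (m), spill at origin
    mass = np.full(n, 1.0)           # kg of oil per parcel

    dt, K, k_evap = 60.0, 1.0, 1e-5  # s, m^2/s eddy diffusivity, 1/s evaporation

    def velocity(xy, t):
        """Placeholder flow field; a real system interpolates model currents."""
        u = 0.3 + 0.05 * np.sin(2 * np.pi * t / 44700.0)   # tidal-ish u (m/s)
        return np.column_stack([np.full(len(xy), u), np.zeros(len(xy))])

    t = 0.0
    for _ in range(600):             # ten hours of 60 s steps
        xy += velocity(xy, t) * dt
        xy += rng.normal(0.0, np.sqrt(2 * K * dt), xy.shape)   # random walk
        mass *= np.exp(-k_evap * dt)                           # evaporation
        t += dt

    print(xy.mean(axis=0), mass.sum())   # plume centroid and remaining oil
    ```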

  9. Risk-based modeling of early warning systems for pollution accidents.

    PubMed

    Grayman, W M; Males, R M

    2002-01-01

    An early warning system is a mechanism for detecting, characterizing and providing notification of a source water contamination event (spill event) in order to mitigate the impact of contamination. Spill events are highly probabilistic occurrences, with major spills, which can have very significant impacts on raw water sources of drinking water, being relatively rare. A systematic method for designing and operating early warning systems that considers the highly variable, probabilistic nature of many aspects of the system is described. The methodology accounts for the probability of spills, the behavior of monitoring equipment, variable hydrology, and the probability of obtaining information about spills independent of a monitoring system. Spill Risk, a risk-based model using Monte Carlo simulation techniques, has been developed, and its utility has been demonstrated as part of an AWWA Research Foundation sponsored project. The model has been applied to several hypothetical river situations and to an actual section of the Ohio River. Additionally, the model has been systematically applied to a wide range of conditions in order to develop general guidance on the design of early warning systems.
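
    A minimal Monte Carlo sketch of the design question such a model answers: given distributions for spill size and river velocity, what fraction of spills is detected in time to act? The distributions and thresholds are invented placeholders, not those of the Spill Risk model.

    ```python
    # Monte Carlo sketch: probability that a monitor detects a spill with
    # enough lead time before it reaches the intake (toy distributions).
    import numpy as np

    rng = np.random.default_rng(4)
    N = 100_000

    distance_km = 40.0                 # monitor-to-intake reach
    velocity = rng.lognormal(np.log(2.0), 0.4, N)    # km/h river transport
    spill_mass = rng.lognormal(np.log(5.0), 1.0, N)  # tonnes released
    detect_limit = 1.0                 # tonnes (sensor + dilution), assumed

    detected = spill_mass > detect_limit   # sensor registers the plume
    warning_h = distance_km / velocity     # lead time if detected
    useful = detected & (warning_h > 6.0)  # 6 h assumed to shut the intake

    print("P(detect) =", detected.mean())
    print("P(timely warning) =", useful.mean())
    ```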

  10. Analysis of Kuosheng Station Blackout Accident Using MELCOR 1.8.4

    SciTech Connect

    Wang, S.-J.; Chien, C.-S.; Wang, T.-C.; Chiang, K.-S

    2000-11-15

    The MELCOR code, developed by Sandia National Laboratories, is a fully integrated, relatively fast-running code that models the progression of severe accidents in commercial light water nuclear power plants (NPPs). A specific station blackout (SBO) accident for the Kuosheng (BWR-6) NPP is simulated using the MELCOR 1.8.4 code. The MELCOR input deck for Kuosheng NPP is established based on Kuosheng NPP design data and the MELCOR users' guides. The initial steady-state conditions are generated with a developed self-initialization algorithm. The main severe accident phenomena and the fission product release fractions associated with the SBO accident were simulated. The predicted results are plausible and as expected in light of current understanding of severe accident phenomena. The uncertainty of this analysis is briefly discussed. The important features of MELCOR 1.8.4 are described. The estimated results provide useful information for the probabilistic risk assessment (PRA) of Kuosheng NPP. This tool will be applied to the PRA, the severe accident analysis, and the severe accident management study of Kuosheng NPP in the near future.

  11. The determinants of fishing vessel accident severity.

    PubMed

    Jin, Di

    2014-05-01

    The study examines the determinants of fishing vessel accident severity in the Northeastern United States using vessel accident data from the U.S. Coast Guard for 2001-2008. Vessel damage and crew injury severity equations were estimated separately utilizing the ordered probit model. The results suggest that fishing vessel accident severity is significantly affected by several types of accidents. Vessel damage severity is positively associated with loss of stability, sinking, daytime wind speed, vessel age, and distance to shore. Vessel damage severity is negatively associated with vessel size and daytime sea level pressure. Crew injury severity is also positively related to the loss of vessel stability and sinking.
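
    An ordered probit sketch on simulated vessel data; the covariates echo the kinds of factors reported above, but the data, cut points, and coefficients are invented. It assumes the OrderedModel class available in recent statsmodels releases.

    ```python
    # Ordered probit sketch for accident severity on synthetic vessel data;
    # assumes statsmodels >= 0.13 for OrderedModel.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(5)
    n = 500
    X = pd.DataFrame({
        "loss_of_stability": rng.binomial(1, 0.15, n),
        "wind_speed": rng.gamma(4.0, 3.0, n),        # kt, illustrative
        "vessel_length": rng.uniform(10.0, 40.0, n)  # m
    })
    latent = (1.2 * X["loss_of_stability"] + 0.05 * X["wind_speed"]
              - 0.04 * X["vessel_length"] + rng.standard_normal(n))
    codes = np.digitize(latent, [-0.5, 0.8, 2.0])    # 0=minor .. 3=total loss
    severity = pd.Series(pd.Categorical(codes, categories=[0, 1, 2, 3],
                                        ordered=True))

    res = OrderedModel(severity, X, distr="probit").fit(method="bfgs",
                                                        disp=False)
    print(res.summary())   # signs should mirror the synthetic coefficients
    ```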

  12. Supercomputer predictive modeling for ensuring space flight safety

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Smirnov, N. N.; Nikitin, V. F.

    2015-04-01

    The development of new types of rocket engines, as well as the upgrading of existing engines, requires computer-aided design and mathematical tools for supercomputer modeling of all the basic processes of mixing, ignition, combustion and outflow through the nozzle. Even small upgrades and changes introduced into existing rocket engines without proper simulation have caused the severe accidents at launch sites witnessed recently. The paper presents the results of developing, verifying and validating a computer code that makes it possible to simulate the unsteady processes of ignition and combustion in rocket engines.

  13. Applying quantile regression for modeling equivalent property damage only crashes to identify accident blackspots.

    PubMed

    Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin

    2014-05-01

    Hot spot identification (HSID) aims to identify potential sites - roadway segments, intersections, crosswalks, interchanges, ramps, etc. - with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to the misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean as in most methods in practice, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of traditional NB model in dealing
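
    A sketch of the screening idea on simulated equivalent-PDO scores: fit an upper quantile (here the 90th) with quantile regression and flag sites whose observed score exceeds their site-specific quantile prediction. Data and covariates are invented.

    ```python
    # Quantile-regression hot-spot sketch on synthetic EPDO crash scores.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 400
    aadt = rng.uniform(2_000, 20_000, n)                 # traffic volume
    curve = rng.binomial(1, 0.3, n)                      # sharp-curve indicator
    epdo = (0.002 * aadt + 8 * curve) * rng.lognormal(0, 0.6, n)

    X = sm.add_constant(np.column_stack([aadt, curve]))
    q90 = sm.QuantReg(epdo, X).fit(q=0.9)                # 90th-percentile model
    print(q90.params)

    # Sites whose observed EPDO exceeds their site-specific 90th-percentile
    # prediction are flagged as candidate hot spots.
    flagged = epdo > q90.predict(X)
    print("flagged:", flagged.sum())
    ```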

  14. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.

  15. Predictive modelling for EAST divertor operation

    NASA Astrophysics Data System (ADS)

    Chen, YiPing

    2011-06-01

    The predictive modelling study of the divertor operation in the EAST tokamak [B. Wan et al., Nucl. Fusion 49, 104011 (2009)] with double null (DN) configuration is carried out by using the two-dimensional edge plasma code B2.5-SOLPS5.0 [D. P. Coster, X. Bonnin et al., J. Nucl. Mater. 337-339, 366 (2005)]. The modelling study includes the particle and power balance in the scrape-off-layer (SOL), the operation parameters of plasma density, temperature and plasma heat fluxes at the separatrix, the target plates and the wall, and the effect of gas puffing, drifts, and the vertical target plate on the divertor operation. The fluid model for the edge plasma is applied using the real magnetohydrodynamic (MHD) equilibrium from the MHD equilibrium code EFIT [L. L. Lao et al., Nucl. Fusion 25, 1611 (1985)] and the real divertor geometry of the device. Before the EAST tokamak starts its experimental programme of divertor operation, the modelling plays an important role in the design of its experimental programme and the optimization of the divertor operation parameters. Based on the modelling results, the EAST divertor can operate over a wide range of plasma parameters with different regimes. For a heating power of 8 MW and an edge density at the core-SOL interface of Nedge = 0.8 × 10^19 m^-3 and Nedge = 1.3 × 10^19 m^-3, the EAST divertor begins access to the high recycling operation regime at the outer and inner target plates, respectively, where the plasma temperature and the heat fluxes at the target plates decrease. Gas puffing can increase the plasma density at the separatrix and trigger the transition from high recycling operation into detachment at the target plates. When E × B and B × ∇B drifts are taken into account, an asymmetry of plasma parameters and heat fluxes between the upper and lower SOLs can be found. The vertical target plate in the EAST divertor can reduce the peak values of heat fluxes at the target plate and enables detachment at lower plasma density. The divertor with the

  16. Predictive modelling for EAST divertor operation

    SciTech Connect

    Chen Yiping

    2011-06-15

    The predictive modelling study of the divertor operation in the EAST tokamak [B. Wan et al., Nucl. Fusion 49, 104011 (2009)] with double null (DN) configuration is carried out by using the two-dimensional edge plasma code B2.5-SOLPS5.0 [D. P. Coster, X. Bonnin et al., J. Nucl. Mater. 337-339, 366 (2005)]. The modelling study includes the particle and power balance in the scrape-off-layer (SOL), the operation parameters of plasma density, temperature and plasma heat fluxes at the separatrix, the target plates and the wall, and the effect of gas puffing, drifts, and the vertical target plate on the divertor operation. The fluid model for the edge plasma is applied using the real magnetohydrodynamic (MHD) equilibrium from the MHD equilibrium code EFIT [L. L. Lao et al., Nucl. Fusion 25, 1611 (1985)] and the real divertor geometry of the device. Before the EAST tokamak starts its experimental programme of divertor operation, the modelling plays an important role in the design of its experimental programme and the optimization of the divertor operation parameters. Based on the modelling results, the EAST divertor can operate over a wide range of plasma parameters with different regimes. For a heating power of 8 MW and an edge density at the core-SOL interface of Nedge = 0.8 × 10^19 m^-3 and Nedge = 1.3 × 10^19 m^-3, the EAST divertor begins access to the high recycling operation regime at the outer and inner target plates, respectively, where the plasma temperature and the heat fluxes at the target plates decrease. Gas puffing can increase the plasma density at the separatrix and trigger the transition from high recycling operation into detachment at the target plates. When E × B and B × ∇B drifts are taken into account, an asymmetry of plasma parameters and heat fluxes between the upper and lower SOLs can be found. The vertical target plate in the EAST divertor can reduce the peak values of heat fluxes at the target plate and enables detachment at lower

  17. Global and local cancer risks after the Fukushima Nuclear Power Plant accident as seen from Chernobyl: a modeling study for radiocaesium ((134)Cs &(137)Cs).

    PubMed

    Evangeliou, Nikolaos; Balkanski, Yves; Cozic, Anne; Møller, Anders Pape

    2014-03-01

    The accident at the Fukushima Daiichi Nuclear Power Plant (NPP) in Japan resulted in the release of a large number of fission products that were transported worldwide. We study the effects of two of the most dangerous radionuclides emitted, (137)Cs (half-life: 30.2 years) and (134)Cs (half-life: 2.06 years), which were transported across the world constituting the global fallout (together with iodine isotopes and noble gases) after the nuclear releases. The main purpose is to provide preliminary cancer risk estimates after the Fukushima NPP accident, in terms of excess lifetime incidence and death risks, prior to epidemiology, and to compare them with those that occurred after the Chernobyl accident. Moreover, cancer risks are presented for the local population in the form of high-resolution risk maps for 3 population classes and for both sexes. The atmospheric transport model LMDZORINCA was used to simulate the global dispersion of radiocaesium after the accident. Air and ground activity concentrations have been incorporated with monitoring data as input to the LNT model (Linear Non-Threshold), frequently used in risk assessments of all solid cancers. Cancer risks were estimated to be small for the global population in regions outside Japan. Women are more sensitive to radiation than men, although the largest risks were recorded for infants; the risk is not dependent on sex at that age-at-exposure. Radiation risks from Fukushima were more pronounced near the plant, and the evacuation measures were crucial for their reduction. According to our estimations, 730-1700 excess cancer incidents are expected, of which around 65% may be fatal, which is very close to what has already been published (see references therein). Finally, we applied the same calculations using the DDREF (Dose and Dose Rate Effectiveness Factor), which is recommended by the ICRP, UNSCEAR and EPA as an alternative reduction factor instead of using a threshold value (which is still unknown). Excess lifetime cancer

  18. Predictability of the Indian Ocean Dipole in the coupled models

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and improved initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.

  19. Comparing Predictions Made by a Prediction Model, Clinical Score, and Physicians

    PubMed Central

    Farion, K.J.; Wilk, S.; Michalowski, W.; O’Sullivan, D.; Sayyad-Shirabad, J.

    2013-01-01

    Background: Asthma exacerbations are one of the most common medical reasons for children to be brought to the hospital emergency department (ED). Various prediction models have been proposed to support diagnosis of exacerbations and evaluation of their severity. Objectives: First, to evaluate prediction models constructed from data using machine learning techniques and to select the best performing model. Second, to compare predictions from the selected model with predictions from the Pediatric Respiratory Assessment Measure (PRAM) score, and predictions made by ED physicians. Design: A two-phase study conducted in the ED of an academic pediatric hospital. In phase 1, data collected prospectively using paper forms were used to construct and evaluate five prediction models, and the best performing model was selected. In phase 2, data collected prospectively using a mobile system were used to compare the predictions of the selected prediction model with those from PRAM and ED physicians. Measurements: Area under the receiver operating characteristic curve and accuracy in phase 1; accuracy, sensitivity, specificity, positive and negative predictive values in phase 2. Results: In phase 1, prediction models were derived from a data set of 240 patients and evaluated using 10-fold cross-validation. A naive Bayes (NB) model demonstrated the best performance and was selected for phase 2. Evaluation in phase 2 was conducted on data from 82 patients. Predictions made by the NB model were less accurate than the PRAM score and physicians (accuracy of 70.7%, 73.2% and 78.0%, respectively); however, according to McNemar's test it is not possible to conclude that the differences between predictions are statistically significant. Conclusion: Both the PRAM score and the NB model were less accurate than physicians. The NB model can handle incomplete patient data and as such may complement the PRAM score. However, it requires further research to improve its accuracy. PMID:24155790
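
    A sketch of the phase-1 style evaluation: a naive Bayes classifier scored with 10-fold cross-validation. The features, labels, and noise below are simulated stand-ins, since the study's 240-patient data set is not reproduced here.

    ```python
    # Phase-1 style model evaluation: naive Bayes with 10-fold CV on
    # simulated triage data (features, labels and noise are stand-ins).
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 240                                      # same size as the phase-1 set
    X = np.column_stack([
        rng.normal(94.0, 4.0, n),    # oxygen saturation (%)
        rng.normal(30.0, 8.0, n),    # respiratory rate (breaths/min)
        rng.binomial(1, 0.4, n),     # prior admission flag
    ])
    y = (X[:, 0] < 93.0) | (X[:, 1] > 35.0)      # "severe exacerbation" rule
    y = (y ^ (rng.random(n) < 0.1)).astype(int)  # flip 10% as label noise

    scores = cross_val_score(GaussianNB(), X, y, cv=10, scoring="accuracy")
    print(scores.mean().round(3), scores.std().round(3))
    ```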

  20. Severe accident simulation at Olkiluoto

    SciTech Connect

    Tirkkonen, H.; Saarenpaeae, T.; Cliff Po, L.C.

    1995-09-01

    A personal computer-based simulator was developed for the Olkiluoto nuclear plant in Finland for training in severe accident management. The generic software PCTRAN was expanded to model the plant-specific features of the ABB Atom designed BWR, including its containment over-pressure protection and filtered vent systems. Scenarios including core heat-up, hydrogen generation, core melt and vessel penetration were developed in this work. Radiation leakage paths and dose rate distributions are presented graphically for operator use in diagnosis and mitigation of accidents. Operating on a 486 DX2-66, PCTRAN-TVO achieves a speed about 15 times faster than real time. A convenient and user-friendly graphic interface allows full interactive control. In this paper a review of the component models and verification runs is presented.

  1. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation: Part 2, Scientific bases for health effects models

    SciTech Connect

    Abrahamson, S.; Bender, M.; Book, S.; Buncher, C.; Denniston, C.; Gilbert, E.; Hahn, F.; Hertzberg, V.; Maxon, H.; Scott, B.

    1989-05-01

    This report provides dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Two-parameter Weibull hazard functions are recommended for estimating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid and "other". The category "other" cancers is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also provided. For most cancers, both incidence and mortality are addressed. Linear and linear-quadratic models are also recommended for assessing genetic risks. Five classes of genetic disease -- dominant, x-linked, aneuploidy, unbalanced translocation and multifactorial diseases -- are considered. In addition, the impact of radiation-induced genetic damage on the incidence of peri-implantation embryo losses is discussed. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of all model parameters. Data are provided which should enable analysts to consider the timing and severity of each type of health risk. 22 refs., 14 figs., 51 tabs.
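
    A sketch of a two-parameter Weibull hazard of the form commonly used for such early-effect risks, risk(D) = 1 - exp(-ln 2 (D/D50)^V), so that the risk is 0.5 at D = D50; the D50 and shape values below are placeholders, not the report's parameters.

    ```python
    # Two-parameter Weibull hazard sketch for an early-effect risk;
    # D50 and shape are illustrative placeholders.
    import numpy as np

    def early_effect_risk(dose_gy, d50=3.0, shape=5.0):
        """Probability of the effect at absorbed dose dose_gy (Gy);
        the risk equals 0.5 at D50 by construction."""
        dose_gy = np.asarray(dose_gy, dtype=float)
        return 1.0 - np.exp(-np.log(2.0) * (dose_gy / d50) ** shape)

    print(early_effect_risk([1.0, 3.0, 5.0]))   # -> [~0.003, 0.5, ~1.0]
    ```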

  2. Accident prevention in radiotherapy

    PubMed Central

    Holmberg, O

    2007-01-01

    In order to prevent accidents in radiotherapy, it is important to learn from accidents that have occurred previously. In this paper, lessons learned from a number of accidents are summarised and underlying patterns are identified. Accidents can be prevented by applying several safety layers of preventive actions. Categories of these preventive actions are discussed, together with specific actions belonging to each category of safety layer. PMID:21614274

  3. Nonconvex model predictive control for commercial refrigeration

    NASA Astrophysics Data System (ADS)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units that share a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in around five iterations or fewer. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.

  4. Predictive models for moving contact line flows

    NASA Technical Reports Server (NTRS)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca << 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  5. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.

    1985-01-01

    This is the first report of the first phase of a 3-year program. Its objectives are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system, then to develop and verify life prediction models accounting for these degradation modes. The first task (Task I) is to determine the major failure mechanisms. Presently, bond coat oxidation and bond coat creep are being evaluated as potential TBC failure mechanisms. The baseline TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low pressure plasma sprayed NiCrAlY bond coat, and a Rene'80 substrate. Pre-exposures in air and argon combined with thermal cycle tests in air and argon are being utilized to evaluate bond coat oxidation as a failure mechanism. Unexpectedly, the specimens pre-exposed in argon failed before the specimens pre-exposed in air in subsequent thermal cycles testing in air. Four bond coats with different creep strengths are being utilized to evaluate the effect of bond coat creep on TBC degradation. These bond coats received an aluminide overcoat prior to application of the top coat to reduce the differences in bond coat oxidation behavior. Thermal cycle testing has been initiated. Methods have been selected for measuring tensile strength, Poisson's ratio, dynamic modulus and coefficient of thermal expansion both of the bond coat and top coat layers.

  6. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    SciTech Connect

    De Rosa, Felice

    2006-07-01

    Within the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th Framework (FISA, Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC). One of the reference codes used to compare ASTEC results, coming from experimental and reactor plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations for a French PWR 900 MWe and the accident sequence 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a station blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 °C, a total opening of the three relief valves when Tric (maximal core outlet temperature) is above 603 K (330 °C), and accumulator isolation when the primary pressure goes below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the structure, data and same conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to show where and when MELCOR provides good enough results and why, in some cases mainly referring to its

  7. Fatal traffic accidents among trailer truck drivers and accident causes as viewed by other truck drivers.

    PubMed

    Häkkänen, H; Summala, H

    2001-03-01

    Causal factors, driver responsibility and driver fatigue-related factors were studied in fatal two-vehicle accidents involving a trailer truck driver during the period 1991-1997 (n = 337). In addition, 251 long-haul truck drivers were surveyed in order to study their views on contributing factors in accidents involving trucks and on the development of possible countermeasures against driver fatigue. Trailer truck drivers were principally responsible for 16% of all the accidents. Younger driver age and driving during evening hours were significant predictors of being principally responsible. In addition, the probability of being principally responsible for the accident increased by a factor of over three if the driver had a chronic illness. Prolonged driving preceding the accident, accident history and traffic offence history did not have a significant effect. Only 2% of the drivers were estimated to have fallen asleep while driving just prior to the accident, and altogether 4% of the drivers had been tired prior to the accident. However, 13% of the drivers had been driving over 10 h preceding the accident (which has been criminally punishable in Finland since 1995 under the EC regulation), but no individual factors had a significant effect in predicting prolonged driving. The surveyed views regarding causes of truck accidents correspond well with the accident analysis. Accidents were viewed as being most often caused by other road users, and driver fatigue was ranked no higher than fifth (out of eight) among common causes of accidents. The probability of viewing fatigue as a more common cause increased significantly if the driver had experienced fatigue-related problems while driving. However, nearly half of the surveyed truck drivers expressed a negative view towards developing a technological countermeasure against driver fatigue. The negative view was not related to personal experiences of fatigue-related problems while driving.

  8. Evaluation of performance of predictive models for deoxynivalenol in wheat.

    PubMed

    van der Fels-Klerx, H J

    2014-02-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields than the data used for model development. The two models were run for six preset scenarios, varying in the period for which weather forecast data were used, from zero days (historical data only) to a 13-day period around wheat flowering. Model predictions using forecast weather data were compared to those using historical data. Furthermore, model predictions using historical weather data were evaluated against observed deoxynivalenol contamination of the wheat fields. Results showed that the use of weather forecast data rather than observed data only slightly influenced model predictions. The percentage of correct model predictions, given a threshold of 1,250 μg/kg (the legal limit in the European Union), was about 95% for the two models. However, only three samples had a deoxynivalenol concentration above this threshold, and the models were not able to predict these samples correctly. It was concluded that two-week weather forecast data can reliably be used in descriptive models for deoxynivalenol contamination of wheat, resulting in more timely model predictions. The two models are able to predict lower deoxynivalenol contamination correctly, but model performance in situations with high deoxynivalenol contamination needs to be further validated. This will require years with environmental conditions conducive to deoxynivalenol contamination of wheat.
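
    The percent-correct evaluation described above reduces to classifying predictions and observations against the same threshold. A minimal sketch, with made-up numbers in place of the study's data:

    ```python
    import numpy as np

    # Predictions and observations are classified against the EU legal limit
    # of 1250 ug/kg; a prediction is "correct" when both fall on the same side.
    LIMIT = 1250.0  # ug/kg

    predicted = np.array([300.0, 900.0, 1400.0, 200.0, 1100.0])  # illustrative
    observed  = np.array([250.0, 800.0, 1600.0, 400.0, 1300.0])  # illustrative

    correct = (predicted > LIMIT) == (observed > LIMIT)
    print(f"percent correct: {100.0 * correct.mean():.1f}%")
    ```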

  9. An inverse modeling method to assess the source term of the Fukushima nuclear power plant accident using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.

    2013-06-01

    The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term of an accidental situation (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2013), but none of them uses dose rate measurements. Yet dose rate monitors are the most widespread measurement system, and in the event of a nuclear accident, these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi nuclear power plant (FD-NPP). The emissions of the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 103 PBq of 131I, 35.5 PBq of 132I, 15.5 PBq of 137Cs and 12 100 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with environmental observations. Across all monitoring locations, 80% of the simulated dose rates are within a factor of 2 of the observed values. Changes in dose rates over time have been overall properly reconstructed, especially
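
    The factor-of-2 agreement score used for validation can be computed directly. A minimal sketch, with hypothetical dose rates standing in for the monitoring data:

    ```python
    import numpy as np

    # Fraction of simulated dose rates within a factor of 2 of the observations,
    # i.e. 0.5 <= simulated/observed <= 2. Values below are made up.
    def fac2(simulated, observed):
        ratio = np.asarray(simulated) / np.asarray(observed)
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))

    # Hypothetical dose rates (uSv/h) at a few monitoring locations.
    print(fac2([0.8, 2.5, 1.1, 4.0], [1.0, 2.0, 3.0, 3.5]))
    ```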

  10. Allostasis: a model of predictive regulation.

    PubMed

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched, to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  11. Graphite Oxidation Simulation in HTR Accident Conditions

    SciTech Connect

    El-Genk, Mohamed

    2012-10-19

    Massive air and water ingress, following a pipe break or a leak in steam-generator tubes, is a design-basis accident for high-temperature reactors (HTRs). Analysis of these accidents in both prismatic and pebble bed HTRs requires state-of-the-art capability for predicting: 1) oxidation kinetics, 2) air-helium gas mixture stratification and diffusion into the core following the depressurization, 3) transport of the multi-species gas mixture, and 4) graphite corrosion. This project will develop a multi-dimensional, comprehensive oxidation kinetics model of graphite in HTRs, with diverse capabilities for handling different flow regimes. The chemical kinetics/multi-species transport model for graphite burning and oxidation will account for temperature-related changes in the properties of graphite, oxidants (O2, H2O, CO), reaction products (CO, CO2, H2, CH4) and other gases in the mixture (He and N2). The model will treat the oxidation and corrosion of graphite in geometries representative of HTR core components at temperatures of 900°C or higher. The developed chemical reaction kinetics model will be user-friendly for coupling to full core analysis codes such as MELCOR and RELAP, as well as computational fluid dynamics (CFD) codes such as those from CD-adapco. The research team will solve the governing equations for the multi-dimensional flow and the chemical reaction kinetics using Simulink, an extension of the MATLAB solver, and will validate and benchmark the model's predictions against reported experimental data. Researchers will develop an interface to couple the validated model to a commercially available CFD fluid flow and thermal-hydraulic model of the reactor, and will perform a simulation of a pipe break in a prismatic core HTR, with the potential for future application to a pebble-bed type HTR.
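
    Graphite oxidation kinetics of the kind this project targets are commonly parameterized with Arrhenius-type rate laws. A generic sketch under that assumption; the pre-exponential factor, activation energy and reaction order below are placeholders, not project values:

    ```python
    import numpy as np

    # Generic Arrhenius-type oxidation rate, r = A * exp(-Ea / (R*T)) * p_O2**n,
    # of the form often used for graphite oxidation in air-ingress studies.
    R_GAS = 8.314  # J/(mol K)

    def oxidation_rate(temp_k, p_o2_pa, pre_exp=1.0e5, e_act=2.0e5, order=0.75):
        # pre_exp, e_act and order are illustrative placeholders only.
        return pre_exp * np.exp(-e_act / (R_GAS * temp_k)) * p_o2_pa**order

    for t in (900.0 + 273.15, 1100.0 + 273.15):
        print(f"T = {t:.0f} K, rate = {oxidation_rate(t, 2.1e4):.3e}")
    ```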

  12. The role of personality traits and driving experience in self-reported risky driving behaviors and accident risk among Chinese drivers.

    PubMed

    Tao, Da; Zhang, Rui; Qu, Xingda

    2017-02-01

    The purpose of this study was to explore the role of personality traits and driving experience in the prediction of risky driving behaviors and accident risk among the Chinese population. A convenience sample of drivers (n=511; mean (SD) age=34.2 (8.8) years) completed a self-report questionnaire that was designed based on validated scales for measuring personality traits, risky driving behaviors and self-reported accident risk. Results from a structural equation modeling analysis demonstrated that the data fit our theoretical model well. While showing no direct effects on accident risk, personality traits had direct effects on risky driving behaviors and yielded indirect effects on accident risk mediated by risky driving behaviors. Both driving experience and risky driving behaviors directly predicted accident risk and accounted for 15% of its variance. There was little gender difference in personality traits, risky driving behaviors and accident risk. The findings emphasize the importance of personality traits and driving experience in understanding risky driving behaviors and accident risk among Chinese drivers and provide new insight into the design of evidence-based driving education and accident prevention interventions.

  13. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  14. An Overview of the NASA Aviation Safety Program (AVSP) Systemwide Accident Prevention (SWAP) Human Performance Modeling (HPM) Element

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Goodman, Allen; Hooley, Becky L.

    2003-01-01

    An overview is provided of the Human Performance Modeling (HPM) element within the NASA Aviation Safety Program (AvSP). Two separate model development tracks for performance modeling of real-world aviation environments are described: the first focuses on the advancement of cognitive modeling tools for system design, while the second centers on a prescriptive engineering model of activity tracking for error detection and analysis. A progressive implementation strategy for both tracks is discussed in which increasingly more complex, safety-relevant applications are undertaken to extend the state-of-the-art, as well as to reveal potential human-system vulnerabilities in the aviation domain. Of particular interest is the ability to predict the precursors to error and to assess potential mitigation strategies associated with the operational use of future flight deck technologies.

  15. Predictable Components of ENSO Evolution in Real-time Multi-Model Predictions

    PubMed Central

    Zheng, Zhihai; Hu, Zeng-Zhen; L’Heureux, Michelle

    2016-01-01

    The most predictable components of El Niño-Southern Oscillation (ENSO) evolution in real-time multi-model predictions are identified by applying an empirical orthogonal function analysis of the model data that maximizes the signal-to-noise ratio (MSN EOF). The normalized Niño3.4 index is analyzed for nine 3-month overlapping seasons. The first most predictable component (MSN EOF1) is the decaying phase of ENSO during the Northern Hemisphere spring, followed by persistence through autumn and winter. The second most predictable component of ENSO evolution, with lower prediction skill and smaller explained variance than MSN EOF1, corresponds to growth during spring and then persistence in summer and autumn. This result suggests that the decay phase of ENSO is more predictable than the growth phase. Also, the most predictable components and the forecast skills in dynamical and statistical models are similar overall, with some differences arising for spring-season initial conditions. Finally, the reconstructed predictions, using only the first two MSN components, show higher skill than the raw model predictions. Therefore, this method can be used as a diagnostic for model comparison and development, and it can provide a new perspective on the most predictable components of ENSO. PMID:27775016
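
    As a rough illustration of the signal-to-noise idea behind MSN EOF (a simplified proxy, not the published algorithm), one can compute EOFs of the ensemble-mean seasonal evolution and estimate each mode's signal-to-noise ratio from the spread of individual ensemble members:

    ```python
    import numpy as np

    # Simplified proxy: EOFs of the ensemble-mean year-by-season matrix, with
    # each mode's SNR taken as ensemble-mean variance over member-deviation
    # variance along that mode. Data below are synthetic.
    rng = np.random.default_rng(0)
    n_models, n_years, n_seasons = 9, 30, 9
    signal = rng.standard_normal((n_years, n_seasons))
    ens = signal[None] + 0.5 * rng.standard_normal((n_models, n_years, n_seasons))

    ens_mean = ens.mean(axis=0)                  # (years, seasons)
    ens_mean -= ens_mean.mean(axis=0)            # anomalies about climatology
    _, s, vt = np.linalg.svd(ens_mean, full_matrices=False)

    for k in range(2):
        mode = vt[k]                             # seasonal-evolution pattern
        sig_var = np.var(ens_mean @ mode)
        noise_var = np.var((ens - ens.mean(axis=0)) @ mode)
        print(f"mode {k + 1}: SNR ~ {sig_var / noise_var:.2f}")
    ```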

  16. Comparison of perceived and modelled geographical access to accident and emergency departments: a cross-sectional analysis from the Caerphilly Health and Social Needs Study

    PubMed Central

    Fone, David L; Christie, Stephen; Lester, Nathan

    2006-01-01

    Background Assessment of the spatial accessibility of hospital accident and emergency departments as perceived by local residents has not previously been investigated. Perceived accessibility may affect where, when, and whether potential patients attend for treatment. Using data on 11,853 respondents to a population survey in Caerphilly county borough, Wales, UK, we present an analysis comparing the accessibility of accident and emergency departments as reported by local residents and drive-time to the nearest accident and emergency department modelled using a geographical information system (GIS). Results Median drive-times were significantly shorter in the lowest perceived access category and longer in the best perceived access category (p < 0.001). The perceived access and GIS modelled drive-time variables were positively correlated (Spearman's rank correlation coefficient, r = 0.38, p < 0.01). The strongest correlation was found for respondents living in areas in which nearly all households had a car or van (r = 0.47, p < 0.01). Correlations were stronger among respondents reporting good access to public transport and among those reporting a recent accident and emergency attendance for injury treatment compared to other respondents. Correlation coefficients did not vary substantially by levels of household income. Drive-time, road distance and straight-line distance were highly inter-correlated and substituting road distance or straight-line distance as the GIS modelled spatial accessibility measure only marginally decreased the magnitude of the correlations between perceived and GIS modelled access. Conclusion This study provides evidence that the accessibility of hospital-based health care services as perceived by local residents is related to measures of spatial accessibility modelled using GIS. For studies that aim to model geographical separation in a way that correlates well with the perception of local residents, there may be minimal advantage in using
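
    The core comparison above is a rank correlation between survey access ratings and GIS drive-times. A minimal sketch with made-up values:

    ```python
    from scipy.stats import spearmanr

    # Rank correlation between perceived access categories and modelled
    # drive-times; the values below are illustrative, not the survey data.
    perceived_access = [1, 2, 2, 3, 4, 4, 5]     # survey categories
    drive_time_min   = [22, 18, 19, 12, 9, 10, 6]

    rho, p = spearmanr(perceived_access, drive_time_min)
    print(f"Spearman r = {rho:.2f}, p = {p:.3f}")
    ```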

  17. Atmospheric releases from severe nuclear accidents: Environmental transport and pathways to man: Modelling of radiation doses to man from Chernobyl releases

    SciTech Connect

    Anspaugh, L.R.; Goldman, M.; Catlin, R.J.

    1987-01-01

    The Chernobyl accident released a large amount of highly fractionated radioactive debris, including approximately 89 PBq of 137Cs. We calculated the resulting collective dose commitment to the Northern Hemisphere via the pathways of external exposure and ingestion of radionuclides with food. We developed a rural/urban model of external dose, and we used the PATHWAY model for ingestion. The results are a collective dose commitment of 630,000 person-Gy over the first year and 1,200,000 person-Gy over 50 years. 13 refs., 1 tab.

  18. Predictive modeling of dental pain using neural network.

    PubMed

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, as the gateway for ingesting food, is a basic and important part of the body. In this study, dental pain was predicted with a neural network model. The resulting predictive model of dental pain factors achieved a fit of 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.
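
    A minimal sketch of a neural-network classifier in the spirit of this study; the three predictors are hypothetical stand-ins for eating habits, oral hygiene and stress, and the data are synthetic:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic data: three hypothetical risk-factor scores per person and a
    # binary dental-pain outcome loosely driven by them.
    rng = np.random.default_rng(1)
    X = rng.random((200, 3))
    y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
         + 0.1 * rng.standard_normal(200) > 0.5).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
    ```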

  19. From Predictive Models to Instructional Policies

    ERIC Educational Resources Information Center

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  20. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  1. Sweat loss prediction using a multi-model approach

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojiang; Santee, William R.

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
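
    The MMA itself is simply the average of the two single-model predictions, scored by RMSD against observations. A minimal sketch with illustrative numbers in place of the SCENARIO and HSDA outputs:

    ```python
    import numpy as np

    # Root mean square deviation between predictions and observations.
    def rmsd(pred, obs):
        pred, obs = np.asarray(pred), np.asarray(obs)
        return np.sqrt(np.mean((pred - obs) ** 2))

    scenario = np.array([450.0, 600.0, 520.0])   # g/h, illustrative
    hsda     = np.array([380.0, 700.0, 560.0])   # g/h, illustrative
    observed = np.array([410.0, 640.0, 545.0])   # g/h, illustrative

    mma = (scenario + hsda) / 2.0                # multi-model average
    for name, p in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
        print(f"{name}: RMSD = {rmsd(p, observed):.1f} g/h")
    ```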

  2. A windows based mechanistic subsidence prediction model for longwall mining

    SciTech Connect

    Begley, R.; Beheler, P.; Khair, A.W.

    1996-12-31

    The previously developed Mechanistic Subsidence Prediction Model (MSPM) has been incorporated into the graphical interface environment of MS Windows. MSPM has the unique capability of predicting the maximum subsidence, angle of draw and subsidence profile of a longwall panel at various locations for both the transverse and longitudinal orientations. The resulting enhanced model can be operated by individuals with little knowledge of subsidence prediction theories or computer programming experience. In addition, subsidence predictions can be made in a matter of seconds, without the need to develop input data files or, in some cases, to use the keyboard. The predictions are based upon the following input parameters: panel width, mining height, overburden depth, rock quality designation, and percent hard rock in the immediate roof, main roof and the entire overburden. The enhanced model can easily and instantly compare, on a single screen, graphical predictions of one half of the subsidence profile for different sets of input parameters. In addition, another screen, available from a pull-down menu, lets the operator compare predictions for entire subsidence profiles. This paper presents the background of the subsidence prediction model and the methodology of the enhanced model development. The paper also presents comparisons of subsidence predictions for several different sets of input parameters, in addition to comparisons of the subsidence predictions with actual field data.

  3. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  4. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    PubMed

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
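
    The two approaches can be contrasted on synthetic data: a fitted predictive model versus a patient-similarity (nearest-neighbour) predictor, both scored by AUC. A minimal sketch, not a reproduction of the MIMIC-II analysis:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for extracted EMR features and a mortality label.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # modeling approach
    knn = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)  # similarity approach

    for name, clf in [("modeling", model), ("similarity", knn)]:
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")
    ```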

  5. An inverse modeling method to assess the source term of the Fukushima Nuclear Power Plant accident using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.

    2013-11-01

    The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term of an accidental situation (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2014). Some studies have used dose rate measurements (Duranova et al., 1999; Astrup et al., 2004; Drews et al., 2004; Tsiouri et al., 2012), but none of the developed methods has been applied to assess the complex source term of a real accident situation like the Fukushima accident. Yet dose rate monitors are the most widespread measurement system, and in the event of a nuclear accident, these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi Nuclear Power Plant (FD-NPP). The emissions of the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 105.9 PBq of 131I, 35.8 PBq of 132I, 15.5 PBq of 137Cs and 12 134 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with

  6. Comparing prediction models for radiographic exposures

    NASA Astrophysics Data System (ADS)

    Ching, W.; Robinson, J.; McEntee, M. F.

    2015-03-01

    During radiographic exposures, the milliampere-seconds (mAs), kilovoltage peak (kVp) and source-to-image distance (SID) can be adjusted for variations in patient thickness. Several exposure adjustment systems have been developed to assist with this selection. This study compares the accuracy of four systems in predicting the required mAs for pelvic radiographs taken on a direct digital radiography (DDR) system. Sixty radiographs were obtained by adjusting mAs to compensate for varying combinations of SID, kVp and patient thickness. The 25% rule, the DuPont Bit System and the DigiBit system were compared to determine which of the three most accurately predicted the mAs required for an increase in patient thickness. Similarly, the 15% rule, the DuPont Bit System and the DigiBit system were compared for an increase in kVp. The exposure index (EI) was used as an indication of exposure to the DDR. For each exposure combination the mAs was adjusted until an EI of 1500 ± 2% was achieved. The 25% rule was the most accurate at predicting the mAs required for an increase in patient thickness, with 53% of the mAs predictions correct. The DigiBit system was the most accurate at predicting the mAs needed for changes in kVp, with 33% of predictions correct. This study demonstrated that the 25% rule and the DigiBit system were the most accurate predictors of the mAs required for an increase in patient thickness and kVp, respectively. The DigiBit system worked well in both scenarios, as it is a single exposure adjustment system that considers a variety of exposure factors.
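
    A minimal sketch of the two rules in their common textbook form (an assumption; the paper may operationalize them differently): the 25% rule increases mAs by 25% per added centimetre of tissue, and the 15% rule halves the mAs for a 15% increase in kVp.

    ```python
    import math

    def mas_for_thickness(mas, extra_cm):
        """25% rule: increase mAs by 25% for each added centimetre of tissue."""
        return mas * 1.25 ** extra_cm

    def mas_for_kvp(mas, kvp_old, kvp_new):
        """15% rule: each 15% increase in kVp doubles exposure, so halve the mAs."""
        steps = math.log(kvp_new / kvp_old) / math.log(1.15)
        return mas / 2.0 ** steps

    print(mas_for_thickness(20.0, extra_cm=4))            # thicker patient -> ~48.8 mAs
    print(mas_for_kvp(20.0, kvp_old=70.0, kvp_new=80.5))  # +15% kVp -> 10.0 mAs
    ```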

  7. Predictive modeling and reducing cyclic variability in autoignition engines

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  8. Visualization of Traffic Accidents

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong; Khattak, Asad

    2010-01-01

    Traffic accidents have a tremendous impact on society. Approximately 6.4 million vehicle accidents are reported annually by police in the US, and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate the handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, can display events associated with a road network, such as accident locations and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualizations of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper addresses this problem and proposes an improved method that utilizes all relevant information about traffic accidents, namely route number, direction, and milepost, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shapefile for traffic accidents and displays them on top of the existing road network in ArcGIS. A visualization of traffic accidents along the Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
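
    The location-extraction step described above is a linear-referencing problem: placing a milepost along a route polyline. A minimal sketch with made-up coordinates; a full implementation would also use the direction attribute:

    ```python
    import numpy as np

    # Place an event along a route polyline by interpolating its milepost
    # between the cumulative distances of the route vertices.
    route = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])   # polyline vertices
    seg_len = np.hypot(*np.diff(route, axis=0).T)            # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])        # distance at vertices

    def locate(milepost):
        i = np.searchsorted(cum, milepost) - 1
        i = np.clip(i, 0, len(seg_len) - 1)
        frac = (milepost - cum[i]) / seg_len[i]
        return route[i] + frac * (route[i + 1] - route[i])

    print(locate(7.5))   # point 7.5 distance units along the route
    ```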

  9. Perception of road accident causes.

    PubMed

    Vanlaar, Ward; Yannis, George

    2006-01-01

    A theoretical two-dimensional model of prevalence and risk was developed. The objective of this study was to validate this model empirically to answer three questions: How do European drivers perceive the importance of several causes of road accidents? Are there important differences in perception between member states? Do these perceptions reflect the real significance of road accident causes? Data were collected from 23 countries, based on representative national samples of at least 1000 respondents each (n=24,372). Face-to-face interviews with fully licensed, active car drivers were conducted using a questionnaire containing closed-answer questions. Respondents were asked to rate 15 causes of road accidents, each on a six-point ordinal scale. The answers were analyzed by calculating Kendall's tau for each pair of items to form lower-triangle similarity matrices per country and for Europe as a whole. These matrices were then used as the input files for an individual difference scaling to draw a perceptual map of the 15 items involved. The hypothesized model of risk and prevalence fits the data well and enabled us to answer the three questions of concern. The subject space of the model showed that there are no relevant differences between the 23 countries. The group space of the model comprises four quadrants, each containing several items (high perceived risk/low perceived prevalence; high perceived risk/high perceived prevalence; low perceived risk/high perceived prevalence; and low perceived risk/low perceived prevalence). Finally, perceptions of the items driving under the influence of alcohol, drugs and medicines and driving using a handheld or hands-free mobile phone are discussed with regard to their real significance in causing road accidents. To conclude, individual difference scaling offers some promising possibilities for studying drivers' perception of road accident causes.
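
    The analysis pipeline above (pairwise Kendall's tau, then a scaling of the similarity matrix) can be sketched as follows; ordinary multidimensional scaling stands in for individual difference scaling, and the ratings are synthetic:

    ```python
    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.manifold import MDS

    # Synthetic ratings: respondents x items, six-point ordinal scale.
    rng = np.random.default_rng(2)
    ratings = rng.integers(1, 7, size=(100, 15))

    # Pairwise Kendall's tau between items gives a similarity matrix.
    n = ratings.shape[1]
    sim = np.ones((n, n))
    for i in range(n):
        for j in range(i):
            tau, _ = kendalltau(ratings[:, i], ratings[:, j])
            sim[i, j] = sim[j, i] = tau

    # MDS on the dissimilarities (1 - tau) yields 2-D map coordinates.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(1.0 - sim)
    print(coords[:3])
    ```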

  10. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
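
    In symbols, and ignoring the irreducible variance of the observations, the two criteria can be written as follows (a schematic rendering of the decomposition described above; the paper's precise definitions may differ). Here Y is the observed outcome and Ŷ_M(X) the prediction of model M with inputs X:

    ```latex
    \mathrm{MSEP}_{\mathrm{fixed}} = \mathbb{E}\!\left[\bigl(Y - \hat{Y}_m(X)\bigr)^2\right],
    \qquad
    \mathrm{MSEP}_{\mathrm{uncertain}}(X)
      = \underbrace{\bigl(\mathbb{E}_M[\hat{Y}_M(X)] - \mathbb{E}[Y]\bigr)^2}_{\text{squared bias}}
      + \underbrace{\mathrm{Var}_M\bigl[\hat{Y}_M(X)\bigr]}_{\text{model variance}}
    ```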

  11. Approaches to ionospheric modelling, simulation and prediction

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Sojka, J. J.

    1992-08-01

    The ionosphere is a complex, multispecies, anisotropic medium that exhibits a significant variation with time, space, season, solar cycle, and geomagnetic activity. In recent years, a wide range of models have been developed in an effort to describe ionospheric behavior. The modeling efforts include: (1) empirical models based on extensive worldwide data sets; (2) simple analytical models for a restricted number of ionospheric parameters; (3) comprehensive, 3D, time-dependent models that require supercomputers; (4) spherical harmonic models based on fits to output obtained from comprehensive numerical models; and (5) ionospheric models driven by real-time magnetospheric inputs. In an effort to achieve simplicity, some of the models have been restricted to certain altitude or latitude domains, while others have been restricted to certain ionospheric parameters, such as the F-region peak density, the auroral conductivity, and the plasma temperatures. The current status of the modeling efforts is reviewed.

  12. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of atmospheric dispersion model with improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2014-06-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information), and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater deposition, cloud condensation nuclei (CCN) activation and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted, using air dose rates, air concentrations and sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March, due to the wet venting and hydrogen explosion at Unit 1; the morning of 13 March, after the venting event at Unit 3; midnight of 14 March, when the SRV (Safety Relief Valve) at Unit 2 was opened three times; the morning and night of 15 March; and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of

  13. Atmospheric removal times of the aerosol-bound radionuclides 137Cs and 131I measured after the Fukushima Dai-ichi nuclear accident - a constraint for air quality and climate models

    NASA Astrophysics Data System (ADS)

    Kristiansen, N. I.; Stohl, A.; Wotawa, G.

    2012-11-01

    Caesium-137 (137Cs) and iodine-131 (131I) are radionuclides of particular concern during nuclear accidents, because they are emitted in large amounts and are of significant health impact. 137Cs and 131I attach to the ambient accumulation-mode (AM) aerosols and share their fate as the aerosols are removed from the atmosphere by scavenging within clouds, precipitation and dry deposition. Here, we estimate their removal times from the atmosphere using a unique high-precision global measurement data set collected over several months after the accident at the Fukushima Dai-ichi nuclear power plant in March 2011. The noble gas xenon-133 (133Xe), also released during the accident, served as a passive tracer of air mass transport for determining the removal times of 137Cs and 131I via the decrease in the measured ratios 137Cs/133Xe and 131I/133Xe over time. After correction for radioactive decay, the 137Cs/133Xe ratios reflect the removal of aerosols by wet and dry deposition, whereas the 131I/133Xe ratios are also influenced by aerosol production from gaseous 131I. We find removal times for 137Cs of 10.0-13.9 days and for 131I of 17.1-24.2 days during April and May 2011. The removal time of 131I is longer due to the aerosol production from gaseous 131I, thus the removal time for 137Cs serves as a better estimate for aerosol lifetime. The removal time of 131I is of interest for semi-volatile species. We discuss possible caveats (e.g. late emissions, resuspension) that can affect the results, and compare the 137Cs removal times with observation-based and modeled aerosol lifetimes. Our 137Cs removal time of 10.0-13.9 days should be representative of a "background" AM aerosol well mixed in the extratropical Northern Hemisphere troposphere. It is expected that the lifetime of this vertically mixed background aerosol is longer than the lifetime of fresh AM aerosols directly emitted from surface sources. However, the substantial difference to the mean lifetimes of AM aerosols

  14. Atmospheric removal times of the aerosol-bound radionuclides 137Cs and 131I during the months after the Fukushima Dai-ichi nuclear power plant accident - a constraint for air quality and climate models

    NASA Astrophysics Data System (ADS)

    Kristiansen, N. I.; Stohl, A.; Wotawa, G.

    2012-05-01

    Caesium-137 (137Cs) and iodine-131 (131I) are radionuclides of particular concern during nuclear accidents, because they are emitted in large amounts and are of significant health impact. 137Cs and 131I attach to the ambient accumulation-mode (AM) aerosols and share their fate as the aerosols are removed from the atmosphere by scavenging within clouds, precipitation and dry deposition. Here, we estimate their removal times from the atmosphere using a unique high-precision global measurement data set collected over several months after the accident at the Fukushima Dai-ichi nuclear power plant in March 2011. The noble gas xenon-133 (133Xe), also released during the accident, served as a passive tracer of air mass transport for determining the removal times of 137Cs and 131I via the decrease in the measured ratios 137Cs/133Xe and 131I/133Xe over time. After correction for radioactive decay, the 137Cs/133Xe ratios reflect the removal of aerosols by wet and dry deposition, whereas the 131I/133Xe ratios are also influenced by aerosol production from gaseous 131I. We find removal times for 137Cs of 10.0-13.9 days and for 131I of 17.1-24.2 days during April and May 2011. We discuss possible caveats (e.g. late emissions, resuspension) that can affect the results, and compare the 137Cs removal times with observation-based and modeled aerosol lifetimes. Our 137Cs removal time of 10.0-13.9 days should be representative of a "background" AM aerosol well mixed in the extratropical Northern Hemisphere troposphere. It is expected that the lifetime of this vertically mixed background aerosol is longer than the lifetime of AM aerosols originating from surface sources. However, the substantial difference to the mean lifetimes of AM aerosols obtained from aerosol models, typically in the range of 3-7 days, warrants further research on the cause of this discrepancy. Too short modeled AM aerosol lifetimes would have serious implications for air quality and climate model predictions.
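
    The ratio-decay method described in both records reduces to a log-linear fit: after correcting for radioactive decay, the slope of ln(137Cs/133Xe) against time gives the removal (e-folding) time. A minimal sketch on synthetic data:

    ```python
    import numpy as np

    # Synthetic decay-corrected ratio series with a known e-folding time.
    rng = np.random.default_rng(3)
    t_days = np.arange(0, 60, 3.0)
    true_tau = 12.0
    ratio = np.exp(-t_days / true_tau) * np.exp(0.05 * rng.standard_normal(t_days.size))

    # Regress log-ratio on time; removal time is the negative inverse slope.
    slope, _ = np.polyfit(t_days, np.log(ratio), 1)
    print(f"estimated removal time: {-1.0 / slope:.1f} days")
    ```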

  15. Organizational safety climate and supervisor safety enforcement: Multilevel explorations of the causes of accident underreporting.

    PubMed

    Probst, Tahira M

    2015-11-01

    According to national surveillance statistics, over 3 million employees are injured each year; yet, research indicates that these may be substantial underestimates of the true prevalence. The purpose of the current project was to empirically test the hypothesis that organizational safety climate and transactional supervisor safety leadership would predict the extent to which accidents go unreported by employees. Using hierarchical linear modeling and survey data collected from 1,238 employees in 33 organizations, employee-level supervisor safety enforcement behaviors (and to a less consistent extent, organizational-level safety climate) predicted employee accident underreporting. There was also a significant cross-level interaction, such that the effect of supervisor enforcement on underreporting was attenuated in organizations with a positive safety climate. These results may benefit human resources and safety professionals by pinpointing methods of increasing the accuracy of accident reporting, reducing actual safety incidents, and reducing the costs to individuals and organizations that result from underreporting.
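
    A minimal sketch of the multilevel structure described above, using a linear mixed model with a cross-level interaction; variable names and effect sizes are invented for illustration:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated employees nested in organizations: enforcement varies by
    # employee (level 1), safety climate by organization (level 2), and a
    # positive interaction attenuates the enforcement effect in positive climates.
    rng = np.random.default_rng(4)
    n_org, n_emp = 33, 30
    org = np.repeat(np.arange(n_org), n_emp)
    climate = rng.standard_normal(n_org)[org]
    enforce = rng.standard_normal(n_org * n_emp)
    under = (2 - 0.4 * enforce - 0.2 * climate + 0.15 * enforce * climate
             + rng.standard_normal(n_org * n_emp))

    df = pd.DataFrame({"under": under, "enforce": enforce,
                       "climate": climate, "org": org})
    fit = smf.mixedlm("under ~ enforce * climate", df, groups=df["org"]).fit()
    print(fit.summary())
    ```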

  16. A Prediction Model of the Capillary Pressure J-Function

    PubMed Central

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived from a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model; the J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
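
    For reference, the Leverett J-function normalizes capillary pressure by fluid and rock properties, and the paper proposes a power-law dependence on saturation (the constants a and b below are generic placeholders, not the paper's fitted values):

    ```latex
    J(S_w) = \frac{P_c(S_w)}{\sigma \cos\theta}\,\sqrt{\frac{k}{\phi}},
    \qquad
    J(S_w) \approx a\, S_w^{\,b}
    ```

    Here P_c is capillary pressure, σ the interfacial tension, θ the contact angle, k the permeability and φ the porosity.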

  17. A model to predict the power output from wind farms

    SciTech Connect

    Landberg, L.

    1997-12-31

    This paper describes a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WASP (Wind Atlas Application and Analysis Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given; however, similar results from Europe will be given.

  18. Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool

    EPA Science Inventory

    The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...

  19. EOID Model Validation and Performance Prediction

    DTIC Science & Technology

    2002-09-30

    Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The two most prominent technologies in this area

  20. Predicting Career Advancement with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  1. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  2. Simple model for predicting microchannel heat sink performance and optimization

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Hsun; Chein, Reiyu

    2012-05-01

    A simple model was established to predict microchannel heat sink performance based on energy balance. Both hydrodynamically and thermally developed effects were included. Comparisons with the experimental data show that this model provides satisfactory thermal resistance prediction. The model is further extended to carry out geometric optimization on the microchannel heat sink. The results from the simple model are in good agreement as compared with those obtained from three-dimensional simulations.
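
    An energy-balance resistance model of this kind is often written as a series of conduction, convection and caloric terms. A hedged sketch under that assumption (the paper's own formulation may differ; the numbers are illustrative):

    ```python
    # Total thermal resistance of a microchannel heat sink as the sum of base
    # conduction, channel-wall convection, and the caloric rise of the coolant.
    # This is a generic textbook decomposition, not the paper's exact model.

    def thermal_resistance(t_base, k_solid, area_base, h, area_conv, m_dot, cp):
        r_cond = t_base / (k_solid * area_base)   # base conduction, K/W
        r_conv = 1.0 / (h * area_conv)            # channel-wall convection, K/W
        r_cal = 1.0 / (m_dot * cp)                # coolant temperature rise, K/W
        return r_cond + r_conv + r_cal

    # Illustrative water-cooled copper sink with a 1 cm x 1 cm footprint.
    print(thermal_resistance(t_base=0.5e-3, k_solid=390.0, area_base=1e-4,
                             h=2.0e4, area_conv=4e-4, m_dot=1.0e-3, cp=4180.0))
    ```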

  3. Predictive growth model of LID: light intensification model

    NASA Astrophysics Data System (ADS)

    Tan, ChingSeong; Patel, D.; Wang, X.; Schlitz, D.; Dehkordi, P. S.; Menoni, C. S.; Chong, E. K. P.

    2013-11-01

    General precursors of Laser Induced Damage (LID) in fused silica, such as polishing residues, fractures, and contamination, and models of LID growth have been the focus of much research. Assuming that the absorption due to trapped material and the mechanical strength are the same across the surfaces, various studies have shown that LID can be minimized by reducing the light field intensification in the layers that the laser strikes. By revisiting the definition of non-ionising radiation damage, this paper presents modelling and simulation of light intensification under laser-induced damage conditions. Our contribution is to predict LID growth taking various factors into account, focusing specifically on the light intensification problem. The light intensification is a function of the inter-layer or intra-layer micro-optical properties, such as the transmittance and absorption coefficient of the material at the micro- or sub-micrometre scale. The proposed model first estimates the light propagation, convolved with the multiply scattered light, and subsequently the field intensification within the nodule dimensions. This allows us to evaluate the geometrical factor of the nodule effect on the intensification. The results show that the light intensification is higher whenever the backscattering and multiple-scattering components are stronger, owing to their interference with the incoming wave within its coherence length.

  4. The Fukushima Daiichi Accident Study Information Portal

    SciTech Connect

    Shawn St. Germain; Curtis Smith; David Schwieder; Cherie Phelan

    2012-11-01

    This paper presents a description of the Fukushima Daiichi Accident Study Information Portal. The Information Portal was created by the Idaho National Laboratory as part of a joint NRC and DOE project to assess the severe accident modeling capability of the MELCOR analysis code. The Fukushima Daiichi Accident Study Information Portal was created to collect, store, retrieve and validate information and data for use in reconstructing the Fukushima Daiichi accident. In addition to supporting the MELCOR simulations, the Portal will be the main DOE repository for all data, studies and reports related to the accident at the Fukushima Daiichi nuclear power station. The data are stored in a secured (password-protected and encrypted) repository that is searchable and accessible to researchers at diverse locations.

  5. A predictive model for biomimetic plate type broadband frequency sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz U.; Banerjee, Sourav

    2016-04-01

    In this work, a predictive model for a bio-inspired broadband frequency sensor is developed. Broadband frequency sensing is essential in many domains of science and technology. A prime example of such a sensor is the human cochlea, which senses a frequency band of 20 Hz to 20 kHz. Developing broadband sensors that adopt the physics of the human cochlea has attracted tremendous interest in recent years. Although a few experimental studies have been reported, a true predictive model for designing such sensors is missing. A predictive model is essential for the accurate design of selective broadband sensors capable of sensing a narrow, chosen band of frequencies. Hence, in this study, we propose a novel predictive model for the cochlea-inspired broadband sensor, aiming to select the frequency band and model parameters predictively. A tapered plate geometry is considered, mimicking the real shape of the basilar membrane in the human cochlea. The predictive model is designed to be flexible enough to be employed in a wide variety of scientific domains. To that end, it can handle not only homogeneous but also arbitrary functionally graded model parameters, and it can manage various types of boundary conditions. It has been found that, using homogeneous model parameters, it is possible to sense a specific frequency band with a specific portion (B) of the model length (L). It is also possible to alter the attributes of 'B' using functionally graded model parameters, which confirms the predictive frequency selection ability of the developed model.
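
    As a rough illustration of how a taper produces a frequency-to-place map, each transverse strip of the plate can be treated as a clamped-free beam whose first natural frequency sets the locally sensed frequency. This is a simplified stand-in for the paper's model, with hypothetical material properties and dimensions:

    ```python
    import numpy as np

    # Local first natural frequency of transverse strips of a tapered plate,
    # using the clamped-free Euler-Bernoulli beam formula with a plate
    # stiffness correction. All values are illustrative assumptions.
    E, rho, nu = 70e9, 2700.0, 0.33          # aluminium-like plate
    h = 50e-6                                # plate thickness, m
    L = 30e-3                                # plate length, m
    x = np.linspace(0.0, L, 6)
    width = 1e-3 + (8e-3 - 1e-3) * x / L     # taper: 1 mm -> 8 mm strips

    lam = 1.875                              # first clamped-free mode constant
    f = (lam**2 / (2 * np.pi)) * (h / width**2) * np.sqrt(E / (12 * rho * (1 - nu**2)))
    for xi, fi in zip(x, f):
        print(f"x = {xi*1e3:4.1f} mm -> f ~ {fi/1e3:7.1f} kHz")
    ```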

  6. Econometric models for predicting confusion crop ratios

    NASA Technical Reports Server (NTRS)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  7. Impact of modellers' decisions on hydrological a priori predictions

    NASA Astrophysics Data System (ADS)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how their predictions improved over three steps, with information added prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models, and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction, the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction, they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction, they were offered additional data, charged pro forma at the cost of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  8. Repository preclosure accident scenarios

    SciTech Connect

    Yook, H.R.; Arbital, J.G.; Keeton, J.M.; Mosier, J.E.; Weaver, B.S.

    1984-09-01

    Waste-handling operations at a spent-fuel repository were investigated to identify operational accidents that could occur. The facility was subdivided, through systems engineering procedures, into individual operations that involve the waste and one specific component of the waste package, in one specific area of the handling facility. From this subdivision approximately 600 potential accidents involving waste package components were identified and then discussed. Supporting descriptive data included for each accident scenario are distance of drop, speed of collision, weight of package component, and weight of equipment involved. The energy of impact associated with each potential accident is calculated to provide a basis for comparison of the relative severities of all the accidents. The results and conclusions suggest approaches to accident consequence mitigation through waste package and facility design. 35 figures, 9 tables.

  9. Fukushima nuclear power plant accident was preventable

    NASA Astrophysics Data System (ADS)

    Kanoglu, Utku; Synolakis, Costas

    2015-04-01

    One, insufficient attention was paid to evidence of large tsunamis inundating the region, i.e., the AD 869 Jogan and 1677 Empo Boso-oki tsunamis, and the 1896 Sanriku tsunami, whose maximum runup in eastern Japan reached 38 m. Two, the design safety conditions differed among the Onagawa, Fukushima, and Tokai NPPs. It is inconceivable to have had different earthquake scenarios for NPPs at such close distances from each other. Three, studying the sub-standard TEPCO analysis performed only months before the accident shows that it is not the accuracy of numerical computations or the veracity of the computational model that doomed the NPP, but the lack of familiarity with the context of numerical predictions. Inundation projections, even if correct for one particular scenario, always need to be put in the context of similar studies and events elsewhere. To put it in colloquial terms, following a recipe from a great cookbook and having great cookware does not always result in great food, if the cook is an amateur. The Fukushima accident was preventable. Had the plant's owner TEPCO and the regulator NISA followed international best practices and standards, they would have predicted the possibility of the plant being struck by a tsunami of the size that materialized in 2011. If the EDGs had been relocated inland or higher, there would have been no loss of power. A clear chance to reduce the impact of the tsunami at Fukushima was lost after the 2010 Chilean tsunami. Standards are needed not only for evaluating the vulnerability of NPPs to tsunami attack, but also for evaluating the competence of modelers and evaluators. Acknowledgment: This work is partially supported by the project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe) FP7-ENV2013 6.4-3, Grant 603839 to the Technical University of Crete and the Middle East Technical University.

  10. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the times and slips of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
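
    The competing models reduce to simple recurrence rules, so their event-to-event predictions can be compared directly. The sketch below contrasts them on a synthetic repeating-earthquake catalog; all numbers are illustrative assumptions, not data from the study.

      import numpy as np

      # Toy repeating-earthquake catalog: inter-event times T[i] (years) and
      # slips S[i] (cm); values are synthetic, for illustration only.
      T = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3])
      S = np.array([2.0, 1.9, 2.2, 2.0, 1.9, 2.1])
      v = np.mean(S) / np.mean(T)          # long-term loading (slip) rate, cm/yr

      # Time-predictable: the next interval is the time needed to reload the
      # slip released in the previous event, T_hat[i] = S[i-1] / v.
      t_pred = S[:-1] / v
      # Slip-predictable: the next slip equals the slip accumulated over the
      # preceding interval, S_hat[i] = v * T[i-1].
      s_pred = v * T[:-1]
      # Fixed-recurrence / fixed-slip: predict the running mean of past values.
      t_fixed = np.array([T[:i + 1].mean() for i in range(len(T) - 1)])
      s_fixed = np.array([S[:i + 1].mean() for i in range(len(S) - 1)])

      def rmse(obs, pred):
          return np.sqrt(np.mean((obs - pred) ** 2))

      print("interval RMSE, time-predictable:", rmse(T[1:], t_pred))
      print("interval RMSE, fixed recurrence:", rmse(T[1:], t_fixed))
      print("slip RMSE,     slip-predictable:", rmse(S[1:], s_pred))
      print("slip RMSE,     fixed slip      :", rmse(S[1:], s_fixed))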

  11. DIANA: A multi-phase, multi-component hydrodynamic model for the analysis of severe accidents in heavy water reactors with multiple-tube assemblies

    SciTech Connect

    Tentner, A.M.

    1994-03-01

    A detailed hydrodynamic fuel relocation model has been developed for the analysis of severe accidents in heavy water reactors with multiple-tube assemblies. This model describes the Fuel Disruption and Relocation inside a nuclear fuel assembly and is designated by the acronym DIANA. DIANA solves the transient hydrodynamic equations for all the moving materials in the core and treats all the relevant flow regimes. The numerical solution techniques and some of the physical models included in DIANA have been developed taking advantage of the extensive experience accumulated in the development and validation of the LEVITATE [1] fuel relocation model of SAS4A [2, 3]. The model is designed to handle the fuel and cladding relocation in both voided and partially voided channels. It is able to treat a wide range of thermal/hydraulic/neutronic conditions and the presence of various flow regimes at different axial locations within the same hydrodynamic channel.

  12. Laser accidents: Being Prepared

    SciTech Connect

    Barat, K

    2003-01-24

    The goal of the Laser Safety Officer and any laser safety program is to prevent laser accidents from occurring, in particular injuries to a person's eyes. Most laser safety courses discuss laser accidents, their causes, and the types of injury. The purpose of this presentation is to present a plan for safety officers and users to follow in the event of an accident or injury from laser radiation.

  13. Models Predicting Success of Infertility Treatment: A Systematic Review

    PubMed Central

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples face problems that affect their marital life. Infertility treatment is expensive and time consuming, and is occasionally simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to develop a general picture of the applicability of these models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered the years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be applied clinically if it can be statistically evaluated and has good validation for treatment success. To achieve better results, the physician's and the couple's estimates of the treatment success rate should be based on history, examination, and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are reduced cost and time, avoidance of painful treatment for patients, assessment of treatment approaches for physicians, and support for decision making by health managers. Careful selection of the approach for designing and using these models is therefore essential. PMID:27141461
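
    As a concrete illustration of how such a success-prediction model is built and validated, the sketch below fits a logistic regression on synthetic data using predictors the review found most common (age, duration of infertility, tubal problems) and reports a validation AUC. The data and coefficients are invented for illustration; no model from the reviewed papers is reproduced.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 1000
      # Hypothetical predictors, named after those the review found most common.
      age = rng.uniform(20, 45, n)
      duration = rng.uniform(1, 15, n)          # years of infertility
      tubal = rng.integers(0, 2, n)             # tubal-problem indicator
      logit = 4.0 - 0.12 * age - 0.10 * duration - 0.8 * tubal
      success = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([age, duration, tubal])
      X_tr, X_te, y_tr, y_te = train_test_split(X, success, random_state=0)
      model = LogisticRegression().fit(X_tr, y_tr)
      p = model.predict_proba(X_te)[:, 1]
      print("validation AUC:", round(roc_auc_score(y_te, p), 3))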

  14. The regional prediction model of PM10 concentrations for Turkey

    NASA Astrophysics Data System (ADS)

    Güler, Nevin; Güneri İşçi, Öznur

    2016-11-01

    This study aims to develop a regional model for weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Conventionally fitting a model for each monitoring station requires considerable labor and time, and it may degrade prediction quality when the number of measurements obtained from a monitoring station is small. Moreover, prediction models obtained in this way only reflect the air pollutant behavior of a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The advantage of FCARM is its ability to simultaneously consider PM10 concentrations measured at the monitoring stations in a specified region; it also works even if the numbers of measurements obtained from the monitoring stations are different or small. To evaluate the performance of FCARM, it is executed for all regions in Turkey, and the prediction results are compared to statistical autoregressive (AR) models fitted for each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, FCARM provides better predictions with a smaller number of models.
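
    The comparison hinges on the MAPE criterion and on fitting one regional model instead of one AR model per station. The sketch below implements MAPE and a least-squares AR(1) baseline on synthetic weekly data for three hypothetical stations; the pooled fit stands in for the regional idea only and is not the FCARM algorithm.

      import numpy as np

      def mape(y, yhat):
          """Mean Absolute Percentage Error, in percent."""
          return 100.0 * np.mean(np.abs((y - yhat) / y))

      def fit_ar1(x_prev, x_next):
          """Least-squares AR(1) fit: x[t] = a * x[t-1] + b."""
          A = np.column_stack([x_prev, np.ones_like(x_prev)])
          (a, b), *_ = np.linalg.lstsq(A, x_next, rcond=None)
          return a, b

      rng = np.random.default_rng(1)
      # Three hypothetical stations in one region sharing a common weekly signal.
      common = 60 + 15 * np.sin(np.arange(52) / 4.0)
      stations = [common + rng.normal(0, 5, 52) for _ in range(3)]

      # Per-station AR(1) baselines (one model per station) ...
      for i, s in enumerate(stations):
          a, b = fit_ar1(s[:-1], s[1:])
          print(f"station {i}, own model : MAPE = {mape(s[1:], a*s[:-1]+b):.1f}%")

      # ... versus a single pooled model fitted on all stations at once.
      xp = np.concatenate([s[:-1] for s in stations])
      xn = np.concatenate([s[1:] for s in stations])
      a, b = fit_ar1(xp, xn)
      for i, s in enumerate(stations):
          print(f"station {i}, pooled    : MAPE = {mape(s[1:], a*s[:-1]+b):.1f}%")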

  15. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge.
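
    The ensemble idea can be illustrated with a minimal stacking sketch: two whole-genome predictors (one dense, one sparse) are trained, and a meta-model regresses the phenotype on their predictions. All data here are simulated and the component models are generic sklearn estimators, not the paper's pipeline; fitting and scoring the meta-model on the same validation split, as done here for brevity, is optimistic.

      import numpy as np
      from sklearn.linear_model import Ridge, Lasso, LinearRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n, p = 600, 200
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:20] = rng.normal(0, 0.5, 20)            # sparse-ish trait architecture
      y = X @ beta + rng.normal(0, 1, n)

      X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
      dense = Ridge(alpha=10.0).fit(X_tr, y_tr)     # dense whole-genome predictor
      sparse = Lasso(alpha=0.05).fit(X_tr, y_tr)    # sparse predictor
      preds = np.column_stack([dense.predict(X_val), sparse.predict(X_val)])

      # Meta-model: regress the phenotype on the component predictions.
      meta = LinearRegression().fit(preds, y_val)
      print("meta-model weights:", meta.coef_)
      print("meta R^2 on validation:", round(meta.score(preds, y_val), 3))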

  16. Gaussian mixture models as flux prediction method for central receivers

    NASA Astrophysics Data System (ADS)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
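
    A minimal version of the fitting step can be written with a standard GMM implementation: draw samples from a two-lobed "flux map" that a single circular or elliptical Gaussian cannot capture, fit a two-component mixture, and evaluate the fitted density on a receiver grid. The data are synthetic and the component count is an assumption.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic flux-map samples: two overlapping lobes, a shape a single
      # circular or elliptical Gaussian cannot capture.
      lobe1 = rng.normal([0.0, 0.0], [0.3, 0.2], size=(4000, 2))
      lobe2 = rng.normal([0.5, 0.1], [0.2, 0.3], size=(2000, 2))
      xy = np.vstack([lobe1, lobe2])

      gmm = GaussianMixture(n_components=2, covariance_type='full').fit(xy)

      # Predicted (normalised) flux density on a receiver grid.
      gx, gy = np.meshgrid(np.linspace(-1, 1.5, 50), np.linspace(-1, 1, 40))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      density = np.exp(gmm.score_samples(grid)).reshape(gx.shape)
      print("component weights:", gmm.weights_.round(3))
      print("peak predicted density:", density.max().round(3))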

  17. [Accidents with the "paraglider"].

    PubMed

    Lang, T H; Dengg, C; Gabl, M

    1988-09-01

    Based on a collective of 46 patients, we describe the circumstances and types of accidents caused by paragliding. The basis for the case analysis was a questionnaire answered by most of the injured persons, with questions about their theoretical and practical training, the course of the flight during its different phases, and their subjective view of how the accident unfolded. The injury patterns showed a high incidence of injuries to the spinal column and a high risk to the ankles. Finally, we give some advice on how to prevent these accidents.

  18. Accident mortality among children

    PubMed Central

    Swaroop, S.; Albrecht, R. M.; Grab, B.

    1956-01-01

    The authors present statistics on mortality from accidents, with special reference to those relating to the age-group 1-19 years. For a number of countries figures are given for the proportional mortality from accidents (the number of accident deaths expressed as a percentage of the number of deaths from all causes) and for the specific death-rates, per 100 000 population, from all causes of death, from selected causes, from all causes of accidents, and from various types of accident. From these figures it appears that, in most countries, accidents are becoming relatively increasingly prominent as a cause of death in childhood, primarily because of the conquest of other causes of death—such as infectious and parasitic diseases, which formerly took a heavy toll of children and adolescents—but also to some extent because the death-rate from motor-vehicle accidents is rising and cancelling out the reduction in the rate for other causes of accidental death. In the authors' opinion, further epidemiological investigations into accident causation are required for the purpose of devising quicker and more effective methods of accident prevention. PMID:13383361

  19. Testing prediction capabilities of an 131I terrestrial transport model by using measurements collected at the Hanford nuclear facility.

    PubMed

    Apostoaei, A Iulian

    2005-05-01

    A model describing transport of 131I in the environment was developed by SENES Oak Ridge, Inc., for assessment of radiation doses and excess lifetime risk from 131I atmospheric releases from Oak Ridge Reservation in Oak Ridge, Tennessee, and from Idaho National Engineering and Environmental Laboratory in southeast Idaho. This paper describes the results of an exercise designed to test the reliability of this model and to identify the main sources of uncertainty in doses and risks estimated by this model. The testing of the model was based on materials published by the International Atomic Energy Agency BIOMASS program, specifically environmental data collected after the release into the atmosphere of 63 curies of 131I during 2-5 September 1963, after an accident at the Hanford PUREX Chemical Separations Plant in Hanford, Washington. Measurements of activity in air, vegetation, and milk were collected in nine counties around Hanford during the first couple of months after the accident. The activity of 131I in the thyroid glands of two children was measured 47 d after the accident. The model developed by SENES Oak Ridge, Inc., was used to estimate concentrations of 131I in environmental media, thyroid doses for the general population, and the activity of 131I in the thyroid glands of the two children. Predicted concentrations of 131I in pasture grass and milk and thyroid doses were compared with similar estimates produced by other modelers. The SENES model was also used to estimate the excess lifetime risk of thyroid cancer due to the September 1963 releases of 131I from Hanford. The SENES model was first calibrated and then applied to all locations of interest around Hanford without fitting the model parameters to a given location. Predictions showed that the SENES model reproduces satisfactorily the time-dependent and the time-integrated measured concentrations in vegetation and milk, and provides reliable estimates of 131I activity in thyroids of children. SENES model
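
    Models of this type typically chain first-order compartments from deposition on pasture to milk. The sketch below integrates such a toy grass-cow-milk chain for a single deposition event with a forward-Euler loop; every rate constant and the transfer coefficient are generic, textbook-scale assumptions, not the calibrated SENES parameters.

      import numpy as np

      # Minimal first-order pasture-cow-milk chain for one 131I deposition event.
      lam_rad = np.log(2) / 8.02          # 131I radioactive decay, 1/day
      lam_weather = np.log(2) / 10.0      # weathering loss from grass, 1/day
      intake = 45.0                       # cow's pasture intake, kg/day
      f_milk = 0.01                       # milk transfer coefficient, day/L
      lam_milk = np.log(2) / 1.0          # effective turnover in the cow, 1/day

      dt, days = 0.1, 60
      t = np.arange(0, days, dt)
      grass = np.zeros_like(t)
      milk = np.zeros_like(t)
      grass[0] = 100.0                    # initial deposit on grass, Bq/kg

      for i in range(1, len(t)):
          dg = -(lam_rad + lam_weather) * grass[i - 1]
          dm = f_milk * intake * grass[i - 1] - (lam_rad + lam_milk) * milk[i - 1]
          grass[i] = grass[i - 1] + dg * dt
          milk[i] = milk[i - 1] + dm * dt

      print("peak milk concentration (Bq/L): %.1f at day %.1f"
            % (milk.max(), t[milk.argmax()]))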

  20. Super-Micro Computer Weather Prediction Model

    DTIC Science & Technology

    1990-06-01

    [Table-of-contents fragments from the indexed report:] a. Model equations; b. Grid domain and horizontal nesting; c. Time integration and outer lateral boundary condition; d. Coupling of the model with the ...; Eddy diffusion sensitivity tests; 4. Domain for prototype testing; 5. Comparison of the boundary-layer parameterizations with the ... [From the report body:] ... including radiation calculations, with other boundary layer work will be presented in section 5, and the report concludes with section 6.

  1. Prediction and Prescription in Systems Modeling

    DTIC Science & Technology

    1988-06-30

    [Subject terms from the indexed report form:] modelling complex systems; non... [Abstract fragments:] ... exponentially increasing forcing functions, population and energy use among them. Now one does not have to run such a model very many hours on a large... function of the superposition of these estimated values. In the modeling, some effort was exerted, quite creditably, to examine the robustness of the

  2. Children's and Adults' Models for Predicting Teleological Action: The Development of a Biology-Based Model.

    ERIC Educational Resources Information Center

    Opfer, John E.; Gelman, Susan A.

    2001-01-01

    Two studies examined models that preschoolers, fifth-graders, and adults use to guide predictions of self-beneficial, goal-directed action. Found that preschoolers' predictions were consistent with an animal-based model, fifth-graders' with biology-based and complexity-based models, and adults' predictions with a biology-based model. All age…

  3. Report to the American Physical Society of the Study Group on Radionuclide Release From Severe Accidents at Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Shaw, George

    The release of radioiodine during the Three Mile Island (TMI) accident was more than an order of magnitude smaller than what had been predicted from analyses of hypothetical nuclear accidents. The Reactor Safety Study of 1975 (RSS), which carried out the analyses, is a fundamental factor in formulating regulations concerned with such accidents. This American Physical Society (APS) study group report is a result of the obvious need to reevaluate the RSS analysis of the “source term,” that is, the amount of various radionuclides that are predicted to be emitted under various reactor failure scenarios. The report includes an introductory background to the history of nuclear reactor accidents and accident studies and to the health aspects of radionuclide releases. It then describes nuclear reactors and reactor failure modes, including reasonably detailed descriptions of particular modes thought to be especially critical. The most extensive discussion concerns the chemical and physical processes important in the generation, transport, and release of radionuclides. The large computer codes used to model these processes are considered and evaluated. The results of some of the computer runs are examined in the light of a simplified but informative model to evaluate those features of an accident that are most likely to affect the source term. A review of the research programs currently underway precedes both the study group conclusions about the need to revise the source terms from those in the RSS and recommendations for further studies that are necessary to better evaluate the source term.

  4. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    NASA Astrophysics Data System (ADS)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  5. Heat up and potential failure of BWR upper internals during a severe accident

    SciTech Connect

    Robb, Kevin R

    2015-01-01

    In boiling water reactors, the steam dome, steam separators, and dryers above the core comprise approximately 100 tons of stainless steel. During a severe accident in which the coolant boils away and exothermic oxidation of zirconium occurs, gases (steam and hydrogen) are superheated in the core region and pass through the upper internals. Historically, the upper internals have been modeled using severe accident codes with relatively simple approximations. The upper internals are typically modeled in MELCOR as two lumped volumes with simplified heat transfer characteristics, with no structural integrity considerations, and with limited ability to oxidize, melt, and relocate. The potential for, and the subsequent impact of, the upper internals to heat up, oxidize, fail, and relocate during a severe accident was investigated. A higher fidelity representation of the shroud dome, steam separators, and steam dryers was developed in MELCOR v1.8.6 by extending the core region upwards. This modeling effort entailed adding 45 additional core cells and control volumes, 98 flow paths, and numerous control functions. The model accounts for the mechanical loading and structural integrity, oxidation, melting, flow area blockage, and relocation of the various components. The results indicate that the upper internals can reach high temperatures during a severe accident; they are predicted to reach a high enough temperature that they lose their structural integrity and relocate. The additional 100 tons of stainless steel debris influences the subsequent in-vessel and ex-vessel accident progression.

  6. Posterior Predictive Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.; Stern, Hal S.

    2006-01-01

    Model checking in item response theory (IRT) is an underdeveloped area. There is no universally accepted tool for checking IRT models. The posterior predictive model-checking method is a popular Bayesian model-checking tool because it has intuitive appeal, is simple to apply, has a strong theoretical basis, and can provide graphical or numerical…

  7. Posterior Predictive Model Checking for Multidimensionality in Item Response Theory

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip

    2009-01-01

    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted for dimensions in the context of item response theory. Factors…

  8. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.; Manning, S. L.; Ortiz, M.; Sheffler, K. D.

    1987-01-01

    The objectives of this program are to increase understanding of thermal barrier coating (TBC) degradation and failure modes, to generate quantitative ceramic failure life data under cyclic thermal conditions which simulate those encountered in gas turbine engine service, and to develop an analytical methodology for prediction of coating life in the engine. Observations of degradation and failure modes in plasma deposited ceramic indicate that spallation failure results from progressive cracking of the ceramic parallel and adjacent to, but not coincident with, the metal-ceramic interface.

  9. Accident/Mishap Investigation System

    NASA Technical Reports Server (NTRS)

    Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian

    2007-01-01

    InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.

  10. Climate predictability and prediction skill on seasonal time scales over South America from CHFP models

    NASA Astrophysics Data System (ADS)

    Osman, Marisol; Vera, C. S.

    2016-11-01

    This work presents an assessment of the predictability and skill of climate anomalies over South America. The study was made considering a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation, from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio, while prediction skill was assessed by computing anomaly correlation coefficients. Over the continent, both indicators present higher values in the tropics than in the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA, while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, while modest but still significant values appear for extratropical precipitation at southeastern South America and the extratropical Andes. The predictability levels in ENSO years for both variables are slightly higher, although with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability at the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to that in all years. The latter can be attributed to changes in signal rather than in noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where maximum mean values are observed, i.e. the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds are located where the subtropical jet peaks. Seasonal changes in wind predictability are observed that seem to be related to
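
    The signal-to-total variance ratio has a direct estimator from a hindcast ensemble: the variance of the ensemble mean across years (signal) divided by signal plus mean within-ensemble variance (noise). A minimal sketch on synthetic hindcasts, with invented ensemble sizes and variances:

      import numpy as np

      def signal_to_total(fcst):
          """fcst: array [years, members] of seasonal forecast anomalies.
          Signal = variance of the ensemble mean across years;
          total  = signal + mean intra-ensemble (noise) variance."""
          signal = np.var(fcst.mean(axis=1), ddof=1)
          noise = np.mean(np.var(fcst, axis=1, ddof=1))
          return signal / (signal + noise)

      rng = np.random.default_rng(0)
      years, members = 30, 10
      forced = rng.normal(0, 1.0, size=(years, 1))    # common (predictable) part
      fcst = forced + rng.normal(0, 0.7, size=(years, members))
      print("signal-to-total variance ratio:", round(signal_to_total(fcst), 2))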

  11. Assessing Predicted Contacts for Building Protein Three-Dimensional Models.

    PubMed

    Adhikari, Badri; Bhattacharya, Debswapna; Cao, Renzhi; Cheng, Jianlin

    2017-01-01

    Recent successes of contact-guided protein structure prediction methods have revived interest in solving the long-standing problem of ab initio protein structure prediction. With homology modeling failing for many protein sequences that do not have templates, contact-guided structure prediction has shown promise, and consequently, contact prediction has gained a lot of interest recently. Although a few dozen contact prediction tools are already currently available as web servers and downloadables, not enough research has been done towards using existing measures like precision and recall to evaluate these contacts with the goal of building three-dimensional models. Moreover, when we do not have a native structure for a set of predicted contacts, the only analysis we can perform is a simple contact map visualization of the predicted contacts. A wider and more rigorous assessment of the predicted contacts is needed, in order to build tertiary structure models. This chapter discusses instructions and protocols for using tools and applying techniques in order to assess predicted contacts for building three-dimensional models.

  12. Multidimensional models for predicting muscle structure and fascicle pennation.

    PubMed

    Randhawa, Avleen; Wakeling, James M

    2015-10-07

    Pennation angles change during muscle contraction and must be tracked by muscle models. When muscles contract they can change in depth (distance between the bounding sheets of aponeurosis) or width, and this is related to pennation angle and muscle fascicle length. As a simplification to these relationships, many models of pennate muscle assume a constant distance between aponeuroses during contraction (constant depth). It is possible that these 1D models do not recreate the internal structure of muscles adequately, whereas 2D panel models that assume a constant panel area, or 3D models that assume a constant muscle volume may better predict the structural changes that occur within muscle during contraction. However, these ideas have never been validated in man. The purpose of this study was to test the accuracy with which 1D, 2D or 3D structural models of muscle could predict the pennation and muscle depth within the medial gastrocnemius (MG) and lateral gastrocnemius (LG) in man during ankle plantarflexions. The 1D model, by definition, was unable to account for changes in muscle depth. The 2D model predicted change in depth as the aponeurosis was loaded, but could only allow a decrease in depth as the aponeurosis is stretched. This was not sufficient to predict the increases in depth that occur in the LG during plantarflexion. The 3D model had the ability to predict either increases or decreases in depth during the ankle plantarflexions and predicted opposing changes in depth that occurred between the MG and LG, whilst simultaneously predicting the pennation more accurately than the 1D or 2D models. However, when using mean parameters, the 3D model performed no better than the more simple 1D model, and so if the intent of a model is purely to establish a good relation between fascicle length and pennation then the 1D model is a suitable choice for these muscles.

  13. Predicting Error Bars for QSAR Models

    SciTech Connect

    Schroeter, Timon; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the most recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.

  14. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the most recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
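
    Gaussian Process regression yields a predictive standard deviation alongside each prediction, which is the error-bar mechanism these two records refer to. A minimal sketch on a synthetic one-dimensional descriptor (not the Bayer Schering Pharma data or descriptors):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      # Hypothetical 1-D descriptor vs. a logD-like response; toy data only.
      X = rng.uniform(-3, 3, size=(40, 1))
      y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 40)

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

      X_new = np.linspace(-4, 4, 9).reshape(-1, 1)
      mean, std = gpr.predict(X_new, return_std=True)   # per-compound error bars
      for x, m, s in zip(X_new[:, 0], mean, std):
          print(f"x={x:+.1f}  prediction={m:+.2f} +/- {s:.2f}")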

  15. Considerations on the ICRP model predictions of the transfer of (137)Cs from food to the milk and urine of lactating mothers.

    PubMed

    Giussani, Augusto; Risica, Serena

    2014-06-01

    A recent work has shown that the current ICRP biokinetic model for the transfer of caesium radionuclides from food to human breast milk was able to describe with satisfactory accuracy the (137)Cs activity concentrations in breast milk samples collected a few weeks after the Chernobyl accident, as well as in samples collected some years later. However, systematic discrepancies were observed for the predictions of the activity concentrations in urine samples. In the present work, modifications to the model were investigated with the aim of improving the agreement between model predictions and data. It turned out that the disagreement for the urine data was ascribable to the mathematical simplifications used by the ICRP to describe urinary excretion in the first few days after delivery. However, the predictive performance of the model remained unchanged even when differences in the bioavailability of caesium from the ingested food types were considered or metabolic interactions between caesium and potassium were introduced into the model formulation.

  16. A predictive ocean oil spill model

    SciTech Connect

    Sanderson, J.; Barnette, D.; Papodopoulos, P.; Schaudt, K.; Szabo, D.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Initially, the project focused on creating an ocean oil spill model and working with the major oil companies to compare their data with the Los Alamos global ocean model. As a result of this initial effort, Los Alamos worked closely with the Eddy Joint Industry Project (EJIP), a consortium of oil and gas producing companies in the US. The central theme of the project was to use output produced from LANL's global ocean model to look in detail at ocean currents in selected geographic areas of the world of interest to consortium members. Once ocean currents are well understood, this information could be used to create oil spill models, improve offshore exploration and drilling equipment, and aid in the design of semi-permanent offshore production platforms.

  17. Cyclic Oxidation Modeling and Life Prediction

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2004-01-01

    The cyclic oxidation process can be described as an iterative scale growth and spallation sequence by a number of similar models. Model input variables include oxide scale type and growth parameters, spalling geometry, spall constant, and cycle duration. Outputs include net weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. All models and their variations produce a number of similar characteristic features. In general, spalling and material consumption increase to a steady state rate, at which point the retained scale approaches a constant and the rate of weight loss becomes linear. For one model, this regularity was demonstrated as dimensionless, universal expressions, obtained by normalizing the variables by critical performance factors. These insights were enabled through the use of the COSP for Windows cyclic oxidation spalling program.
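
    The iterative growth/spall sequence is easy to state in code. The loop below is a generic sketch of that scheme, not the COSP for Windows implementation; the rate constant, spall fraction, and oxide stoichiometry are illustrative assumptions.

      import numpy as np

      # Each cycle: the retained scale grows parabolically during the hot time,
      # then a constant fraction Q of the retained oxide spalls.
      kp = 0.01      # parabolic rate constant, (mg/cm^2)^2 per hour (assumed)
      dt = 1.0       # hot time per cycle, hours
      Q = 0.02       # spall fraction per cycle (assumed)
      so = 0.47      # oxygen mass fraction of the oxide (~48/102 for Al2O3)

      Wr, Ws = 0.0, 0.0                  # retained and cumulative spalled oxide
      net = []
      for cycle in range(500):
          Wr = np.sqrt(Wr**2 + kp * dt)  # parabolic growth of retained scale
          spall = Q * Wr
          Wr -= spall
          Ws += spall
          # Net specimen weight change = oxygen in retained scale
          #                              - metal lost in spalled oxide.
          net.append(so * Wr - (1 - so) * Ws)

      print("retained oxide approaches: %.3f mg/cm^2" % Wr)
      print("terminal rate of weight loss: %.4f mg/cm^2 per cycle"
            % (net[-2] - net[-1]))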

  18. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
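
    For reference, the parameterized runup model referred to here is commonly written in the form of Stockdon et al. (2006), combining a setup term and incident- plus infragravity-band swash terms; the coefficients below are quoted from that literature as an assumption, since the abstract does not reproduce them.

      import numpy as np

      def runup_2pct(H0, T, beta):
          """2% exceedance runup (m) from the Stockdon et al. (2006)
          parameterization. H0: deep-water wave height (m), T: peak wave
          period (s), beta: foreshore beach slope."""
          L0 = 9.81 * T**2 / (2 * np.pi)        # deep-water wavelength
          setup = 0.35 * beta * np.sqrt(H0 * L0)
          # Combined incident + infragravity swash (0.563 = 0.75^2, 0.004 = 0.06^2).
          swash = np.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004))
          return 1.1 * (setup + swash / 2)

      # Example: storm waves H0 = 4 m, T = 12 s on a beach of slope 0.08.
      print("R2%% = %.2f m" % runup_2pct(4.0, 12.0, 0.08))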

  19. Aggregate driver model to enable predictable behaviour

    NASA Astrophysics Data System (ADS)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

    The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensities of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone based application for data collection by a group of drivers. Data were primarily collected from GPS measurements, including position and speed on a second-by-second basis, for a number of trips over a two-month period. Analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher order moments of the acceleration profile are an important parameter and identifier of journey quality. It is also observed that the kurtosis of the acceleration profile carries major information about driving style. Such an observation leads to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
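
    The kurtosis-based ranking can be reproduced in a few lines: derive acceleration from second-by-second speed and compare its excess kurtosis between a smooth and a spiky profile. The speed traces below are synthetic assumptions, not the paper's GPS data.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(0)
      t = np.arange(0, 600.0)                    # 10 minutes of 1 Hz GPS speed, s
      smooth = 14 + 2 * np.sin(t / 60)           # calm driver, m/s
      jerky = smooth + np.where(rng.random(t.size) < 0.02,
                                rng.normal(0, 3, t.size), 0)

      for name, speed in [("calm", smooth), ("aggressive", jerky)]:
          accel = np.gradient(speed, t)          # second-by-second acceleration
          # Excess kurtosis: heavy-tailed acceleration = rare, harsh events.
          print(f"{name}: accel kurtosis = {kurtosis(accel):.1f}")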

  20. Validating predictions from climate envelope models

    USGS Publications Warehouse

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  1. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Strangman, T. E.; Neumann, J. F.; Liu, A.

    1986-01-01

    Thermal barrier coatings (TBCs) for turbine airfoils in high-performance engines represent an advanced materials technology with both performance and durability benefits. The foremost TBC benefit is the reduction of heat transferred into air-cooled components. This program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant TBC systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or an argon shrouded plasma-spray (ASPS) applied oxidation resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria (8 percent) partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal.

  2. MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping

    2016-09-01

    This study evaluates the performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill shows a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence, with higher skill for predictions initiated from phases 2-4. The MJO predictability analysis shows that the upper bound of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibits a dependence on initial amplitude and phase. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied by the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convective forcing. Overall, BCC_AGCM2.2 displays a significant ability to predict the MJO and its teleconnections without interacting with the ocean, providing a useful tool for extracting the sources of subseasonal predictability.
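
    The skill measure used here, the bivariate anomaly correlation of the two RMM indices (with the 0.5 threshold) and the corresponding bivariate RMSE, is straightforward to compute. A sketch of the standard definitions, applied to invented index series:

      import numpy as np

      def bivariate_acc(f1, f2, o1, o2):
          """Bivariate anomaly correlation of forecast (f1, f2) against
          observed (o1, o2) RMM indices."""
          num = np.sum(f1 * o1 + f2 * o2)
          den = np.sqrt(np.sum(f1**2 + f2**2)) * np.sqrt(np.sum(o1**2 + o2**2))
          return num / den

      def bivariate_rmse(f1, f2, o1, o2):
          return np.sqrt(np.mean((f1 - o1)**2 + (f2 - o2)**2))

      rng = np.random.default_rng(0)
      o1, o2 = rng.normal(size=100), rng.normal(size=100)   # "observed" RMM1/2
      f1 = o1 + rng.normal(0, 0.6, 100)                     # imperfect forecast
      f2 = o2 + rng.normal(0, 0.6, 100)
      print("ACC  =", round(bivariate_acc(f1, f2, o1, o2), 2))
      print("RMSE =", round(bivariate_rmse(f1, f2, o1, o2), 2))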

  3. Predictions of Cockpit Simulator Experimental Outcome Using System Models

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1984-01-01

    This study involved predicting the outcome of a cockpit simulator experiment where pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Outcome included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.

  4. Multikernel linear mixed models for complex phenotype prediction

    PubMed Central

    Weissbrod, Omer; Geiger, Dan; Rosset, Saharon

    2016-01-01

    Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
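
    The core idea, a mixed-model/kernel-ridge predictor whose kernel is a weighted sum of kernels capturing additive and local-interaction effects, can be sketched with a precomputed combined kernel. The genotypes here are simulated, and the kernel weights are fixed by hand rather than inferred, unlike MKLMM-Adapt.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

      rng = np.random.default_rng(0)
      n, p = 300, 50
      G = rng.integers(0, 3, size=(n, p)).astype(float)   # toy genotype dosages
      # Phenotype: additive effects plus one pairwise interaction.
      y = G[:, :5].sum(axis=1) + 0.5 * G[:, 0] * G[:, 1] + rng.normal(0, 1, n)

      tr, te = np.arange(200), np.arange(200, 300)
      # Weighted sum of an additive (linear) kernel and an RBF kernel that can
      # absorb local interactions; weights are fixed assumptions here.
      w_lin, w_rbf = 0.7, 0.3
      K = w_lin * linear_kernel(G) + w_rbf * rbf_kernel(G, gamma=0.01)

      model = KernelRidge(alpha=1.0, kernel='precomputed')
      model.fit(K[np.ix_(tr, tr)], y[tr])
      pred = model.predict(K[np.ix_(te, tr)])
      print("cor(pred, obs) =", round(np.corrcoef(pred, y[te])[0, 1], 2))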

  5. Product component genealogy modeling and field-failure prediction

    SciTech Connect

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  6. Prediction of High-Lift Flows using Turbulent Closure Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild

    1997-01-01

    The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.

  7. Noncausal spatial prediction filtering based on an ARMA model

    NASA Astrophysics Data System (ADS)

    Liu, Zhipeng; Chen, Xiaohong; Li, Jingye

    2009-06-01

    Conventional f-x prediction filtering methods are based on an autoregressive model. The error section is first computed as source noise but is removed as additive noise to obtain the signal, which results in an assumption inconsistency before and after filtering. In this paper, an autoregressive moving-average (ARMA) model is employed to avoid this model inconsistency. Based on the ARMA model, a noncausal prediction filter is computed and a self-deconvolved projection filter is used for estimating the additive noise in order to suppress random noise. The 1-D ARMA model is also extended to the 2-D spatial domain, which is the basis for noncausal spatial prediction filtering for random noise attenuation on 3-D seismic data. Synthetic and field data processing indicates that this method suppresses random noise more effectively while preserving the signal, and performs much better than conventional prediction filtering methods.
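
    The flavor of noncausal (two-sided) prediction can be conveyed with a toy stand-in: estimate a least-squares prediction filter on a noisy trace, predict each sample from both its past and its future, and average the two. This is not the paper's f-x ARMA/projection-filter scheme; it only illustrates the two-sided prediction idea on synthetic data.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 500, 4                        # trace length, prediction-filter order
      signal = np.sin(2 * np.pi * 0.02 * np.arange(n))
      trace = signal + rng.normal(0, 0.3, n)

      # Least-squares forward prediction filter: x[t] ~ sum_k c[k] * x[t-p+k].
      rows = np.array([trace[i:i + p] for i in range(n - p)])
      coef, *_ = np.linalg.lstsq(rows, trace[p:], rcond=None)

      fwd = np.full(n, np.nan)
      bwd = np.full(n, np.nan)
      fwd[p:] = rows @ coef                 # predict each sample from its past
      rows_b = np.array([trace[i + 1:i + p + 1] for i in range(n - p)])
      bwd[:n - p] = rows_b[:, ::-1] @ coef  # ... and from its future (noncausal)
      denoised = np.nanmean(np.vstack([fwd, bwd]), axis=0)

      err = denoised[p:n - p] - signal[p:n - p]
      print("residual RMS after two-sided prediction: %.3f"
            % np.sqrt(np.mean(err ** 2)))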

  8. Evaluation of battery models for prediction of electric vehicle range

    NASA Technical Reports Server (NTRS)

    Frank, H. A.; Phillips, A. M.

    1977-01-01

    Three analytical models for predicting electric vehicle battery output and the corresponding electric vehicle range for various driving cycles were evaluated. The models were used to predict output and range, which were then compared with values determined experimentally by laboratory tests on batteries using discharge cycles identical to those encountered by an actual electric vehicle on SAE cycles. Results indicate that the modified Hoxie model gave the best predictions, with an accuracy of about 97 to 98% in the best cases and 86% in the worst case. A computer program was written to perform the lengthy iterative calculations required. The program and the hardware used to automatically discharge the battery are described.

  9. Possibility of quantitative prediction of cavitation erosion without model test

    SciTech Connect

    Kato, Hiroharu; Konno, Akihisa; Maeda, Masatsugu; Yamaguchi, Hajime

    1996-09-01

    A scenario for quantitative prediction of cavitation erosion was proposed. The key value is the impact force/pressure spectrum on a solid surface caused by cavitation bubble collapse. As the first step of prediction, the authors constructed the scenario from an estimation of the cavity generation rate to the prediction of the impact force spectrum, including estimations of the collapsing cavity number and impact pressure. The prediction was compared with measurements of impact force spectra on a partially cavitating hydrofoil. Good quantitative agreement was obtained between the prediction and the experiment. However, the present method predicted a larger effect of main flow velocity than that observed. The present scenario is promising as a method of predicting erosion without using a model test.

  10. Evaluation of Community Land Model Hydrologic Predictions

    NASA Astrophysics Data System (ADS)

    Li, K. Y.; Lettenmaier, D. P.; Bohn, T.; Delire, C.

    2005-12-01

    Confidence in representation and parameterization of land surface processes in coupled land-atmosphere models is strongly dependent on a diversity of opportunities for model testing, since such coupled models are usually intended for application in a wide range of conditions (regional models) or globally. Land surface models have been increasing in complexity over the past decade, which has increased the demands on data sets appropriate for model testing and evaluation. In this study, we compare the performance of two commonly used land surface schemes - the Variable Infiltration Capacity (VIC) and Community Land Model (CLM) with respect to their ability to reproduce observed water and energy fluxes in off-line tests for two large river basins with contrasting hydroclimatic conditions spanning the range from temperate continental to arctic, and for five point (column flux) sites spanning the range from tropical to arctic. The two large river basins are the Arkansas-Red in U.S. southern Great Plains, and the Torne-Kalix in northern Scandinavia. The column flux evaluations are for a tropical forest site at Reserva Jaru (ABRACOS) in Brazil, a prairie site (FIFE) near Manhattan, Kansas in the central U.S., a soybean site at Caumont (HAPEX-Monbilhy) in France, a meadow site at Cabauw in the Netherlands, and a small grassland catchment at Valday, Russia. The results indicate that VIC can reasonably well capture the land surface biophysical processes, while CLM is somewhat less successful. We suggest changes to the CLM parameterizations that would improve its general performance with respect to its representation of land surface hydrologic processes.

  11. Efficient Reduction and Analysis of Model Predictive Error

    NASA Astrophysics Data System (ADS)

    Doherty, J.

    2006-12-01

    Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often
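
    The solution-space/null-space split described here can be computed directly from an SVD of the model's sensitivity (Jacobian) matrix, in the spirit of the subspace methods used by PEST. A minimal sketch with a random Jacobian and a random prediction-sensitivity vector:

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_par = 20, 50
      J = rng.normal(size=(n_obs, n_par))   # sensitivity of observations to parameters
      y = rng.normal(size=n_par)            # sensitivity of a prediction to parameters

      # Split parameter space into the calibration "solution space" (spanned by
      # the leading right singular vectors of J) and its complement, the "null
      # space", which calibration cannot constrain.
      U, s, Vt = np.linalg.svd(J)
      r = np.sum(s > 1e-8 * s[0])           # effective rank of the Jacobian
      V_sol, V_null = Vt[:r].T, Vt[r:].T

      y_sol = V_sol @ (V_sol.T @ y)         # part of the prediction constrained
      y_null = V_null @ (V_null.T @ y)      #   by calibration vs. not at all
      print("fraction of prediction sensitivity in the null space: %.2f"
            % (np.linalg.norm(y_null)**2 / np.linalg.norm(y)**2))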

  12. Drawability Prediction Method using Continuous Texture Evolution Model

    NASA Astrophysics Data System (ADS)

    Morimoto, Toshiharu; Yanagimoto, Jun

    2011-08-01

    Drawability is one of the steel strip properties that control press forming. Many prediction methods for the Lankford value have been proposed. First, we predict recrystallization texture based on the idea that the total amount of microscopic crystal slip is proportional to the accumulated dislocation density at grain boundaries. Next, we predict the Lankford value of ultra-low-carbon strips and ferritic stainless steel strips using the Sachs model. Our method is very practical for use in the hot and cold steel rolling industry.
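
    For context, the Lankford value itself is the ratio of true width strain to true thickness strain measured in a tensile test, and a weighted mean over specimen orientations is a common drawability index. The following is a minimal sketch of these textbook definitions (specimen dimensions hypothetical; this is not the paper's Sachs-model calculation).

```python
import math

def lankford_value(w0, w1, t0, t1):
    """r-value from width and thickness measured before/after a tensile test:
    r = true width strain / true thickness strain."""
    return math.log(w1 / w0) / math.log(t1 / t0)

def normal_anisotropy(r0, r45, r90):
    """Weighted mean r-bar = (r0 + 2*r45 + r90) / 4, a common drawability index."""
    return (r0 + 2 * r45 + r90) / 4

# Hypothetical strip specimen: width 12.5 -> 11.1 mm, thickness 0.80 -> 0.76 mm
print(lankford_value(12.5, 11.1, 0.80, 0.76))   # a high r-value favours deep drawing
```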

  13. Reconstructing historical habitat data with predictive models.

    PubMed

    Zweig, Christa L

    2014-01-01

    Historical vegetation data are important to ecological studies, as many structuring processes operate at long time scales, from decades to centuries. Capturing the pattern of variability within a system (enough to declare a significant change from past to present) relies on correct assumptions about the temporal scale of the processes involved. Sufficient long-term data are often lacking, and current techniques have their weaknesses. To address this concern, we constructed multistate and artificial neural network (ANN) models to fore- and hindcast vegetation communities considered critical foraging habitat for an endangered bird, the Florida Snail Kite (Rostrhamus sociabilis). Multistate models were not able to hindcast because our data did not satisfy the detailed balance requirement for time reversibility in Markovian dynamics. Multistate models were useful for forecasting and for providing environmental variables for the ANN. Results from our ANN hindcast closely mirrored the collapse of the Snail Kite population using only environmental data to inform the model. The parallel between the two gives us confidence in the hindcasting results and their use in future demographic models.
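
    The reversibility obstacle mentioned above is concrete: a Markov transition matrix can only be run backwards in time if it satisfies detailed balance with respect to its stationary distribution. Below is a small sketch of that check, with a purely hypothetical three-state transition matrix.

```python
import numpy as np

def detailed_balance_violation(P):
    """Return max |pi_i P_ij - pi_j P_ji| for the stationary distribution pi.
    Zero means the chain is reversible and can legitimately be run backwards."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    flow = pi[:, None] * P            # flow[i, j] = pi_i * P_ij
    return np.max(np.abs(flow - flow.T))

# Hypothetical 3-state vegetation transition matrix (rows sum to 1)
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.70]])
print(detailed_balance_violation(P))   # > 0 here, so hindcasting by reversal fails
```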

  14. Development of operational models for space weather prediction

    NASA Astrophysics Data System (ADS)

    Liu, Siqing; Gong, Jiancun

    Since space weather prediction is currently in transition from human experience to objective forecasting methods, developing operational forecasting models is an important way to improve the capabilities of space weather services. As existing theoretical models are not fully operational for space weather prediction, we carried out research on developing operational models, considering user needs for prediction of the key elements of the space environment that have vital impacts on the security of space assets. We focused on solar activity, geomagnetic activity, high-energy particles, atmospheric density, the plasma environment and so forth. Great progress has been made in developing a 3D dynamic asymmetric magnetopause model, a plasma sheet energetic electron flux forecasting model and a 400 km atmospheric density forecasting model, and also in the prediction of high-speed solar-wind streams from coronal holes and of geomagnetic AE indices. Some of these models are already running in the operational system of the Space Environment Prediction Center, National Space Science Center (SEPC/NSSC). This presentation introduces the research plans for space weather prediction in China and the current progress in developing operational models and their application in daily space weather services at SEPC/NSSC.

  15. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
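
    Below is a schematic of the thinning-and-aggregating idea, not the authors' exact procedure: each ensemble member sees a thinned marker map, performs a crude marker selection standing in for QTL mapping, and the member predictions are averaged. All data and tuning values are simulated and hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def thin_fit_predict(X_tr, y_tr, X_te, thin=5, top=10, n_ensembles=20):
    """Schematic TAGGING-style ensemble: thin the marker map by keeping every
    `thin`-th marker (random offset), select the `top` markers most correlated
    with the trait on each thinned map, fit OLS, and average the predictions."""
    preds = []
    for _ in range(n_ensembles):
        start = rng.integers(thin)
        cols = np.arange(start, X_tr.shape[1], thin)      # thinned marker map
        corr = np.abs(np.corrcoef(X_tr[:, cols].T, y_tr)[:-1, -1])
        keep = cols[np.argsort(corr)[-top:]]              # crude "QTL" selection
        model = LinearRegression().fit(X_tr[:, keep], y_tr)
        preds.append(model.predict(X_te[:, keep]))
    return np.mean(preds, axis=0)

# Simulated dense genotypes with two true QTL plus noise
X = rng.integers(0, 3, size=(300, 500)).astype(float)
y = X[:, 50] * 1.0 + X[:, 250] * 0.7 + rng.normal(0, 1.0, 300)
y_hat = thin_fit_predict(X[:200], y[:200], X[200:])
print(np.corrcoef(y_hat, y[200:])[0, 1])   # predictive correlation on held-out lines
```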

  16. Predictive modeling of pedestal structure in KSTAR using EPED model

    NASA Astrophysics Data System (ADS)

    Han, Hyunsun; Kwon, Ohjin; Kim, J. Y.

    2013-10-01

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  17. Predictive modeling of pedestal structure in KSTAR using EPED model

    SciTech Connect

    Han, Hyunsun; Kim, J. Y.; Kwon, Ohjin

    2013-10-15

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  18. Ecosystem model-based approach for modelling the dynamics of 137Cs transfer to marine plankton populations: application to the western North Pacific Ocean after the Fukushima nuclear power plant accident

    NASA Astrophysics Data System (ADS)

    Belharet, M.; Estournel, C.; Charmasson, S.

    2015-06-01

    Huge amounts of radionuclides, especially 137Cs, were released into the western North Pacific Ocean after the Fukushima nuclear power plant (FNPP) accident that occurred on 11 March 2011, resulting in contamination of the marine biota. In this study we developed a radioecological model to estimate 137Cs concentrations in phytoplankton and zooplankton populations representing the lower levels of the pelagic trophic chain. We coupled this model to a lower trophic level ecosystem model and an ocean circulation model to take into account the site-specific environmental conditions in the area. The different radioecological parameters of the model were estimated by calibration, and a sensitivity analysis to parameter uncertainties was carried out, showing a high sensitivity of the model results, especially to the 137Cs concentration in seawater, to the rates of uptake from water and to the radionuclide assimilation efficiency for zooplankton. The 137Cs concentrations in planktonic populations simulated in this study were then validated through comparison with the data available in the region after the accident. The model results show that the maximum concentrations in plankton after the accident were about two to four orders of magnitude higher than those observed before the accident, depending on the distance from the FNPP. Finally, the maximum 137Cs absorbed dose rate for phyto- and zooplankton populations was estimated to be about 10^-2 μGy h^-1 and was, therefore, well below the 10 μGy h^-1 benchmark value defined in the ERICA assessment approach, above which a measurable effect on the marine biota can be observed.
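
    At its core, a radioecological model of this kind balances uptake from water against elimination and decay for each plankton compartment. Below is a minimal one-compartment sketch under assumed rate constants; the study's full model couples this balance to ecosystem and circulation models.

```python
import numpy as np

def cs137_activity(t, Cw, ku, ke, C0=0.0):
    """First-order radioecological balance for one plankton compartment:
        dC/dt = ku * Cw(t) - ke * C
    with uptake from water (ku, L/kg/d) and effective loss ke (1/d, biological
    elimination plus physical decay). Solved by explicit Euler on a daily grid."""
    C = np.empty_like(t, dtype=float)
    C[0] = C0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        C[i] = C[i - 1] + dt * (ku * Cw[i - 1] - ke * C[i - 1])
    return C

# Hypothetical pulse of seawater contamination flushed away over 100 days
t = np.arange(0, 100.0)
Cw = 1e3 * np.exp(-t / 15.0)                   # Bq/m^3, assumed exponential flushing
C = cs137_activity(t, Cw, ku=0.1, ke=0.08)     # assumed rates for zooplankton
print(C.max())                                 # the peak activity lags the water peak
```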

  19. A color prediction model for imagery analysis

    NASA Technical Reports Server (NTRS)

    Skaley, J. E.; Fisher, J. R.; Hardy, E. E.

    1977-01-01

    A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptual framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.

  20. How "lucky" we are that the Fukushima disaster occurred in early spring: predictions on the contamination levels from various fission products released from the accident and updates on the risk assessment for solid and thyroid cancers.

    PubMed

    Evangeliou, Nikolaos; Balkanski, Yves; Cozic, Anne; Møller, Anders Pape

    2014-12-01

    The present paper studies how a random event (an earthquake) and the subsequent disaster in Japan affect transport and deposition of fallout and the resulting health consequences. Therefore, in addition to the original accident in March 2011, three further scenarios are assessed assuming that the same releases took place in winter 2010, summer 2011 and autumn 2011, in order to cover a full range of annual seasonality. This is also the first study in which a large number of fission products released from the accident are used to assess health risks with the maximum possible efficiency. Xenon-133 and (137)Cs are directly estimated within the model, whereas 15 other radionuclides are calculated indirectly using reported isotopic ratios. As much as 85% of the released (137)Cs would be deposited in continental regions worldwide if the accident had occurred in winter 2010, 22% in spring 2011 (when it actually happened), 55% in summer 2011 and 48% in autumn 2011. Solid cancer incidents and mortalities from Fukushima are estimated to be between 160 and 880 and from 110 to 640, close to previous estimates. Adding thyroid cancers, the totals rise to 230-850 for incidents and 120-650 for mortalities. Fatalities due to worker exposure and mandatory evacuation have been reported to be around 610, increasing total estimated mortalities to 730-1260. These estimates are 2.8 times higher than previously reported ones for radiocaesium and (131)I and 16% higher than those based on radiocaesium only. Total expected fatalities from Fukushima are 32% lower than in the winter scenario, 5% lower than in the summer scenario and 30% lower than in the autumn scenario. Nevertheless, cancer fatalities are expected to be less than 5% of those from the tsunami (~20,000).

  1. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
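
    Below is a minimal sketch of fitting two of the named model families with scikit-learn on synthetic stand-ins for the paper's three inputs. The actual study used Bloomberg transaction data and tuned models; everything here, including the toy square-root impact law, is illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Synthetic stand-ins for three transaction-level inputs
n = 600
X = np.column_stack([rng.lognormal(-4, 1, n),     # order size / daily volume
                     rng.uniform(0.01, 0.05, n),  # daily volatility
                     rng.lognormal(-10, 1, n)])   # inverse turnover
y = 0.3 * np.sqrt(X[:, 0]) * X[:, 1] + rng.normal(0, 1e-4, n)  # toy impact law

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("SVR", SVR(C=1.0, epsilon=1e-4)),
                    ("GP ", GaussianProcessRegressor(alpha=1e-8))]:
    model.fit(X_tr, y_tr)
    print(name, mean_absolute_error(y_te, model.predict(X_te)))
```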

  2. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  3. Learning lessons from Natech accidents - the eNATECH accident database

    NASA Astrophysics Data System (ADS)

    Krausmann, Elisabeth; Girgin, Serkan

    2016-04-01

    equipment vulnerability models linking the natural-hazard severity to the observed damage almost impossible. As a consequence, the European Commission has set up the eNATECH database for the systematic collection of Natech accident data and near misses. The database exhibits the more sophisticated accident representation required to capture the characteristics of Natech events and is publicly accessible at http://enatech.jrc.ec.europa.eu. This presentation outlines the general lessons-learning process, introduces the eNATECH database and its specific structure, and discusses natural-hazard specific lessons learned and features common to Natech accidents triggered by different natural hazards.

  4. Evaluation of prediction intervals for expressing uncertainties in groundwater flow model predictions

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    We tested the accuracy of 95% individual prediction intervals for hydraulic heads, streamflow gains, and effective transmissivities computed by groundwater models of two Danish aquifers. To compute the intervals, we assumed that each predicted value can be written as the sum of a computed dependent variable and a random error. Testing was accomplished by using a cross-validation method and by using new field measurements of hydraulic heads and transmissivities that were not used to develop or calibrate the models. The tested null hypotheses are that the coverage probability of the prediction intervals is not significantly smaller than the assumed probability (95%) and that each tail probability is not significantly different from the assumed probability (2.5%). In all cases tested, these hypotheses were accepted at the 5% level of significance. We therefore conclude that for the groundwater models of two real aquifers the individual prediction intervals appear to be accurate.
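
    The interval construction assumed here (predicted value = computed dependent variable + random error) is what a standard OLS individual prediction interval formalizes. Below is a sketch of such an interval and an empirical coverage check on synthetic data; it is illustrative only, since the study used calibrated groundwater models rather than a plain regression.

```python
import numpy as np
from scipy import stats

def prediction_interval(X, y, x_new, alpha=0.05):
    """95% individual prediction interval for an OLS model:
    yhat +/- t * s * sqrt(1 + x'(X'X)^-1 x)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * (1.0 + x_new @ np.linalg.inv(X.T @ X) @ x_new))
    t = stats.t.ppf(1 - alpha / 2, n - p)
    yhat = x_new @ beta
    return yhat - t * se, yhat + t * se

# Coverage check on synthetic data: ~95% of held-out points should fall inside
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 200)
hits = sum(lo <= yi <= hi
           for xi, yi in zip(X[150:], y[150:])
           for lo, hi in [prediction_interval(X[:150], y[:150], xi)])
print(hits / 50)
```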

  5. Fun with Numbers: Alternative Models for Predicting Salary Levels.

    ERIC Educational Resources Information Center

    Johnson, Catherine B.; And Others

    1987-01-01

    The increasing concern with equity issues in higher education, along with litigation, has prompted institutions to undertake salary prediction studies. Four models were compared: (1) entering all variables, (2) excluding rank and tenure, (3) using predicted rank and tenure, and (4) using only "objective" variables. (Author/MLW)

  6. Computer Model for Prediction of PCB Dechlorination and Biodegradation Endpoints

    SciTech Connect

    Just, E.M.; Klasson, T.

    1999-04-19

    Mathematical modeling of polychlorinated biphenyl (PCB) transformation served as a means of predicting possible endpoints of bioremediation, thus allowing evaluation of several of the most common transformation patterns. Correlation between laboratory-observed and predicted endpoint data was, in some cases, as good as 0.98 (perfect correlation = 1.0).

  7. PPS-87: a new event oriented solar proton prediction model.

    PubMed

    Smart, D F; Shea, M A

    1989-01-01

    A new event-oriented solar proton prediction model has been developed and implemented at the USAF Space Environment forecast facility. This new model generates predicted solar proton time-intensity profiles for a number of user adjustable energy ranges and is also capable of making predictions for the heavy ion flux. The computer program is designed so a forecaster can select inputs based on the data available in near real-time at the forecast center as the solar flare is occurring. The predicted event amplitude is based on the electromagnetic emission parameters of the solar flare (either microwave or soft X-ray emission) and the solar flare position on the sun. The model also has an update capability where the forecaster can normalize the prediction to actual spacecraft observations of spectral slope and particle flux as the event is occurring in order to more accurately predict the future time-intensity profile of the solar particle flux. Besides containing improvements in the accuracy of the predicted energetic particle event onset time and magnitude, the new model converts the predicted solar particle flux into an expected radiation dose that might be experienced by an astronaut during EVA activities or inside the space shuttle.

  8. Validation of a tuber blight (Phytophthora infestans) prediction model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  9. Geospatial application of the Water Erosion Prediction Project (WEPP) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltrat...

  10. A Model for Prediction of Heat Stability of Photosynthetic Membranes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A previous study has revealed a positive correlation between heat-induced damage to photosynthetic membranes (thylakoid membranes) and chlorophyll loss. In this study, we exploited this correlation and developed a model for prediction of thermal damage to thylakoids. Prediction is based on estimat...

  11. Predicting Magazine Audiences with a Loglinear Model.

    DTIC Science & Technology

    1987-07-01

    important use of e.d. estimates is in media selection (Aaker 1975; Lee 1962, 1963; Little and Lodish 1969). All advertising campaigns have a budget. It... References: Aaker, D.A. (1975), "ADMOD: An Advertising Decision Model," Journal of Marketing Research, February, 37-45.

  12. VHSIC/VHSIC-Like Reliability Prediction Modeling

    DTIC Science & Technology

    1989-10-01

    flexibility in the methods used by the manufacturer in proving the adequacy of a design. The detailed model accounts for the manufacturing process... A factor is added to account for the improved reliability expected from the procedures taken by a manufacturer that is on the Qualified Manufacturer...

  13. Behaviour of oceanic 137Cs following the Fukushima Daiichi Nuclear Power Plant accident for four years simulated numerically by a regional ocean model

    NASA Astrophysics Data System (ADS)

    Torn, M. S.; Koven, C. D.; Riley, W. J.; Zhu, B.; Hicks Pries, C.; Phillips, C. L.

    2014-12-01

    A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant (1F NPP) following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials into the ocean by two major pathways: direct release from the accident site and atmospheric deposition. We reconstructed the spatiotemporal variability of 137Cs activity in the regional ocean over four years using a suite of numerical models: regional-scale and North Pacific-scale oceanic dispersion models, an atmospheric transport model, a sediment transport model, a dynamic biological compartment model for marine biota, and a river runoff model. The direct release rate of 137Cs was estimated for four years after the accident by comparing simulated results with activities observed very close to the site. The estimated total amount of direct release was 3.6±0.7 PBq. The direct release rate of 137Cs decreased exponentially with time until the end of December 2012 and was almost constant thereafter; the rate of decrease was quite small after 2013. The daily release rate of 137Cs was estimated to be on the order of 10^10 Bq/day by the end of March 2015. The activity of directly released 137Cs was detectable only in the coastal zone after December 2012. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, a result that implies the estimated direct release rate was reasonable. There are no observed data of 137Cs activity in the ocean from 11 to 21 March 2011; observed data for marine biota should reflect the history of 137Cs activity in this early period. We reconstructed the history of 137Cs activity in this early period by considering atmospheric deposition, river input, and rainwater runoff from the 1F NPP site. Comparisons between the 137Cs activities of marine biota simulated by a dynamic biological compartment model and observed data also suggest that simulated 137Cs activity attributable to atmospheric deposition was underestimated in this early period. The

  14. Behaviour of oceanic 137Cs following the Fukushima Daiichi Nuclear Power Plant accident for four years simulated numerically by a regional ocean model

    NASA Astrophysics Data System (ADS)

    Tsumune, D.; Tsubono, T.; Aoyama, M.; Misumi, K.; Tateda, Y.

    2015-12-01

    A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant (1F NPP) following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials into the ocean by two major pathways: direct release from the accident site and atmospheric deposition. We reconstructed the spatiotemporal variability of 137Cs activity in the regional ocean over four years using a suite of numerical models: regional-scale and North Pacific-scale oceanic dispersion models, an atmospheric transport model, a sediment transport model, a dynamic biological compartment model for marine biota, and a river runoff model. The direct release rate of 137Cs was estimated for four years after the accident by comparing simulated results with activities observed very close to the site. The estimated total amount of direct release was 3.6±0.7 PBq. The direct release rate of 137Cs decreased exponentially with time until the end of December 2012 and was almost constant thereafter; the rate of decrease was quite small after 2013. The daily release rate of 137Cs was estimated to be on the order of 10^10 Bq/day by the end of March 2015. The activity of directly released 137Cs was detectable only in the coastal zone after December 2012. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, a result that implies the estimated direct release rate was reasonable. There are no observed data of 137Cs activity in the ocean from 11 to 21 March 2011; observed data for marine biota should reflect the history of 137Cs activity in this early period. We reconstructed the history of 137Cs activity in this early period by considering atmospheric deposition, river input, and rainwater runoff from the 1F NPP site. Comparisons between the 137Cs activities of marine biota simulated by a dynamic biological compartment model and observed data also suggest that simulated 137Cs activity attributable to atmospheric deposition was underestimated in this early period. The

  15. Risk Prediction Models for Lung Cancer: A Systematic Review.

    PubMed

    Gray, Eoin P; Teare, M Dawn; Stevens, John; Archer, Rachel

    2016-03-01

    Many lung cancer risk prediction models have been published, but there has been no systematic review or comprehensive assessment of these models to assess how they could be used in screening. We performed a systematic review of lung cancer prediction models and identified 31 articles that related to 25 distinct models, of which 11 considered epidemiological factors only and did not require a clinical input. Another 11 articles focused on models that required a clinical assessment such as a blood test or scan, and 8 articles considered the 2-stage clonal expansion model. More of the epidemiological models had been externally validated than the more recent clinical assessment models. There was varying discrimination (the ability of a model to distinguish between cases and controls), with areas under the curve between 0.57 and 0.879, and varying calibration (the model's ability to assign an accurate probability to an individual). In our review we found that further validation studies need to be considered, especially for the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial 2012 Model Version (PLCOM2012) and Hoggart models, which recorded the best overall performance. Future studies will need to focus on prediction rules, such as optimal risk thresholds, for models for selective screening trials. Only 3 validation studies considered prediction rules when validating the models, and overall the models were validated using varied tests in distinct populations, which made direct comparisons difficult. To improve this, multiple models need to be tested on the same data set with consideration of sensitivity, specificity, model accuracy, and positive predictive values at the optimal risk thresholds.
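
    Discrimination and calibration, the two properties the review compares, are straightforward to compute once a validation set exists. Below is a sketch using scikit-learn on hypothetical risks and outcomes, not data from any reviewed model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(3)

# Hypothetical validation set: model-assigned risks and observed outcomes
risk = rng.beta(1, 20, size=5000)
outcome = rng.binomial(1, np.clip(risk * 1.1, 0, 1))    # slightly miscalibrated truth

auc = roc_auc_score(outcome, risk)                      # discrimination
obs, pred = calibration_curve(outcome, risk, n_bins=10, strategy="quantile")
ratio = outcome.mean() / risk.mean()                    # overall observed/expected

print(f"AUC = {auc:.3f}, O/E = {ratio:.2f}")
print(np.column_stack([pred, obs]))                     # per-decile calibration
```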

  16. [Accidents and injuries at work].

    PubMed

    Standke, W

    2014-06-01

    In the case of an accident at work, the person concerned is insured by law according to the guidelines of the Sozialgesetzbuch VII, insofar as the injuries were caused by this accident. The most important source of information on the incident in question is the accident report that has to be sent to the responsible institution for statutory accident insurance and prevention by the employer if the accident of the injured person is fatal or leads to an incapacity to work for more than 3 days (= reportable accident). Data concerning accidents like these are sent to the Deutsche Gesetzliche Unfallversicherung (DGUV) as part of a random sample survey by the institutions for statutory accident insurance and prevention and are analyzed statistically. Thus the key issues of accidents can be established and used for effective prevention. Although the success of effective accident prevention is undisputed, there were still 919,025 occupational accidents in 2011, with clear gender-related differences. Most occupational accidents involve the upper and lower extremities. Accidents are analyzed comprehensively, and the results are published and made available to all interested parties in an effort to improve public awareness of possible accidents. Apart from reportable accidents, data on new occupational accident pensions are also gathered and analyzed statistically. Thus, additional information is gained on accidents with extremely serious consequences and partly permanent injuries for the accident victims.

  17. The Resource Requirements Prediction Model 1 (RRPM-1): An Overview.

    ERIC Educational Resources Information Center

    Gulko, Warren W.

    This paper provides a brief overview of the conceptual approach used in the initial version of the WICHE Resource Requirements Prediction Model (RRPM-1). RRPM-1 is an institutional-oriented, computer-based model which simulates the cost of operating a college campus over a 3- to 10-year time frame. The model may be viewed as a management tool to…

  18. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  19. Propagating uncertainties in statistical model based shape prediction

    NASA Astrophysics Data System (ADS)

    Syrkina, Ekaterina; Blanc, Rémi; Székely, Gàbor

    2011-03-01

    This paper addresses the question of accuracy assessment and confidence regions estimation in statistical model based shape prediction. Shape prediction consists in estimating the shape of an organ based on a partial observation, due e.g. to a limited field of view or poorly contrasted images, and generally requires a statistical model. However, such predictions can be impaired by several sources of uncertainty, in particular the presence of noise in the observation, limited correlations between the predictors and the shape to predict, as well as limitations of the statistical shape model - in particular the number of training samples. We propose a framework which takes these into account and derives confidence regions around the predicted shape. Our method relies on the construction of two separate statistical shape models, for the predictors and for the unseen parts, and exploits the correlations between them assuming a joint Gaussian distribution. Limitations of the models are taken into account by jointly optimizing the prediction and minimizing the shape reconstruction error through cross-validation. An application to the prediction of the shape of the proximal part of the human tibia given the shape of the distal femur is proposed, as well as the evaluation of the reliability of the estimated confidence regions, using a database of 184 samples. Potential applications are reconstructive surgery, e.g. to assess whether an implant fits in a range of acceptable shapes, or functional neurosurgery when the target's position is not directly visible and needs to be inferred from nearby visible structures.
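
    The central computation in this framework is conditioning a joint Gaussian over (predictor, unseen) shape coefficients on the observed part; the conditional covariance is what the confidence regions are built from. Below is a minimal sketch with a hypothetical joint covariance, not the authors' trained shape models.

```python
import numpy as np

def conditional_gaussian(mu, Sigma, idx_obs, x_obs):
    """Condition a joint Gaussian N(mu, Sigma) on observed components:
    returns the conditional mean and covariance of the unobserved part."""
    n = len(mu)
    idx_mis = np.setdiff1d(np.arange(n), idx_obs)
    S_oo = Sigma[np.ix_(idx_obs, idx_obs)]
    S_mo = Sigma[np.ix_(idx_mis, idx_obs)]
    S_mm = Sigma[np.ix_(idx_mis, idx_mis)]
    K = S_mo @ np.linalg.inv(S_oo)                # regression of missing on observed
    mu_c = mu[idx_mis] + K @ (x_obs - mu[idx_obs])
    Sigma_c = S_mm - K @ S_mo.T
    return mu_c, Sigma_c

# Toy joint model of "femur" (observed) and "tibia" (predicted) shape coefficients
rng = np.random.default_rng(4)
A = rng.normal(size=(6, 6))
Sigma = A @ A.T + 6 * np.eye(6)                   # a valid joint covariance
mu_c, Sigma_c = conditional_gaussian(np.zeros(6), Sigma,
                                     np.array([0, 1, 2]), np.array([1.0, -0.5, 0.2]))
print(mu_c, np.sqrt(np.diag(Sigma_c)))            # prediction and per-mode uncertainty
```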

  20. Reconnection in NIMROD: Model, Predictions, Remedies

    SciTech Connect

    Fowler, T K; Bulmer, R H; Cohen, B I; Hau, D D

    2003-06-25

    It is shown that in NIMROD the formation of closed current configurations, occurring only after the voltage is turned off, is due to the faster resistive decay of nonsymmetric modes compared to the symmetric projection of the 3D steady state achieved by gun injection. Implementing Spitzer resistivity is required to make a definitive comparison with experiment, using two experimental signatures of the model discussed in the paper. If there are serious disagreements, it is suggested that a phenomenological hyper-resistivity be added to the n = 0 component of Ohm's law, similar to hyper-resistive Corsica models that appear to fit experiments. Hyper-resistivity might capture physics at small scale missed by NIMROD. Encouraging results would motivate coupling NIMROD to SPICE with edge physics inspired by UEDGE, as a tool for experimental data analysis.

  1. New Model Predicts Fire Activity in South America

    NASA Video Gallery

    UC Irvine scientist Jim Randerson discusses a new model that is able to predict fire activity in South America using sea surface temperature observations of the Pacific and Atlantic Ocean. The find...

  2. Submission Form for Peer-Reviewed Cancer Risk Prediction Models

    Cancer.gov

    If you have information about a peer-reviewed cancer risk prediction model that you would like to be considered for inclusion on this list, submit as much information as possible through the form on this page.

  3. Prediction of Chemical Function: Model Development and Application

    EPA Science Inventory

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (...

  4. Model Predictive Control for Nonlinear Parabolic Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Hashimoto, Tomoaki; Yoshioka, Yusuke; Ohtsuka, Toshiyuki

    In this study, the optimal control problem of nonlinear parabolic partial differential equations (PDEs) is investigated. Optimal control of nonlinear PDEs is an open problem with applications that include fluid, thermal, biological, and chemically-reacting systems. Model predictive control with a fast numerical solution method has been well established to solve the optimal control problem of nonlinear systems described by ordinary differential equations. In this study, we develop a design method of the model predictive control for nonlinear systems described by parabolic PDEs. Our approach is a direct infinite dimensional extension of the model predictive control method for finite-dimensional systems. The objective of this paper is to develop an efficient algorithm for numerically solving the model predictive control problem of nonlinear parabolic PDEs. The effectiveness of the proposed method is verified by numerical simulations.
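
    As a toy instance of this setting, consider MPC for a semi-discretized 1D heat equation. The sketch below is linear and unconstrained, so each receding-horizon step reduces to a small linear solve; the paper's contribution concerns the harder nonlinear case, and all dimensions, weights, and boundary conditions here are assumptions.

```python
import numpy as np

# Semi-discretized 1D heat equation x_{k+1} = A x_k + B u_k (explicit Euler),
# a linear parabolic PDE with one heater at the midpoint of a rod whose ends
# are held at zero temperature.
nx, dt, kappa = 20, 1e-4, 1.0
dx = 1.0 / (nx + 1)
lam = kappa * dt / dx**2                         # must be <= 0.5 for stability
A = ((1 - 2 * lam) * np.eye(nx)
     + lam * (np.eye(nx, k=1) + np.eye(nx, k=-1)))
B = np.zeros((nx, 1)); B[nx // 2] = dt

N, r = 15, 1e-6                                  # horizon length and input weight
x_ref = np.full(nx, 0.5)                         # target temperature profile

# Prediction matrices: stacked states X = Phi x0 + Gamma U over the horizon
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gamma = np.zeros((N * nx, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[i * nx:(i + 1) * nx, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B
H = Gamma.T @ Gamma + r * np.eye(N)              # constant Hessian of the QP

x = np.zeros(nx)
for step in range(200):                          # receding-horizon loop
    U = np.linalg.solve(H, Gamma.T @ (np.tile(x_ref, N) - Phi @ x))
    x = A @ x + (B * U[0]).ravel()               # apply only the first input
print(np.abs(x - x_ref).max())                   # remaining tracking error
```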

  5. Using Pareto points for model identification in predictive toxicology

    PubMed Central

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
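
    The Pareto-optimality idea can be made concrete with a non-dominated filter over models scored on two criteria, e.g. predictive accuracy and applicability-domain coverage. Below is a minimal sketch with hypothetical candidate models and scores; the paper's actual criteria and algorithm details may differ.

```python
from typing import List, Tuple

def pareto_front(models: List[Tuple[str, float, float]]):
    """Return the non-dominated models, scoring on two criteria where larger is
    better. A model is dominated if another model is at least as good on both
    criteria and strictly better on one."""
    front = []
    for name, a1, a2 in models:
        dominated = any(b1 >= a1 and b2 >= a2 and (b1 > a1 or b2 > a2)
                        for _, b1, b2 in models)
        if not dominated:
            front.append((name, a1, a2))
    return front

# Hypothetical candidates for a new compound: (name, accuracy, domain coverage)
candidates = [("kNN-IGC50", 0.77, 0.35), ("RF-IGC50", 0.78, 0.75),
              ("SVM-IGC50", 0.84, 0.30), ("PLS-IGC50", 0.70, 0.90)]
print(pareto_front(candidates))   # kNN is dominated by RF and drops out
```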

  6. Reliability Prediction Models for Discrete Semiconductor Devices

    DTIC Science & Technology

    1988-07-01

    group for improved model utility. Diode types include: switching diodes, analog diodes, power rectifiers, high-voltage rectifiers, fast recovery diodes... analog = .0023, switching = .069, fast recovery = .011, power rectifier diodes including Schottky power diodes = .019/junction, power rectifier HV stack... analog, switch, fast recovery rectifier and power rectifiers, and HV stack diodes = 4914, Ge analo