Science.gov

Sample records for accident prediction model

  1. Updating outdated predictive accident models.

    PubMed

    Wood, A G; Mountain, L J; Connors, R D; Maher, M J; Ropkins, K

    2013-06-01

    Reliable predictive accident models (PAMs) (also referred to as safety performance functions (SPFs)) are essential to design and maintain safe road networks; however, ongoing changes in road and vehicle design, coupled with road safety initiatives, mean that these models can quickly become dated. Unfortunately, because the fitting of sophisticated PAMs including a wide range of explanatory variables is not a trivial task, available models tend to be based on data collected many years ago and seem unlikely to give reliable estimates of current accidents. Large, expensive studies to produce new models are likely to be, at best, only a temporary solution. This paper thus seeks to develop a practical and efficient methodology to allow currently available PAMs to be updated to give unbiased estimates of accident frequencies at any point in time. Two principal issues are examined: the extent to which the temporal transferability of predictive accident models varies with model complexity; and the practicality and efficiency of two alternative updating strategies. The models used to illustrate these issues are the suites of models developed for rural dual and single carriageway roads in the UK. These are widely used in several software packages in spite of being based on data collected during the 1980s and early 1990s. It was found that increased model complexity by no means ensures better temporal transferability and that calibration of the models using a scale factor can be a practical alternative to fitting new models. PMID:23510788
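
    The scale-factor calibration mentioned above is, in essence, a single multiplier chosen so that an old model's total predicted accidents match the total observed on a recent sample of sites. A minimal sketch of the idea, using hypothetical site data and a generic power-function SPF rather than the UK models from the paper:

```python
import numpy as np

# Hypothetical site data: annual average daily traffic and observed counts.
aadt = np.array([4200.0, 8800.0, 12500.0, 6100.0, 9900.0])
observed = np.array([3, 7, 11, 4, 8])

def spf(aadt, a=0.0008, b=0.85):
    """Generic power-function safety performance function (illustrative
    coefficients only, not the UK rural-road models from the paper)."""
    return a * aadt ** b

predicted = spf(aadt)

# Scale-factor calibration: force the model to reproduce the observed total.
c = observed.sum() / predicted.sum()
recalibrated = c * predicted
print(f"calibration factor C = {c:.3f}")
```

    Refitting, by contrast, would re-estimate all of the SPF coefficients on the new data; the paper's finding is that the single factor is often a practical substitute.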

  2. Temporal transferability and updating of zonal level accident prediction models.

    PubMed

    Hadayeghi, Alireza; Shalaby, Amer S; Persaud, Bhagwant N; Cheung, Carl

    2006-05-01

    This paper examines the temporal transferability of the zonal accident prediction models by using appropriate evaluation measures of predictive performance to assess whether the relationship between the dependent and independent variables holds reasonably well across time. The two temporal contexts are the years 1996 and 2001, with updated 1996 models being used to predict 2001 accidents in each traffic zone of the City of Toronto. The paper examines alternative updating methods for temporal transfer by imagining that only a sample of 2001 data is available. The sensitivity of the performance of the updated models to the 2001 sample size is explored. The updating procedures examined include the Bayesian updating approach and the application of calibration factors to the 1996 models. Models calibrated for the 2001 samples were also explored, but were found to be inadequate. The results show that the models are not transferable in a strict statistical sense. However, relative measures of transferability indicate that the transferred models yield useful information in the application context. Also, it is concluded that the updated accident models using the calibration factors produce better results for predicting the number of accidents in the year 2001 than using the Bayesian approach. PMID:16414003

  3. Accident prediction models for roads with minor junctions.

    PubMed

    Mountain, L; Fawaz, B; Jarrett, D

    1996-11-01

    The purpose of this study was to develop and validate a method for predicting expected accidents on main roads with minor junctions where traffic counts on the minor approaches are not available. The study was based on data for some 3800 km of highway in the U.K. including more than 5000 minor junctions. The highways consisted of both single and dual-carriageway roads in urban and rural areas. Generalized linear modelling was used to develop regression estimates of expected accidents for six highway categories and an empirical Bayes procedure was used to improve these estimates by combining them with accident counts. Accidents on highway sections were shown to be a non-linear function of exposure and minor junction frequency. For the purposes of estimating expected accidents, while the regression model estimates were shown to be preferable to accident counts, the best results were obtained using the empirical Bayes method. The latter was the only method that produced unbiased estimates of expected accidents for high-risk sites. PMID:9006638
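
    The empirical Bayes step combines the regression estimate with the site's own accident count, weighted by how much the fitted model is trusted. A minimal sketch following the standard negative binomial weighting (illustrative numbers, not the study's models):

```python
import numpy as np

def empirical_bayes(mu, count, k):
    """Combine an SPF regression estimate mu with the site's accident count.
    k is the negative binomial dispersion parameter of the fitted SPF; the
    weight follows the standard Hauer formulation w = 1 / (1 + mu / k)."""
    w = 1.0 / (1.0 + mu / k)
    return w * mu + (1.0 - w) * count

# Hypothetical example: the SPF predicts 4.2 accidents, 9 were observed.
print(empirical_bayes(4.2, 9, 2.5))  # shrinks the count toward the SPF value
```

    The shrinkage is what removes the bias at high-risk sites: raw counts at such sites are inflated by chance, and the weighted estimate regresses them toward the model.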

  4. Methodology for fitting and updating predictive accident models with trend.

    PubMed

    Connors, Richard D; Maher, Mike; Wood, Alan; Mountain, Linda; Ropkins, Karl

    2013-07-01

    Reliable predictive accident models (PAMs) (also referred to as Safety Performance Functions (SPFs)) have a variety of important uses in traffic safety research and practice. They are used to help identify sites in need of remedial treatment, in the design of transport schemes to assess safety implications, and to estimate the effectiveness of remedial treatments. The PAMs currently in use in the UK are now quite old; the data used in their development was gathered up to 30 years ago. Many changes have occurred over that period in road and vehicle design, in road safety campaigns and legislation, and the national accident rate has fallen substantially. It seems unlikely that these ageing models can be relied upon to provide accurate and reliable predictions of accident frequencies on the roads today. This paper addresses a number of methodological issues that arise in seeking practical and efficient ways to update PAMs, whether by re-calibration or by re-fitting. Models for accidents on rural single carriageway roads have been chosen to illustrate these issues, including the choice of distributional assumption for overdispersion, the choice of goodness of fit measures, questions of independence between observations in different years, and between links on the same scheme, the estimation of trends in the models, the uncertainty of predictions, as well as considerations about the most efficient and convenient ways to fit the required models. PMID:23612560
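
    A common concrete realization of a PAM with trend is a negative binomial regression of counts on log flow plus a linear time term. The sketch below uses simulated link-year data and statsmodels; the distributional choice and model form are one of the options the paper weighs, not its exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated link-year panel: accidents per link per year with traffic flow.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "year": np.tile(np.arange(2005, 2015), 20),
    "ln_flow": np.log(rng.uniform(2000, 15000, 200)),
})
df["accidents"] = rng.poisson(np.exp(-6 + 0.8 * df["ln_flow"]
                                     - 0.03 * (df["year"] - 2005)))

# Negative binomial PAM with a log-linear time trend; the trend coefficient
# captures the falling national accident rate discussed in the abstract.
X = sm.add_constant(df[["ln_flow"]].assign(trend=df["year"] - 2005))
model = sm.GLM(df["accidents"], X,
               family=sm.families.NegativeBinomial(alpha=0.5))
print(model.fit().summary())
```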

  5. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

    Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, accounting for 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States with the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes. PMID:16297846

  6. Accident prediction model for public highway-rail grade crossings.

    PubMed

    Lu, Pan; Tolliver, Denver

    2016-05-01

    Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic, with serious consequences. The Poisson regression model has been employed as a good starting point for analyzing vehicle accident frequency for many years. The most commonly applied variations of Poisson include the negative binomial and zero-inflated Poisson models. These models are used to deal with common crash data issues such as over-dispersion (sample variance larger than the sample mean) and a preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance smaller than the sample mean), and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternative highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) the application of probability models to deal with under-dispersion issues and (2) insights regarding vehicle crashes at public highway-rail grade crossings. PMID:26922288
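
    A quick diagnostic for the under-dispersion the paper targets is the Pearson chi-square statistic of a fitted Poisson model divided by its residual degrees of freedom; values below 1 point toward the alternative models investigated here (for example gamma-count or Conway-Maxwell-Poisson). A sketch with simulated crossing data:

```python
import numpy as np
import statsmodels.api as sm

# Simulated crossing data: bounded counts are under-dispersed by design.
rng = np.random.default_rng(7)
exposure = rng.uniform(1, 10, 300)
crashes = rng.binomial(3, 0.2, 300)       # variance < mean
X = sm.add_constant(np.log(exposure))

fit = sm.GLM(crashes, X, family=sm.families.Poisson()).fit()
dispersion = fit.pearson_chi2 / fit.df_resid
print(f"Pearson dispersion = {dispersion:.2f}")  # < 1 flags under-dispersion
# When dispersion < 1, gamma-count or Conway-Maxwell-Poisson models are
# candidates; Poisson and negative binomial cannot represent it.
```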

  7. Predicting System Accidents with Model Analysis During Hybrid Simulation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land D.; Throop, David R.

    2002-01-01

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in systems of flow. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

  8. Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents

    NASA Technical Reports Server (NTRS)

    Williamson, G. G.; Lewellen, W. S.; Teske, M. E.

    1977-01-01

    The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.

  9. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    PubMed Central

    Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang

    2014-01-01

    In order to build a combined model that can capture the variation in death toll data for road traffic accidents, reflect the influence of multiple factors on traffic accidents, and improve prediction accuracy, the Verhulst model was built based on the number of deaths from road traffic accidents in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted prediction model, with the Shapley value method applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the number of deaths from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data but also quantify the degree of influence of each factor on the death toll, and that it had high accuracy as well as strong practicability. PMID:25610454
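
    For two component models the Shapley value has a closed form, so the weight computation can be sketched directly. Everything below (the series, the fitted values, and the choice of error reduction as the characteristic function) is illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical observed death tolls and the two models' fitted values.
y  = np.array([104., 107., 99., 94., 90., 82., 79., 74., 70., 65.])
f1 = y + np.array([ 3., -2.,  4., -1.,  2., -3.,  1., -2.,  2., -1.])  # Verhulst
f2 = y + np.array([-4.,  3., -2.,  2., -3.,  2., -2.,  3., -1.,  2.])  # regression

def sse(pred):
    return float(np.sum((y - pred) ** 2))

# Characteristic function: error reduction vs. a naive mean forecast, using
# the equal-weight average of the models in each coalition.
base = sse(np.full_like(y, y.mean()))
v1, v2, v12 = base - sse(f1), base - sse(f2), base - sse((f1 + f2) / 2)

# Shapley values for a two-player game, normalized into combination weights.
phi1 = 0.5 * v1 + 0.5 * (v12 - v2)
phi2 = 0.5 * v2 + 0.5 * (v12 - v1)
w1, w2 = phi1 / (phi1 + phi2), phi2 / (phi1 + phi2)
combined = w1 * f1 + w2 * f2
print(f"w1 = {w1:.3f}, w2 = {w2:.3f}, combined SSE = {sse(combined):.1f}")
```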

  10. Application of Gray Markov SCGM(1,1)c Model to Prediction of Accidents Deaths in Coal Mining

    PubMed Central

    Lan, Jian-yi; Zhou, Ying

    2014-01-01

    The prediction of mine accidents is the basis of mine safety assessment and decision making. Gray prediction is suitable for system objects with few data, short time spans, and little fluctuation, while Markov chain theory is suited to forecasting stochastically fluctuating dynamic processes. Analyzing the human-error causes of coal mine accidents and combining the advantages of Gray prediction and Markov theory, an amended Gray Markov SCGM(1,1)c model is proposed. The gray SCGM(1,1)c model is applied to capture the development tendency of mine safety accidents, the amended model is adopted to improve prediction accuracy, and Markov prediction is used to predict the fluctuation along the tendency. Finally, the new model is applied to the mine safety accident deaths recorded in China from 1990 to 2010, and coal accident deaths for 2011-2014 are predicted. The results show that the new model not only captures the trend of the mine human-error accident death toll but also overcomes the random fluctuation of the data affecting precision. It possesses strong engineering applicability.
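
    The gray part of the scheme can be sketched with the plain GM(1,1) model, of which the paper's SCGM(1,1)c and Markov correction are refinements; the death-toll series below is made up:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Plain GM(1,1) gray forecast (the paper's SCGM(1,1)c and the Markov
    correction are refinements of this basic scheme)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                    # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])         # mean generating sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return x0_hat[-horizon:]

# Hypothetical annual coal-mine accident deaths (not the paper's data).
deaths = [6702, 6434, 6027, 5670, 5243, 4746, 4270, 3786]
print(gm11_forecast(deaths, horizon=4))
```

    A Markov correction would then classify the residuals of this fit into states and use the observed state-transition frequencies to adjust each forecast for the fluctuation around the gray trend.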

  11. Compartment model for long-term contamination prediction in deciduous fruit trees after a nuclear accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1990-06-01

    Radiocesium contamination from the Chernobyl accident of different parts (fruits, leaves, and shoots) of selected apricot trees in North Greece was systematically measured in 1987 and 1988. The results are presented and discussed in the framework of a simple compartment model describing the long-term contamination uptake mechanism of deciduous fruit trees after a nuclear accident.
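
    A compartment model of this kind reduces to a small system of first-order transfer equations. A hypothetical three-compartment sketch (surface deposit, internal mobile pool, fruit) with assumed rate constants, not the fitted values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order compartment model for radiocesium in a fruit
# tree: external contamination on bark/leaves transfers to an internal
# mobile pool, which feeds the fruit and also loses activity to decay and
# leaching. Rates per year are assumptions for illustration.
k_in, k_fruit, k_loss = 0.9, 0.15, 0.35

def rhs(t, y):
    surface, internal, fruit = y
    return [-k_in * surface,
            k_in * surface - (k_fruit + k_loss) * internal,
            k_fruit * internal]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 5, 6)
print(sol.sol(t)[2])  # fraction of the initial deposit found in fruit by year
```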

  12. A combined M5P tree and hazard-based duration model for predicting urban freeway traffic accident durations.

    PubMed

    Lin, Lei; Wang, Qian; Sadek, Adel W

    2016-06-01

    The duration of freeway traffic accidents is an important factor affecting traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that linear regression assumes the conditional distribution of accident durations is normal, whereas the distribution of a "time-to-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not applied HBDMs in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction that improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification and avoids the incorrect assumption of normality for traffic accident durations. The proposed model was tested on two freeway accident datasets. For each dataset, the first 500 records were used to train three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM; the remainder of the data were used for testing. The results show that the proposed M5P-HBDM identified more significant and meaningful variables than either M5P or HBDM. Moreover, the M5P-HBDM had the lowest overall mean
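
    The two-stage structure can be sketched by letting a shallow regression tree partition the records (the M5P role) and fitting a Weibull duration distribution in each leaf (the HBDM role). The data, covariates, and parameters below are simulated, and the real M5P-HBDM grows the tree with linear models and uses covariate-bearing hazard models in its leaves, so this is only the skeleton:

```python
import numpy as np
from scipy.stats import weibull_min
from sklearn.tree import DecisionTreeRegressor

# Simulated incident records: [lanes_blocked, n_vehicles, peak_hour] and
# duration in minutes, with duration scale depending on lanes blocked.
rng = np.random.default_rng(3)
X = np.column_stack([rng.integers(0, 3, 500),
                     rng.integers(1, 5, 500),
                     rng.integers(0, 2, 500)])
durations = weibull_min.rvs(c=1.5, scale=30 + 10 * X[:, 0], random_state=3)

# A shallow tree partitions the data to reduce heterogeneity; a Weibull
# duration model is then fitted within each leaf.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, durations)
leaves = tree.apply(X)
for leaf in np.unique(leaves):
    d = durations[leaves == leaf]
    shape, _, scale = weibull_min.fit(d, floc=0)
    print(f"leaf {leaf}: n={d.size}, Weibull shape={shape:.2f}, "
          f"scale={scale:.1f} min")
```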

  13. Predicting road accidents: Structural time series approach

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-07-01

    In this paper, a model for the occurrence of road accidents in Malaysia between 1970 and 2010 was developed, and the number of road accidents was predicted using the structural time series approach. The models were developed using a stepwise method, and the residuals of each step were analyzed. The accuracy of the models was assessed using the mean absolute percentage error (MAPE), and the best model was chosen based on the smallest Akaike information criterion (AIC) value. The structural time series approach found the local linear trend model to be the best representation of the road accident series. This model allows the level and slope components to vary over time. In addition, the approach provides useful information for improving on conventional time series methods.
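
    A local linear trend model lets both the level and the slope of the series evolve as random walks, and statsmodels exposes this specification directly. A sketch with a simulated annual series standing in for the Malaysian data:

```python
import numpy as np
import statsmodels.api as sm

# Simulated annual accident counts standing in for the 1970-2010 series.
rng = np.random.default_rng(0)
y = 50000 + np.cumsum(rng.normal(2000, 3000, 41))

# Local linear trend structural model: level and slope both evolve as
# random walks. For monthly data a seasonal component can be added
# (seasonal=12), as in the authors' companion paper below.
model = sm.tsa.UnobservedComponents(y, level="local linear trend")
fit = model.fit(disp=False)
print(fit.summary())
print(fit.forecast(5))  # predicted accidents for the next five years
```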

  14. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1996-09-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation is confirmed by further tests at high temperatures as well as by finite element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation is confirmed by finite element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure is developed and validated by tests under varying temperature and pressure loading expected during severe accidents.
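
    The abstract does not give the creep rupture model's form, but a common engineering realization combines a Larson-Miller rupture-time correlation with a time-fraction damage rule, accumulating damage over the temperature and pressure history until it reaches unity. A sketch with placeholder constants, not the validated correlation from this work:

```python
import numpy as np

# Illustrative creep-rupture failure prediction via a Larson-Miller
# correlation and the time-fraction damage rule. All constants are
# placeholders for illustration, not fitted tube-material values.
C = 20.0                                        # Larson-Miller constant

def rupture_time_h(stress_mpa, temp_k):
    lmp = 24000 - 3500 * np.log10(stress_mpa)   # assumed LMP(stress) fit
    return 10 ** (lmp / temp_k - C)

# Severe-accident transient: tube temperature ramp at constant hoop stress,
# sampled in 1-minute steps over 2 hours.
t_h = np.arange(0, 2, 1 / 60)
temp = 900 + 200 * t_h                          # temperature, K
stress = np.full_like(t_h, 80.0)                # hoop stress, MPa

# Time-fraction damage: failure is predicted when the accumulated fraction
# of rupture life consumed reaches 1.
damage = np.cumsum((1 / 60) / rupture_time_h(stress, temp))
if damage[-1] >= 1.0:
    idx = int(np.argmax(damage >= 1.0))
    print(f"predicted failure after {t_h[idx]:.2f} h")
else:
    print("no failure predicted within the transient")
```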

  15. Review of models applicable to accident aerosols

    SciTech Connect

    Glissmeyer, J.A.

    1983-07-01

    Estimations of potential airborne-particle releases are essential in safety assessments of nuclear-fuel facilities. This report is a review of aerosol behavior models that have potential applications for predicting aerosol characteristics in compartments containing accident-generated aerosol sources. Such characterization of the accident-generated aerosols is a necessary step toward estimating their eventual release in any accident scenario. Existing aerosol models can predict the size distribution, concentration, and composition of aerosols as they are acted on by ventilation, diffusion, gravity, coagulation, and other phenomena. Models developed in the fields of fluid mechanics, indoor air pollution, and nuclear-reactor accidents are reviewed with this nuclear fuel facility application in mind. The various capabilities of modeling aerosol behavior are tabulated and discussed, and recommendations are made for applying the models to problems of differing complexity.
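
    The simplest member of this model family treats the compartment as well mixed, with first-order removal by ventilation and gravitational settling; coagulation and diffusion add nonlinear terms on top of this. A sketch with assumed parameter values:

```python
import numpy as np

# Minimal well-mixed compartment aerosol balance: airborne concentration
# decays by ventilation and gravitational settling; coagulation is
# neglected. All parameter values are assumptions for illustration.
V = 500.0     # compartment volume, m^3
Q = 250.0     # ventilation flow, m^3/h
v_s = 0.36    # settling velocity, m/h (micron-range particles)
A = 100.0     # floor (deposition) area, m^2
lam = Q / V + v_s * A / V        # total first-order removal rate, 1/h

t = np.linspace(0, 8, 9)
c0 = 2.0e-3                      # initial aerosol concentration, g/m^3
print(c0 * np.exp(-lam * t))     # airborne concentration over 8 hours
```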

  16. Investigation of adolescent accident predictive variables in hilly regions.

    PubMed

    Mohanty, Malaya; Gupta, Ankit

    2016-09-01

    The study aims to determine the significant personal and environmental factors for predicting adolescent accidents in hilly regions, taking into account two cities, Hamirpur and Dharamshala, which lie at an average elevation of 700-1000 metres above mean sea level (MSL). Detailed comparisons between the results for the two cities are also presented. The results are analyzed to provide a list of the most significant factors responsible for adolescent accidents. Data were collected from different schools and colleges of each city with the help of a questionnaire survey. Around 690 responses from Hamirpur and 460 responses from Dharamshala were taken for study and analysis. Standard deviations (SD) of the various factors affecting accidents were calculated; factors with relatively very low SD were discarded, and the remaining variables were considered for correlation. Correlations were developed using Kendall's tau and chi-square tests, and the factors found significant were used for modelling. These were the victim's age, the character of the road, the speed of the vehicle, and the use of a helmet for Hamirpur; for Dharamshala, the kind of vehicle involved was an additional variable found responsible for adolescent accidents. A logistic regression was performed to determine the effect of each category within a variable on the occurrence of accidents. While age and vehicle speed are considered important factors for accident occurrence according to Indian accident data records, the use of a helmet also emerges as a major concern. The 15-18 and 18-21 year age groups were found to be more susceptible to accidents than the higher age groups. Because of the hilly terrain, the character of the road becomes a major concern as a cause of accidents, and the topography of the area makes the kind of vehicle involved a major variable in determining the severity of accidents. PMID:26077876
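
    The screening-then-modelling procedure (Kendall's tau to retain significant variables, then logistic regression on the survivors) can be sketched with simulated questionnaire data; the variable names and effect sizes below are made up:

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau
import statsmodels.api as sm

# Simulated questionnaire responses mirroring the screening procedure.
rng = np.random.default_rng(5)
n = 690
df = pd.DataFrame({
    "age_group": rng.integers(0, 4, n),    # 15-18, 18-21, 21-24, 24+
    "speed": rng.integers(0, 3, n),        # low / medium / high
    "helmet_use": rng.integers(0, 2, n),
})
lp = -1.5 + 0.5 * df["speed"] - 0.8 * df["helmet_use"] - 0.3 * df["age_group"]
df["accident"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# Stage 1: keep variables with a significant Kendall's tau correlation.
keep = [c for c in ["age_group", "speed", "helmet_use"]
        if kendalltau(df[c], df["accident"]).pvalue < 0.05]

# Stage 2: logistic regression on the retained variables.
fit = sm.Logit(df["accident"], sm.add_constant(df[keep])).fit(disp=False)
print(fit.params)   # effect of each retained variable on accident odds
```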

  17. Do Cognitive Models Help in Predicting the Severity of Posttraumatic Stress Disorder, Phobia, and Depression after Motor Vehicle Accidents? A Prospective Longitudinal Study

    ERIC Educational Resources Information Center

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-01-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months…

  18. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  19. Modelling road accidents: An approach using structural time series

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.

  20. Development of a model to predict flow oscillations in low-flow sodium boiling. [Loss-of-Piping Integrity accidents

    SciTech Connect

    Levin, A.E.; Griffith, P.

    1980-04-01

    Tests performed in a small-scale water loop showed that voiding oscillations, similar to those observed in sodium, were present in water as well. An analytical model, appropriate for either sodium or water, was developed and used to describe the water flow behavior. The experimental results indicate that water can be successfully employed as a sodium simulant, and further, that the condensation heat transfer coefficient varies significantly during the growth and collapse of vapor slugs during oscillations. It is this variation, combined with the temperature profile of the unheated zone above the heat source, which determines the oscillatory behavior of the system. The analytical program has produced a model which does a good qualitative job of predicting the flow behavior in the water experiment. The amplitude discrepancies are attributable to experimental uncertainties and model inadequacies. Several parameters (heat transfer coefficient, unheated zone temperature profile, mixing between hot and cold fluids during oscillations) are set by the user. Criteria for the comparison of water and sodium experiments have been developed.

  21. FASTGRASS: A mechanistic model for the prediction of Xe, I, Cs, Te, Ba, and Sr release from nuclear fuel under normal and severe-accident conditions

    SciTech Connect

    Rest, J.; Zawadzki, S.A.

    1992-09-01

    The primary physical/chemical models that form the basis of the FASTGRASS mechanistic computer model for calculating fission-product release from nuclear fuel are described. Calculated results are compared with test data and the major mechanisms affecting the transport of fission products during steady-state and accident conditions are identified.

  22. An exploration of the utility of mathematical modeling predicting fatigue from sleep/wake history and circadian phase applied in accident analysis and prevention: the crash of Comair Flight 5191.

    PubMed

    Pruchnicki, Shawn A; Wu, Lora J; Belenky, Gregory

    2011-05-01

    On 27 August 2006 at 0606 eastern daylight time (EDT) at Bluegrass Airport in Lexington, KY (LEX), the flight crew of Comair Flight 5191 inadvertently attempted to take off from a general aviation runway too short for their aircraft. The aircraft crashed killing 49 of the 50 people on board. To better understand this accident and to aid in preventing similar accidents, we applied mathematical modeling predicting fatigue-related degradation in performance for the Air Traffic Controller on-duty at the time of the crash. To provide the necessary input to the model, we attempted to estimate circadian phase and sleep/wake histories for the Captain, First Officer, and Air Traffic Controller. We were able to estimate with confidence the circadian phase for each. We were able to estimate with confidence the sleep/wake history for the Air Traffic Controller, but unable to do this for the Captain and First Officer. Using the sleep/wake history estimates for the Air Traffic Controller as input, the mathematical modeling predicted moderate fatigue-related performance degradation at the time of the crash. This prediction was supported by the presence of what appeared to be fatigue-related behaviors in the Air Traffic Controller during the 30 min prior to and in the minutes after the crash. Our modeling results do not definitively establish fatigue in the Air Traffic Controller as a cause of the accident, rather they suggest that had he been less fatigued he might have detected Comair Flight 5191's lining up on the wrong runway. We were not able to perform a similar analysis for the Captain and First Officer because we were not able to estimate with confidence their sleep/wake histories. Our estimates of sleep/wake history and circadian rhythm phase for the Air Traffic Controller might generalize to other air traffic controllers and to flight crew operating in the early morning hours at LEX. Relative to other times of day, the modeling results suggest an elevated risk of fatigue

  23. Nuclear Facilities Fire Accident Model

    Energy Science and Technology Software Center (ESTSC)

    1999-09-01

    NATURE OF PROBLEM SOLVED: FIRAC predicts fire-induced flows, thermal and material transport, and radioactive and nonradioactive source terms in a ventilation system. It is designed to predict the radioactive and nonradioactive source terms that lead to gas dynamic, material transport, and heat transfer transients. FIRAC's capabilities are directed toward nuclear fuel cycle facilities and the primary release pathway, the ventilation system. However, it is applicable to other facilities and can be used to model other airflow pathways within a structure. The basic material transport capability of FIRAC includes estimates of entrainment, convection, deposition, and filtration of material. The interrelated effects of filter plugging, heat transfer, and gas dynamics are also simulated. A ventilation system model includes elements such as filters, dampers, ducts, and blowers connected at nodal points to form networks. A zone-type compartment fire model is incorporated to simulate fire-induced transients within a facility. METHOD OF SOLUTION: FIRAC solves one-dimensional, lumped-parameter, compressible flow equations by an implicit numerical scheme. The lumped-parameter method is the basic formulation that describes the gas dynamics system. No spatial distribution of parameters is considered in this approach, but an effect of spatial distribution can be approximated by noding. Network theory, using the lumped-parameter method, includes a number of system elements, called branches, joined at certain points, called nodes. Ventilation system components that exhibit flow resistance and inertia, such as dampers, ducts, valves, and filters, and those that exhibit flow potential, such as blowers, are located within the branches of the system. The connection points of branches are nodes for components that have finite volumes, such as rooms, gloveboxes, and plenums, and for boundaries where the volume is practically infinite. All internal nodes, therefore, possess some
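
    The lumped-parameter network idea can be illustrated with a linearized steady-state analogue: branches behave like resistances, and mass balance at each internal node yields a linear system for the nodal pressures. FIRAC itself solves compressible transient equations, so this is only the skeleton of the approach, with assumed values throughout:

```python
import numpy as np

# Toy lumped-parameter ventilation network: branches (ducts, dampers,
# filters) carry flow proportional to the pressure difference across them;
# mass balance holds at each internal node. Node numbering: 0 = supply
# boundary, 1 and 2 = rooms, 3 = exhaust boundary.
p_fixed = {0: 120.0, 3: 0.0}                        # boundary pressures
branches = [(0, 1, 2.0), (1, 2, 4.0), (2, 3, 1.5)]  # (from, to, resistance)
unknown = [1, 2]                                    # internal nodes

A = np.zeros((2, 2))
b = np.zeros(2)
for i, j, r in branches:
    g = 1.0 / r                                     # branch conductance
    for node, other in ((i, j), (j, i)):
        if node not in unknown:
            continue
        k = unknown.index(node)
        A[k, k] += g
        if other in p_fixed:
            b[k] += g * p_fixed[other]
        else:
            A[k, unknown.index(other)] -= g

pressures = {**p_fixed, **dict(zip(unknown, np.linalg.solve(A, b)))}
for i, j, r in branches:
    print(f"branch {i}->{j}: flow = {(pressures[i] - pressures[j]) / r:.2f}")
```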

  24. Predicting and analyzing the trend of traffic accidents deaths in Iran in 2014 and 2015

    PubMed Central

    Mehmandar, Mohammadreza; Soori, Hamid; Mehrabi, Yadolah

    2016-01-01

    Background: Predicting and analyzing the trend in traffic accident deaths can be a useful tool for planning and policy-making, for conducting interventions appropriate to the death trend, and for taking the actions required to control and prevent future occurrences. Objective: To predict and analyze the trend of traffic accident deaths in Iran in 2014 and 2015. Settings and Design: It was a cross-sectional study. Materials and Methods: All the information on fatal traffic accidents available in the database of the Iran Legal Medicine Organization from 2004 to the end of 2013 was used to determine the change points (multivariable time series analysis). Using an autoregressive integrated moving average (ARIMA) model, traffic accident death rates were predicted for 2014 and 2015, and the predicted values were compared with the actual rates in order to determine the efficiency of the model. Results: The actual death rate in 2014 was very close to the predicted rate for that year, while for 2015 a decrease compared with the previous year (2014) was predicted for all months. A maximum value of 41% was also predicted for the months of January and February 2015. Conclusion: The prediction and analysis of the death trends indicate that proper application and continuous use of the interventions conducted in previous years for road safety improvement and motor vehicle safety improvement, particularly training and culture-fostering interventions, together with the approval and enforcement of deterrent regulations for changing organizational behaviors, can significantly decrease the losses caused by traffic accidents. PMID:27308255
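
    An ARIMA forecast of this kind is straightforward to sketch. The simulated monthly series and the (1,1,1)x(0,1,1,12) order below are assumptions, not the data or the order selected in the study:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Simulated monthly traffic-death counts for 2004-2013 standing in for the
# Legal Medicine Organization data: declining trend plus annual seasonality.
rng = np.random.default_rng(11)
idx = pd.date_range("2004-01", periods=120, freq="MS")
y = pd.Series(2200 - 5 * np.arange(120)
              + 300 * np.sin(np.arange(120) * 2 * np.pi / 12)
              + rng.normal(0, 80, 120), index=idx)

fit = ARIMA(y, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()
print(fit.forecast(24))  # predicted monthly deaths for 2014-2015
```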

  25. Modeling secondary accidents identified by traffic shock waves.

    PubMed

    Junhua, Wang; Boya, Liu; Lanfang, Zhang; Ragland, David R

    2016-02-01

    The high potential for occurrence and the negative consequences of secondary accidents make them an issue of great concern affecting freeway safety. Using accident records from a three-year period together with California interstate freeway loop data, a dynamic method for more accurate classification based on the traffic shock wave detecting method was used to identify secondary accidents. Spatio-temporal gaps between the primary and secondary accidents were shown to fit a mixture of Weibull and normal distributions. A logistic regression model was developed to investigate major factors contributing to secondary accident occurrence. Traffic shock wave speed and volume at the occurrence of a primary accident were explicitly considered in the model, as a secondary accident is defined as an accident that occurs within the spatio-temporal impact scope of the primary accident. Results show that the shock waves originating in the wake of a primary accident have a more significant impact on the likelihood of a secondary accident occurrence than the effects of traffic volume. Primary accidents with long durations can significantly increase the possibility of secondary accidents. Unsafe speed and weather are other factors contributing to secondary crash occurrence. It is strongly suggested that when police or rescue personnel arrive at the scene of an accident, they should not suddenly block, decrease, or unblock the traffic flow, but instead endeavor to control traffic in a smooth and controlled manner. It is also important to reduce accident processing time to lower the risk of a secondary accident. PMID:26687540
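
    The occurrence model is a standard binary logistic regression whose covariates include the shock wave speed at the time of the primary accident. A sketch on simulated records (the variables and coefficients are illustrative, not the paper's estimates):

```python
import numpy as np
import statsmodels.api as sm

# Simulated primary-accident records: shock wave speed (km/h), traffic
# volume (veh/h/lane), duration (min), and whether a secondary accident
# occurred inside the primary's spatio-temporal impact area.
rng = np.random.default_rng(2)
n = 2000
wave = rng.uniform(0, 25, n)
volume = rng.uniform(500, 2200, n)
duration = rng.exponential(45, n)
lp = -5 + 0.12 * wave + 0.0006 * volume + 0.015 * duration
secondary = rng.binomial(1, 1 / (1 + np.exp(-lp)))

X = sm.add_constant(np.column_stack([wave, volume, duration]))
fit = sm.Logit(secondary, X).fit(disp=False)
print(fit.params)  # coefficients on wave speed, volume, and duration
```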

  26. Predicted spatio-temporal dynamics of radiocesium deposited onto forests following the Fukushima nuclear accident

    PubMed Central

    Hashimoto, Shoji; Matsuura, Toshiya; Nanko, Kazuki; Linkov, Igor; Shaw, George; Kaneko, Shinji

    2013-01-01

    The majority of the area contaminated by the Fukushima Dai-ichi nuclear power plant accident is covered by forest. To facilitate effective countermeasure strategies to mitigate forest contamination, we simulated the spatio-temporal dynamics of radiocesium deposited into Japanese forest ecosystems in 2011 using a model that was developed after the Chernobyl accident in 1986. The simulation revealed that the radiocesium inventories in tree and soil surface organic layer components drop rapidly during the first two years after the fallout. Over a period of one to two years, the radiocesium is predicted to move from the tree and surface organic soil to the mineral soil, which eventually becomes the largest radiocesium reservoir within forest ecosystems. Although the uncertainty of our simulations should be considered, the results provide a basis for understanding and anticipating the future dynamics of radiocesium in Japanese forests following the Fukushima accident. PMID:23995073

  27. Characterizing the Severe Turbulence Environments Associated With Commercial Aviation Accidents: A Real-Time Turbulence Model (RTTM) Designed for the Operational Prediction of Hazardous Aviation Turbulence Environments

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.

    2004-01-01

    Real-time prediction of environments predisposed to producing moderate-to-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system, designed to predict environments predisposed to severe aviation turbulence, and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16,000 to 46,000 ft at 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.

  28. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.R.; Hall, R.E.; Martinez-Guridi, G.; Uryasev, S.; Sampath, S.G.

    1996-10-01

    A synthetic model of scheduled-commercial U.S. aviation fatalities was constructed from linear combinations of the time-spectra of critical systems reporting, using 5.5 years of Service Difficulty Reports (SDR) and Accident Incident Data System (AIDS) records. This model, used to predict near-future trends in aviation accidents, was tested by using the first 36 months of data to construct the synthetic model, which was used to predict fatalities during the following eight months. These predictions were tested by comparison with the fatality data. A reliability block diagram (RBD) and third-order extrapolations also were used as predictive models and compared with actuality. The synthetic model was the best predictor because of its use of systems data. Other results of the study are a database of service difficulties for major aviation systems, and a rank ordering of systems according to their contribution to the synthesis. 4 refs., 8 figs., 3 tabs.
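
    The synthesis amounts to a least-squares fit of the fatality series on the per-system report time series, trained on the first 36 months and extrapolated over the next eight. A sketch with simulated series standing in for the SDR/AIDS data:

```python
import numpy as np

# Simulated monthly per-system service-difficulty counts and a fatality
# series that is (by construction) a linear combination of them plus noise.
rng = np.random.default_rng(4)
months, n_systems = 44, 6
sdr = rng.poisson(30, (months, n_systems)).astype(float)  # SDR time spectra
true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.1])
fatalities = sdr @ true_w + rng.normal(0, 2, months)

# Fit the synthetic model on the first 36 months, predict the last 8.
w, *_ = np.linalg.lstsq(sdr[:36], fatalities[:36], rcond=None)
pred = sdr[36:] @ w
print(np.round(pred, 1))
print(np.round(fatalities[36:], 1))   # compare prediction with "actuality"
```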

  29. An idealized transient model for melt dispersal from reactor cavities during pressurized melt ejection accident scenarios

    SciTech Connect

    Tutu, N.K.

    1991-06-01

    Direct Containment Heating (DCH) calculations require the transient rate at which melt is ejected from the reactor cavity during hypothetical pressurized melt ejection accident scenarios. However, no models are currently available that can predict the melt dispersal data obtained from small-scale reactor cavity models. In this report, a simple idealized model of the melt dispersal process within a reactor cavity during a pressurized melt ejection accident scenario is presented. The predictions from the model agree reasonably well with the integral data obtained from the melt dispersal experiments using a small-scale model of the Surry reactor cavity. 17 refs., 15 figs.

  30. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Behavioral accidents are a particular type of accident. They are caused by inappropriate individual behaviors and faulty reactions. Catastrophe theory is a means for mathematically modeling the dynamic processes that underlie behavioral accidents. Based on a comprehensive database of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that carry high risks of serious injury. Field tests with the model indicate that it has three important uses: it can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.

  31. Weather and Dispersion Modeling of the Fukushima Daiichi Nuclear Power Station Accident

    NASA Astrophysics Data System (ADS)

    Dunn, Thomas; Businger, Steven

    2014-05-01

    The surface deposition of radioactive material from the accident at the Fukushima Daiichi nuclear power station was investigated for 11 March to 17 March 2011. A coupled weather and dispersion modeling system was developed and simulations of the accident performed using two independent source terms that differed in emission rate and height and in the total amount of radioactive material released. Observations in Japan during the first week of the accident revealed a natural grouping between periods of dry (12-14 March) and wet (15-17 March) weather. The distinct weather regimes served as convenient validation periods for the model predictions. Results show significant differences in the distribution of cumulative surface deposition of 137Cs due to wet and dry removal processes. A comparison of 137Cs deposition predicted by the model with aircraft observations of surface-deposited gamma radiation showed reasonable agreement in surface contamination patterns during the dry phase of the accident for both source terms. It is suggested that this agreement is because of the weather model's ability to simulate the extent and timing of onshore flow associated with a sea breeze circulation that developed around the time of the first reactor explosion. During the wet phase of the accident the pattern is not as well predicted. It is suggested that this discrepancy is because of differences between model predicted and observed precipitation distributions.
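
    The wet/dry contrast the study describes comes from two different removal terms: a dry flux proportional to the near-surface concentration through a deposition velocity, and a wet flux proportional to the column burden through a rainfall-dependent scavenging coefficient. A sketch with typical literature values; the concentrations and the power-law scavenging fit are assumptions, not the study's inputs:

```python
import numpy as np

# Dry vs. wet removal of 137Cs from a plume segment. All values are
# illustrative assumptions.
c_air = 1.0e3                  # near-surface air concentration, Bq/m^3
burden = 5.0e5                 # column-integrated activity, Bq/m^2
v_d = 1.0e-3                   # dry deposition velocity, m/s
rain = 2.0                     # rainfall rate, mm/h
lam = 8.4e-5 * rain ** 0.79    # scavenging coefficient, 1/s (assumed fit)

dry_flux = v_d * c_air         # Bq/m^2/s
wet_flux = lam * burden        # Bq/m^2/s
print(f"one hour of deposition: dry {3600 * dry_flux:.0f} Bq/m^2, "
      f"wet {3600 * wet_flux:.0f} Bq/m^2")
```

    With even modest rainfall the wet term dominates by orders of magnitude, which is why precipitation errors degrade the deposition pattern during the wet phase.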

  32. Predicting Posttraumatic Stress Symptoms in Children after Road Traffic Accidents

    ERIC Educational Resources Information Center

    Landolt, Markus A.; Vollrath, Margarete; Timm, Karin; Gnehm, Hanspeter E.; Sennhauser, Felix H.

    2005-01-01

    Objective: To prospectively assess the prevalence, course, and predictors of posttraumatic stress symptoms (PTSSs) in children after road traffic accidents (RTAs). Method: Sixty-eight children (6.5-14.5 years old) were interviewed 4-6 weeks and 12 months after an RTA with the Child PTSD Reaction Index (response rate 58.6%). Their mothers (n = 60)…

  33. Battery Life Predictive Model

    Energy Science and Technology Software Center (ESTSC)

    2009-12-31

    The Software consists of a model used to predict battery capacity fade and resistance growth for arbitrary cycling and temperature profiles. It allows the user to extrapolate from experimental data to predict actual cycle life.

  34. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions. PMID:27286683

  35. Wind power prediction models

    NASA Technical Reports Server (NTRS)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
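
    The two simulation models can be sketched together: uncorrelated hourly draws from a fitted speed distribution, and a correlated version that passes a Gaussian AR(1) process through the same distribution's quantile function, so that both the marginal distribution and the hour-to-hour correlation are reproduced. The Weibull and correlation parameters below are assumptions, not Goldstone fits:

```python
import numpy as np
from scipy.stats import norm, weibull_min

rng = np.random.default_rng(6)
shape, scale, rho, n = 2.0, 7.0, 0.9, 24 * 30   # assumed wind climatology

# Interim model: uncorrelated hourly speeds from the fitted distribution.
uncorrelated = weibull_min.rvs(shape, scale=scale, size=n, random_state=6)

# Stochastic model: Gaussian AR(1) mapped through the Weibull quantile
# function, preserving both the speed distribution and the correlation.
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()
correlated = weibull_min.ppf(norm.cdf(z), shape, scale=scale)

print(uncorrelated.mean(), correlated.mean())  # similar marginal statistics
```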

  36. Usefulness of high resolution coastal models for operational oil spill forecast: the Full City accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-06-01

    Oil spill modeling is considered to be an important decision support system (DeSS) useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low-resolution basin-scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the Full City accident on the Norwegian south coast and compare three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but when an analysis based on a higher-resolution model (1.5 km resolution) for the area is included, the model system shows results that compare well with observations. The study also shows that an ensemble using three different models is useful when predicting/analyzing oil spill in coastal areas.

  37. Usefulness of high resolution coastal models for operational oil spill forecast: the "Full City" accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-11-01

    Oil spill modeling is considered to be an important part of a decision support system (DeSS) for oil spill combatment and is useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low resolution basin scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the "Full City" accident on the Norwegian south coast and compare operational simulations from three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but by applying ocean forcing data of higher resolution (1.5 km resolution), the model system shows results that compare well with observations. The study also shows that an ensemble of results from the three different models is useful when predicting/analyzing oil spill in coastal areas.

  38. An approach to accidents modeling based on compounds road environments.

    PubMed

    Fernandes, Ana; Neves, Jose

    2013-04-01

    The most common approach to studying the influence of certain road features on accidents has been the consideration of uniform road segments characterized by a single feature. However, when an accident is related to the road infrastructure, its cause is usually not a single characteristic but rather a complex combination of several characteristics. The main objective of this paper is to describe a methodology developed to consider the road as a complete environment by using compound road environments, overcoming the limitations inherent in considering only uniform road segments. The methodology consists of: dividing a sample of roads into segments; grouping them into fairly homogeneous road environments using cluster analysis; and identifying the influence of skid resistance and texture depth on road accidents in each environment by using generalized linear models. The application of this methodology is demonstrated for eight roads. Based on real data from accidents and road characteristics, three compound road environments were established in which the pavement surface properties significantly influence the occurrence of accidents. Results showed clearly that road environments where braking maneuvers are more common, or those with small radii of curvature and high speeds, require higher skid resistance and texture depth as an important contribution to accident prevention. PMID:23376544

  39. Accidents and unpleasant incidents: worry in transport and prediction of travel behavior.

    PubMed

    Backer-Grøndahl, Agathe; Fyhri, Aslak; Ulleberg, Pål; Amundsen, Astrid Helene

    2009-09-01

    Worry on nine different means of transport was measured in a Norwegian sample of 853 respondents. The main aim of the study was to investigate differences in worry about accidents and worry about unpleasant incidents, and how these two sorts of worry relate to various means of transport as well as transport behavior. Factor analyses of worry about accidents suggested a division between rail transport, road transport, and nonmotorized transport, whereas analyses of worry about unpleasant incidents suggested a division between transport modes where you interact with other people and "private" transport modes. Moreover, mean ratings of worry showed that respondents worried more about accidents than unpleasant incidents on private transport modes, and more about unpleasant incidents than accidents on public transport modes. Support for the distinction between worry about accidents and unpleasant incidents was also found when investigating relationships between both types of worry and behavioral adaptations: worry about accidents was more important than worry about unpleasant incidents in relation to behavioral adaptations on private means of transport, whereas the opposite was true for public means of transport. Finally, predictors of worry were investigated. The models of worry about accidents and worry about unpleasant incidents differed as to what predictors turned out significant. Knowledge about peoples' worries on different means of transport is important with regard to understanding and influencing transport and travel behavior, as well as attending to commuters' welfare. PMID:19645756

  40. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve, and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity, not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology. PMID:23423686

  41. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT satellite program of NASA, a survey of representative primitive equation atmospheric prediction models in use around the world was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The survey is tutorial in nature, describing the features of the various models in a systematic manner.

  42. [Predictive models for ART].

    PubMed

    Arvis, P; Guivarc'h-Levêque, A; Varlan, E; Colella, C; Lehert, P

    2013-02-01

    A predictive model is a mathematical expression estimating the probability of pregnancy by combining predictive variables, or indicators. Its development requires three successive phases: formulation of the model; its validation, internal and then external; and the impact study. Its performance is assessed by its discrimination and its calibration. Numerous models have been proposed for spontaneous pregnancies, IUI, and IVF, but with rather poor results, and their external validation was seldom carried out and was mainly inconclusive. The impact study, which ascertains whether use of a model improves medical practice, was performed only exceptionally. The ideal ART predictive model is a "Center-specific" model helping physicians to choose between abstention, IUI, and IVF by providing a reliable cumulative pregnancy rate for each option. Such a tool would make it possible to rationalize practice by avoiding premature, late, or hopeless treatments. The model would also allow comparison of performance between ART Centers based on objective criteria. Today the best solution is to adjust the existing models to one's own practice, by considering models validated with variables describing the treated population, whilst adjusting the calculation to the Center's performance. PMID:23182786
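
    Discrimination and calibration, the two performance notions used throughout this literature, are easy to make concrete: discrimination is typically summarized by the area under the ROC curve, and calibration by comparing predicted and observed pregnancy rates in probability bins. A sketch on simulated ART-like data (age and AMH level are stand-in predictors, not a validated model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Simulated cohort: pregnancy probability driven by age and AMH level.
rng = np.random.default_rng(9)
n = 1500
age = rng.uniform(25, 43, n)
amh = rng.lognormal(0.5, 0.6, n)
p_true = 1 / (1 + np.exp(-(3.5 - 0.15 * age + 0.4 * np.log(amh))))
pregnant = rng.binomial(1, p_true)

# Develop on the first 1000 cases, validate on the remainder.
X = np.column_stack([age, np.log(amh)])
model = LogisticRegression(max_iter=1000).fit(X[:1000], pregnant[:1000])
p_hat = model.predict_proba(X[1000:])[:, 1]

print("discrimination (AUC):", roc_auc_score(pregnant[1000:], p_hat))
obs, pred = calibration_curve(pregnant[1000:], p_hat, n_bins=5)
print("calibration, observed vs predicted per bin:")
print(np.round(obs, 2), np.round(pred, 2))
```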

  43. The modelling of fuel volatilisation in accident conditions

    NASA Astrophysics Data System (ADS)

    Manenc, H.; Mason, P. K.; Kissane, M. P.

    2001-04-01

    For oxidising conditions, at high temperatures, the pressure of uranium vapour species at the fuel surface is predicted to be high. These vapour species can be transported away from the fuel surface, giving rise to significant amounts of volatilised fuel, as has been observed during small-scale experiments and taken into account in different models. Hence, fuel volatilisation must be taken into account in the conduct of a simulated severe accident such as the Phebus FPT-4 experiment. A large-scale in-pile test is designed to investigate the release of fission products and actinides from irradiated UO 2 fuel in a debris bed and molten pool configuration. Best estimate predictions for fuel volatilisation were performed before the test. This analysis was used to assess the maximum possible loading of filters collecting emissions and the consequences for the filter-change schedule. Following successful completion of the experiment, blind post-test analysis is being performed; boundary conditions for the calculations are based on the preliminary post-test analysis with the core degradation code ICARE2 [J.C. Crestia, G. Repetto, S. Ederli, in: Proceedings of the Fourth Technical Seminar on the PHEBUS FP Programme, Marseille, France, 20-22 March 2000]. The general modelling approach is presented here and then illustrated by the analysis of fuel volatilisation in Phebus FPT4 (for which results are not yet available). Effort was made to reduce uncertainties in the calculations by improving the understanding of controlling physical processes and by using critically assessed thermodynamic data to determine uranium vapour pressures. The analysis presented here constitutes a preliminary, blind, post-test estimate of fuel volatilised during the test.

  4. Accident sequence precursor analysis level 2/3 model development

    SciTech Connect

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-02-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models.

  5. Advanced accident sequence precursor analysis level 2 models

    SciTech Connect

    Galyean, W.J.; Brownson, D.A.; Rempe, J.L.

    1996-03-01

    The U.S. Nuclear Regulatory Commission Accident Sequence Precursor program pursues the ultimate objective of performing risk-significant evaluations of operational events (precursors) occurring in commercial nuclear power plants. To achieve this objective, the Office of Nuclear Regulatory Research is supporting the development of simple probabilistic risk assessment models for all commercial nuclear power plants (NPP) in the U.S. Presently, only simple Level 1 plant models have been developed, which estimate core damage frequencies. In order to provide a true risk perspective, the consequences associated with postulated core damage accidents also need to be considered. With the objective of performing risk evaluations in an integrated and consistent manner, a linked event tree approach was developed which propagates the front-end results to the back end. This approach utilizes simple plant models that analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude and timing of a radioactive release to the environment, and calculate the consequences for a given release. Detailed models and results from previous studies, such as the NUREG-1150 study, are used to quantify these simple models. These simple models are then linked to the existing Level 1 models and are evaluated using the SAPHIRE code. To demonstrate the approach, prototypic models have been developed for a boiling water reactor, Peach Bottom, and a pressurized water reactor, Zion.
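    To make the linked event tree idea concrete, the sketch below propagates Level 1 plant-damage-state frequencies through conditional containment-failure probabilities to release-category frequencies. All state names and numerical values are invented for illustration only; they are not taken from the SAPHIRE or NUREG-1150 models described above.

```python
# Minimal sketch of Level 1 -> Level 2 linkage: plant-damage-state (PDS)
# frequencies are multiplied through a containment event tree to obtain
# release-category frequencies. All values are hypothetical.

# Level 1 output: frequency (per reactor-year) of each plant damage state
pds_frequency = {"PDS-1_high_pressure": 2.0e-6, "PDS-2_low_pressure": 5.0e-6}

# Level 2 containment event tree: P(release category | PDS)
containment_tree = {
    "PDS-1_high_pressure": {"early_failure": 0.10, "late_failure": 0.30, "intact": 0.60},
    "PDS-2_low_pressure":  {"early_failure": 0.02, "late_failure": 0.18, "intact": 0.80},
}

# Propagate the front-end results to back-end release-category frequencies.
release_frequency = {}
for pds, freq in pds_frequency.items():
    for category, cond_prob in containment_tree[pds].items():
        release_frequency[category] = release_frequency.get(category, 0.0) + freq * cond_prob

for category, freq in sorted(release_frequency.items()):
    print(f"{category}: {freq:.2e} per reactor-year")
```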

  6. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  7. Predicting pediatric posttraumatic stress disorder after road traffic accidents: the role of parental psychopathology.

    PubMed

    Kolaitis, Gerasimos; Giannakopoulos, George; Liakopoulou, Magda; Pervanidou, Panagiota; Charitaki, Stella; Mihas, Constantinos; Ferentinos, Spyros; Papassotiriou, Ioannis; Chrousos, George P; Tsiantis, John

    2011-08-01

    This study examined prospectively the role of parental psychopathology, among other predictors, in the development and persistence of posttraumatic stress disorder (PTSD) in 57 hospitalized youths aged 7-18 years immediately after a road traffic accident and 1 and 6 months later. Self-report questionnaires and semistructured diagnostic interviews were used in all 3 assessments. Neuroendocrine evaluation was performed at the initial assessment. Maternal PTSD symptomatology predicted the development of children's PTSD 1 month after the event, OR = 6.99, 95% CI [1.049, 45.725]; the persistence of PTSD 6 months later was predicted by the child's increased evening salivary cortisol concentrations within 24 hours of the accident, OR = 1.006, 95% CI [1.001, 1.011]. Evaluation of both the biological and psychosocial predictors that increase the risk for later development and maintenance of PTSD is important for appropriate early prevention and treatment. PMID:21812037

  8. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Based on a comprehensive data base of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that carry high risks of serious injury. Field tests with the model indicate that it has three important uses: it can serve as an effective training aid for increasing employee safety consciousness; as a management laboratory for testing decision alternatives and policies; and as a tool to help design the most effective work teams.
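    Catastrophe models of this kind are commonly formalized with the cusp potential, whose equilibria satisfy a cubic equation; as a control factor varies smoothly, the stable equilibrium can jump discontinuously, mimicking a sudden lapse into unsafe behavior. The sketch below is a minimal illustration of the canonical cusp, not the Bureau of Mines model itself; the parameter labels and values are assumptions.

```python
import numpy as np

# Canonical cusp catastrophe: V(x) = x**4/4 - a*x**2/2 - b*x, so the
# equilibria solve dV/dx = x**3 - a*x - b = 0. Here x might stand for a
# behavioral state (safe vs. unsafe), a for a "splitting" factor such as
# job stress, and b for a "normal" factor such as production pressure --
# all illustrative labels, not the Bureau of Mines variables.
def equilibria(a, b):
    roots = np.roots([1.0, 0.0, -a, -b])
    return [round(r.real, 3) for r in roots if abs(r.imag) < 1e-9]

a = 1.5  # beyond the cusp point, two stable states can coexist
for b in np.linspace(-1.0, 1.0, 9):
    print(f"b = {b:+.2f} -> equilibria {equilibria(a, b)}")
```

    Sweeping b shows the system passing from one equilibrium to three (bistability) and back to one; the jump between branches is the model's analogue of a spontaneous unsafe act.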

  9. Modelling the oil spill track from Prestige-Nassau accident

    NASA Astrophysics Data System (ADS)

    Montero, P.; Leitao, P.; Penabad, E.; Balseiro, C. F.; Carracedo, P.; Braunschweig, F.; Fernandes, R.; Gomez, B.; Perez-Munuzuri, V.; Neves, R.

    2003-04-01

    On November 13th 2002, the tanker Prestige-Nassau sent an SOS signal. The ship's hull was damaged, producing an oil spill off the Galician coast (NW Spain). The damaged ship first headed north, spilling more fuel and affecting the western Galician coast, and then turned south. In this first stage of the accident, the ship spilt around 10,000 t of fuel by November 19th over the Galician Bank, 133 NM off the Galician coast. From the very beginning, monitoring and forecasting of the first slick were carried out. Afterwards, since southwesterly winds are frequent in wintertime, the slick from the initial spill started to move towards the Galician coast. This drift was followed by overflights. With the aim of forecasting where and when the slick would reach the coast, simulations with two different models were performed. The first was a very simple drift model forced with the surface winds generated by the ARPS operational model (1) at MeteoGalicia (the regional weather forecast service). The second was a more complex hydrodynamic model, MOHID2000 (2,3), developed by the MARETEC group (Instituto Superior Técnico de Lisboa) in collaboration with the GFNL (Grupo de Física Non Lineal, Universidade de Santiago de Compostela). On November 28th, tarballs appeared south of the main slick. These observations could be explained by taking into account the below-surface water movement following Ekman dynamics. New simulations were performed with the aim of better understanding the physics underlying these observations, and good agreement between observations and simulations was achieved. We performed simulations with and without the slope current previously calculated by other authors, showing that this current introduces only subtle differences in the slick's arrival point on the coast, with wind being the primary forcing. (1) A two-dimensional particle tracking model for pollution dispersion in A Coruña and Vigo Rias (NW Spain). M. Gómez-Gesteira, P. Montero, R
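    A drift model of the simple kind described first can be sketched in a few lines: the slick centroid is advected by the surface current plus a small fraction of the wind, rotated to mimic Ekman deflection. The 3% wind factor and 15-degree deflection below are conventional rule-of-thumb values assumed for illustration, not the parameters actually used with the ARPS winds.

```python
import numpy as np

def rotate(v, deg):
    """Rotate a 2-D vector clockwise by deg degrees (deflection to the
    right of the wind, as in the Northern Hemisphere Ekman layer)."""
    th = np.deg2rad(-deg)
    c, s = np.cos(th), np.sin(th)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

# assumed forcing: steady SW wind (m/s) and a weak ambient current (m/s)
wind = np.array([7.0, 7.0])
current = np.array([0.05, 0.02])
WIND_FACTOR, DEFLECTION_DEG = 0.03, 15.0   # rule-of-thumb drift parameters

pos = np.zeros(2)                # slick centroid (m), arbitrary origin
dt, hours = 3600.0, 72
for _ in range(hours):
    drift = current + WIND_FACTOR * rotate(wind, DEFLECTION_DEG)
    pos += drift * dt            # forward-Euler advection
print(f"displacement after {hours} h: {pos / 1000.0} km")
```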

  10. An application of probabilistic safety assessment methods to model aircraft systems and accidents

    SciTech Connect

    Martinez-Guridi, G.; Hall, R.E.; Fullwood, R.R.

    1998-08-01

    A case study modeling the thrust reverser system (TRS) in the context of the fatal accident of a Boeing 767 is presented to illustrate the application of Probabilistic Safety Assessment (PSA) methods. A simplified risk model consisting of an event tree with supporting fault trees was developed to represent the progression of the accident, taking into account the interaction between the TRS and the operating crew during the accident, and the findings of the accident investigation. A feasible sequence of events leading to the fatal accident was identified. Several insights about the TRS and the accident were obtained by applying PSA methods. Changes proposed for the TRS are also discussed.

  11. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    SciTech Connect

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility-study purposes. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosion, and intensive radioactivity release, which had previously been analyzed and assumed as postulated accident progressions in PRA models, occurred to varying degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the “real” accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that can reasonably describe the progression of a real and complicated nuclear accident. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for better representation of severe accident progression.

  12. Dynamic modelling of radionuclide uptake by marine biota: application to the Fukushima nuclear power plant accident.

    PubMed

    Vives i Batlle, Jordi

    2016-01-01

    The dynamic model D-DAT was developed to study the dynamics of radionuclide uptake and turnover in biota and sediments in the immediate aftermath of the Fukushima accident. These dynamics are determined by the interplay between the residence time of radionuclides in seawater/sediments and the biological half-lives of elimination by the biota. The model calculates time-variable activity concentrations of (131)I, (134)Cs, (137)Cs and (90)Sr in seabed sediment, fish, crustaceans, molluscs and macroalgae from surrounding activity concentrations in seawater, from which internal and external dose rates are derived. A central element of the model is the inclusion of dynamic transfer of radionuclides to/from sediments, factorising the depletion of radionuclides adsorbed onto suspended particulates, molecular diffusion, pore-water mixing and bioturbation, represented by a simple set of differential equations coupled with the biological uptake/turnover processes. In this way, the model reproduces activity concentrations in sediment more realistically. The model was used to assess the radiological impact of the Fukushima accident on marine biota in the acute phase of the accident. Sediment and biota activity concentrations are within the wide range of actual monitoring data. Activity concentrations in marine biota are thus shown to be better calculated by a dynamic model than with the simpler equilibrium approach based on concentration factors, which tends to overestimate for the acute accident period. Modelled dose rates from external exposure from sediment are also significantly below equilibrium predictions. The model calculations confirm previous studies showing that radioactivity levels in marine biota have generally been below the levels necessary to cause a measurable effect on populations. The model was used in mass-balance mode to calculate total integrated releases of 103, 30 and 3 PBq for (131)I, (137)Cs and (90)Sr, reasonably in line with previous
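    The core of such dynamic transfer models is a first-order uptake/elimination balance for the activity concentration in biota, driven by a time-varying seawater concentration. The sketch below integrates that balance for 137Cs; the rate constants and the exponentially flushed seawater term are illustrative assumptions, not D-DAT parameter values.

```python
import numpy as np

# dCb/dt = k_up * Cw(t) - (k_elim + lam) * Cb
# Cb: activity concentration in biota (Bq/kg); Cw: in seawater (Bq/m^3).
LAM_CS137 = np.log(2) / (30.1 * 365.25)   # 137Cs physical decay (1/d)
K_UP = 0.05                               # uptake rate (m^3/kg/d) -- illustrative
T_BIOL = 75.0                             # biological half-life (d) -- illustrative
K_ELIM = np.log(2) / T_BIOL

def cw(t):
    """Seawater activity after an acute release, flushed with a 20-day
    half-time (assumed), in Bq/m^3."""
    return 1.0e4 * np.exp(-np.log(2) * t / 20.0)

dt, days = 0.1, 365
cb, series = 0.0, []
for step in range(int(days / dt)):        # forward-Euler integration
    t = step * dt
    cb += dt * (K_UP * cw(t) - (K_ELIM + LAM_CS137) * cb)
    series.append(cb)
print(f"peak biota concentration: {max(series):.0f} Bq/kg")
```

    The lag between the seawater peak and the biota peak, and the slower biological decline afterwards, are exactly the features the equilibrium concentration-factor approach cannot capture.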

  13. Markov Model of Severe Accident Progression and Management

    SciTech Connect

    Bari, R.A.; Cheng, L.; Cuadra,A.; Ginsberg,T.; Lehner,J.; Martinez-Guridi,G.; Mubayi,V.; Pratt,W.T.; Yue, M.

    2012-06-25

    The earthquake and tsunami that hit the nuclear power plants at the Fukushima Daiichi site in March 2011 led to extensive fuel damage, including possible fuel melting, slumping, and relocation at the affected reactors. A so-called feed-and-bleed mode of reactor cooling was initially established to remove decay heat. The plan was to eventually switch over to a recirculation cooling system. Failure of feed and bleed was a possibility during the interim period. Furthermore, even if recirculation was established, there was a possibility of its subsequent failure. Decay heat has to be sufficiently removed to prevent further core degradation. To understand the possible evolution of the accident conditions and to have a tool for potential future hypothetical evaluations of accidents at other nuclear facilities, a Markov model of the state of the reactors was constructed in the immediate aftermath of the accident and was executed under different assumptions of potential future challenges. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accident. The work began in mid-March and continued until mid-May 2011. The analysis had the following goals: (1) to provide an overall framework for describing possible future states of the damaged reactors; (2) to permit an impact analysis of 'what-if' scenarios that could lead to more severe outcomes; (3) to determine approximate probabilities of alternative end-states under various assumptions about failure and repair times of cooling systems; (4) to infer the reliability requirements of closed-loop cooling systems needed to achieve stable core end-states; and (5) to establish the importance for the results of the various cooling-system and physical phenomenological parameters via sensitivity calculations.
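    The essence of such a model is a continuous-time Markov chain over plant states with assumed failure and transition rates; the state-probability vector evolves as dp/dt = pQ. The sketch below uses an invented four-state chain and illustrative rates purely to show the mechanics; it is not the Brookhaven model itself.

```python
import numpy as np

# States (hypothetical): 0 = feed-and-bleed cooling, 1 = recirculation
# cooling, 2 = further core degradation (absorbing), 3 = stable end-state
# (absorbing). Q[i, j] = transition rate i -> j in 1/day; diagonal entries
# make each row sum to zero. All rates are illustrative assumptions.
r_switch, r_fb_fail, r_rc_fail, r_stable = 0.05, 0.01, 0.002, 0.02
Q = np.array([
    [-(r_switch + r_fb_fail), r_switch, r_fb_fail, 0.0],
    [0.0, -(r_rc_fail + r_stable), r_rc_fail, r_stable],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

p = np.array([1.0, 0.0, 0.0, 0.0])    # start in feed-and-bleed
dt = 0.1                               # days
for _ in range(int(365 / dt)):         # forward-Euler solution of dp/dt = p @ Q
    p = p + dt * (p @ Q)

for name, prob in zip(["feed-and-bleed", "recirculation", "degraded", "stable"], p):
    print(f"P({name} at 1 year) = {prob:.3f}")
```

    Sensitivity studies of the kind listed in goals (3)-(5) amount to re-running this propagation over ranges of the rate parameters and comparing the absorbing-state probabilities.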

  14. Mathematical model to predict drivers' reaction speeds.

    PubMed

    Long, Benjamin L; Gillespie, A Isabella; Tanaka, Martin L

    2012-02-01

    Mental distractions and physical impairments can increase the risk of accidents by affecting a driver's ability to control the vehicle. In this article, we developed a linear mathematical model that can be used to quantitatively predict drivers' performance over a variety of possible driving conditions. Predictions were not limited to the conditions tested, but also included linear combinations of these test conditions. Two groups of 12 participants were tested using a custom drivers' reaction-speed testing device to evaluate the effect of cell phone talking, texting, and a fixed knee brace on the components of drivers' reaction speed. Cognitive reaction time was found to increase by 24% for cell phone talking and 74% for texting. The fixed knee brace increased musculoskeletal reaction time by 24%. These experimental data were used to develop a mathematical model to predict reaction speed for an untested condition, talking on a cell phone with a fixed knee brace. The model was verified by comparing the predicted reaction speed to measured experimental values from an independent test. The model predicted full braking time within 3% of the measured value. Although only a few influential conditions were evaluated, we present a general approach that can be expanded to include other types of distractions, impairments, and environmental conditions. PMID:22431214
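    The linearity assumption means component increases measured in isolation can simply be summed to predict an untested combination. A sketch of that bookkeeping follows; the percentage increases are those reported in the abstract, but the baseline component times are hypothetical.

```python
# Baseline reaction-time components (seconds) are hypothetical; the
# fractional increases (talking +24% and texting +74% on the cognitive
# component; knee brace +24% on the musculoskeletal component) are the
# values reported in the study.
BASE = {"cognitive": 0.50, "musculoskeletal": 0.30}
EFFECTS = {
    "talking": ("cognitive", 0.24),
    "texting": ("cognitive", 0.74),
    "knee_brace": ("musculoskeletal", 0.24),
}

def predicted_reaction_time(conditions):
    """Linear model: each condition adds its fractional increase to the
    baseline of the component it affects; the components then sum."""
    totals = dict(BASE)
    for cond in conditions:
        component, increase = EFFECTS[cond]
        totals[component] += BASE[component] * increase
    return sum(totals.values())

print(f"baseline:             {predicted_reaction_time([]):.3f} s")
print(f"talking + knee brace: {predicted_reaction_time(['talking', 'knee_brace']):.3f} s")
```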

  15. Advanced accident sequence precursor analysis level 1 models

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.; Schroeder, J.A.; Siu, N.O.

    1996-03-01

    INEL has been involved in the development of plant-specific Accident Sequence Precursor (ASP) models for the past two years. These models were developed for use with the SAPHIRE suite of PRA computer codes. They contained event tree/linked fault tree Level 1 risk models for the following initiating events: general transient, loss-of-offsite-power, steam generator tube rupture, small loss-of-coolant-accident, and anticipated transient without scram. Early in 1995 the ASP models were revised based on review comments from the NRC and an independent peer review. These models were released as Revision 1. The Office of Nuclear Regulatory Research has sponsored several projects at the INEL this fiscal year to further enhance the capabilities of the ASP models. The Revision 2 models incorporate more detailed plant information concerning plant response to station blackout conditions, information on battery life, and other unique features gleaned from an Office of Nuclear Reactor Regulation quick review of the Individual Plant Examination submittals. These models are currently being delivered to the NRC as they are completed. A related project is a feasibility study and model development of low power/shutdown (LP/SD) and external event extensions to the ASP models. This project will establish criteria for selection of LP/SD and external initiator operational events for analysis within the ASP program. Prototype models for each pertinent initiating event (loss of shutdown cooling, loss of inventory control, fire, flood, seismic, etc.) will be developed. A third project concerns development of enhancements to SAPHIRE. In relation to the ASP program, a new SAPHIRE module, GEM, was developed as a specific user interface for performing ASP evaluations. This module greatly simplifies the analysis process for determining the conditional core damage probability for a given combination of initiating events and equipment failures or degradations.

  16. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    SciTech Connect

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered in order to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of nuclear power plants designed in Russia on groundwater reservoirs.

  17. A simplified model for calculating atmospheric radionuclide transport and early health effects from nuclear reactor accidents

    SciTech Connect

    Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.

    1995-11-01

    During certain hypothetical severe accidents in a nuclear power plant, radionuclides could be released to the environment as a plume. Prediction of the atmospheric dispersion and transport of these radionuclides is important for assessing the risk to the public from such accidents. A simplified PC-based model was developed that predicts the time-integrated air concentration of each radionuclide at any location as a function of time-integrated source strength, using the Gaussian plume model. The solution procedure involves direct analytic integration of the air concentration equations over time and position, using simplified meteorology. The formulation allows for dry and wet deposition, radioactive decay and daughter buildup, reactor building wake effects, the inversion lid effect, plume rise due to buoyancy or momentum, release duration, and grass height. Based on air and ground concentrations of the radionuclides, the early dose to an individual is calculated via cloudshine, groundshine, and inhalation. The model also calculates early health effects based on these doses. This paper presents aspects of the model relevant to the prediction of environmental flows and their public consequences.
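    The workhorse of such codes is the Gaussian plume equation with a ground-reflection term. A minimal sketch follows; the power-law dispersion coefficients are illustrative placeholders rather than a specific Pasquill stability class, and decay, deposition, and plume rise are omitted.

```python
import numpy as np

def plume_concentration(q, u, x, y, z, h_eff):
    """Time-integrated air concentration (Bq*s/m^3) from a release of q Bq
    at effective height h_eff (m) under wind speed u (m/s), including the
    ground-reflection (image source) term. The sigma power laws below are
    illustrative, not a specific stability class."""
    sigma_y = 0.08 * x ** 0.90
    sigma_z = 0.06 * x ** 0.85
    return (q / (2.0 * np.pi * sigma_y * sigma_z * u)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * (np.exp(-(z - h_eff)**2 / (2.0 * sigma_z**2))
               + np.exp(-(z + h_eff)**2 / (2.0 * sigma_z**2))))

# ground-level centerline values downwind of a 1e15 Bq release at 30 m
for x in (500.0, 1000.0, 5000.0):
    chi = plume_concentration(q=1.0e15, u=3.0, x=x, y=0.0, z=0.0, h_eff=30.0)
    print(f"x = {x:6.0f} m: {chi:.3e} Bq*s/m^3")
```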

  18. Markov Model of Accident Progression at Fukushima Daiichi

    SciTech Connect

    Cuadra A.; Bari R.; Cheng, L-Y; Ginsberg, T.; Lehner, J.; Martinez-Guridi, G.; Mubayi, V.; Pratt, T.; Yue, M.

    2012-11-11

    On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trips for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of Units 1 and 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation, was likely to have occurred in the cores of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: it is a probabilistic model that provides flexibility in scenario construction and incorporates the time dependence of different model states, and it readily allows for sensitivity and uncertainty analyses of the different failure and repair rates of the cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.

  19. Probabilistic microcell prediction model

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2002-06-01

    A microcell is a cell with a radius of 1 km or less, suitable for a heavily urbanized area such as a metropolitan city. This paper deals with a microcell prediction model of propagation loss that uses probabilistic techniques. The RSL (Received Signal Level) is the factor by which the performance of a microcell can be evaluated, and the LOS (Line-Of-Sight) component and the blockage loss directly affect the RSL. We combine probabilistic methods to obtain these performance factors. The mathematical methods include the CLT (Central Limit Theorem) and SPC (Statistical Process Control) to obtain the parameters of the distribution. This probabilistic solution gives better estimates of the performance factors. In addition, it enables probabilistic optimization of strategies such as the number of cells, cell location, cell capacity, cell range, and so on. Notably, the probabilistic optimization techniques by themselves can be applied to real-world problems such as computer networking, human resources, and manufacturing processes.

  20. Uncertainty and sensitivity analysis of food pathway results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
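    The sampling-and-regression machinery itself is compact. The sketch below draws a small Latin hypercube over three made-up input variables, pushes it through a toy consequence function, and ranks the inputs by Spearman correlation; the variable names, ranges, and toy model are all assumptions standing in for the 87-variable MACCS study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
names = ["deposition_velocity", "feed_to_milk_transfer", "disposal_threshold"]
lo = np.array([1e-4, 1e-3, 1e2])
hi = np.array([1e-2, 1e-1, 1e4])

# Latin hypercube on [0,1]^3: one stratified, independently permuted draw
# per variable, so each variable's range is covered evenly.
u = np.empty((n, len(names)))
for j in range(len(names)):
    u[:, j] = (rng.permutation(n) + rng.random(n)) / n
x = lo * (hi / lo) ** u          # log-uniform scaling over each range

# toy consequence function standing in for a MACCS food-pathway dose
y = x[:, 0] ** 0.8 * x[:, 1] / np.sqrt(x[:, 2]) * (1 + 0.1 * rng.standard_normal(n))

def ranks(a):
    return np.argsort(np.argsort(a))

# Spearman rank correlation as a simple importance measure
for j, name in enumerate(names):
    rho = np.corrcoef(ranks(x[:, j]), ranks(y))[0, 1]
    print(f"{name:24s} rho = {rho:+.2f}")
```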

  1. ON PREDICTION AND MODEL VALIDATION

    SciTech Connect

    M. MCKAY; R. BECKMAN; K. CAMPBELL

    2001-02-01

    Quantification of prediction uncertainty is an important consideration when using mathematical models of physical systems. This paper proposes a way to incorporate "validation data" in a methodology for quantifying uncertainty of the mathematical predictions. The report outlines a theoretical framework.

  2. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  3. ATMOSPHERIC MODELING IN SUPPORT OF A ROADWAY ACCIDENT

    SciTech Connect

    Buckley, R.; Hunter, C.

    2010-10-21

    The United States Forest Service-Savannah River (USFS) routinely performs prescribed fires at the Savannah River Site (SRS), a Department of Energy (DOE) facility located in southwest South Carolina. This facility covers approximately 800 square kilometers and is mainly wooded except for scattered industrial areas containing facilities used in managing nuclear materials for national defense and waste processing. Prescribed fires of forest undergrowth are necessary to reduce the risk of inadvertent wild fires, which have the potential to destroy large areas and threaten nuclear facility operations. This paper discusses meteorological observations and numerical model simulations of an incident in early 2002 in which poor visibility along a major roadway on the northern border of the SRS led to an early-morning multicar accident. At the time of the accident, it was not clear whether the limited visibility was due solely to fog or whether smoke from a prescribed burn conducted the previous day just to the northwest of the crash site had contributed. Through use of available meteorological information and detailed modeling, it was determined that the primary reason for the low visibility that night was fog induced by meteorological conditions.

  4. A Simplified Methodology for the Prediction of the Small Break Loss-of-Coolant Accident.

    NASA Astrophysics Data System (ADS)

    Ward, Leonard William

    1988-12-01

    This thesis describes a complete methodology that has allowed the development of a faster-than-real-time computer program designed to simulate a small break loss-of-coolant accident in the primary system of a pressurized water reactor. By developing an understanding of the major phenomena governing the small break LOCA fluid response, the system model representation can be greatly simplified, leading to a very fast-executing transient system blowdown code. Because of its fast execution times, the CULSETS code, or Columbia University Loss-of-Coolant Accident and System Excursion Transient Simulator code, is ideal for performing parametric studies of the Emergency Core Cooling System or assessing the consequences of the many operator actions performed to place the system in a long-term cooling mode following a small break LOCA. While the methodology was designed with specific application to the small break loss-of-coolant accident, it can also be used to simulate loss-of-feedwater, steam line break, and steam generator tube rupture events. The code is easily adaptable to a personal computer and could also be modified to provide the primary and secondary system responses required as inputs to a simulator for a pressurized water reactor.

  5. Health effects model for nuclear power plant accident consequence analysis. Part I. Introduction, integration, and summary. Part II. Scientific basis for health effects models

    SciTech Connect

    Evans, J.S.; Moeller, D.W.; Cooper, D.W.

    1985-07-01

    Analysis of the radiological health effects of nuclear power plant accidents requires models for predicting early health effects, cancers and benign thyroid nodules, and genetic effects. Since the publication of the Reactor Safety Study, additional information on radiological health effects has become available. This report summarizes the efforts of a program designed to provide revised health effects models for nuclear power plant accident consequence modeling. The new models for early effects address four causes of mortality and nine categories of morbidity. The models for early effects are based upon two-parameter Weibull functions. They permit evaluation of the influence of dose protraction and address the issue of variation in radiosensitivity among the population. The piecewise-linear dose-response models used in the Reactor Safety Study to predict cancers and thyroid nodules have been replaced by linear and linear-quadratic models. The new models reflect the most recently reported results of the follow-up of the survivors of the bombings of Hiroshima and Nagasaki and permit analysis of both morbidity and mortality. The new models for genetic effects allow prediction of genetic risks in each of the first five generations after an accident and include information on the relative severity of various classes of genetic effects. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of risks. An approach is outlined for summarizing the health consequences of nuclear power plant accidents. 298 refs., 9 figs., 49 tabs.
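    A two-parameter Weibull dose-response of the kind described can be written as risk = 1 - exp(-ln 2 * (D/D50)^beta), so the risk is 50% at D = D50 by construction and the shape parameter beta controls the steepness. The sketch below evaluates this form with illustrative parameter values that are assumptions, not the report's fitted numbers.

```python
import numpy as np

def early_effect_risk(dose_gy, d50, shape):
    """Two-parameter Weibull risk model: 1 - exp(-ln2 * (D/D50)**shape).
    Risk is 50% at dose = d50 by construction; a larger shape parameter
    gives a steeper dose-response."""
    return 1.0 - np.exp(-np.log(2.0) * (dose_gy / d50) ** shape)

# illustrative parameters for a hypothetical early-mortality endpoint
D50, SHAPE = 3.8, 5.0
for dose in (1.0, 2.0, 3.8, 6.0, 10.0):
    print(f"{dose:5.1f} Gy -> risk {early_effect_risk(dose, D50, SHAPE):.3f}")
```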

  6. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  7. Simulation Study of Traffic Accidents in Bidirectional Traffic Models

    NASA Astrophysics Data System (ADS)

    Moussa, Najem

    Conditions for the occurrence of bidirectional collisions are developed based on the Simon-Gutowitz bidirectional traffic model. Three types of dangerous situations can occur in this model; we analyze those corresponding to head-on, rear-end, and lane-changing collisions. Using Monte Carlo simulations, we compute the probability of occurrence of these collisions for different values of the oncoming cars' density. It is found that the risk of collisions is significant when the density of cars in one lane is small and that of the other lane is high enough. The influence of different proportions of heavy vehicles is also studied. We find that heavy vehicles cause a significant reduction of traffic flow in the home lane and increase the risk of car accidents.
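    The Monte Carlo procedure amounts to running a cellular-automaton traffic model and counting configurations that satisfy a collision condition. The sketch below is a deliberately simplified single-lane stand-in (a Nagel-Schreckenberg ring, with a toy rear-end danger criterion), not the Simon-Gutowitz bidirectional model used in the paper; all parameters are illustrative.

```python
import random

random.seed(42)
L_ROAD, N_CARS, V_MAX, P_SLOW, STEPS = 200, 40, 5, 0.3, 2000
pos = sorted(random.sample(range(L_ROAD), N_CARS))
vel = [0] * N_CARS
danger = 0

for _ in range(STEPS):
    gaps = [(pos[(i + 1) % N_CARS] - pos[i] - 1) % L_ROAD for i in range(N_CARS)]
    new_vel = []
    for i in range(N_CARS):
        v = min(vel[i] + 1, V_MAX, gaps[i])     # accelerate, never overrun the gap
        if v > 0 and random.random() < P_SLOW:  # random slowdown
            v -= 1
        new_vel.append(v)
    # toy danger criterion: follower uses its whole gap while its leader is stopped
    danger += sum(1 for i in range(N_CARS)
                  if new_vel[i] > 0 and new_vel[i] == gaps[i]
                  and vel[(i + 1) % N_CARS] == 0)
    vel = new_vel
    pos = [(pos[i] + vel[i]) % L_ROAD for i in range(N_CARS)]

print(f"dangerous situations per car-step: {danger / (N_CARS * STEPS):.4f}")
```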

  8. Uncertainty and sensitivity analysis of early exposure results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.

  9. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
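    The AR-pole statistic at the heart of the model can be reproduced in a few lines: fit the AR coefficients via the Yule-Walker equations, take the roots (poles) of the characteristic polynomial, and average their magnitudes. The sketch below uses a synthetic signal in place of real SEMG data.

```python
import numpy as np

def ar_pole_mean_magnitude(x, order=5):
    """Fit an AR(order) model by solving the Yule-Walker equations and
    return the mean magnitude of the poles of the AR polynomial."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # autocorrelation at lags 0..order (biased estimate)
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1 : order + 1])       # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))  # roots of z^p - a1 z^(p-1) - ...
    return float(np.mean(np.abs(poles)))

# toy surrogate for one SEMG epoch: a narrowband component plus noise
rng = np.random.default_rng(1)
x = np.sin(0.3 * np.arange(1000)) + 0.5 * rng.standard_normal(1000)
print(f"mean AR pole magnitude: {ar_pole_mean_magnitude(x):.3f}")
```

    In the study's framework, this scalar would be tracked repetition by repetition and regressed against Rmax; the regression itself is ordinary least squares on the per-repetition values.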

  10. Uncertainty and sensitivity analysis of chronic exposure results with the MACCS reactor accident consequence model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.

  11. Melanoma Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding; (2) carbon dioxide miscible flooding; (3) in-situ combustion; (4) polymer flooding; and (5) steamflood. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs which have been previously waterflooded to residual oil saturation; thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating-gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows calculation of the incremental oil recovery and economics of polymer flooding relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes. The IBM PC/AT version includes a plotting capability to produce a graphical picture of the predictive model results.

  13. VICTORIA: A mechanistic model of radionuclide behavior in the reactor coolant system under severe accident conditions

    SciTech Connect

    Heames, T.J.; Williams, D.A.; Johns, N.A.; Chown, N.M.; Bixler, N.E.; Grimley, A.J.; Wheatley, C.J.

    1990-10-01

    This document provides a description of a model of the radionuclide behavior in the reactor coolant system (RCS) of a light water reactor during a severe accident. This document serves as the user's manual for the computer code called VICTORIA, based upon the model. The VICTORIA code predicts fission product release from the fuel, chemical reactions between fission products and structural materials, vapor and aerosol behavior, and fission product decay heating. This document provides a detailed description of each part of the implementation of the model into VICTORIA, the numerical algorithms used, and the correlations and thermochemical data necessary for determining a solution. A description of the code structure, input and output, and a sample problem are provided. The VICTORIA code was developed on a CRAY-XMP at Sandia National Laboratories in the USA and on a CRAY-2 and various SUN workstations at the Winfrith Technology Centre in England. 60 refs.

  14. Another Look at the Relationship Between Accident- and Encroachment-Based Approaches to Run-Off-the-Road Accidents Modeling

    SciTech Connect

    Miaou, Shaw-Pin

    1997-08-01

    The purpose of this study was to look for ways to combine the strengths of both approaches in roadside safety research. The specific objectives were (1) to present the encroachment-based approach in a more systematic and coherent way so that its limitations and strengths can be better understood from both statistical and engineering standpoints, and (2) to apply the analytical and engineering strengths of the encroachment-based thinking to the formulation of mean functions in accident-based models.

  15. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  16. Predictive Modeling in Race Walking

    PubMed Central

    Wiktorowicz, Krzysztof; Przednowek, Krzysztof; Lassota, Lesław; Krzeszowski, Tomasz

    2015-01-01

    This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, as some of the predictors are eliminated. PMID:26339230
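    The modified-LASSO idea (a linear model over quadratic feature expansions, with the penalty pruning predictors, and leave-one-out cross-validation for model selection) can be sketched with scikit-learn. The data below are synthetic stand-ins for the 122 training plans; feature meanings and the response function are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# synthetic training-load data standing in for the walkers' data
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(122, 5))    # 5 hypothetical training-load features
y = 800 + 60 * X[:, 0] - 40 * X[:, 1] ** 2 + 5 * rng.standard_normal(122)  # 3-km time (s)

# quadratic expansion -> scaling -> LASSO with internal cross-validated alpha
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      StandardScaler(), LassoCV(cv=5))
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print(f"LOOCV mean absolute error: {-scores.mean():.2f} s")
```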

  17. Prediction of the Possibility a Right-Turn Driving Behavior at Intersection Leads to an Accident by Detecting Deviation of the Situation from Usual when the Behavior is Observed

    NASA Astrophysics Data System (ADS)

    Hayashi, Toshinori; Yamada, Keiichi

    Deviation of driving behavior from usual can be a sign of human error that increases the risk of traffic accidents. This paper proposes a novel method for predicting the possibility that a driving behavior leads to an accident from information on the driving behavior and the situation. In previous work, a method was proposed that predicts this possibility by detecting the deviation of the driving behavior from the usual behavior in that situation. In contrast, the method proposed in this paper predicts the possibility by detecting the deviation of the situation from the usual situation when the behavior is observed. An advantage of the proposed method is that the number of required models is independent of the variety of situations. The method was applied to the problem of predicting accidents caused by right-turn driving behavior at an intersection, and its performance was evaluated by experiments on a driving simulator.
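    One simple way to score "deviation of the situation from usual" is to fit a multivariate Gaussian to the situation features recorded when the behavior normally occurs and flag large Mahalanobis distances. This is only a generic anomaly-detection sketch, not necessarily the paper's model; the features and numbers are hypothetical.

```python
import numpy as np

# Hypothetical situation features recorded whenever a right turn is made
# normally, e.g. [oncoming-vehicle distance (m), oncoming speed (m/s)].
rng = np.random.default_rng(2)
usual = rng.multivariate_normal([30.0, 12.0], [[25.0, 3.0], [3.0, 4.0]], size=500)

mu = usual.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(usual, rowvar=False))

def deviation(situation):
    """Mahalanobis distance of a new situation from the 'usual' model;
    larger values suggest the behavior occurred in an unusual situation."""
    d = situation - mu
    return float(np.sqrt(d @ cov_inv @ d))

print(deviation(np.array([29.0, 11.5])))  # typical situation -> small distance
print(deviation(np.array([8.0, 18.0])))   # unusual situation -> large distance
```

    Because the model describes the situation rather than the behavior, one such model per behavior type suffices, which is the advantage the abstract highlights.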

  18. VICTORIA: A mechanistic model of radionuclide behavior in the reactor coolant system under severe accident conditions. Revision 1

    SciTech Connect

    Heams, T J; Williams, D A; Johns, N A; Mason, A; Bixler, N E; Grimley, A J; Wheatley, C J; Dickson, L W; Osborn-Lee, I; Domagala, P; Zawadzki, S; Rest, J; Alexander, C A; Lee, R Y

    1992-12-01

    The VICTORIA model of radionuclide behavior in the reactor coolant system (RCS) of a light water reactor during a severe accident is described. It has been developed by the USNRC to define the radionuclide phenomena and processes that must be considered in systems-level models used for integrated analyses of severe accident source terms. The VICTORIA code, based upon this model, predicts fission product release from the fuel, chemical reactions involving fission products, vapor and aerosol behavior, and fission product decay heating. Also included is a detailed description of how the model is implemented in VICTORIA, the numerical algorithms used, and the correlations and thermochemical data necessary for determining a solution. A description of the code structure, input and output, and a sample problem are provided.

  19. Model aids cuttings transport prediction

    SciTech Connect

    Gavignet, A.A. ); Sobey, I.J. )

    1989-09-01

    Drilling of highly deviated wells can be complicated by the formation of a thick bed of cuttings at low flow rates. The model proposed in this paper shows what mechanisms control the thickness of such a bed, and the model predictions are compared with experimental results.

  20. A contrail cirrus prediction model

    NASA Astrophysics Data System (ADS)

    Schumann, U.

    2012-05-01

    A new model to simulate and predict the properties of a large ensemble of contrails as a function of given air traffic and meteorology is described. The model is designed for approximate prediction of contrail cirrus cover and analysis of contrail climate impact, e.g. within aviation system optimization processes. The model simulates the full contrail life-cycle. Contrail segments form between waypoints of individual aircraft tracks in sufficiently cold and humid air masses. The initial contrail properties depend on the aircraft. The advection and evolution of the contrails is followed with a Lagrangian Gaussian plume model. Mixing and bulk cloud processes are treated quasi-analytically or with an effective numerical scheme. Contrails disappear when the bulk ice content is sublimating or precipitating. The model has been implemented in a "Contrail Cirrus Prediction Tool" (CoCiP). This paper describes the model assumptions, the equations for individual contrails, and the analysis method for contrail-cirrus cover derived from the optical depth of the ensemble of contrails and background cirrus. The model has been applied to a case study and compared to the results of other models and in-situ contrail measurements. The simple model reproduces a considerable part of observed contrail properties. Mid-aged contrails provide the largest contributions to the product of optical depth and contrail width, important for climate impact.

  1. A crash-prediction model for road tunnels.

    PubMed

    Caliendo, Ciro; De Guglielmo, Maria Luisa; Guida, Maurizio

    2013-06-01

    Considerable research has been carried out on open roads to establish relationships between crashes and traffic flow, infrastructure geometry, and environmental factors, whereas crash-prediction models for road tunnels have rarely been investigated. In addition, different results have sometimes been obtained regarding the effects of traffic and geometry on crashes in road tunnels. Moreover, most research has focused on tunnels where traffic and geometric conditions, as well as driving behaviour, differ from those in Italy. Thus, in this paper crash-prediction models not previously proposed for Italian road tunnels are developed. For this purpose, a 4-year monitoring period extending from 2006 to 2009 was considered. The tunnels investigated are single-tube ones with unidirectional traffic. The Bivariate Negative Binomial regression model, jointly applied to non-severe crashes (accidents involving material damage only) and severe crashes (fatal and injury accidents only), was used to model the frequency of accident occurrence. The year effect on severe crashes was also analyzed by the Random Effects Binomial regression model and the Negative Multinomial regression model. Regression parameters were estimated by the Maximum Likelihood Method. The Cumulative Residual Method was used to test the adequacy of the regression model over the range of annual average daily traffic per lane. The candidate set of variables was: tunnel length (L), annual average daily traffic per lane (AADTL), percentage of trucks (%Tr), number of lanes (NL), and the presence of a sidewalk. For both non-severe and severe crashes, the prediction models showed that the significant variables are: L, AADTL, %Tr, and NL. A significant year effect consisting in a systematic reduction of severe crashes over time was also detected. The analysis developed in this paper appears useful for many applications such as the estimation of accident reductions due to improvement in existing
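    A univariate negative binomial regression, a simplified stand-in for the paper's bivariate formulation, can be sketched with statsmodels: crash counts are regressed on ln L, ln AADTL, and %Tr, with the dispersion parameter estimated by maximum likelihood. The data below are synthetic, generated for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# synthetic tunnel data; coefficients used to generate it are invented
rng = np.random.default_rng(3)
n_tunnels = 200
L = rng.uniform(0.5, 3.0, n_tunnels)        # tunnel length (km)
aadtl = rng.uniform(5e3, 3e4, n_tunnels)    # annual average daily traffic per lane
tr = rng.uniform(5, 30, n_tunnels)          # percentage of trucks

# mean crash frequency: exp(b0 + b1*ln L + b2*ln AADTL + b3*%Tr)
mu = np.exp(-8.0 + 0.9 * np.log(L) + 0.8 * np.log(aadtl) + 0.02 * tr)
r = 2.0                                     # NB dispersion used for simulation
y = rng.negative_binomial(n=r, p=r / (r + mu))   # overdispersed crash counts

X = sm.add_constant(np.column_stack([np.log(L), np.log(aadtl), tr]))
fit = sm.NegativeBinomial(y, X).fit(disp=0)      # maximum likelihood estimation
print(fit.params)    # const, ln L, ln AADTL, %Tr coefficients, plus alpha
```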

  2. Estimation Of 137Cs Using Atmospheric Dispersion Models After A Nuclear Reactor Accident

    NASA Astrophysics Data System (ADS)

    Simsek, V.; Kindap, T.; Unal, A.; Pozzoli, L.; Karaca, M.

    2012-04-01

    Nuclear energy will continue to play an important role in the production of electricity in the world as energy needs grow, but the safety of power plants will always be a question mark for the public because of accidents that have happened in the past. The Chernobyl nuclear reactor accident, which happened on 26 April 1986, was the biggest nuclear accident ever. Because of the explosion and fire, large quantities of radioactive material were released to the atmosphere. The release of radioactive particles affected not only the surrounding region but the entire Northern Hemisphere, with much of the radioactive material spread over the western USSR and Europe. There are many studies on the distribution of radioactive particles and the deposition of radionuclides all over Europe, but this is not the case for Turkey, especially regarding the deposition of radionuclides released after the Chernobyl accident and the radiation doses received by people. The aim of this study is to determine the radiation doses received by people living in Turkish territory after the Chernobyl nuclear reactor accident, and to have this method available in case of an emergency. For this purpose, the Weather Research and Forecasting (WRF) Model was used to simulate meteorological conditions after the accident. The WRF results for the 12 days after the accident were used as input for the HYSPLIT model. NOAA-ARL's (National Oceanic and Atmospheric Administration Air Resources Laboratory) dispersion model HYSPLIT was used to simulate the 137Cs distribution. The deposition values of 137Cs in our domain after the Chernobyl Nuclear Reactor Accident were between 1.2E-37 Bq/m2 and 3.5E+08 Bq/m2. The results showed that Turkey was affected by the accident, especially the Black Sea Region. Doses were calculated using GENII-LIN, a multipurpose health physics code.

  3. What do saliency models predict?

    PubMed Central

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  4. CFD modeling of debris melting phenomena during late phase Candu 6 severe accident

    SciTech Connect

    Nicolici, S.; Dupleac, D.; Prisecaru, I.

    2012-07-01

    The objective of this paper was to study the phase change of the debris formed on the Candu 6 calandria bottom in a postulated accident sequence. The molten pool and crust formation were studied using the Ansys-Fluent code. The 3D model, using Large Eddy Simulation (LES), predicts the conjugate, radiative, and convective heat transfer inside and from the corium pool. LES simulations require a very fine grid to capture the crust formation and the free convection flow. This fine-mesh requirement, combined with the long transient, made it necessary to model only a slice of the 3D calandria geometry so as not to exceed the available computing resources. The preliminary results include heat transfer coefficients, temperature profiles, and heat fluxes through the calandria wall. From a safety point of view, it is very important to maintain the heat flux through the wall below the critical heat flux (CHF), ensuring the integrity of the calandria vessel. This can be achieved by proper cooling of the water in the tank containing the vessel. The transient duration can also be estimated, which is important for developing severe accident management guidelines. The debris physical structure and material properties have large uncertainties in the temperature range of interest. Thus, further sensitivity studies should be carried out to better understand the influence of these parameters on this complex phenomenon. (authors)

  5. Innovative approach to modeling accident response of Gravel Gerties

    SciTech Connect

    Kramer, M.; McClure, P.; Sullivan, H.

    1997-08-01

    Recent safety analyses at nuclear explosive facilities have renewed interest in the accident phenomenology associated with explosions in nuclear explosive cells, which are commonly referred to as "Gravel Gerties." The cells are used for the assembly and disassembly of nuclear explosives and are located in the Device Assembly Facility (DAF) at the Nevada Test Site (NTS) and at the Pantex facility. The cells are designed to mitigate the release of special nuclear material to the environment in the event of a detonation of high explosive within the Gravel Gertie. Although there are some subtle differences between the cells at DAF and Pantex, their general design, geometry, and configuration are similar. The cells consist of a round room approximately 10.4 m in diameter and 5.2 m high enclosed by 0.3-m-thick concrete. Each cell has a wire-rope catenary roof overlain with gravel. The gravel is approximately 6.9 m deep at the center of the roof and decreases toward the outer edge of the cell. The cell is connected to a corridor and subsequent rooms through an interlocking blast door. In the event of an accidental explosion involving significant amounts of high explosive, the roof structure is lifted by the force of the explosion, the supporting cables break, the gravel is lifted by the blast (resulting in rapid venting of the cell), and the gravel roof collapses, filling the cell. The lifting and subsequent collapse of the gravel, which acts much like a piston, is very challenging to model.

  6. A method for modeling and analysis of directed weighted accident causation network (DWACN)

    NASA Astrophysics Data System (ADS)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Ding, Jing

    2015-11-01

    Using complex network theory to analyze accidents is an effective way to understand the causes of accidents in complex systems. In this paper, a novel method is proposed to establish a directed weighted accident causation network (DWACN) for the Rail Accident Investigation Branch (RAIB) in the UK, based on complex network theory and the event chains of accidents. The DWACN is composed of 109 nodes, which denote causal factors, and 260 directed weighted edges, which represent complex interrelationships among factors. The statistical properties of directed weighted complex networks are applied to reveal the critical factors, the key event chains, and the important classes in the DWACN. Analysis results demonstrate that the DWACN has the characteristics of small-world networks, with short average path length and high weighted clustering coefficient, and displays properties of scale-free networks, in that the cumulative degree distribution follows an exponential function. This modeling and analysis method can help uncover latent rules of accidents and features of fault propagation in order to reduce accidents. This paper is a further development of research on accident analysis methods using complex networks.
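
    As an illustration of the kind of network statistics the paper reports, the sketch below builds a toy directed weighted causation network with networkx and computes an average path length, a weighted clustering coefficient, and a weighted-degree ranking of factors; the nodes, edges, and weights are invented, not the RAIB data.

        import networkx as nx

        # Toy directed weighted accident-causation network (illustrative only).
        G = nx.DiGraph()
        edges = [("fatigue", "signal_passed_at_danger", 3),
                 ("poor_visibility", "signal_passed_at_danger", 2),
                 ("signal_passed_at_danger", "collision", 5),
                 ("track_defect", "derailment", 4),
                 ("collision", "derailment", 1)]
        G.add_weighted_edges_from(edges)

        # Small-world indicators: short paths and clustering (the undirected
        # view is used so both statistics are well defined on this tiny graph).
        und = G.to_undirected()
        print("avg shortest path:", nx.average_shortest_path_length(und))
        print("weighted clustering:", nx.average_clustering(und, weight="weight"))

        # Critical factors ranked by weighted out-degree (strength) as a proxy.
        strength = dict(G.out_degree(weight="weight"))
        print(sorted(strength.items(), key=lambda kv: -kv[1]))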

  7. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: (1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; (2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons, making the oil easier to displace; (3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; (4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and (5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating-gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer flooding relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  8. Highway accident severities and the mixed logit model: an exploratory empirical analysis.

    PubMed

    Milton, John C; Shankar, Venky N; Mannering, Fred L

    2008-01-01

    Many transportation agencies use accident frequencies, and statistical models of accident frequencies, as a basis for prioritizing highway safety improvements. However, the use of accident severities in safety programming has often been limited to the locational assessment of accident fatalities, with little or no emphasis placed on the full severity distribution of accidents (property damage only, possible injury, injury), which is needed to fully assess the benefits of competing safety-improvement projects. In this paper we demonstrate a modeling approach that can be used to better understand the injury-severity distributions of accidents on highway segments, and the effect that traffic, highway, and weather characteristics have on these distributions. The approach we use allows for the possibility that estimated model parameters can vary randomly across roadway segments to account for unobserved effects potentially relating to roadway characteristics, environmental factors, and driver behavior. Using highway-injury data from Washington State, a mixed (random parameters) logit model is estimated. Estimation findings indicate that volume-related variables such as average daily traffic per lane, average daily truck traffic, truck percentage, and interchanges per mile, as well as weather effects such as snowfall, are best modeled as random parameters, while roadway characteristics such as the number of horizontal curves, number of grade breaks per mile, and pavement friction are best modeled as fixed parameters. Our results show that the mixed logit model has considerable promise as a methodological tool in highway safety programming. PMID:18215557
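
    The core computational idea of a mixed logit is that choice probabilities are simulated by averaging standard logit probabilities over draws of the random parameters. The sketch below shows this building block for a binary outcome with one normally distributed coefficient; the covariates and coefficient values are hypothetical, not the paper's estimates.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulated_logit_prob(x, beta_fixed, mu, sigma, n_draws=500):
            """Binary mixed-logit probability with one random coefficient.

            The random parameter is beta_r ~ N(mu, sigma), drawn n_draws times;
            the probability is the average of standard logit probabilities over
            the draws (the simulated maximum-likelihood building block)."""
            beta_r = rng.normal(mu, sigma, n_draws)
            utilities = x[:-1] @ beta_fixed + x[-1] * beta_r  # last covariate random
            return np.mean(1.0 / (1.0 + np.exp(-utilities)))

        # Hypothetical segment: [ADT per lane (10k), truck share, snowfall flag]
        x = np.array([1.2, 0.15, 1.0])
        print(simulated_logit_prob(x, beta_fixed=np.array([0.4, 1.1]),
                                   mu=0.8, sigma=0.5))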

  9. Predictive Models of Liver Cancer

    EPA Science Inventory

    Predictive models of chemical-induced liver cancer face the challenge of bridging causative molecular mechanisms to adverse clinical outcomes. The latent sequence of intervening events from chemical insult to toxicity is poorly understood because it spans multiple levels of bio...

  10. Inter-comparison of dynamic models for radionuclide transfer to marine biota in a Fukushima accident scenario.

    PubMed

    Vives I Batlle, J; Beresford, N A; Beaugelin-Seiller, K; Bezhenar, R; Brown, J; Cheng, J-J; Ćujić, M; Dragović, S; Duffa, C; Fiévet, B; Hosseini, A; Jung, K T; Kamboj, S; Keum, D-K; Kryshev, A; LePoire, D; Maderich, V; Min, B-I; Periáñez, R; Sazykina, T; Suh, K-S; Yu, C; Wang, C; Heling, R

    2016-03-01

    We report an inter-comparison of eight models designed to predict the radiological exposure of marine biota to radionuclides. The models were required to simulate dynamically the uptake and turnover of radionuclides by marine organisms. Model predictions of radionuclide uptake and turnover using kinetic calculations based on biological half-life (TB1/2) and/or more complex metabolic modelling approaches were used to predict activity concentrations and, consequently, dose rates of (90)Sr, (131)I and (137)Cs to fish, crustaceans, macroalgae and molluscs under circumstances where the water concentrations are changing with time. For comparison, the ERICA Tool, a model commonly used in environmental assessment and which uses equilibrium concentration ratios, was also applied. As input to the models we used hydrodynamic forecasts of water and sediment activity concentrations from a simulated scenario reflecting the Fukushima accident releases. Although model variability is important, the inter-comparison gives logical results, in that the dynamic models consistently predict a pattern of delayed rise of activity concentration in biota and slow decline, instead of the instantaneous equilibrium with the activity concentration in seawater predicted by the ERICA Tool. The differences between ERICA and the dynamic models increase as TB1/2 becomes shorter; however, there is significant variability between models, underpinned by parameter and methodological differences between them. The need to validate the dynamic models used in this inter-comparison has been highlighted, particularly with regard to optimisation of the model biokinetic parameters. PMID:26717350

  11. A Statistical Approach to Predict the Failure Enthalpy and Reliability of Irradiated PWR Fuel Rods During Reactivity-Initiated Accidents

    SciTech Connect

    Nam, Cheol; Jeong, Yong-Hwan; Jung, Youn-Ho

    2001-11-15

    During the last decade, the failure behavior of high-burnup fuel rods under reactivity-initiated accident (RIA) conditions has been a serious concern, since fuel rod failures at low enthalpy have been observed. This has resulted in the reassessment of existing licensing criteria and in failure-mode studies. To address the issue, a statistics-based methodology is suggested to predict the failure probability of irradiated fuel rods under an RIA. Based on RIA simulation results in the literature, a failure enthalpy correlation for an irradiated fuel rod is constructed as a function of oxide thickness, fuel burnup, and pulse width. Using the failure enthalpy correlation, a new concept of "equivalent enthalpy" is introduced to reflect the effects of the three primary factors, as well as peak fuel enthalpy, in a single damage parameter. Moreover, the failure distribution function with equivalent enthalpy is derived by applying a two-parameter Weibull statistical model. Finally, a sensitivity analysis is carried out to estimate the effects of burnup, corrosion, peak fuel enthalpy, pulse width, and the cladding materials used.
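
    The two-parameter Weibull failure distribution mentioned above can be written as P_fail(H_eq) = 1 - exp[-(H_eq/eta)^beta]. A minimal sketch, with illustrative (not fitted) scale and shape values:

        import numpy as np

        def failure_probability(h_eq, eta, beta):
            """Two-parameter Weibull failure distribution over equivalent
            enthalpy: P_fail = 1 - exp(-(H_eq/eta)**beta). The scale eta and
            shape beta would be fitted to RIA test data; values used below
            are illustrative only."""
            return 1.0 - np.exp(-(np.asarray(h_eq) / eta) ** beta)

        # Hypothetical: scale 150 cal/g, shape 4; a few peak enthalpies.
        print(failure_probability([60.0, 100.0, 140.0], eta=150.0, beta=4.0))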

  12. Predictive Capability Maturity Model (PCMM).

    SciTech Connect

    Swiler, Laura Painton; Knupp, Patrick Michael; Urbina, Angel

    2010-10-01

    The Predictive Capability Maturity Model (PCMM) is a communication tool that must include a discussion of the supporting evidence. PCMM is a tool for managing risk in the use of modeling and simulation, and serves to organize evidence to help tell the modeling and simulation (M&S) story. The PCMM table describes what activities within each element are undertaken at each level of maturity. Target levels of maturity can be established based on the intended application. The assessment informs what level has been achieved compared to the desired level, helps prioritize the VU activities, and supports the allocation of resources.

  13. Cellular automata model simulating traffic car accidents in the on-ramp system

    NASA Astrophysics Data System (ADS)

    Echab, H.; Lakouari, N.; Ez-Zahraouy, H.; Benyoussef, A.

    2015-01-01

    In this paper, using the Nagel-Schreckenberg model, we study the on-ramp system under an expanded open boundary condition. The phase diagram of the two-lane on-ramp system is computed. It is found that the expanded left-boundary insertion strategy enhances the flow in the on-ramp lane. Furthermore, we have studied the probability of the occurrence of car accidents. We distinguish two types of car accidents: accidents at the on-ramp site (Prc) and rear-end accidents on the main road (Pac). It is shown that car accidents at the on-ramp site are more likely to occur when traffic is free on road A, whereas rear-end accidents begin to occur above a critical injecting rate αc1. The influence of the on-ramp length (LB) and position (xC0) on the car accident probabilities is studied. We find that a large LB or xC0 causes a substantial decrease in the probability Prc, whereas only a large xC0 increases the probability Pac. The effect of stochastic randomization is also examined.
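
    For readers unfamiliar with the underlying traffic model, the sketch below implements one update step of the basic single-lane Nagel-Schreckenberg automaton on a ring road (acceleration, braking, randomization, movement); the on-ramp coupling and accident-probability bookkeeping studied in the paper are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def nasch_step(pos, vel, v_max, p_slow, road_len):
            """One Nagel-Schreckenberg update on a ring: accelerate, brake
            to the gap ahead, randomize, then move."""
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % road_len    # empty cells ahead
            vel = np.minimum(vel + 1, v_max)                  # 1. acceleration
            vel = np.minimum(vel, gaps)                       # 2. braking
            slow = rng.random(len(vel)) < p_slow
            vel = np.where(slow, np.maximum(vel - 1, 0), vel) # 3. randomization
            return (pos + vel) % road_len, vel                # 4. movement

        pos = np.array([0, 5, 12, 20]); vel = np.zeros(4, dtype=int)
        for _ in range(10):
            pos, vel = nasch_step(pos, vel, v_max=5, p_slow=0.3, road_len=50)
        print(pos, vel)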

  14. Computer program predicts thermal and flow transients experienced in a reactor loss- of-flow accident

    NASA Technical Reports Server (NTRS)

    Hale, C. J.

    1967-01-01

    Program analyzes the consequences of a loss-of-flow accident in the primary cooling system of a heterogeneous light-water moderated and cooled nuclear reactor. It produces a 36 x 41 (x, y) temperature matrix that includes fuel surface temperatures as a function of time after pump power is lost.

  15. Mars solar conjunction prediction modeling

    NASA Astrophysics Data System (ADS)

    Srivastava, Vineet K.; Kumar, Jai; Kulshrestha, Shivali; Kushvah, Badam Singh

    2016-01-01

    During a Mars solar conjunction, telecommunication and tracking between a spacecraft and the Earth degrade significantly. The radio signal degradation depends on the angular separation between the Sun, Earth, and probe (SEP), the signal frequency band, and the solar activity. All radiometric tracking data types display increased noise and signatures at smaller SEP angles. Due to scintillation, telemetry frame errors increase significantly when the solar elongation becomes small enough. This degradation in telemetry data return starts at solar elongation angles of around 5° at S-band, around 2° at X-band, and about 1° at Ka-band. This paper presents a mathematical model for predicting Mars superior solar conjunction for any Mars-orbiting spacecraft. The described model is simulated for the Mars Orbiter Mission, which experienced Mars solar conjunction during May-July 2015. Such a model may be useful to flight projects and design engineers in planning Mars solar conjunction operational scenarios.
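
    The central geometric quantity here is the Sun-Earth-Probe (SEP) angle. A minimal sketch computing it from Earth-centered position vectors, with invented (non-ephemeris) coordinates, and comparing it against the approximate X-band degradation threshold quoted above:

        import numpy as np

        def sep_angle_deg(r_earth_sun, r_earth_probe):
            """SEP angle (deg) between the Earth->Sun and Earth->probe vectors."""
            cosang = np.dot(r_earth_sun, r_earth_probe) / (
                np.linalg.norm(r_earth_sun) * np.linalg.norm(r_earth_probe))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Illustrative vectors in AU (not real ephemerides): near superior
        # conjunction, Mars sits almost behind the Sun as seen from Earth.
        r_sun = np.array([1.0, 0.0, 0.0])
        r_mars = np.array([2.52, 0.05, 0.0])
        sep = sep_angle_deg(r_sun, r_mars)
        print(f"SEP = {sep:.2f} deg; X-band degraded below ~2 deg: {sep < 2.0}")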

  16. Object-Oriented Bayesian Networks (OOBN) for Aviation Accident Modeling and Technology Portfolio Impact Assessment

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.

    2012-01-01

    The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. The paper focuses on the rationale for selecting Object-Oriented Bayesian Networks (OOBN) as the technique and commercial software for accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF), constructed as an influence diagram, is presented. An OOBN approach not only simplifies the construction and maintenance of complex causal networks for the modelers, but also offers a well-organized hierarchical network that makes it easier for decision makers to use the model to examine the effectiveness of risk mitigation strategies implemented through technology insertions.

  17. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    SciTech Connect

    Not Available

    1988-12-15

    The Accident Model Document (AMD) is the second volume of the three-volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed the General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population as a result of postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release, and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  18. Modeling the early-phase redistribution of radiocesium fallouts in an evergreen coniferous forest after Chernobyl and Fukushima accidents.

    PubMed

    Calmon, P; Gonze, M-A; Mourlon, Ch

    2015-10-01

    Following the Chernobyl accident, the scientific community gained numerous data on the transfer of radiocesium in European forest ecosystems, including information regarding the short-term redistribution of atmospheric fallout onto forest canopies. In the course of international programs, the French Institute for Radiological Protection and Nuclear Safety (IRSN) developed a forest model named TREE4 (Transfer of Radionuclides and External Exposure in FORest systems) 15 years ago. Recently published papers on a Japanese evergreen coniferous forest contaminated by Fukushima radiocesium fallout provide interesting quantitative data on radioactive mass fluxes measured within the forest in the months following the accident. The present study determined whether the approach adopted in the TREE4 model provides satisfactory results for Japanese forests or whether it requires adjustments. This study focused on the interception of airborne radiocesium by the forest canopy and the subsequent transfer to the forest floor through processes such as litterfall, throughfall, and stemflow in the months following the accident. We demonstrated that TREE4 quite satisfactorily predicted the interception fraction (20%) and the canopy-to-soil transfer (70% of the total deposit in 5 months) in the Tochigi forest. These dynamics were similar to those observed in the Höglwald spruce forest. However, the unexpectedly high contribution of litterfall (31% in 5 months) in the Tochigi forest could not be reproduced in our simulations (2.5%). Possible reasons for this discrepancy are discussed, and the sensitivity of the results to uncertainty in deposition conditions is analyzed. PMID:26005747

  19. Development of fission-products transport model in severe-accident scenarios for Scdap/Relap5

    NASA Astrophysics Data System (ADS)

    Honaiser, Eduardo Henrique Rangel

    The understanding and estimation of the release of fission products during a severe accident became one of the priorities of the nuclear community following the Three Mile Island unit 2 (TMI-2) accident in 1979 and the Chernobyl accident in 1986. Since then, theoretical developments and experiments have shown that the primary circuit systems of light water reactors (LWRs) have the potential to attenuate the release of fission products, a fact that had previously been neglected. An advanced tool, compatible with nuclear thermal-hydraulics integral codes, is developed to predict the retention and physical evolution of fission products in the primary circuit of LWRs, without considering chemistry effects. The tool incorporates state-of-the-art models for the phenomena involved and introduces new models. The capabilities acquired through the implementation of this tool in the Scdap/Relap5 code can be used to increase the accuracy of level 2 probabilistic safety assessment (PSA), enhance reactor accident management procedures, and support the design of new emergency safety features.

  20. Climate Modeling and Prediction at NSIPP

    NASA Technical Reports Server (NTRS)

    Suarez, Max; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The talk will review modeling and prediction efforts undertaken as part of NASA's Seasonal to Interannual Prediction Project (NSIPP). The focus will be on atmospheric model results, including its use for experimental seasonal prediction and the diagnostic analysis of climate anomalies. The model's performance in coupled experiments with land and atmosphere models will also be discussed.

  1. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  2. Severe accident modeling of a PWR core with different cladding materials

    SciTech Connect

    Johnson, S. C.; Henry, R. E.; Paik, C. Y.

    2012-07-01

    The MAAP v.4 software has been used to model two severe accident scenarios in nuclear power reactors with three different materials as fuel cladding. The TMI-2 severe accident was modeled with Zircaloy-2 and SiC as clad material, and an SBO accident in a Zion-like, 4-loop, Westinghouse PWR was modeled with Zircaloy-2, SiC, and 304 stainless steel as clad material. TMI-2 modeling results indicate that lower peak core temperatures, less H2(g) produced, and a smaller mass of molten material would result if SiC were substituted for Zircaloy-2 as cladding. SBO modeling results indicate that the calculated time to RCS rupture would increase by approximately 20 minutes if SiC were substituted for Zircaloy-2. Additionally, when an extended SBO accident (RCS creep rupture failure disabled) was modeled, significantly lower peak core temperatures, less H2(g) produced, and a smaller mass of molten material would be generated by substituting SiC for Zircaloy-2 or stainless steel cladding. Because the rate of the SiC oxidation reaction with elevated-temperature H2O(g) was set to zero for this work, these results should be considered preliminary. However, the benefits of SiC as a more accident-tolerant clad material have been shown, and additional investigations of SiC as an LWR core material are warranted, specifically investigations of the oxidation kinetics of SiC in H2O(g) over the range of temperatures and pressures relevant to severe accidents in LWRs. (authors)

  3. Radiological assessment by compartment model POSEIDON-R of radioactivity released in the ocean following Fukushima Daiichi accident

    NASA Astrophysics Data System (ADS)

    Bezhenar, Roman; Maderich, Vladimir; Heling, Rudie; Jung, Kyung Tae; Myoung, Jung-Goo

    2013-04-01

    The modified compartment model POSEIDON-R (Lepicard et al., 2004) was applied to the North-Western Pacific and adjacent seas. This is the first time that a compartment model has been used in this region, where 25 Nuclear Power Plants (NPPs) are operated. The aim of this study is to perform a radiological assessment of the releases of radioactivity due to the Fukushima Daiichi accident. The model predicts the dispersion of radioactivity in the water column and in the sediments, the transfer of radionuclides throughout the marine food web, and the subsequent doses to the population due to the consumption of fishery products. A generic predictive dynamical food-chain model is used instead of the concentration factor (CF) approach. The radionuclide uptake model for fish has as its central feature the accumulation of radionuclides in the target tissue. A three-layer structure of the water column makes it possible to describe deep-water transport adequately. In total, 175 boxes cover the Northwestern Pacific, the East China Sea, the Yellow Sea, and the East/Japan Sea. Water fluxes between boxes were calculated by averaging three-dimensional currents obtained from the hydrodynamic model ROMS over a 10-year period. Tidal mixing between boxes was parameterized. The model was validated against observations of Cs-137 in water for the period 1945-2004. The source terms from nuclear weapons tests comprise a regional source term from the bomb tests at Enewetak and Bikini Atolls and global deposition from weapons tests. The correlation coefficient between predicted and observed concentrations of Cs-137 in the surface water is 0.925, with RMSE = 1.43 Bq/m3. A local-scale coastal box was used, according to POSEIDON's methodology, to describe local processes of activity transport, deposition, and the food web around the Fukushima Daiichi NPP. The source term to the ocean from the Fukushima accident includes a 10-day release of Cs-134 (5 PBq) and Cs-137 (4 PBq) directly into the ocean and 6 and 5 PBq of Cs-134 and

  4. Investigation of shipping accident injury severity and mortality.

    PubMed

    Weng, Jinxian; Yang, Dong

    2015-03-01

    Shipping movements operate in a complex and high-risk environment, and fatal shipping accidents are the nightmares of seafarers. Using ten years of worldwide ship accident data, this study develops a binary logistic regression model and a zero-truncated binomial regression model to predict the probability of fatal shipping accidents and the corresponding mortalities. The model results show that both the probability of fatal accidents and the mortalities are greater for collision, fire/explosion, contact, grounding, and sinking accidents occurring in adverse weather and darkness. Sinking has the largest effect on increasing both the fatal accident probability and the mortalities. The results also show that higher mortalities are associated with shipping accidents occurring far away from the coastal area/harbor/port. In addition, cruise ships are found to have more mortalities than non-cruise ships. The results of this study are beneficial for policy-makers in proposing efficient strategies to prevent fatal shipping accidents. PMID:25617776
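
    A hedged sketch of the first-stage model described above, a binary logit for whether an accident is fatal, using statsmodels on synthetic data (the covariates and coefficients are invented; the study's ten-year worldwide dataset is not reproduced):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        sinking = rng.integers(0, 2, n)       # accident type indicator (assumed)
        darkness = rng.integers(0, 2, n)      # darkness indicator (assumed)
        adverse_wx = rng.integers(0, 2, n)    # adverse weather indicator (assumed)

        # Synthetic outcome generated from an assumed logit relationship.
        lin = -2.0 + 1.5 * sinking + 0.6 * darkness + 0.8 * adverse_wx
        fatal = rng.random(n) < 1.0 / (1.0 + np.exp(-lin))

        X = sm.add_constant(np.column_stack([sinking, darkness, adverse_wx]))
        model = sm.Logit(fatal.astype(int), X).fit(disp=0)
        print(model.params)  # sinking should carry the largest positive effect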

  5. Predictive models and computational toxicology.

    PubMed

    Knudsen, Thomas; Martin, Matthew; Chandler, Kelly; Kleinstreuer, Nicole; Judson, Richard; Sipes, Nisha

    2013-01-01

    Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was launched by EPA in 2007 and is part of the federal Tox21 consortium to develop a cost-effective approach for efficiently prioritizing the toxicity testing of thousands of chemicals and the application of this information to assessing human toxicology. ToxCast addresses this problem through an integrated workflow using high-throughput screening (HTS) of chemical libraries across more than 650 in vitro assays including biochemical assays, human cells and cell lines, and alternative models such as mouse embryonic stem cells and zebrafish embryo development. The initial phase of ToxCast profiled a library of 309 environmental chemicals, mostly pesticidal actives having rich in vivo data from guideline studies that include chronic/cancer bioassays in mice and rats, multigenerational reproductive studies in rats, and prenatal developmental toxicity endpoints in rats and rabbits. The first phase of ToxCast was used to build models that aim to determine how well in vivo animal effects can be predicted solely from the in vitro data. Phase I is now complete and both the in vitro data (ToxCast) and anchoring in vivo database (ToxRefDB) have been made available to the public (http://actor.epa.gov/). As Phase II of ToxCast is now underway, the purpose of this chapter is to review progress to date with ToxCast predictive modeling, using specific examples on developmental and reproductive effects in rats and rabbits with lessons learned during Phase I. PMID:23138916

  6. Predictive Modeling of Tokamak Configurations*

    NASA Astrophysics Data System (ADS)

    Casper, T. A.; Lodestro, L. L.; Pearlstein, L. D.; Bulmer, R. H.; Jong, R. A.; Kaiser, T. B.; Moller, J. M.

    2001-10-01

    The Corsica code provides comprehensive toroidal plasma simulation and design capabilities with current applications [1] to tokamak, reversed field pinch (RFP) and spheromak configurations. It calculates fixed and free boundary equilibria coupled to Ohm's law, sources, transport models and MHD stability modules. We are exploring operations scenarios for both the DIII-D and KSTAR tokamaks. We will present simulations of the effects of electron cyclotron heating (ECH) and current drive (ECCD) relevant to the Quiescent Double Barrier (QDB) regime on DIII-D, exploring long-pulse operation issues. KSTAR simulations using ECH/ECCD in negative central shear configurations explore evolution to steady state, while shape evolution studies during current ramp-up using a hyper-resistivity model investigate startup scenarios and limitations. Studies of high bootstrap fraction operation stimulated by recent ECH/ECCD experiments on DIII-D will also be presented. [1] Pearlstein, L.D., et al, Predictive Modeling of Axisymmetric Toroidal Configurations, 28th EPS Conference on Controlled Fusion and Plasma Physics, Madeira, Portugal, June 18-22, 2001. * Work performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  7. Predictive Modeling of Cardiac Ischemia

    NASA Technical Reports Server (NTRS)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  8. Using meteorological ensembles for atmospheric dispersion modelling of the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Korsakissok, Irène; Mallet, Vivien; Mathieu, Anne; Sekiyama, Thomas; Didier, Damien; Kajino, Mizuo; Igarashi, Yasuhito; Adachi, Kouji

    2016-04-01

    Dispersion models are used in response to an accidental release of radionuclides to the atmosphere, to inform mitigation actions and to complement field measurements for the assessment of short- and long-term environmental and sanitary impacts. However, the predictions of these models are subject to important uncertainties, especially due to input data such as meteorological fields and the source term. This is still the case more than four years after the Fukushima disaster (Korsakissok et al., 2012, Girard et al., 2014). In the framework of the SAKURA project, an MRI-IRSN collaboration, a meteorological ensemble of 20 members designed by MRI (Sekiyama et al. 2013) was used with IRSN's atmospheric dispersion models. Another ensemble, retrieved from ECMWF and comprising 50 members, was also used for comparison. The MRI ensemble is assimilated at 3-hour intervals with a 3-kilometer resolution, and is designed to reduce the meteorological uncertainty in the Fukushima case. The ECMWF ensemble is a 24-hour forecast with a coarser grid, representative of the uncertainty of the data available in a crisis context. First, it was necessary to assess the quality of the ensembles for our purpose, to ensure that their spread was representative of the uncertainty of the meteorological fields. Meteorological observations were used to characterize the ensembles' spread, with tools such as Talagrand diagrams. Then, the uncertainty was propagated through atmospheric dispersion models. The underlying question is whether the output spread is larger than the input spread, that is, whether small uncertainties in meteorological fields can produce large differences in atmospheric dispersion results. Here again, the use of field observations was crucial in order to characterize the spread of the ensemble of atmospheric dispersion simulations. In the case of the Fukushima accident, gamma dose rates, air activities and deposition data were available. Based on these data, selection criteria for the ensemble members were
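
    A Talagrand (rank) histogram of the kind mentioned above is computed by ranking each verifying observation among its ensemble members; a flat histogram suggests the ensemble spread matches the observation uncertainty, while a U shape indicates under-dispersion. A minimal sketch with synthetic data:

        import numpy as np

        def rank_histogram(ensemble, observations):
            """Talagrand (rank) histogram: for each observation, count how
            many ensemble members fall below it, then tally the ranks."""
            n_members = ensemble.shape[1]
            ranks = np.sum(ensemble < observations[:, None], axis=1)
            return np.bincount(ranks, minlength=n_members + 1)

        rng = np.random.default_rng(3)
        obs = rng.normal(0.0, 1.0, 1000)        # verifying observations
        ens = rng.normal(0.0, 0.6, (1000, 20))  # deliberately under-dispersive
        print(rank_histogram(ens, obs))         # expect U-shaped counts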

  9. Modelling of conspicuity-related motorcycle accidents in Seremban and Shah Alam, Malaysia.

    PubMed

    Radin, U R; Mackay, M G; Hills, B L

    1996-05-01

    Preliminary analysis of the short-term impact of a running headlights intervention revealed a significant drop in conspicuity-related motorcycle accidents in the pilot areas, Seremban and Shah Alam, Malaysia. This paper looks in more detail at conspicuity-related accidents involving motorcycles. The aim of the analysis was to establish a statistical model describing the relationship between the frequency of conspicuity-related motorcycle accidents and a range of explanatory variables, so that new insights could be obtained into the effects of introducing a running headlight campaign and regulation. The exogenous variables in this analysis include the influence of time trends, changes in the recording and analysis system, the effect of fasting activities during Ramadhan, and the "Balik Kampong" culture, a seasonal cultural-religious holiday activity unique to Malaysia. The model developed revealed that the running headlight intervention reduced conspicuity-related motorcycle accidents by about 29%. It is concluded that the intervention has been successful in reducing conspicuity-related motorcycle accidents in Malaysia. PMID:8799436

  10. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
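
    A loose sketch of the EPPES idea, not the published algorithm: each ensemble member runs with parameters drawn from a proposal distribution, members are scored against verifying observations, and the proposal is updated from the likelihood-weighted draws. The toy "forecast model" and all numbers below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        def forecast_model(theta, x):
            # Toy stand-in for an NWP model with two tunable parameters.
            return theta[0] * x + theta[1]

        theta_true = np.array([2.0, -1.0])
        mean, cov = np.zeros(2), np.eye(2) * 4.0   # initial proposal distribution

        for cycle in range(30):
            x = rng.uniform(-1, 1, 20)
            y_obs = forecast_model(theta_true, x) + rng.normal(0, 0.3, 20)
            draws = rng.multivariate_normal(mean, cov, size=50)  # 50 "members"
            resid = np.array([y_obs - forecast_model(t, x) for t in draws])
            loglik = -0.5 * np.sum(resid**2, axis=1) / 0.3**2
            w = np.exp(loglik - loglik.max()); w /= w.sum()
            mean = w @ draws                          # likelihood-weighted update
            diff = draws - mean
            cov = (w[:, None] * diff).T @ diff + np.eye(2) * 1e-3  # keep proposal alive

        print(mean)  # should approach theta_true over the assimilation cycles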

  11. Future synergism in diving accident management: The Singapore model.

    PubMed

    Chong, Si Jack; Liang, Weihao; Kim, Soo Jang; Kang, Wee Lee

    2010-03-01

    The popularity of diving as a leisure activity has been increasing in recent years. With the rise of this sport inevitably come increasing numbers and risk of diving-related injuries, and demand for professional medical treatment of such injuries. Concurrently, with hyperbaric oxygen therapy (HBOT) becoming more readily available, new applications for HBOT have been proven for the treatment of various medical conditions. In Singapore, diving and hyperbaric medicine was largely a military medicine specialty, and its practice was confined to the Singapore Armed Forces for many years. The new Hyperbaric and Diving Medicine Centre set up in Singapore General Hospital (SGH) offers an excellent opportunity for collaboration between the Singapore Navy Medical Service (NMS) and SGH. This combines the expertise in the field of diving and hyperbaric medicine that NMS provides with the resources and specialized services available at SGH. This collaboration was officially formalized by the recent signing of a Memorandum of Understanding between the two organisations. The partnership will allow both organisations to leverage each other's strengths and enhance the development of research and training capabilities. This collaboration will also be an important step towards formal recognition and accreditation of diving and hyperbaric medicine as a medical subspecialty in the foreseeable future, thus helping to develop and promote diving and hyperbaric medicine in Singapore. This synergistic approach to diving accident management will also promote and establish Singapore as a leader in the field of diving and hyperbaric medicine in the region. PMID:23111838

  12. A dynamic model for evaluating radionuclide distribution in forests from nuclear accidents.

    PubMed

    Schell, W R; Linkov, I; Myttenaere, C; Morel, B

    1996-03-01

    The Chernobyl Nuclear Power Plant accident in 1986 caused radionuclide contamination in most countries in Eastern and Western Europe. A prime example is Belarus, where 23% of the total land area received chronic levels; about 1.5 x 10(6) ha of forested lands were contaminated with 40-190 kBq m-2, and 2.5 x 10(4) ha received greater than 1,480 kBq m-2 of 137Cs and other long-lived radionuclides such as 90Sr and 239,240Pu. Since the radiological dose to the forest ecosystem will tend to accumulate over long time periods (decades to centuries), we need to determine what countermeasures can be taken to limit this dose so that the affected regions can, once again, safely provide habitat and natural forest products. To address some of these problems, our initial objective is to formulate a generic model, FORESTPATH, which describes the major kinetic processes and pathways of radionuclide movement in forests and natural ecosystems and which can be used to predict future radionuclide concentrations. The model calculates the time-dependent radionuclide concentrations in different compartments of the forest ecosystem based on the information available on residence half-times in two forest types: coniferous and deciduous. The results show that the model reproduces well the radionuclide cycling pattern found in the literature for deciduous and coniferous forests. Variability analysis was used to assess the relative importance of specific parameter values in the generic model performance. The FORESTPATH model can be easily adjusted for site-specific applications. PMID:8609024

  13. A dynamic model for evaluating radionuclide distribution in forests from nuclear accidents

    SciTech Connect

    Schell, W.R.; Linkov, I.; Myttenaere, C.

    1996-03-01

    The Chernobyl Nuclear Power Plant accident in 1986 caused radionuclide contamination in most countries in Eastern and Western Europe. A prime example is Belarus, where 23% of the total land area received chronic levels; about 1.5 x 10^6 ha of forested lands were contaminated with 40-190 kBq m^-2, and 2.5 x 10^4 ha received greater than 1,480 kBq m^-2 of 137Cs and other long-lived radionuclides such as 90Sr and 239,240Pu. Since the radiological dose to the forest ecosystem will tend to accumulate over long time periods (decades to centuries), we need to determine what countermeasures can be taken to limit this dose so that the affected regions can, once again, safely provide habitat and natural forest products. To address some of these problems, our initial objective is to formulate a generic model, FORESTPATH, which describes the major kinetic processes and pathways of radionuclide movement in forests and natural ecosystems and which can be used to predict future radionuclide concentrations. The model calculates the time-dependent radionuclide concentrations in different compartments of the forest ecosystem based on the information available on residence half-times in two forest types: coniferous and deciduous. The results show that the model reproduces well the radionuclide cycling pattern found in the literature for deciduous and coniferous forests. Variability analysis was used to assess the relative importance of specific parameter values in the generic model performance. The FORESTPATH model can be easily adjusted for site-specific applications. 92 refs., 5 figs., 6 tabs.

  14. A dynamic model to estimate the activity concentration and whole body dose rate of marine biota as consequences of a nuclear accident.

    PubMed

    Keum, Dong-Kwon; Jun, In; Kim, Byeong-Ho; Lim, Kwang-Muk; Choi, Yong-Ho

    2015-02-01

    This paper describes a dynamic compartment model (K-BIOTA-DYN-M) to assess the activity concentration and whole-body dose rate of marine biota as a result of a nuclear accident. The model considers the transport of radioactivity between the marine biota through the food chain, and applies a first-order kinetic model for the sedimentation of radionuclides from seawater onto sediment. A set of ordinary differential equations representing the model is solved simultaneously to calculate the activity concentrations of the biota and the sediment, and subsequently the dose rates, given the seawater activity concentration. The model was applied to investigate the long-term effect of the Fukushima nuclear accident on marine biota using (131)I, (134)Cs, and (137)Cs activity concentrations of seawater measured for up to about 2.5 years after the accident at two locations in the port of the Fukushima Daiichi Nuclear Power Station (FDNPS), which was the most highly contaminated area. The predicted results showed that the accumulated dose for 3 months after the accident was about 4-4.5 Gy, indicating the possibility of acute radiation effects in the early phase after the Fukushima accident; however, the total dose rate for most organisms studied was usually below the UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation) benchmark level for chronic exposure, except in the initial phase of the accident, suggesting a very limited radiological effect on the marine biota at the population level. The sediment Cs activity predicted by the first-order kinetic model for the sedimentation was in good agreement with the measured activity concentration. By varying the ecological parameter values, the present model was able to predict the widely scattered (137)Cs activity concentrations of fish measured in the port of FDNPS. Conclusively, the present dynamic model can be usefully applied to estimate the activity concentration and whole
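
    A minimal sketch of the model structure described above, with one prescribed seawater input, first-order sedimentation, and first-order uptake/elimination for a single fish compartment, solved with scipy; all rate constants are assumed for illustration and are not the paper's parameter set.

        import numpy as np
        from scipy.integrate import solve_ivp

        LAM_SED = 0.005   # 1/d, water -> sediment sedimentation rate (assumed)
        K_UP    = 20.0    # L/kg/d, water -> fish uptake rate (assumed)
        K_ELIM  = 0.01    # 1/d, fish elimination rate (assumed)

        def c_water(t):
            """Prescribed seawater activity (Bq/L): pulse decaying after release."""
            return 100.0 * np.exp(-t / 30.0)

        def rhs(t, y):
            c_sed, c_fish = y
            dsed = LAM_SED * c_water(t)                  # sediment accumulation
            dfish = K_UP * c_water(t) - K_ELIM * c_fish  # uptake vs. elimination
            return [dsed, dfish]

        sol = solve_ivp(rhs, (0, 365), [0.0, 0.0], dense_output=True)
        print("fish activity at 90 d (Bq/kg):", sol.sol(90.0)[1])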

  15. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (reliability physics model) was applied to estimate the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. Response surface methods and direct Monte Carlo simulation with Latin Hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was then estimated from these two competing quantities. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach, and both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
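
    The reliability-physics HEP described above reduces to the probability that the operators' performance time exceeds the phenomenological time. A Monte Carlo sketch, with assumed lognormal distributions standing in for the response-surface and interview-derived distributions:

        import numpy as np

        rng = np.random.default_rng(5)

        n = 200_000
        # Assumed distributions (minutes), for illustration only.
        t_phenom = rng.lognormal(mean=np.log(60.0), sigma=0.4, size=n)   # time to damage
        t_perform = rng.lognormal(mean=np.log(35.0), sigma=0.6, size=n)  # time to act

        hep = np.mean(t_perform > t_phenom)          # P(too slow) = HEP
        se = np.sqrt(hep * (1 - hep) / n)            # Monte Carlo standard error
        print(f"HEP = {hep:.4f} +/- {se:.4f}")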

  16. [Guilty victims: a model to perpetuate impunity for work-related accidents].

    PubMed

    Vilela, Rodolfo Andrade Gouveia; Iguti, Aparecida Mari; Almeida, Ildeberto Muniz

    2004-01-01

    This article analyzes reports and data from the investigation of severe and fatal work-related accidents by the Regional Institute of Criminology in Piracicaba, São Paulo State, Brazil. Some 71 accident investigation reports were analyzed from 1998, 1999, and 2000. Accidents involving machinery represented 38.0% of the total, followed by high falls (15.5%), and electric shocks (11.3%). The reports conclude that 80.0% of the accidents are caused by "unsafe acts" committed by workers themselves, while the lack of safety or "unsafe conditions" account for only 15.5% of cases. Victims are blamed even in situations involving high risk in which not even minimum safety conditions are adopted, thus favoring employers' interests. Such conclusions reflect traditional reductionist explanatory models, in which accidents are viewed as simple, unicausal phenomena, generally focused on slipups and errors by the workers themselves. Despite criticism in recent decades from the technical and academic community, this concept is still hegemonic, thus jeopardizing the development of preventive policies and the improvement of work conditions. PMID:15073638

  17. Multilevel modelling for the regional effect of enforcement on road accidents.

    PubMed

    Yannis, George; Papadimitriou, Eleonora; Antoniou, Constantinos

    2007-07-01

    This paper investigates the effect of the intensification of Police enforcement on the number of road accidents at national and regional level in Greece, focusing on one of the most important road safety violations: drinking-and-driving. Multilevel negative binomial models are developed to describe the effect of the intensification of alcohol enforcement on the reduction of road accidents in different regions of Greece. Moreover, two approaches are explored as far as regional clustering is concerned: the first one concerns an ad hoc geographical clustering and the second one is based on the results of mathematical cluster analysis through demographic, transport and road safety characteristics. Results indicate that there are significant spatial dependences among road accidents and enforcement. Additionally, it is shown that these dependences are more efficiently interpreted when regions are determined on the basis of qualitative similarities than on the basis of geographical adjacency. PMID:17274938
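
    As a simplified illustration of the count-model family used here, the sketch below fits a single-level negative binomial regression of accident counts on an enforcement-intensity proxy with statsmodels; the paper's multilevel structure with regional effects is not reproduced, and the data are synthetic.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 300
        alcohol_controls = rng.uniform(0, 10, n)   # enforcement intensity proxy
        exposure = rng.uniform(1, 5, n)            # traffic volume proxy

        # Overdispersed synthetic counts from an assumed log-linear relation.
        mu = np.exp(2.0 - 0.08 * alcohol_controls + 0.3 * exposure)
        accidents = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

        X = sm.add_constant(np.column_stack([alcohol_controls, exposure]))
        nb = sm.GLM(accidents, X,
                    family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print(nb.params)  # the enforcement coefficient should be negative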

  18. Input-output model for MACCS nuclear accident impacts estimation

    SciTech Connect

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output-based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
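
    The Input-Output logic behind such GDP-loss estimates can be illustrated with a small Leontief model: a regional disruption is expressed as a final-demand shock and propagated through inter-industry linkages via the Leontief inverse. The technical coefficients and demand vectors below are invented, not REAcct data.

        import numpy as np

        # Inter-industry technical coefficients A (illustrative 3-sector economy).
        A = np.array([[0.20, 0.10, 0.05],
                      [0.15, 0.25, 0.10],
                      [0.05, 0.10, 0.15]])
        leontief_inverse = np.linalg.inv(np.eye(3) - A)  # (I - A)^-1

        demand_baseline = np.array([100.0, 80.0, 120.0])
        # Hypothetical accident shock: sector 1 loses 30% and sector 2 loses 10%
        # of final demand (e.g., evacuation of part of the region).
        demand_shocked = demand_baseline * np.array([0.7, 0.9, 1.0])

        output_loss = leontief_inverse @ (demand_baseline - demand_shocked)
        print("total gross output loss:", output_loss.sum())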

  19. A nuclear plant accident diagnosis method to support prediction of errors of commission

    SciTech Connect

    Chang, Y. H. J.; Coyne, K.; Mosleh, A.

    2006-07-01

    The identification and mitigation of operator errors of commission (EOCs) continue to be a major focus of nuclear plant human reliability research. Current Human Reliability Analysis (HRA) methods for predicting EOCs generally rely on the availability of operating procedures or extensive use of expert judgment. Consequently, an analysis for EOCs cannot easily be performed for actions that may be taken outside the scope of the operating procedures. Additionally, current HRA techniques rarely capture an operator's 'creative' problem-solving behavior. However, a nuclear plant operator knowledge base developed for use with the IDAC (Information, Decision, and Action in Crew context) cognitive model shows potential for addressing these limitations. This operator knowledge base currently includes an event-symptom diagnosis matrix for a pressurized water reactor (PWR) nuclear plant. The diagnosis matrix defines a probabilistic relationship between observed symptoms and plant events that models the operator's heuristic process for classifying a plant state. Observed symptoms are obtained from a dynamic thermal-hydraulic plant model and can be modified to account for the limitations of human perception and cognition. A fuzzy-logic inference technique is used to calculate the operator's confidence, or degree of belief, that a given plant event has occurred based on the observed symptoms. An event diagnosis can be categorized as either (a) a generalized flow imbalance of basic thermal-hydraulic properties (e.g., a mass or energy flow imbalance in the reactor coolant system), or (b) a specific event type, such as a steam generator tube rupture or a reactor trip. When an operator is presented with incomplete or contradictory information, this diagnosis approach provides a means to identify situations where an operator might be misled into performing unsafe actions based on an incorrect diagnosis. This knowledge base model could also support identification of potential EOCs when
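
    One common way to implement the kind of fuzzy diagnosis step described above is max-min composition between observed symptom degrees and a symptom-event relation matrix; the sketch below uses invented events, symptoms, and strengths rather than the IDAC knowledge base.

        import numpy as np

        events = ["SGTR", "LOCA", "reactor_trip"]
        symptoms = ["pressurizer_level_drop", "secondary_radiation", "power_drop"]

        # relation[i, j]: assumed strength of association of symptom i with event j.
        relation = np.array([[0.7, 0.9, 0.2],
                             [0.9, 0.1, 0.1],
                             [0.3, 0.4, 0.9]])

        # Degree to which each symptom is currently observed, in [0, 1]
        # (possibly degraded to mimic limits of perception).
        observed = np.array([0.8, 0.6, 0.3])

        # Max-min composition yields a degree of belief in each event.
        confidence = np.max(np.minimum(observed[:, None], relation), axis=0)
        for ev, c in zip(events, confidence):
            print(f"{ev}: {c:.2f}")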

  20. Modeling and sensitivity analysis of transport and deposition of radionuclides from the Fukushima Daiichi accident

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, D.; Huang, H.; Shen, S.; Bou-Zeid, E.

    2014-01-01

    The atmospheric transport and ground deposition of the radioactive isotopes 131I and 137Cs during and after the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident (March 2011) are investigated using the Weather Research and Forecasting/Chemistry (WRF/Chem) model. The aim is to assess the skill of WRF in simulating these processes and the sensitivity of the model's performance to various parameterizations of unresolved physics. The WRF/Chem model is first upgraded by implementing a radioactive decay term in the advection-diffusion solver and adding three parameterizations for dry deposition and two for wet deposition. Different microphysics and horizontal turbulent diffusion schemes are then tested for their ability to reproduce observed meteorological conditions. Subsequently, the influence of the characteristics of the emission source on the simulated transport and deposition, including the emission rate, the gas partitioning of 131I, and the size distribution of 137Cs, is examined. The results show that the model can predict the wind fields and rainfall realistically. The ground deposition of the radionuclides can also potentially be captured well, but it is very sensitive to the emission characterization. It is found that the total deposition is most influenced by the emission rate for both 131I and 137Cs, while it is less sensitive to the dry deposition parameterizations. Moreover, for 131I, the deposition is also sensitive to the microphysics schemes, the horizontal diffusion schemes, gas partitioning, and the wet deposition parameterizations, while for 137Cs, the deposition is very sensitive to the microphysics schemes and wet deposition parameterizations, and also sensitive to the horizontal diffusion schemes and the size distribution.
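
    The radioactive decay term added to the solver amounts to attenuating each species' concentration by exp(-lambda*dt) per time step, with lambda = ln 2 / T_half. A minimal sketch (the half-lives are the standard values for 131I and 137Cs; the concentration field and time step are toy values, not WRF/Chem internals):

        import numpy as np

        # Standard half-lives: 131I ~ 8.02 days, 137Cs ~ 30.1 years (in seconds).
        HALF_LIFE_S = {"I131": 8.02 * 86400.0,
                       "Cs137": 30.1 * 365.25 * 86400.0}

        def apply_decay(conc, species, dt):
            """Apply first-order radioactive decay over a time step dt (s),
            as would follow the advection-diffusion update."""
            lam = np.log(2.0) / HALF_LIFE_S[species]
            return conc * np.exp(-lam * dt)

        c = np.full((4, 4), 1000.0)                        # toy field (Bq/m3)
        print(apply_decay(c, "I131", dt=86400.0)[0, 0])    # one day: ~917 Bq/m3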

  1. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated, and statistical procedures are utilized and/or developed to control them wherever possible.
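
    Of the three sources, the Monte Carlo sampling error is the most mechanical to control, since it shrinks as 1/sqrt(n). A minimal sketch, with a stand-in for a single run of the actual fiber-release simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mc_estimate(simulate, n: int):
        """Monte Carlo estimate of a risk probability with its standard error."""
        samples = np.array([simulate(rng) for _ in range(n)], dtype=float)
        mean = samples.mean()
        se = samples.std(ddof=1) / np.sqrt(n)   # sampling error ~ 1/sqrt(n)
        return mean, se

    # Placeholder for one fire/fiber-exposure simulation run (invented):
    toy_run = lambda rng: float(rng.exponential(2.0) > 5.0)

    for n in (100, 10_000):
        m, se = mc_estimate(toy_run, n)
        print(f"n={n:>6}  estimate={m:.4f}  +/- {se:.4f}")
    ```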

  2. Accident Sequence Precursor Program Large Early Release Frequency Model Development

    SciTech Connect

    Brown, T.D.; Brownson, D.A.; Duran, F.A.; Gregory, J.J.; Rodrick, E.G.

    1999-01-04

    The objectives for the ASP large early release frequency (LERF) model development work are to build a Level 2 containment response model that captures all of the events necessary to define LERF as outlined in Regulatory Guide 1.174, can be directly interfaced with the existing Level 1 models, is technically correct, can be readily modified to incorporate new information or to represent another plant, and can be executed in SAPHIRE. The ASP LERF models being developed will meet these objectives while providing the NRC with the capability to independently assess the risk impact of plant-specific changes proposed by utilities that would change a nuclear power plant's licensing basis. Together with the ASP Level 1 models, the ASP LERF models provide the NRC with the capability of performing equipment and event assessments to determine their impact on a plant's LERF for internal events during power operation. In addition, the ASP LERF models are capable of being updated to reflect changes in information regarding system operations and phenomenological events, and of being updated to assess the potential for early fatalities for each LERF sequence. As the ASP Level 1 models evolve to include more analysis capabilities, the LERF models will also be refined to reflect the appropriate level of detail needed to demonstrate the new capabilities. An approach was formulated for the development of detailed LERF models using the NUREG-1150 APET models as a guide. Modifications to the SAPHIRE computer code have allowed the development of these detailed models and the ability to analyze them in a reasonable time. Ten reference LERF plant models, including six PWR models and four BWR models, covering a wide variety of containment and nuclear steam supply system designs, will be complete in 1999. These reference models will be used as the starting point for developing the LERF models for the remaining nuclear power plants.

  3. Generation IV benchmarking of TRISO fuel performance models under accident conditions. Modeling input data

    SciTech Connect

    Blaise Collin

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named the NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document

  4. Predictive models of radiative neutrino masses

    NASA Astrophysics Data System (ADS)

    Julio, J.

    2016-06-01

    We discuss two models of radiative neutrino mass generation. The first is a one-loop Zee model with a Z4 symmetry. The second is a two-loop neutrino mass model with singly- and doubly-charged scalars. Both models fit neutrino oscillation data well and predict interesting rates for lepton flavor violation processes.

  5. How to Establish Clinical Prediction Models

    PubMed Central

    Bang, Heejung

    2016-01-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice. PMID:26996421
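
    The five steps are method-agnostic; as a toy end-to-end pass (development split, model generation, then validation of discrimination), a logistic model on synthetic data might look like the following. All predictor names and sizes are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))          # e.g. age, BMI, HbA1c, SBP (simulated)
    y = (X @ np.array([0.8, 0.5, 1.2, 0.3]) + rng.normal(size=500)) > 0

    # Dataset selection: hold out part of the data for internal validation
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Model generation
    model = LogisticRegression().fit(X_dev, y_dev)

    # Evaluation/validation: discrimination via the area under the ROC curve
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"validation AUC = {auc:.2f}")
    ```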

  6. How to Establish Clinical Prediction Models.

    PubMed

    Lee, Yong Ho; Bang, Heejung; Kim, Dae Jung

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice. PMID:26996421

  7. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination are explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the predictability horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.

  8. Phase-Change Modelling in Severe Nuclear Accidents

    NASA Astrophysics Data System (ADS)

    Pain, Christopher; Pavlidis, Dimitrios; Xie, Zhihua; Percival, James; Gomes, Jefferson; Matar, Omar; Moatamedi, Moji; Tehrani, Ali; Jones, Alan; Smith, Paul

    2014-11-01

    This paper describes progress on a consistent approach for multi-phase flow modelling with phase change. Although the developed methods are general purpose, the applications presented here cover core melt phenomena at the lower vessel head. These include corium pool formation, coolability and solidification. With respect to external cooling, comparison with the LIVE experiments (from Karlsruhe) is undertaken. Preliminary re-flooding simulation results are also presented. These include water injection into porous media (debris bed) and boiling. Numerical simulations follow IRSN's PEARL experimental programme on quenching/re-flooding. The authors wish to thank Prof. Timothy Haste of IRSN. Dr. D. Pavlidis is funded by the EPSRC Consortium "Computational Modelling for Advanced Nuclear Plants," Grant Number EP/I003010/1.

  9. Initial VHTR accident scenario classification: models and data.

    SciTech Connect

    Vilim, R. B.; Feldman, E. E.; Pointer, W. D.; Wei, T. Y. C.; Nuclear Engineering Division

    2005-09-30

    Nuclear systems codes are being prepared for use as computational tools for conducting performance/safety analyses of the Very High Temperature Reactor. The thermal-hydraulic codes are RELAP5/ATHENA for one-dimensional systems modeling and FLUENT and/or Star-CD for three-dimensional modeling. We describe a formal qualification framework, the development of Phenomena Identification and Ranking Tables (PIRTs), the initial filtering of the experiment databases, and a preliminary screening of these codes for use in the performance/safety analyses. In the second year of this project we focused on the development of PIRTs. Two events that result in maximum fuel and vessel temperatures, the Pressurized Conduction Cooldown (PCC) event and the Depressurized Conduction Cooldown (DCC) event, were selected for PIRT generation. A third event that may result in significant thermal stresses, the Load Change event, was also selected for PIRT generation. Gas reactor design experience and engineering judgment were used to identify the important phenomena in the primary system for these events. Sensitivity calculations performed with the RELAP5 code were used as an aid to rank the phenomena in order of importance with respect to the approach of the plant response to safety limits. The overall code qualification methodology was illustrated by focusing on the Reactor Cavity Cooling System (RCCS). The mixed convection mode of heat transfer and pressure drop is identified as an important phenomenon for RCCS operation. Scaling studies showed that the mixed convection mode is likely to occur in the RCCS air duct during normal operation and during conduction cooldown events. The RELAP5/ATHENA code was found not to treat the mixed convection regime adequately. Readying the code will require adding models for the turbulent mixed convection regime while possibly performing new experiments for the laminar mixed convection regime. Candidate correlations for the turbulent

  10. Chernobyl and Fukushima nuclear accidents: what has changed in the use of atmospheric dispersion modeling?

    PubMed

    Benamrane, Y; Wybo, J-L; Armand, P

    2013-12-01

    The threat of a major accidental or deliberate event that would lead to hazardous materials emission in the atmosphere is a great cause of concern to societies. This is due to the potential large scale of casualties and damages that could result from the release of explosive, flammable or toxic gases from industrial plants or transport accidents, radioactive material from nuclear power plants (NPPs), and chemical, biological, radiological or nuclear (CBRN) terrorist attacks. In order to respond efficiently to such events, emergency services and authorities resort to appropriate planning and organizational patterns. This paper focuses on the use of atmospheric dispersion modeling (ADM) as a support tool for emergency planning and response, to assess the propagation of the hazardous cloud and thereby take adequate countermeasures. This paper intends to illustrate the noticeable evolution in the operational use of ADM tools over 25 years, especially in emergency situations. The study is based on data available in scientific publications and is exemplified using the two most severe nuclear accidents: Chernobyl (1986) and Fukushima (2011). It appears that during the Chernobyl accident, ADM was used only a few days after the beginning of the accident, mainly in a diagnostic approach trying to reconstruct what happened, whereas 25 years later, ADM was also used during the first days and weeks of the Fukushima accident to anticipate the potentially threatened areas. We argue that recent developments in ADM tools play an increasing role in emergency and crisis management, by supporting stakeholders in anticipating, monitoring and assessing post-event damages. However, despite technological evolutions, their prognostic and diagnostic use in emergency situations still raises many issues. PMID:24077309

  11. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  12. Incorporating uncertainty in predictive species distribution modelling

    PubMed Central

    Beale, Colin M.; Lennon, Jack J.

    2012-01-01

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387

  13. Development of a Gravid Uterus Model for the Study of Road Accidents Involving Pregnant Women.

    PubMed

    Auriault, F; Thollon, L; Behr, M

    2016-01-01

    Car accident simulations involving pregnant women are well documented in the literature and suggest that intra-uterine pressure could be responsible for the phenomenon of placental abruption, underlining the need for a realistic amniotic fluid model, including fluid-structure interactions (FSI). This study reports the development and validation of an amniotic fluid model using an Arbitrary Lagrangian Eulerian formulation in the LS-DYNA environment. Dedicated to the study of the mechanisms responsible for fetal injuries resulting from road accidents, the fluid model was validated using dynamic loading tests. Drop tests were performed on a deformable water-filled container at acceleration levels that would be experienced in a gravid uterus during a frontal car collision at 25 kph. During the test device braking phase, container deformation induced by inertial effects and FSI was recorded by kinematic analysis. These tests were then simulated in the LS-DYNA environment to validate the fluid model under dynamic loading, based on the container deformations. Finally, the coupling between the amniotic fluid model and an existing finite-element full-body pregnant woman model was validated in terms of pressure. To do so, results from experimental tests performed on four postmortem human surrogates (PMHS), in which a physical gravid uterus model was inserted, were used. The experimental intra-uterine pressure from these tests was compared to the intra-uterine pressure from a numerical simulation performed under the same loading conditions. The numerical and experimental free-fall responses are strongly correlated, and the coupled amniotic fluid and pregnant woman models produce intra-uterine pressure values correlated with the experimental test responses. The use of an Arbitrary Lagrangian Eulerian formulation allows the analysis of FSI between the amniotic fluid and the gravid uterus during a road accident involving pregnant women. PMID:26592419

  14. Simulation Modeling Requirements for Loss-of-Control Accident Prevention of Turboprop Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Crider, Dennis; Foster, John V.

    2012-01-01

    In-flight loss of control remains the leading contributor to aviation accident fatalities, with stall upsets being the leading causal factor. The February 12, 2009, Colgan Air, Inc., Continental Express flight 3407 accident outside Buffalo, New York, brought this issue to the forefront of public consciousness and resulted in recommendations from the National Transportation Safety Board to conduct training that incorporates fully developed stalls and to develop simulator standards to support such training. In 2010, Congress responded to this accident with Public Law 111-216 (Section 208), which mandates full stall training for Part 121 flight operations. Efforts are currently in progress to develop recommendations on implementation of stall training for airline pilots. The International Committee on Aviation Training in Extended Envelopes (ICATEE) is currently defining simulator fidelity standards that will be necessary for effective stall training. These recommendations will apply to all civil transport aircraft, including straight-wing turboprop aircraft. Government-funded research over the previous decade provides a strong foundation for stall/post-stall simulation of swept-wing, conventional-tail jets in response to this mandate, but turboprops present additional and unique modeling challenges. First among these is the effect of power, which can provide enhanced flow attachment behind the propellers. Furthermore, turboprops tend to operate for longer periods in an environment more susceptible to ice. As a result, there have been a significant number of turboprop accidents caused by early (lower angle of attack) stalls in icing. The vulnerability of turboprop configurations to icing has led to studies on ice accumulation and the resulting effects on flight behavior. Piloted simulations of these effects have highlighted the important training needs for recognition and mitigation of icing effects, including the reduction of stall margins

  15. Modeling and Predicting Pesticide Exposures

    EPA Science Inventory

    Models provide a means for representing a real system in an understandable way. They take many forms, beginning with conceptual models that explain the way a system works, such as delineation of all the factors and parameters of how a pesticide particle moves in the air after a s...

  16. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
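
    The CPO/LPML computation itself is compact: CPO_i is the harmonic mean of the site-i likelihoods over the posterior sample, and LPML is the sum of the log CPOs. A numerically stable sketch, assuming an S x N array of per-site log likelihoods evaluated at S posterior draws:

    ```python
    import numpy as np

    def lpml(loglik: np.ndarray) -> float:
        """LPML from per-site log likelihoods (shape: samples x sites).

        log CPO_i = log S - logsumexp_s(-loglik[s, i]), computed stably.
        """
        S = loglik.shape[0]
        m = (-loglik).max(axis=0)
        log_cpo = np.log(S) - (m + np.log(np.exp(-loglik - m).sum(axis=0)))
        return float(log_cpo.sum())

    rng = np.random.default_rng(2)
    print(lpml(rng.normal(-3.0, 0.5, size=(1000, 50))))   # toy posterior sample
    ```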

  17. Incorporating model uncertainty into spatial predictions

    SciTech Connect

    Handcock, M.S.

    1996-12-31

    We consider a modeling approach for spatially distributed data. We are concerned with aspects of statistical inference for Gaussian random fields when the ultimate objective is to predict the value of the random field at unobserved locations. However, the exact statistical model is seldom known beforehand and is usually estimated from the very same data relative to which the predictions are made. Our objective is to assess the effect of the fact that the model is estimated, rather than known, on the prediction and the associated prediction uncertainty. We describe a method for achieving this objective. We, in essence, consider the best linear unbiased prediction procedure based on the model within a Bayesian framework. These ideas are implemented for the spring temperature over a region in the northern United States, based on the stations in the United States Historical Climatology Network reported in Karl, Williams, Quinlan & Boden.
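
    The plug-in practice the paper scrutinizes, estimating covariance parameters by maximum likelihood from the same data and then treating them as known at prediction time, is easy to reproduce with an off-the-shelf Gaussian process; the station data below are synthetic:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    coords = rng.uniform(0, 100, size=(60, 2))      # synthetic station locations
    temps = np.sin(coords[:, 0] / 20.0) + 0.1 * rng.normal(size=60)

    # Kernel parameters are fitted by maximum likelihood during .fit()
    gp = GaussianProcessRegressor(kernel=1.0 * RBF(10.0) + WhiteKernel(0.01),
                                  normalize_y=True).fit(coords, temps)

    mean, sd = gp.predict(np.array([[25.0, 40.0], [80.0, 10.0]]), return_std=True)
    # 'sd' treats the fitted kernel as known; the paper's Bayesian treatment
    # would widen it to reflect parameter estimation as well.
    print(mean, sd)
    ```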

  18. COMPARING SAFE VS. AT-RISK BEHAVIORAL DATA TO PREDICT ACCIDENTS

    SciTech Connect

    Jeffrey C. Joe

    2001-11-01

    The Safety Observations Achieve Results (SOAR) program at the Idaho National Laboratory (INL) encourages employees to perform in-field observations of each other’s behaviors. One purpose of these observations is to give observers the opportunity to correct, if needed, their co-workers’ at-risk work practices and habits (i.e., behaviors). The underlying premise is that major injuries (e.g., OSHA-recordable events) are prevented because lower-level at-risk behaviors are identified and corrected before they can propagate into culturally accepted unsafe behaviors that result in injuries or fatalities. However, unlike other observation programs, SOAR also emphasizes positive reinforcement for safe behaviors observed. The premise here is that positive reinforcement of safe behaviors helps establish a strong positive safety culture. Since the SOAR program collects both safe and at-risk leading indicator data, it provides a unique opportunity to assess and compare the two kinds of data in terms of their ability to predict future adverse safety events. This paper describes the results of analyses performed on SOAR data to assess their relative predictive ability. Implications are discussed.

  19. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
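
    Stripped of the flight dynamics, the trigger reduces to a threshold test on projected altitude. A schematic of that decision logic (names and numbers are illustrative, not taken from the actual system):

    ```python
    def must_trigger_go_around(altitude_ft: float,
                               predicted_altitude_loss_ft: float,
                               min_altitude_ft: float = 200.0) -> bool:
        """Trigger the automatic go-around if the altitude loss predicted by
        the onboard model for the current flight condition would violate the
        minimum-altitude threshold."""
        return altitude_ft - predicted_altitude_loss_ft < min_altitude_ft

    print(must_trigger_go_around(450.0, predicted_altitude_loss_ft=300.0))  # True
    ```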

  20. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful for describing the road accident blackspot datasets analyzed. PMID:24878693
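
    One standard way to build such a discrete model is to discretize the continuous survival function, P(X = x) = S(x) - S(x + 1); for the Lomax case, S(x) = (1 + x/σ)^(-α). A maximum likelihood fit along those lines can be sketched as follows (the parameterization is an assumption, not necessarily the authors'):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def discrete_lomax_logpmf(x, alpha, sigma):
        """P(X = x) = S(x) - S(x+1), with S(x) = (1 + x/sigma)**(-alpha)."""
        s = lambda t: (1.0 + t / sigma) ** (-alpha)
        return np.log(s(x) - s(x + 1.0))

    def fit_discrete_lomax(counts):
        # Optimize log-parameters so that alpha and sigma stay positive
        nll = lambda p: -discrete_lomax_logpmf(counts, np.exp(p[0]), np.exp(p[1])).sum()
        res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
        return np.exp(res.x)   # (alpha, sigma)

    rng = np.random.default_rng(4)        # synthetic heavy-tailed crash counts
    data = np.floor(3 * rng.pareto(2.0, size=200))
    print(fit_discrete_lomax(data))
    ```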

  1. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    SciTech Connect

    Kao, S.P.; Chang, S.K.; Huang, H.C.

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity to simulate adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons in the phenomena of reflux-condensation, phase separation, and two-phase natural circulation are discussed.

  2. Prediction of Severe Eye Injuries in Automobile Accidents: Static and Dynamic Rupture Pressure of the Eye

    PubMed Central

    Kennedy, Eric A.; Voorhies, Katherine D.; Herring, Ian P.; Rath, Amber L.; Duma, Stefan M.

    2004-01-01

    The purpose of this paper is to determine the static and dynamic rupture pressures of 20 human and 20 porcine eyes. The static tests show an average rupture pressure of 1.00 ± 0.18 MPa for porcine eyes, while the average rupture pressure for human eyes was 0.36 ± 0.20 MPa. For dynamic loading, the average porcine rupture pressure was 1.64 ± 0.32 MPa, and the average rupture pressure for human eyes was 0.91 ± 0.29 MPa. Significant differences are found between the average rupture pressures from all four groups of tests (p = 0.01). A risk function has been developed and predicts a 50% risk of globe rupture at 1.02 MPa, 1.66 MPa, 0.35 MPa, and 0.90 MPa internal pressure for porcine static, porcine dynamic, human static, and human dynamic loading conditions, respectively. PMID:15319124
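
    The abstract does not state the functional form of the risk function; purely as an illustration, a logistic curve anchored at one of the reported 50% points behaves as follows (the slope is an assumption, so only the 50% crossing matches the paper):

    ```python
    import numpy as np

    def rupture_risk(pressure_mpa: float, p50: float, slope: float = 8.0) -> float:
        """Logistic risk curve pinned at the 50%-risk pressure p50 (MPa)."""
        return 1.0 / (1.0 + np.exp(-slope * (pressure_mpa - p50)))

    # Human dynamic loading: 50% risk reported at 0.90 MPa
    for p in (0.5, 0.9, 1.3):
        print(f"{p:.1f} MPa -> risk {rupture_risk(p, p50=0.90):.2f}")
    ```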

  3. Predictive Modeling in Adult Education

    ERIC Educational Resources Information Center

    Lindner, Charles L.

    2011-01-01

    The current economic crisis, a growing workforce, the increasing lifespan of workers, and demanding, complex jobs have made organizations highly selective in employee recruitment and retention. It is therefore important, to the adult educator, to develop models of learning that better prepare adult learners for the workplace. The purpose of…

  4. Liver Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Cervical Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Pancreatic Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Prostate Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Ovarian Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Lung Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Bladder Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Testicular Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Colorectal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Breast Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Esophageal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1991-01-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport and adhesion mechanisms, understanding and subsequently improved controlling of fouling achieved via appropriate manipulation of the various coupled, nonlinear processes in a complex fluid mechanics environment will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context, the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal burning industry. 11 figs.

  16. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1992-01-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport and adhesion mechanisms, understanding and subsequently improved controlling of fouling achieved via appropriate manipulation of the various coupled, nonlinear processes in a complex fluid mechanics environment will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context, the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal burning industry.

  17. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1991-01-01

    The primary objective of this work is the development of a comprehensive numerical model describing the time evolution of fouling under realistic heat exchanger conditions. As fouling is a complex interaction of gas flow, mineral transport and adhesion mechanisms, understanding and subsequently improved controlling of fouling achieved via appropriate manipulation of the various coupled, nonlinear processes in a complex fluid mechanics environment will undoubtedly help reduce the substantial operating costs incurred by the utilities annually, as well as afford greater flexibility in coal selection and reduce the emission of various pollutants. In a more specialized context, the numerical model to be developed as part of this activity will be used as a tool to address the interaction of the various mechanisms controlling deposit development in specific regimes or correlative relationships. These should prove of direct use to the coal burning industry.

  18. Irma multisensor predictive signature model

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Flynn, David S.; Wellfare, Michael R.; Richards, Mike; Prestwood, Lee

    1995-06-01

    The Irma synthetic signature model was one of the first high resolution synthetic infrared (IR) target and background signature models to be developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory (WL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. This and other improvements were released in Irma 2.2. Recently, Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed. Currently, upgrades are underway to include an active MMW channel. Designated Irma 4.0, this code will serve as a cornerstone of sensor fusion research in the laboratory from 6.1 concept development to 6.3 technology demonstration programs for precision guided munitions. Several significant milestones have been reached in this development process and are demonstrated. The Irma 4.0 software design has been developed and interim results are available. Irma is being developed to facilitate multi-sensor smart weapons research and development. It is currently in distribution to over 80 agencies within the U.S. Air Force, U.S. Army, U.S. Navy, ARPA, NASA, Department of Transportation, academia, and industry.

  19. Predicting and Modeling RNA Architecture

    PubMed Central

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  20. Preliminary Design Report for Modeling of Hydrogen Uptake in Fuel Rod Cladding During Severe Accidents

    SciTech Connect

    Siefken, Larry James

    1999-02-01

    Preliminary designs are described for models of hydrogen and oxygen uptake in fuel rod cladding during severe accidents. Calculation of the uptake involves the modeling of seven processes: (1) diffusion of oxygen from the bulk gas into the boundary layer at the external cladding surface, (2) diffusion from the boundary layer into the oxide layer, (3) diffusion from the inner surface of the oxide layer into the metallic part of the cladding, (4) uptake of hydrogen in the event that the cladding oxide layer is dissolved in a steam-starved region, (5) embrittlement of cladding due to hydrogen uptake, (6) cracking of cladding during quenching due to its embrittlement, and (7) release of hydrogen from the cladding after cracking of the cladding. An integral diffusion method is described for calculating the diffusion processes in the cladding. Experimental results are presented that show a rapid uptake of hydrogen in the event of dissolution of the oxide layer and a rapid release of hydrogen in the event of cracking of the oxide layer. These experimental results are used as a basis for calculating the rate of hydrogen uptake and the rate of hydrogen release. The uptake of hydrogen is limited to the equilibrium solubility calculated by applying Sievert's law. The uptake of hydrogen is an exothermic reaction that accelerates the heatup of a fuel rod. An embrittlement criterion is described that accounts for hydrogen and oxygen concentration and the extent of oxidation. A design is described for implementing the models for hydrogen and oxygen uptake and cladding embrittlement into the programming framework of the SCDAP/RELAP5 code. A test matrix is described for assessing the impact of the proposed models on the calculated behavior of fuel rods in severe accident conditions. This report is a revision and reissue of the report entitled "Preliminary Design Report for Modeling of Hydrogen Uptake in Fuel Rod Cladding During Severe Accidents."
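
    The equilibrium cap from Sievert's law is the square-root pressure relation, c_eq = K_s * sqrt(p_H2), with a temperature-dependent Sieverts constant. A toy version (the constant below is a placeholder, not the SCDAP/RELAP5 correlation):

    ```python
    import math

    def sieverts_solubility(p_h2_pa: float, k_s: float) -> float:
        """Equilibrium hydrogen solubility c_eq = K_s * sqrt(p_H2)."""
        return k_s * math.sqrt(p_h2_pa)

    # Hydrogen uptake would be limited to this equilibrium value:
    print(sieverts_solubility(p_h2_pa=1.0e5, k_s=1.0e-3))  # arbitrary units
    ```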

  1. A Course in... Model Predictive Control.

    ERIC Educational Resources Information Center

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  2. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  3. [Accidents in a population of 350 adolescents and young adults: circumstances, risk factors and prediction of recurrence].

    PubMed

    Marcelli, Daniel; Ingrand, Pierre; Delamour, Magali; Ingrand, Isabelle

    2010-06-01

    Accidents among adolescents and young adults are a public health issue, and they present two main characteristics: a strong association with sporting activities, and frequent recurrence. Sports accidents are generally relatively benign, but they show a marked tendency to recur. Young people engaging in sporting activities do not generally exhibit psychological traits different from those of the general population. In contrast, the other types of accident, particularly domestic and traffic accidents, appear to have specific features: they are often more serious, but above all they are associated with psychopathological features, including depression, anxiety, disorders due to life events, and thrill-seeking. These psychopathological features are strongly associated with recurrence. The authors describe a simple self-administered questionnaire (ECARR) designed to assess the risk of accident recurrence in this population. PMID:21513131

  4. Light-Weight Radioisotope Heater Unit final safety analysis report (LWRHU-FSAR): Volume 2: Accident Model Document (AMD)

    SciTech Connect

    Johnson, E.W.

    1988-10-01

    The purposes of this volume of the LWRHU SAR, the Accident Model Document (AMD), are to: identify all malfunctions, both singular and multiple, which can occur during the complete mission profile and could lead to release of the radioisotopic material outside the clad; provide estimates of the occurrence probabilities associated with these various accidents; evaluate the response of the LWRHU (or its components) to the resultant accident environments; and associate the potential event history with test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  5. A model for the release, dispersion and environmental impact of a postulated reactor accident from a submerged commercial nuclear power plant

    NASA Astrophysics Data System (ADS)

    Bertch, Timothy Creston

    1998-12-01

    Nuclear power plants are inherently suitable for submerged applications and could provide power to the shore power grid or support future underwater applications. The technology exists today, and the construction of a submerged commercial nuclear power plant may become desirable. A submerged reactor is safer for humans because the infinite supply of water for heat removal, particulate retention in the water column, sedimentation to the ocean floor and the inherent shielding provided by the aquatic environment would significantly mitigate the effects of a reactor accident. A better understanding of reactor operation in this new environment is required to quantify the radioecological impact and to determine the suitability of this concept. The impact of a release to the environment from a severe reactor accident is a new aspect of the field of marine radioecology. Current efforts have centered on the radioecological impacts of nuclear waste disposal, nuclear weapons testing fallout and shore nuclear plant discharges. This dissertation examines the environmental impact of a severe reactor accident in a submerged commercial nuclear power plant, modeling a postulated site on the Atlantic continental shelf adjacent to the United States. This effort models the effects of geography, decay, particle transport/dispersion, bioaccumulation and elimination, with associated dose commitment. The use of a source term equivalent to the release from Chernobyl allows comparison between the impacts of that accident and the postulated submerged commercial reactor plant accident. All input parameters are evaluated using sensitivity analysis. The effect of the release on marine biota is determined. Pathways to humans from gaseous radionuclides, consumption of contaminated marine biota and direct exposure as contaminated water reaches the shoreline are studied. The model developed by this effort predicts a significant mitigation of the radioecological impact of the reactor accident release.

  6. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1992-01-01

    In this reporting period, efforts were initiated to supplement the comprehensive flow field description obtained from the RNG-Spectral Element Simulations by incorporating, in a general framework, appropriate modules to model particle and condensable species transport to the surface. Specifically, a brief survey of the literature revealed the following possible mechanisms for transporting different ash constituents from the host gas to boiler tubes as deserving prominence in building the overall comprehensive model: (1) Flame-volatilized species, chiefly sulfates, are deposited on cooled boiler tubes via classical vapor diffusion. This mechanism is more efficient than particulate ash deposition, and as a result there is usually an enrichment of condensable salts, chiefly sulfates, in boiler deposits. (2) Particle diffusion (Brownian motion) may account for the deposition of some fine particles below 0.1 μm in diameter; in comparison with vapor diffusion and inertial particle deposition, however, the amount of material transported to the tubes via this route is probably small. (3) Eddy diffusion and thermophoretic and electrophoretic deposition mechanisms are likely to have a marked influence in transporting 0.1 to 5 μm particles from the host gas to cooled boiler tubes. (4) Inertial impaction is the dominant mechanism in transporting particles above 5 μm in diameter to water and steam tubes in pulverized-coal-fired boilers, where the typical flue gas velocity is between 10 and 25 m/s. Particles above 10 μm usually have kinetic energies in excess of what can be dissipated at impact (in the absence of a molten sulfate or viscous slag deposit), resulting in their entrainment in the host gas.

  7. Accuracy assessment of landslide prediction models

    NASA Astrophysics Data System (ADS)

    Othman, A. N.; Mohd, W. M. N. W.; Noraini, S.

    2014-02-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of the landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models, which consider different parameter combinations, are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
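
    Of the weighting techniques named, pairwise comparison (AHP) is the least obvious; weights are conventionally taken from the principal eigenvector of a reciprocal comparison matrix. A compact sketch with an invented 3x3 matrix (the paper weighs nine parameters):

    ```python
    import numpy as np

    def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
        """Weights from the principal eigenvector of a reciprocal AHP matrix."""
        vals, vecs = np.linalg.eig(pairwise)
        w = np.abs(vecs[:, np.argmax(vals.real)].real)
        return w / w.sum()

    # Illustrative judgments: slope vs lithology vs land use
    A = np.array([[1.0,   3.0,   5.0],
                  [1/3.0, 1.0,   2.0],
                  [1/5.0, 1/2.0, 1.0]])
    print(ahp_weights(A).round(3))   # roughly [0.65, 0.23, 0.12]
    ```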

  8. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation, it has not been possible to determine the emissions of radioactive material directly. However, during the following days and weeks radionuclides of 137-Caesium and 131-Iodine (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently also to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, it has been possible to estimate the time-dependent source-term for the fourteen days following the accident using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
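
    The abstract does not detail the estimation procedure, but the generic source-receptor idea is this: forward runs with unit emissions per release interval give a sensitivity matrix, which is then inverted against the measured concentrations. A sketch with non-negative least squares and fully synthetic data:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    # M[i, j]: modelled concentration at sample i per unit release in day j,
    # as would come from unit-emission Flexpart runs (synthetic here).
    M = rng.uniform(0.0, 1.0, size=(40, 14))
    true_q = rng.uniform(0.0, 5.0, size=14)          # "true" daily release rates
    y = M @ true_q + 0.05 * rng.normal(size=40)      # noisy observations

    q_est, _ = nnls(M, y)                            # enforce q >= 0
    print(np.round(q_est, 2))
    ```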

  9. Analysis of 320 coal mine accidents using structural equation modeling with unsafe conditions of the rules and regulations as exogenous variables.

    PubMed

    Zhang, Yingyu; Shao, Wei; Zhang, Mengjia; Li, Hejun; Yin, Shijiu; Xu, Yingjun

    2016-07-01

    Mining has historically been considered an inherently high-risk industry worldwide. Deaths caused by coal mine accidents exceed the sum of deaths from all other accidents in China. Statistics of 320 coal mine accidents in Shandong province show that all of the accidents contain indicators of "unsafe conditions of the rules and regulations," with a frequency of 1590, accounting for 74.3% of the total frequency of 2140. "Unsafe behaviors of the operator" is another important contributory factor, which mainly includes "operator error" and "venturing into dangerous places." A systems analysis approach was applied by using structural equation modeling (SEM) to examine the interactions between the contributory factors of coal mine accidents. The analysis of the results leads to three conclusions. (i) "Unsafe conditions of the rules and regulations" affect the "unsafe behaviors of the operator," "unsafe conditions of the equipment," and "unsafe conditions of the environment." (ii) The three influencing factors of coal mine accidents (with the frequency of effect relation in descending order) are "lack of safety education and training," "rules and regulations of safety production responsibility," and "rules and regulations of supervision and inspection." (iii) The three influenced factors (with the frequency in descending order) of coal mine accidents are "venturing into dangerous places," "poor workplace environment," and "operator error." PMID:27085591

  10. Model for halftone color prediction from microstructure

    NASA Astrophysics Data System (ADS)

    Agar, A. U.

    2000-12-01

    In this work, we take a microstructure-model-based approach to the problem of color prediction of halftones created using an inkjet printer. We assume absorption and scattering of light through the colorant layers and model the subsurface light scattering in the substrate by a Gaussian point spread function. We restrict our analysis to transparent substrates. To model the absorption and scattering of light through the colorant layers, we employ the Kubelka-Munk color mixing model. To model the scattering in the substrate and to predict the spectral distribution, we use a wavelength-dependent version of the reflection prediction model developed by Ruckdeschel and Hauser. Using spectral distributions and ink weight measurements for transparencies completely and homogeneously coated with colorants, we compute the absorption and scattering spectra of the colorants using the Kubelka-Munk theory. We train our model using measured spectral distributions and synthesized microstructure images of primary ramps printed on transparent media. For each patch in the primary ramp, we synthesize a high-resolution halftone microstructure image from the halftone bitmap assuming dot profiles with Gaussian roll-offs, from which we compute a high-resolution transmission image using the Kubelka-Munk theory and the absorption and scattering spectra of the colorants. We then convolve this transmission image with the Gaussian point spread function of the transparent substrate to predict the average spectral distribution of the halftone. We use our model to predict the spectral distribution of a secondary ramp printed on the same media.
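
    A drastically simplified version of that prediction chain (dot profile with Gaussian roll-off, a pure-absorption Beer-Lambert stand-in for the full Kubelka-Munk treatment, then convolution with the substrate's Gaussian PSF) shows the structure; all constants are invented:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    bitmap = np.zeros((256, 256))
    bitmap[::8, ::8] = 1.0                            # sparse halftone dots
    dot_profile = gaussian_filter(bitmap, sigma=1.5)  # Gaussian dot roll-off

    absorbance = 1.2 * dot_profile                    # K * ink amount (assumed)
    transmission = np.exp(-absorbance)                # absorption-only layer

    # Lateral light scattering in the substrate as a Gaussian PSF:
    predicted = gaussian_filter(transmission, sigma=3.0)
    print(f"mean predicted transmission: {predicted.mean():.3f}")
    ```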

  11. Predictive modelling of boiler fouling

    SciTech Connect

    Not Available

    1992-01-01

    As this study incorporates, in a general framework, appropriate modules to model condensable-species transport to the surface along with particles, a suitable solver for the reaction component of the species equations (with regard to stability, stiffness, economy, etc.) is clearly needed. It is generally agreed in the literature that the major problem associated with the simultaneous integration of large sets of chemical kinetic rate equations is stiffness. Although stiffness does not have a simple definition, it is characterized by widely varying time constants. For example, in hydrogen-air combustion, the induction time is of the order of microseconds whereas the nitric oxide formation time is of the order of milliseconds. These widely different time constants present classical methods (such as the popular explicit Runge-Kutta method) with the following difficulty: to ensure stability of the numerical solution, these methods are restricted to very short time steps determined by the smallest time constant, yet the time for all chemical species to reach near-equilibrium values is determined by the longest time constant. As a result, classical methods require excessive amounts of computer time to solve stiff systems of ordinary differential equations (ODEs). Several approaches for the solution of stiff ODEs have been proposed. Of these techniques, the general-purpose codes EPISODE and LSODE are regarded as the best available "packaged" codes for the solution of stiff systems of ODEs. However, although these codes may be the best available for solving an arbitrary system of ODEs, it may be possible to construct superior methods for solving the particular system of ODEs governing a specific problem. In this regard, an exponentially fitted method, CREK1D, deserves special mention and is described briefly.
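
    The stiffness issue described above is easy to demonstrate with Robertson's classic three-species kinetics test case (an illustration, not taken from the report): an implicit BDF solver of the kind used in LSODE handles it in a few hundred steps, while an explicit Runge-Kutta method is forced into tiny stability-limited steps.

        # Robertson's stiff kinetics problem: rate constants spanning ~9 orders
        # of magnitude give the widely varying time constants described above.
        from scipy.integrate import solve_ivp

        def robertson(t, y):
            y1, y2, y3 = y
            return [-0.04 * y1 + 1e4 * y2 * y3,
                     0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                     3e7 * y2 ** 2]

        y0, t_span = [1.0, 0.0, 0.0], (0.0, 100.0)
        implicit = solve_ivp(robertson, t_span, y0, method="BDF", rtol=1e-6, atol=1e-10)
        explicit = solve_ivp(robertson, t_span, y0, method="RK45", rtol=1e-6, atol=1e-10)
        print(implicit.t.size, explicit.t.size)  # BDF: hundreds of steps; RK45: orders of magnitude more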

  12. Radiological health effects models for nuclear power plant accident consequence analysis.

    PubMed

    Evans, J S; Moeller, D W

    1989-04-01

    Improved health effects models have been developed for assessing the early effects, late somatic effects and genetic effects that might result from low-LET radiation exposures to populations following a major accident in a nuclear power plant. All the models have been developed in such a way that the dynamics of population risks can be analyzed. Estimates of life years lost and the duration of illnesses were generated and a framework recommended for summarizing health impacts. Uncertainty is addressed by providing models for upper, central and lower estimates of most effects. The models are believed to be a significant improvement over the models used in the U.S. Nuclear Regulatory Commission's Reactor Safety Study, and they can easily be modified to reflect advances in scientific understanding of the health effects of ionizing radiation. PMID:2925380

  13. Predictive Validation of an Influenza Spread Model

    PubMed Central

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and the epidemics observed from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive ability.

  14. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    SciTech Connect

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a microcomputer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  15. Prediction of PARP Inhibition with Proteochemometric Modelling and Conformal Prediction.

    PubMed

    Cortés-Ciriano, Isidro; Bender, Andreas; Malliavin, Thérèse

    2015-06-01

    Poly(ADP-ribose) polymerases (PARPs) play a key role in DNA damage repair. PARP inhibitors act as chemo- and radio-sensitizers and thus potentiate the cytotoxicity of DNA damaging agents. Although PARP inhibitors are currently investigated as chemotherapeutic agents, their cross-reactivity with other members of the PARP family remains unclear. Here, we apply Proteochemometric Modelling (PCM) to model the activity of 181 compounds on 12 human PARPs. We demonstrate that PCM (R0^2 test = 0.65-0.69; RMSEtest = 0.95-1.01 pIC50 units) displays higher performance on the test set (interpolation) than Family QSAR and Family QSAM (Tukey's HSD, α = 0.05), and outperforms Inductive Transfer of knowledge among targets (Tukey's HSD, α = 0.05). We benchmark the predictive signal of 8 amino acid and 11 full-protein sequence descriptors, finding that all of them (except SOCN) perform at the same level of statistical significance (Tukey's HSD, α = 0.05). The extrapolation power of PCM to new compounds (RMSE = 1.02±0.80 pIC50 units) and targets (RMSE = 1.03±0.50 pIC50 units) is comparable to interpolation, although the extrapolation ability is not uniform across the chemical and the target space. For this reason, we also provide confidence intervals calculated with conformal prediction. In addition, we present the R package conformal, which permits the calculation of confidence intervals for regression and classification caret models. PMID:27490382

  16. A review and test of predictive models for the bioaccumulation of radiostrontium in fish.

    PubMed

    Smith, J T; Sasina, N V; Kryshev, A I; Belova, N V; Kudelsky, A V

    2009-11-01

    Empirical relations between the (90)Sr concentration factor (CF) and the calcium concentration in freshwater aquatic systems have previously been determined in studies based on data obtained prior to the Chernobyl accident. The purpose of the present research is to review and compare these models, and to test them against a database of post-Chernobyl measurements from rivers and lakes in Ukraine, Russia, Belarus and Finland. It was found that two independently developed models, based on pre-Chernobyl empirical data, are in close agreement with each other, and with empirical data. Testing of both models against new data obtained after the Chernobyl accident confirms the models' predictive ability. An investigation of the influence of fish size on (90)Sr accumulation showed no significant relationship, though the data set was somewhat limited. PMID:19656592
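
    The kind of empirical CF-calcium relation reviewed here can be fitted in a few lines; the inverse power-law form and the data points below are illustrative assumptions, not values from the paper.

        # Fit CF = a * [Ca]^-b to (calcium, concentration factor) observations.
        import numpy as np
        from scipy.optimize import curve_fit

        def cf_model(ca, a, b):
            return a * ca ** (-b)       # CF falls as water calcium rises

        ca_obs = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])       # mg Ca / L (hypothetical)
        cf_obs = np.array([900.0, 380.0, 200.0, 95.0, 50.0, 26.0])  # L/kg (hypothetical)

        (a, b), _ = curve_fit(cf_model, ca_obs, cf_obs, p0=(1000.0, 1.0))
        print(f"CF ~ {a:.0f} * [Ca]^-{b:.2f}")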

  17. A simplified model for calculating early offsite consequences from nuclear reactor accidents

    SciTech Connect

    Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.

    1988-07-01

    A personal computer-based model, SMART, has been developed that uses an integral approach for calculating early offsite consequences from nuclear reactor accidents. The solution procedure uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast-running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity/uncertainty analyses using SMART are presented. 34 refs., 21 figs., 24 tabs.
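
    As an illustration of the kind of closed-form air-concentration expression such an integral approach builds on, the standard Gaussian plume formula for ground-level concentration can be evaluated directly. The dispersion curves below are Briggs-type rural fits and the release parameters are placeholders, not values from the SMART code.

        # Ground-level Gaussian plume concentration for a steady release of
        # strength Q (Bq/s) at effective height H (m) in wind speed u (m/s).
        import numpy as np

        def plume_concentration(x, y, Q=1e12, u=5.0, H=30.0):
            sigma_y = 0.08 * x / np.sqrt(1 + 1e-4 * x)      # Briggs-type rural fits
            sigma_z = 0.06 * x / np.sqrt(1 + 1.5e-3 * x)
            return (Q / (2 * np.pi * sigma_y * sigma_z * u)
                    * np.exp(-y ** 2 / (2 * sigma_y ** 2))
                    * 2 * np.exp(-H ** 2 / (2 * sigma_z ** 2)))  # ground reflection

        for x in (500.0, 2000.0, 20000.0):                  # downwind distances, m
            print(x, plume_concentration(x, 0.0))           # Bq/m^3 on the plume axis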

  18. Solar Weather Event Modelling and Prediction

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Zuccarello, Francesca; Guglielmino, Salvatore L.; Bothmer, Volker; Lilensten, Jean; Noci, Giancarlo; Storini, Marisa; Lundstedt, Henrik

    2009-11-01

    Key drivers of solar weather and mid-term solar weather are reviewed by considering a selection of relevant physics- and statistics-based scientific models, as well as a selection of related prediction models, in order to provide an updated operational scenario for space weather applications. The characteristics and outcomes of the considered scientific and prediction models indicate that they cope only partially with the complex nature of solar activity, owing to the lack of detailed knowledge of the underlying physics. This is indicated by the fact that, on the one hand, scientific models based on chaos theory and non-linear dynamics reproduce the observed features better and, on the other hand, prediction models based on statistics and artificial neural networks perform better. To date, solar weather prediction success at most time and spatial scales is far from satisfactory. However, forthcoming ground- and space-based high-resolution observations, together with the application of advanced mathematical approaches to the analysis of diachronic solar observations, can add fundamental pieces to the modelling and prediction frameworks, and such observations are a must for providing comprehensive and homogeneous data sets.

  19. Posterior predictive checking of multiple imputation models.

    PubMed

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. PMID:25939490
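
    A toy version of the PPC recipe, for a normal model with a noninformative prior and the sample variance as the test quantity, makes the p-value computation concrete (illustrative only, not the authors' imputation setting).

        # Posterior predictive p-value: the fraction of replicated datasets whose
        # test statistic is at least as extreme as the observed one.
        import numpy as np

        rng = np.random.default_rng(1)
        observed = rng.normal(0.0, 1.6, size=100)    # stand-in for the real data
        n = observed.size

        extreme, n_rep = 0, 4000
        for _ in range(n_rep):
            # Standard noninformative-prior posterior draws for (sigma, mu).
            sigma = observed.std(ddof=1) * np.sqrt((n - 1) / rng.chisquare(n - 1))
            mu = rng.normal(observed.mean(), sigma / np.sqrt(n))
            replicate = rng.normal(mu, sigma, size=n)
            extreme += replicate.var(ddof=1) >= observed.var(ddof=1)

        ppp = extreme / n_rep
        print(f"PPP = {ppp:.2f}  (values near 0 or 1 flag model misfit)")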

  20. An improved model for prediction of resuspension.

    PubMed

    Maxwell, Reed M; Anspaugh, Lynn R

    2011-12-01

    A complete historical dataset of radionuclide resuspension factors is presented. These data span six orders of magnitude in time (ranging from 0.1 to 73,000 d), encompass more than 300 individual values, and combine observations from events on three continents. These data were then used to derive improved empirical models that can be used to predict resuspension of trace materials after their deposit on the ground. Data-fitting techniques were used to derive models of various types, together with an estimate of uncertainty in model prediction. Two models were found to be suitable: a power law and the modified Anspaugh et al. model, which is a double exponential. Though statistically the power-law model provides the best metrics of fit, the modified Anspaugh model is deemed the more appropriate due to its better fit to data at early times and its ease of implementation in terms of closed analytical integrals. PMID:22048490
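
    Both candidate forms are straightforward to fit. The sketch below fits the power law as a line in log-log space; the modified Anspaugh et al. double-exponential form is noted in a comment, since fitting it robustly needs data-specific starting values. The data points are invented for illustration.

        # Fit a power-law resuspension factor K(t) = a * t^b (b < 0).
        import numpy as np

        t = np.array([0.1, 1.0, 10.0, 100.0, 1e3, 1e4, 7.3e4])      # days
        K = np.array([1e-4, 1e-5, 1e-6, 3e-8, 1e-9, 3e-10, 1e-10])  # 1/m, hypothetical

        b, log_a = np.polyfit(np.log(t), np.log(K), 1)    # slope, intercept
        print(f"K(t) ~ {np.exp(log_a):.2e} * t^{b:.2f}")

        # The modified Anspaugh et al. alternative is a double exponential plus a
        # long-term constant, K(t) = a1*exp(-l1*t) + a2*exp(-l2*t) + c, whose
        # closed-form time integral makes dose integration convenient.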

  1. Predicting Naming Latencies with an Analogical Model

    ERIC Educational Resources Information Center

    Chandler, Steve

    2008-01-01

    Skousen's (1989, Analogical modeling of language, Kluwer Academic Publishers, Dordrecht) Analogical Model (AM) predicts behavior such as spelling pronunciation by comparing the characteristics of a test item (a given input word) to those of individual exemplars in a data set of previously encountered items. While AM and other exemplar-based models…

  2. Mathematical model for predicting human vertebral fracture

    NASA Technical Reports Server (NTRS)

    Benedict, J. V.

    1973-01-01

    A mathematical model has been constructed to predict the dynamic response of tapered, curved beam columns, inasmuch as the human spine closely resembles this form. The model takes into consideration the effects of impact force, mass distribution, and material properties. Solutions were verified by dynamic tests on a curved, tapered, elastic polyethylene beam.

  3. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Meier, Susan M.; Nissley, David M.; Sheffler, Keith D.; Cruse, Thomas A.

    1991-01-01

    A thermal barrier coated (TBC) turbine component design system, including an accurate TBC life prediction model, is needed to realize the full potential of available TBC engine performance and/or durability benefits. The objective of this work, which was sponsored in part by NASA, was to generate a life prediction model for electron beam - physical vapor deposited (EB-PVD) zirconia TBC. Specific results include EB-PVD zirconia mechanical and physical properties, coating adherence strength measurements, interfacial oxide growth characteristics, quantitative cyclic thermal spallation life data, and a spallation life model.

  4. The R-γ transition prediction model

    NASA Astrophysics Data System (ADS)

    Goldberg, Uriel C.; Batten, Paul; Peroomian, Oshin; Chakravarthy, Sukumar

    2015-01-01

    The Rt turbulence closure (Goldberg 2003) is coupled with an intermittency transport equation, γ, to enable prediction of laminar-to-turbulent flow by-pass transition. The model is not correlation-based and is completely topography-parameter-free, thus ready for use in parallelized Computational Fluid Dynamics (CFD) solvers based on unstructured book-keeping. Several examples compare the R-γ model's performance with experimental data and with predictions by the Langtry-Menter γ-Reθ transition closure (2009). Like the latter, the R-γ model is very sensitive to freestream turbulence levels, limiting its utility for engineering purposes.

  5. Ice jam flooding: a location prediction model

    NASA Astrophysics Data System (ADS)

    Collins, H. A.

    2009-12-01

    Flooding created by ice jamming is a climatically dependent natural hazard that frequently affects cold regions with disastrous results. Basic known physical characteristics that combine in the landscape to create an ice jam flood are modeled on the Cattaraugus Creek Watershed, located in Western New York State. Terrain analysis of topographic and built-environment features is conducted using Geographic Information Systems, together with the locations of historic ice jam flooding events, in order to predict the locations of ice jam flooding. The purpose of this modeling is to establish a broadly applicable watershed-scale model for predicting the probable locations of ice jam flooding events.

  6. Low-power and shutdown models for the accident sequence precursor (ASP) program

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.

    1997-02-01

    The US Nuclear Regulatory Commission (NRC) has been using full-power, Level 1, limited-scope risk models for the Accident Sequence Precursor (ASP) program for over fifteen years. These models have evolved and matured over the years, as have probabilistic risk assessment (PRA) and computer technologies. Significant upgrading activities have been undertaken over the past three years, with involvement from the Offices of Nuclear Reactor Regulation (NRR), Analysis and Evaluation of Operational Data (AEOD), and Nuclear Regulatory Research (RES), and several national laboratories. Part of these activities was an RES-sponsored feasibility study investigating the ability to extend the ASP models to include contributors to core damage from events initiated with the reactor at low power or shutdown (LP/SD), covering both internal and external events. This paper presents only the LP/SD internal-event modeling efforts.

  7. Opportunities for the testing of environmental transport models using data obtained following the Chernobyl accident

    SciTech Connect

    Hoffman, F.O.; Thiessen, K.M.; Watkins, B.

    1996-01-01

    The aftermath of the Chernobyl accident has provided a unique opportunity to collect data sets specifically for the purpose of model testing, and with these data to create scenarios against which environmental transport models may be tested in a format constituting a blind test. This article serves as an introduction to three test scenarios designed for testing models at the process level: (1) surface water contamination with radionuclides initially deposited onto soils; (2) contamination of different aquatic media and biota due to fallout of radionuclides into a body of water; and (3) atmospheric resuspension of radionuclides from contaminated land surfaces. These scenarios are the first such tests to use data sets collected in the former Soviet Union. Interested modelers are invited to participate in the test exercises by making calculations for any of these test scenarios. Information on participation is included. 9 refs.

  8. Opportunities for the testing of environmental transport models using data obtained following the Chernobyl accident.

    PubMed

    Hoffman, F O; Thiessen, K M; Watkins, B

    1996-01-01

    The aftermath of the Chernobyl accident has provided a unique opportunity to collect data sets specifically for the purpose of model testing, and with these data to create scenarios against which environmental transport models may be tested in a format constituting a blind test. This article serves as an introduction to three test scenarios designed for testing models at the process level: (1) surface water contamination with radionuclides initially deposited onto soils; (2) contamination of different aquatic media and biota due to fallout of radionuclides into a body of water; and (3) atmospheric resuspension of radionuclides from contaminated land surfaces. These scenarios are the first such tests to use data sets collected in the former Soviet Union. Interested modelers are invited to participate in the test exercises by making calculations for any of these test scenarios. Information on participation is included. PMID:7499152

  9. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.

    1988-01-01

    This report describes work performed to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low pressure plasma sprayed NiCrAlY bond coat, an air plasma sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into 3 technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.

  10. Are animal models predictive for humans?

    PubMed Central

    2009-01-01

    It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics. PMID:19146696

  11. Predictive models of implicit and explicit attitudes.

    PubMed

    Perugini, Marco

    2005-03-01

    Explicit attitudes have long been assumed to be central factors influencing behaviour. A recent stream of studies has shown that implicit attitudes, typically measured with the Implicit Association Test (IAT), can also predict a significant range of behaviours. This contribution is focused on testing different predictive models of implicit and explicit attitudes. In particular, three main models can be derived from the literature: (a) additive (the two types of attitudes explain different portions of variance in the criterion), (b) double dissociation (implicit attitudes predict spontaneous whereas explicit attitudes predict deliberative behaviour), and (c) multiplicative (implicit and explicit attitudes interact in influencing behaviour). This paper reports two studies testing these models. The first study (N = 48) is about smoking behaviour, whereas the second study (N = 109) is about preferences for snacks versus fruit. In the first study, the multiplicative model is supported, whereas the double dissociation model is supported in the second study. The results are discussed in light of the importance of focusing on different patterns of prediction when investigating the directive influence of implicit and explicit attitudes on behaviours. PMID:15901390

  12. Effects of improved modeling on best estimate BWR severe accident analysis

    SciTech Connect

    Hyman, C.R.; Ott, L.J.

    1984-01-01

    Since 1981, ORNL has completed best estimate studies analyzing several dominant BWR accident scenarios. These scenarios were identified by early Probabilistic Risk Assessment (PRA) studies and detailed ORNL analysis complements such studies. In performing these studies, ORNL has used the MARCH code extensively. ORNL investigators have identified several deficiencies in early versions of MARCH with regard to BWR modeling. Some of these deficiencies appear to have been remedied by the most recent release of the code. It is the purpose of this paper to identify several of these deficiencies. All the information presented concerns the degraded core thermal/hydraulic analysis associated with each of the ORNL studies. This includes calculations of the containment response. The period of interest is from the time of permanent core uncovery to the end of the transient. Specific objectives include the determination of the extent of core damage and timing of major events (i.e., onset of Zr-H2O reaction, initial clad/fuel melting, loss of control blade structure, etc.). As mentioned previously the major analysis tool used thus far was derived from an early version of MARCH. BWRs have unique features which must be modeled for best estimate severe accident analysis. ORNL has developed and incorporated into its version of MARCH several improved models. These include (1) channel boxes and control blades, (2) SRV actuations, (3) vessel water level, (4) multi-node analysis of in-vessel water inventory, (5) comprehensive hydrogen and water properties package, (6) first order correction to the ideal gas law, and (7) separation of fuel and cladding. Ongoing and future modeling efforts are required. These include (1) detailed modeling for the pressure suppression pool, (2) incorporation of B4C/steam reaction models, (3) phenomenological model of corium mass transport, and (4) advanced corium/concrete interaction modeling. 10 references, 17 figures, 1 table.

  13. Toward predictive models of mammalian cells.

    PubMed

    Ma'ayan, Avi; Blitzer, Robert D; Iyengar, Ravi

    2005-01-01

    Progress in experimental and theoretical biology is likely to provide us with the opportunity to assemble detailed predictive models of mammalian cells. Using a functional format to describe the organization of mammalian cells, we describe current approaches for developing qualitative and quantitative models using data from a variety of experimental sources. Recent developments and applications of graph theory to biological networks are reviewed. The use of these qualitative models to identify the topology of regulatory motifs and functional modules is discussed. Cellular homeostasis and plasticity are interpreted within the framework of balance between regulatory motifs and interactions between modules. From this analysis we identify the need for detailed quantitative models on the basis of the representation of the chemistry underlying the cellular process. The use of deterministic, stochastic, and hybrid models to represent cellular processes is reviewed, and an initial integrated approach for the development of large-scale predictive models of a mammalian cell is presented. PMID:15869393

  14. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
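
    The GM(1,1) half of the proposed hybrid is a compact algorithm: accumulate the series, fit the grey differential equation by least squares, and difference the reconstruction to forecast. A minimal sketch follows (illustrative data; the ARIMA combination is not reproduced here).

        # GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series and forecast.
        import numpy as np

        def gm11_forecast(x0):
            x1 = np.cumsum(x0)                            # accumulated series (AGO)
            z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + 1)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat)                        # inverse AGO

        series = np.array([112.0, 119.0, 127.0, 138.0, 151.0])
        print(gm11_forecast(series))   # last entry is the one-step-ahead forecast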

  15. Mechanistic prediction of fission product release under normal and accident conditions: key uncertainties that need better resolution

    SciTech Connect

    Rest, J.

    1983-09-01

    A theoretical model has been used for predicting the behavior of fission gas and volatile fission products (VFPs) in UO2-base fuels during steady-state and transient conditions. This model represents an attempt to develop an efficient predictive capability for the full range of possible reactor operating conditions. Fission products released from the fuel are assumed to reach the fuel surface by successively diffusing (via atomic and gas-bubble mobility) from the grains to grain faces and then to the grain edges, where the fission products are released through a network of interconnected tunnels of fission-gas induced and fabricated porosity. The model provides for a multi-region calculation and uses only one size class to characterize a distribution of fission gas bubbles.

  16. Mechanistic prediction of fission-product release under normal and accident conditions: key uncertainties that need better resolution. [PWR; BWR

    SciTech Connect

    Rest, J.

    1983-09-01

    A theoretical model has been used for predicting the behavior of fission gas and volatile fission products (VFPs) in UO2-base fuels during steady-state and transient conditions. This model represents an attempt to develop an efficient predictive capability for the full range of possible reactor operating conditions. Fission products released from the fuel are assumed to reach the fuel surface by successively diffusing (via atomic and gas-bubble mobility) from the grains to grain faces and then to the grain edges, where the fission products are released through a network of interconnected tunnels of fission-gas induced and fabricated porosity. The model provides for a multi-region calculation and uses only one size class to characterize a distribution of fission gas bubbles.

  17. Predictive model for segmented poly(urea)

    NASA Astrophysics Data System (ADS)

    Gould, P. J.; Cornish, R.; Frankl, P.; Lewtas, I.

    2012-08-01

    Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) - a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data and the equation of state and constitutive model predicts response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  18. Modeling & analysis of criticality-induced severe accidents during refueling for the Advanced Neutron Source Reactor

    SciTech Connect

    Georgevich, V.; Kim, S.H.; Taleyarkhan, R.P.; Jackson, S.

    1992-10-01

    This paper describes work done at the Oak Ridge National Laboratory (ORNL) to evaluate the potential for, and resulting consequences of, a hypothetical criticality accident during refueling of the 330-MW Advanced Neutron Source (ANS) research reactor. The development of an analytical capability is described. Modeling and problem formulation were conducted using concepts of reactor neutronic theory for determining power level escalation, coupled with ORIGEN and MELCOR code simulations for radionuclide buildup and containment transport. Gaussian plume transport modeling was done for determining off-site radiological consequences. Nuances associated with modeling this blast-type scenario are described. Analysis results for ANS containment response under a variety of postulated scenarios and containment failure modes are presented. It is demonstrated that individuals at the reactor site boundary will not receive doses beyond regulatory limits for any of the containment configurations studied.

  19. Mathematical modeling of ignition of woodlands resulted from accident on the pipeline

    NASA Astrophysics Data System (ADS)

    Perminov, V. A.; Loboda, E. L.; Reyno, V. V.

    2014-11-01

    Accidents at pipeline sites are accompanied by environmental damage, economic loss, and sometimes loss of life. In this paper, the sizes of the possible ignition zones arising from emergency situations on pipelines located close to forest, accompanied by the appearance of fireballs, are calculated. Using mathematical modeling, the maximum size of the vegetation ignition zones resulting from accidental releases of flammable substances is computed. Within the context of a general mathematical model of forest fires, a new mathematical formulation and a method for the numerical solution of the forest fire modeling problem are suggested. The boundary-value problem is solved numerically using the method of splitting according to physical processes. Dependences of the ignited forest-fuel area on the amount of leaked flammable substance and the moisture content of the vegetation are presented.

  20. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
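
    A minimal skill-weighted combination, in the spirit of the ensemble-averaging methods listed above, weights each wake model by its inverse squared historical error. The numbers are synthetic placeholders, not outputs of the NASA or DLR wake models.

        # Inverse-variance weighting of several model predictions of one quantity.
        import numpy as np

        predictions = np.array([52.0, 47.0, 55.0])   # e.g. vortex lateral position, m
        rmse_hist = np.array([4.0, 6.0, 9.0])        # each model's historical RMSE, m

        weights = 1.0 / rmse_hist ** 2
        weights /= weights.sum()
        ensemble = weights @ predictions
        spread = np.sqrt(weights @ (predictions - ensemble) ** 2)
        print(f"ensemble = {ensemble:.1f} m  +/- {spread:.1f} m")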

  1. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    PubMed

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving Q² = 0.66-0.78 for regression models and total accuracies Ac = 0.85-0.91 for classification models. Predictions for the external evaluation sets achieved accuracies in the range of 0.82-0.88 (for active/inactive classification) and Q² = 0.62-0.76 for regression. The method proved to be a potential tool for estimating the IC₅₀ of new drug-like candidates at early stages of drug development. PMID:22023934
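
    The evaluation protocol described here (a random forest regression scored by leave-one-out cross-validation) can be sketched with scikit-learn; the descriptor matrix and activities below are synthetic stand-ins for the PDE-4 data.

        # Random forest QSAR with leave-one-out cross-validation and Q^2.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 20))                 # 40 compounds x 20 descriptors
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=40)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
        q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"LOO Q^2 = {q2:.2f}")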

  2. A Predictive Model for Root Caries Incidence.

    PubMed

    Ritter, André V; Preisser, John S; Puranik, Chaitanya P; Chung, Yunro; Bader, James D; Shugars, Daniel A; Makhija, Sonia; Vollmer, William M

    2016-01-01

    This study aimed to find the set of risk indicators best able to predict root caries (RC) incidence in caries-active adults utilizing data from the Xylitol for Adult Caries Trial (X-ACT). Five logistic regression models were compared with respect to their predictive performance for incident RC using data from placebo-control participants with exposed root surfaces at baseline and from two study centers with ancillary data collection (n = 155). Prediction performance was assessed from baseline variables and after including ancillary variables [smoking, diet, use of removable partial dentures (RPD), toothbrush use, income, education, and dental insurance]. A sensitivity analysis added treatment to the models for both the control and treatment participants (n = 301) to predict RC for the control participants. Forty-nine percent of the control participants had incident RC. The model including the number of follow-up years at risk, the number of root surfaces at risk, RC index, gender, race, age, and smoking resulted in the best prediction performance, having the highest AUC and lowest Brier score. The sensitivity analysis supported the primary analysis and gave slightly better performance summary measures. The set of risk indicators best able to predict RC incidence included an increased number of root surfaces at risk and increased RC index at baseline, followed by white race and nonsmoking, which were strong nonsignificant predictors. Gender, age, and increased number of follow-up years at risk, while included in the model, were also not statistically significant. The inclusion of health, diet, RPD use, toothbrush use, income, education, and dental insurance variables did not improve the prediction performance. PMID:27160516
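
    The model-comparison criteria used in this study (AUC, higher is better, and Brier score, lower is better, for a logistic model) can be reproduced in outline; the predictors and outcomes below are synthetic stand-ins for the X-ACT variables.

        # Score a logistic risk model by AUC and Brier score on held-out data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss, roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        X = rng.normal(size=(400, 6))                 # six hypothetical risk indicators
        logit = 0.9 * X[:, 0] + 0.6 * X[:, 1] - 0.4 * X[:, 2]
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # incident RC yes/no

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
        p = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        print(f"AUC = {roc_auc_score(y_te, p):.2f}  Brier = {brier_score_loss(y_te, p):.3f}")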

  3. Development of Models for Predicting the Predominant Taste and Odor Compounds in Taihu Lake, China

    PubMed Central

    Sun, Xiaoxue; Deng, Xuwei; Niu, Yuan; Xie, Ping

    2012-01-01

    Taste and odor (T&O) problems, which have adversely affected the quality of water supplied to millions of residents, have repeatedly occurred in Taihu Lake (e.g., a serious odor accident occurred in 2007). Because these accidents are difficult for water resource managers to forecast in a timely manner, there is an urgent need to develop optimum models to predict these T&O problems. For this purpose, various biotic and abiotic environmental parameters were monitored monthly for one year at 30 sites across Taihu Lake. This is the first investigation of this huge lake to sample T&O compounds at the whole-lake level. Certain phytoplankton taxa were important variables in the models; for instance, the concentrations of particle-bound 2-methylisoborneol (p-MIB) were correlated with the presence of Oscillatoria, whereas those of p-β-cyclocitral and p-β-ionone were correlated with Microcystis levels. Abiotic factors such as nitrogen (TN, TDN, NO3-N, and NO2-N), pH, DO, COND, COD and Chl-a also contributed significantly to the T&O predictive models. The dissolved (d) T&O compounds were related to both the algal biomass and certain abiotic environmental factors, whereas the particle-bound (p) T&O compounds were more strongly related to the algal presence. We also tested the validity of these models using an independent data set previously collected from Taihu Lake in 2008. In comparing the concentrations of the T&O compounds observed in 2008 with those predicted from our models, we found that most of the predicted data points fell within the 90% confidence intervals of the observed values. This result supported the validity of these models in the studied system. These models, based on easily collected environmental data, will be of practical value to the water resource managers of Taihu Lake for evaluating the probability of T&O accidents. PMID:23284835

  4. Influence of the meteorological input on the atmospheric transport modelling with FLEXPART of radionuclides from the Fukushima Daiichi nuclear accident.

    PubMed

    Arnold, D; Maurer, C; Wotawa, G; Draxler, R; Saito, K; Seibert, P

    2015-01-01

    In the present paper the role of precipitation as FLEXPART model input is investigated for one possible release scenario of the Fukushima Daiichi accident. Precipitation data from the European Centre for Medium-Range Weather Forecasts (ECMWF), NOAA's National Center for Environmental Prediction (NCEP), the Japan Meteorological Agency's (JMA) mesoscale analysis and a JMA radar-rain gauge precipitation analysis product were utilized. The Fukushima accident in March 2011 and the observations that followed enable us to assess the impact of these precipitation products, at least for this single case. As expected, the differences in the statistical scores are visible but not large. Increasing the resolution of all the ECMWF fields from 0.5° to 0.2° raises the correlation from 0.71 to 0.80 and the overall rank from 3.38 to 3.44. Substituting the JMA mesoscale precipitation analysis or the JMA radar-rain gauge precipitation data for the ECMWF precipitation, while the rest of the variables remain unmodified, yields the best results on a regional scale, especially when a new and more robust wet deposition scheme is introduced. The best results are obtained with a combination of ECMWF 0.2° data with precipitation from the JMA mesoscale analyses and the modified wet deposition, giving a correlation of 0.83 and an overall rank of 3.58. NCEP-based results with the same source term are generally poorer, giving correlations around 0.66, comparatively large negative biases, and an overall rank of 3.05 that worsens when regional precipitation data are introduced. PMID:24679678

  5. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562

  6. Using Numerical Models in the Development of Software Tools for Risk Management of Accidents with Oil and Inert Spills

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.

    2012-04-01

    The increasing ship traffic and maritime transport of dangerous substances make it more difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though the security rules are becoming more restrictive (ships with double hulls, etc.) and the surveillance systems are becoming more developed (VTS, AIS). In fact, the problems associated with spills are, and will always be, a major topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, but some (in a much smaller number) become authentic media phenomena in this information era, due to their large dimensions and their environmental and socio-economic impacts on ecosystems and local communities, and also due to some spectacular or shocking pictures generated. Hence, the adverse consequences posed by this type of accident increase the preoccupation with avoiding them in the future, or minimizing their impacts, using not only surveillance and monitoring tools, but also an increased capacity to predict the fate and behaviour of bodies, objects, or substances in the hours following an accident; numerical models can now play a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis for oil, inert (ship debris, or floating containers), and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency or planning issues associated with pollution risks, and in contingency and mitigation measures. Before a spill, in the planning stage, modelling simulations are used in environmental impact studies or risk maps, using historical data, reference situations, and typical scenarios. After a spill, the use of fast and simple modelling applications allows responders to understand the fate and behaviour of the spilt substances.

  7. Modeling and Prediction of Fan Noise

    NASA Technical Reports Server (NTRS)

    Envia, Ed

    2008-01-01

    Fan noise is a significant contributor to the total noise signature of a modern high bypass ratio aircraft engine and with the advent of ultra high bypass ratio engines like the geared turbofan, it is likely to remain so in the future. As such, accurate modeling and prediction of the basic characteristics of fan noise are necessary ingredients in designing quieter aircraft engines in order to ensure compliance with ever more stringent aviation noise regulations. In this paper, results from a comprehensive study aimed at establishing the utility of current tools for modeling and predicting fan noise will be summarized. It should be emphasized that these tools exemplify the present state of the practice and embody what is currently used at NASA and Industry for predicting fan noise. The ability of these tools to model and predict fan noise is assessed against a set of benchmark fan noise databases obtained for a range of representative fan cycles and operating conditions. Detailed comparisons between the predicted and measured narrowband spectral and directivity characteristics of fan noise will be presented in the full paper. General conclusions regarding the utility of current tools and recommendations for future improvements will also be given.

  8. Preliminary design report for modeling of hydrogen uptake in fuel rod cladding during severe accidents

    SciTech Connect

    Siefken, L.J.

    1998-08-01

    Preliminary designs are described for models of the interaction of Zircaloy and hydrogen and the consequences of this interaction on the behavior of fuel rod cladding during severe accidents. The modeling of this interaction and its consequences involves the modeling of seven processes: (1) diffusion of oxygen from the bulk gas into the boundary layer at the external cladding surface, (2) diffusion from the boundary layer into the oxide layer at the cladding external surface, (3) diffusion from the inner surface of the oxide layer into the metallic part of the cladding, (4) uptake of hydrogen in the event that the cladding oxide layer is dissolved in a steam-starved region, (5) embrittlement of cladding due to hydrogen uptake, (6) cracking of cladding during quenching due to its embrittlement and (7) release of hydrogen from the cladding after cracking of the cladding. An integral diffusion method is described for calculating the diffusion processes in the cladding. Experimental and theoretical results are presented that show that the uptake of hydrogen in the event of dissolution of the oxide layer occurs rapidly and that the release of hydrogen in the event of cracking of the cladding occurs rapidly. These experimental results are used as a basis for calculating the rate of hydrogen uptake and the rate of hydrogen release. The uptake of hydrogen is limited to the equilibrium solubility calculated by applying Sieverts' law. The uptake of hydrogen is an exothermic reaction that accelerates the heatup of a fuel rod. An embrittlement criterion is described that accounts for hydrogen and oxygen concentration and the extent of oxidation. A design is described for implementing the models for Zr-H interaction into the programming framework of the SCDAP/RELAP5 code. A test matrix is described for assessing the impact of the Zr-H interaction models on the calculated behavior of fuel rods in severe accident conditions.
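
    The equilibrium solubility limit mentioned above follows Sieverts' law: the dissolved hydrogen concentration scales with the square root of the H2 partial pressure, with an Arrhenius temperature factor. A sketch with placeholder constants (not the values used in SCDAP/RELAP5):

        # Sieverts' law: C = K(T) * sqrt(p_H2), with K(T) = K0 * exp(-Q / (R * T)).
        import numpy as np

        def sieverts_solubility(p_h2_pa, T_kelvin, K0=1.0, Q=40e3):
            """Dissolved hydrogen concentration (arbitrary units); K0 and Q are
            placeholders for material-specific Sieverts parameters."""
            R = 8.314  # J/(mol K)
            return K0 * np.exp(-Q / (R * T_kelvin)) * np.sqrt(p_h2_pa)

        print(sieverts_solubility(1e5, 1500.0))   # 1 bar H2 at 1500 K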

  9. Analysis and predictive modeling of asthma phenotypes.

    PubMed

    Brasier, Allan R; Ju, Hyunsu

    2014-01-01

    Molecular classification using robust biochemical measurements provides a level of diagnostic precision that is unattainable using indirect phenotypic measurements. Multidimensional measurements of proteins, genes, or metabolites (analytes) can identify subtle differences in the pathophysiology of patients with asthma in a way that is not otherwise possible using physiological or clinical assessments. We outline a method for relating biochemical analyte measurements to generate predictive models of discrete (categorical) clinical outcomes, a process referred to as "supervised classification." We consider problems inherent in wide (small n and large p) high-dimensional data, including the curse of dimensionality, collinearity and lack of information content. We suggest methods for reducing the data to the most informative features. We describe different approaches for phenotypic modeling, using logistic regression, classification and regression trees, random forest and nonparametric regression spline modeling. We provide guidance on post hoc model evaluation and methods to evaluate model performance using ROC curves and generalized additive models. The application of validated predictive models for outcome prediction will significantly impact the clinical management of asthma. PMID:24162915

  10. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times. Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

  11. Regional long-term model of radioactivity dispersion and fate in the Northwestern Pacific and adjacent seas: application to the Fukushima Dai-ichi accident.

    PubMed

    Maderich, V; Bezhenar, R; Heling, R; de With, G; Jung, K T; Myoung, J G; Cho, Y-K; Qiao, F; Robertson, L

    2014-05-01

    The compartment model POSEIDON-R was modified and applied to the Northwestern Pacific and adjacent seas to simulate the transport and fate of radioactivity in the period 1945-2010, and to perform a radiological assessment of the releases of radioactivity due to the Fukushima Dai-ichi accident for the period 2011-2040. The model predicts the dispersion of radioactivity in the water column and in sediments, the transfer of radionuclides throughout the marine food web, and subsequent doses to humans due to the consumption of marine products. A generic predictive dynamic food-chain model is used instead of the biological concentration factor (BCF) approach. The radionuclide uptake model for fish has as a central feature the accumulation of radionuclides in the target tissue. The three-layer structure of the water column makes it possible to describe the vertical structure of radioactivity in deep waters. In total, 175 compartments cover the Northwestern Pacific, the East China and Yellow Seas and the East/Japan Sea. The model was validated against (137)Cs data for the period 1945-2010. Calculated concentrations of (137)Cs in water, bottom sediments and marine organisms in the coastal compartment, before and after the accident, are in close agreement with measurements from the Japanese agencies. The agreement for water is achieved when an additional continuous flux of 3.6 TBq y(-1) is used for underground leakage of contaminated water from the Fukushima Dai-ichi NPP, during the three years following the accident. The dynamic food web model predicts that due to the delay of the transfer throughout the food web, the concentration of (137)Cs for piscivorous fishes returns to background level only in 2016. For the year 2011, the calculated individual dose rate for Fukushima Prefecture due to consumption of fishery products is 3.6 μSv y(-1). Following the Fukushima Dai-ichi accident the collective dose due to ingestion of marine products for Japan increased in 2011 by a

  12. Modelling language evolution: Examples and predictions.

    PubMed

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolution of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines. PMID:24286718

  13. Modelling language evolution: Examples and predictions

    NASA Astrophysics Data System (ADS)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolution of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  14. Combining Modeling and Gaming for Predictive Analytics

    SciTech Connect

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  15. Persistence and predictability in a perfect model

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried D.; Suarez, Max J.; Schemm, Jae-Kyung

    1992-01-01

    A realistic two-level GCM is used to examine the relationship between predictability and persistence. Predictability is measured by the average divergence of ensembles of solutions starting from perturbed initial conditions, and persistence is defined in terms of the autocorrelation function based on a single long-term model integration. The average skill of the dynamical forecasts is compared with the skill of simple persistence-based statistical forecasts. For initial errors comparable in magnitude to present-day analysis errors, the statistical forecast loses all skill after about one week, reflecting the lifetime of the lowest frequency fluctuations in the model. Large ensemble mean dynamical forecasts would be expected to remain skillful for about 3 wk. The disparity between the skill of the statistical and dynamical forecasts is greater for the higher frequency modes, which have little memory beyond 1 d, yet remain predictable for about 2 wk. The results are analyzed in terms of two characteristic time scales.
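
    A minimal sketch of the persistence side of this comparison: an AR(1) "red noise" series stands in for a low-frequency model mode, and the skill of a persistence forecast at a given lag is simply the lag autocorrelation. The coefficient and series length are arbitrary.

        import numpy as np

        # Hypothetical red-noise (AR(1)) series; persistence forecast skill
        # decays with the autocorrelation function.
        rng = np.random.default_rng(0)
        phi, n = 0.9, 20000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()

        def persistence_skill(x, lag):
            # Correlation between x(t) and x(t+lag): the skill of the
            # persistence forecast x_hat(t+lag) = x(t).
            return np.corrcoef(x[:-lag], x[lag:])[0, 1]

        for lag in (1, 5, 10, 20):
            print(lag, round(persistence_skill(x, lag), 3))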

  16. An exponential filter model predicts lightness illusions.

    PubMed

    Zeman, Astrid; Brooks, Kevin R; Ghebreab, Sennay

    2015-01-01

    Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects
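
    A minimal single-filter sketch in the spirit of the model described above, assuming an isotropic exponential kernel and a simultaneous-contrast stimulus; the kernel scale and stimulus layout are invented, and the actual model pools many kernel sizes and shapes.

        import numpy as np
        from scipy.signal import fftconvolve

        # Identical gray targets on dark vs. light surrounds; the filtered
        # contrast signal predicts the direction of the illusion.
        img = np.full((64, 128), 0.5)
        img[:, :64] = 0.1; img[:, 64:] = 0.9               # dark / light halves
        img[24:40, 24:40] = 0.5; img[24:40, 88:104] = 0.5  # equal gray patches

        y, x = np.mgrid[-15:16, -15:16]
        kern = np.exp(-np.hypot(x, y) / 4.0)   # exponential falloff, scale 4 px
        kern /= kern.sum()
        local_mean = fftconvolve(img, kern, mode="same")
        response = img - local_mean            # contrast relative to local mean

        print("patch on dark :", response[32, 32].round(3))   # > 0: looks lighter
        print("patch on light:", response[32, 96].round(3))   # < 0: looks darker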

  17. An exponential filter model predicts lightness illusions

    PubMed Central

    Zeman, Astrid; Brooks, Kevin R.; Ghebreab, Sennay

    2015-01-01

    Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects

  18. Time prediction model of subway transfer.

    PubMed

    Zhou, Yuyang; Yao, Lin; Gong, Yi; Chen, Yanyan

    2016-01-01

    Walking time prediction aims to deduce waiting time and travel time for passengers and provide a quantitative basis for subway schedule management. The model is built on transfer passenger flow and the type of pedestrian facility. Chaoyangmen station in Beijing was taken as the learning set to obtain the relationship between transfer walking speed and passenger volume. The sectional passenger volume of different facilities was calculated according to the transfer passage classification. Model parameters were computed by curve fitting with respect to various pedestrian facilities. The testing set contained four transfer stations with large passenger volume. Validation showed that the established model is effective and practical. The proposed model offers a real-time prediction method with good applicability. It can provide a transfer scheme reference for passengers and, meanwhile, improve the scheduling and management of subway operation. PMID:26835224
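
    A sketch of the kind of curve fitting described above, assuming a hypothetical exponential speed-volume relation; the functional form, data points and passage length are illustrative, not the calibrated Chaoyangmen parameters.

        import numpy as np
        from scipy.optimize import curve_fit

        # Walking speed decays with sectional passenger volume (assumed form).
        def speed(volume, v_free, k):
            return v_free * np.exp(-k * volume)

        vol = np.array([5, 10, 20, 40, 60, 80])                # passengers/m/min
        obs = np.array([1.30, 1.22, 1.08, 0.85, 0.68, 0.55])   # m/s (invented)
        (v_free, k), _ = curve_fit(speed, vol, obs, p0=(1.3, 0.01))

        length = 120.0  # passage length in metres (assumed)
        print(f"v_free={v_free:.2f} m/s, k={k:.4f}")
        print(f"walk time at volume 50: {length / speed(50, v_free, k):.0f} s")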

  19. Advancements in predictive plasma formation modeling

    NASA Astrophysics Data System (ADS)

    Purvis, Michael A.; Schafgans, Alexander; Brown, Daniel J. W.; Fomenkov, Igor; Rafac, Rob; Brown, Josh; Tao, Yezheng; Rokitski, Slava; Abraham, Mathew; Vargas, Mike; Rich, Spencer; Taylor, Ted; Brandt, David; Pirati, Alberto; Fisher, Aaron; Scott, Howard; Koniges, Alice; Eder, David; Wilks, Scott; Link, Anthony; Langer, Steven

    2016-03-01

    We present highlights from plasma simulations performed in collaboration with Lawrence Livermore National Labs. This modeling is performed to advance the rate of learning about optimal EUV generation for laser produced plasmas and to provide insights where experimental results are not currently available. The goal is to identify key physical processes necessary for an accurate and predictive model capable of simulating a wide range of conditions. This modeling will help to drive source performance scaling in support of the EUV Lithography roadmap. The model simulates pre-pulse laser interaction with the tin droplet and follows the droplet expansion into the main pulse target zone. Next, the interaction of the expanded droplet with the main laser pulse is simulated. We demonstrate the predictive nature of the code and provide comparison with experimental results.

  20. DKIST Polarization Modeling and Performance Predictions

    NASA Astrophysics Data System (ADS)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS with the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  1. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  2. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  3. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
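
    For orientation only, a bare-bones receding-horizon loop for a linear system without constraints or robustness guarantees - a much weaker setting than the uncertain nonlinear systems the algorithm above addresses. Dynamics, horizon and weights are invented.

        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double integrator
        B = np.array([[0.005], [0.1]])
        N, rho = 10, 0.1                         # horizon, control weight

        def mpc_step(x):
            # Stacked prediction: x_k = A^k x0 + sum_j A^(k-1-j) B u_j.
            Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
            G = np.zeros((2 * N, N))
            for k in range(1, N + 1):
                for j in range(k):
                    G[2*(k-1):2*k, j] = (np.linalg.matrix_power(A, k-1-j) @ B).ravel()
            # Minimize sum ||x_k||^2 + rho ||u||^2 by regularized least squares.
            H = G.T @ G + rho * np.eye(N)
            u = np.linalg.solve(H, -G.T @ (Phi @ x))
            return u[0]   # apply only the first move, then recede

        x = np.array([1.0, 0.0])
        for _ in range(50):
            x = A @ x + B.ravel() * mpc_step(x)
        print("state after 50 steps:", x.round(4))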

  4. Cognitive modeling to predict video interpretability

    NASA Astrophysics Data System (ADS)

    Young, Darrell L.; Bakir, Tariq

    2011-06-01

    Processing framework for cognitive modeling to predict video interpretability is discussed. Architecture consists of spatiotemporal video preprocessing, metric computation, metric normalization, pooling of like metric groups with masking adjustments, multinomial logistic pooling of Minkowski pooled groups of similar quality metrics, and estimation of confidence interval of final result.

  5. A Predictive Model for MSSW Student Success

    ERIC Educational Resources Information Center

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  6. Nearshore Operational Model for Rip Current Predictions

    NASA Astrophysics Data System (ADS)

    Sembiring, L. E.; Van Dongeren, A. R.; Van Ormondt, M.; Winter, G.; Roelvink, J.

    2012-12-01

    A coastal operational model system can serve as a tool in order to monitor and predict coastal hazards, and to acquire up-to-date information on coastal state indicators. The objective of this research is to develop a nearshore operational model system for the Dutch coast focusing on swimmer safety. For that purpose, an operational model system has been built which can predict conditions up to 48 hours ahead. The model system consists of three different nested model domain covering The North Sea, The Dutch coastline, and one local model which is the area of interest. Three different process-based models are used to simulate physical processes within the system: SWAN to simulate wave propagation, Delft3D-Flow for hydraulics flow simulation, and XBeach for the nearshore models. The SWAN model is forced by wind fields from operational HiRLAM, as well as two dimensional wave spectral data from WaveWatch 3 Global as the ocean boundaries. The Delft3D Flow model is forced by assigning the boundaries with tidal constants for several important astronomical components as well as HiRLAM wind fields. For the local XBeach model, up-to-date bathymetry will be obtained by assimilating model computation and Argus video data observation. A hindcast is carried out on the Continental Shelf Model, covering the North Sea and nearby Atlantic Ocean, for the year 2009. Model skills are represented by several statistical measures such as rms error and bias. In general the results show that the model system exhibits a good agreement with field data. For SWAN results, integral significant wave heights are predicted well by the model for all wave buoys considered, with rms errors ranging from 0.16 m for the month of May with observed mean significant wave height of 1.08 m, up to rms error of 0.39 m for the month of November, with observed mean significant wave height of 1.91 m. However, it is found that the wave model slightly underestimates the observation for the period of June, especially

  7. Analyzing the causation of a railway accident based on a complex network

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Li, Ke-Ping; Luo, Zi-Yan; Zhou, Jin

    2014-02-01

    In this paper, a new model is constructed for the causation analysis of railway accidents based on complex network theory. In the model, the nodes are defined as various manifest or latent accident causal factors. By employing complex network theory, especially its statistical indicators, a railway accident as well as its key causations can be analyzed from an overall perspective. As a case, the “7.23” China-Yongwen railway accident is illustrated based on this model. The results show that the inspection of signals and the checking of line conditions before trains run played an important role in this railway accident. In conclusion, the constructed model gives a theoretical clue for railway accident prediction and can thus help reduce the occurrence of railway accidents.
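
    A toy version of the network formulation, assuming hypothetical causal factors as nodes and cause-to-consequence edges; betweenness centrality then ranks factors that mediate many causal paths.

        import networkx as nx

        # Invented causal factors for illustration, not the "7.23" analysis.
        G = nx.DiGraph()
        G.add_edges_from([
            ("lightning strike", "signal failure"),
            ("signal inspection missed", "signal failure"),
            ("line condition not checked", "dispatch error"),
            ("signal failure", "wrong trackside indication"),
            ("dispatch error", "two trains on one block"),
            ("wrong trackside indication", "two trains on one block"),
            ("two trains on one block", "collision"),
        ])
        bc = nx.betweenness_centrality(G)
        for node, score in sorted(bc.items(), key=lambda kv: -kv[1])[:3]:
            print(f"{score:.3f}  {node}")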

  8. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport, wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that reason, two horizontal resolutions were deployed and used in the model, a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, respectively, extending up to the mesopause. Four different simulations are presented in this work; the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but realistic source injection heights (RG19L); in the third the grid is regular and the vertical resolution is 39 levels (RG39L); and finally, the fourth uses the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs for most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to Atlas), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-layer run due to the increase of

  9. Disease Prediction Models and Operational Readiness

    PubMed Central

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  10. Disease prediction models and operational readiness.

    PubMed

    Corley, Courtney D; Pullum, Laura L; Hartley, David M; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M; Lancaster, Mary J

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  11. Can contaminant transport models predict breakthrough?

    USGS Publications Warehouse

    Peng, Wei-Shyuan; Hampton, Duane R.; Konikow, Leonard F.; Kambham, Kiran; Benegar, Jeffery J.

    2000-01-01

    A solute breakthrough curve measured during a two-well tracer test was successfully predicted in 1986 using specialized contaminant transport models. Water was injected into a confined, unconsolidated sand aquifer and pumped out 125 feet (38.1 m) away at the same steady rate. The injected water was spiked with bromide for over three days; the outflow concentration was monitored for a month. Based on previous tests, the horizontal hydraulic conductivity of the thick aquifer varied by a factor of seven among 12 layers. Assuming stratified flow with small dispersivities, two research groups accurately predicted breakthrough with three-dimensional (12-layer) models using curvilinear elements following the arc-shaped flowlines in this test. Can contaminant transport models commonly used in industry, that use rectangular blocks, also reproduce this breakthrough curve? The two-well test was simulated with four MODFLOW-based models, MT3D (FD and HMOC options), MODFLOWT, MOC3D, and MODFLOW-SURFACT. Using the same 12 layers and small dispersivity used in the successful 1986 simulations, these models fit almost as accurately as the models using curvilinear blocks. Subtle variations in the curves illustrate differences among the codes. Sensitivities of the results to number and size of grid blocks, number of layers, boundary conditions, and values of dispersivity and porosity are briefly presented. The fit between calculated and measured breakthrough curves degenerated as the number of layers and/or grid blocks decreased, reflecting a loss of model predictive power as the level of characterization lessened. Therefore, the breakthrough curve for most field sites can be predicted only qualitatively due to limited characterization of the hydrogeology and contaminant source strength.
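
    For a feel of what such codes compute, the sketch below evaluates the classical Ogata-Banks one-dimensional advection-dispersion breakthrough solution (first term) at a fixed distance; the velocity and dispersivity are illustrative, not the two-well test values.

        import numpy as np
        from scipy.special import erfc

        # Relative concentration at distance x from a continuous source.
        def breakthrough(t, x, v, D):
            return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

        x, v = 38.1, 2.0     # m, m/day (assumed)
        alpha = 0.1          # longitudinal dispersivity (m, assumed)
        D = alpha * v
        for t in (10, 15, 19, 20, 25, 30):   # days
            print(t, round(breakthrough(t, x, v, D), 3))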

  12. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.

    1984-01-01

    In order to fully exploit thermal barrier coatings (TBCs) on turbine components and achieve the maximum performance benefit, the knowledge and understanding of TBC failure mechanisms must be increased and the means to predict coating life developed. The proposed program will determine the predominant modes of TBC system degradation and then develop and verify life prediction models accounting for those degradation modes. The successful completion of the program will have dual benefits: the ability to take advantage of the performance benefits offered by TBCs, and a sounder basis for making future improvements in coating behavior.

  13. Model predictive control of constrained LPV systems

    NASA Astrophysics Data System (ADS)

    Yu, Shuyou; Böhm, Christoph; Chen, Hong; Allgöwer, Frank

    2012-06-01

    This article considers robust model predictive control (MPC) schemes for linear parameter varying (LPV) systems in which the time-varying parameter is assumed to be measured online and exploited for feedback. A closed-loop MPC with a parameter-dependent control law is proposed first. The parameter-dependent control law reduces conservativeness of the existing results with a static control law at the cost of higher computational burden. Furthermore, an MPC scheme with prediction horizon '1' is proposed to deal with the case of asymmetric constraints. Both approaches guarantee recursive feasibility and closed-loop stability if the considered optimisation problem is feasible at the initial time instant.

  14. Hidden Markov models for threat prediction fusion

    NASA Astrophysics Data System (ADS)

    Ross, Kenneth N.; Chaney, Ronald D.

    2000-04-01

    This work addresses the often neglected, but important problem of Level 3 fusion or threat refinement. This paper describes algorithms for threat prediction and test results from a prototype threat prediction fusion engine. The threat prediction fusion engine selectively models important aspects of the battlespace state using probability-based methods and information obtained from lower level fusion engines. Our approach uses hidden Markov models of a hierarchical threat state to find the most likely Course of Action (CoA) for the opposing forces. Decision trees use features derived from the CoA probabilities and other information to estimate the level of threat presented by the opposing forces. This approach provides the user with several measures associated with the level of threat, including: probability that the enemy is following a particular CoA, potential threat presented by the opposing forces, and likely time of the threat. The hierarchical approach used for modeling helps us efficiently represent the battlespace with a structure that permits scaling the models to larger scenarios without adding prohibitive computational costs or sacrificing model fidelity.
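
    A minimal sketch of the probabilistic machinery: the HMM forward algorithm applied to a hypothetical two-state threat model ("probe" vs. "attack"); all transition, emission and observation values are invented.

        import numpy as np

        T = np.array([[0.8, 0.2],
                      [0.1, 0.9]])          # state transition matrix
        E = np.array([[0.6, 0.3, 0.1],
                      [0.1, 0.3, 0.6]])     # P(observation | state)
        pi = np.array([0.9, 0.1])           # initial state distribution
        obs = [0, 1, 2, 2, 2]               # observed activity levels

        # Forward recursion: alpha_t(j) = sum_i alpha_{t-1}(i) T[i,j] E[j,o_t].
        alpha = pi * E[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ T) * E[:, o]
        posterior = alpha / alpha.sum()
        print("P(attack CoA | observations) =", round(posterior[1], 3))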

  15. Genetic models of homosexuality: generating testable predictions

    PubMed Central

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  16. ENSO Prediction using Vector Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e. 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows a model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
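
    A compact sketch of fitting a VAR(L) model by stacked least squares on synthetic data; the generating process and dimensions are arbitrary stand-ins for the multilevel SST models discussed above.

        import numpy as np

        # Synthetic stable multivariate series (invented, for illustration).
        rng = np.random.default_rng(1)
        n, dim, L = 600, 3, 4
        y = np.zeros((n, dim))
        for t in range(1, n):
            y[t] = 0.7 * y[t - 1] + 0.2 * np.roll(y[t - 1], 1) \
                   + 0.3 * rng.standard_normal(dim)

        # VAR(L): y_t = [y_{t-1}, ..., y_{t-L}] @ coef + e_t.
        X = np.hstack([y[L - l - 1 : n - l - 1] for l in range(L)])
        Y = y[L:]
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

        x_new = np.hstack([y[n - 1 - l] for l in range(L)])  # lags for next step
        print("one-step forecast:", (x_new @ coef).round(3))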

  17. A predictive model of human performance.

    NASA Technical Reports Server (NTRS)

    Walters, R. F.; Carlson, L. D.

    1971-01-01

    An attempt is made to develop a model describing the overall responses of humans to exercise and environmental stresses for prediction of exhaustion vs an individual's physical characteristics. The principal components of the model are a steady state description of circulation and a dynamic description of thermal regulation. The circulatory portion of the system accepts changes in work load and oxygen pressure, while the thermal portion is influenced by external factors of ambient temperature, humidity and air movement, affecting skin blood flow. The operation of the model is discussed and its structural details are given.

  18. Prediction failure of a wolf landscape model

    USGS Publications Warehouse

    Mech, L.D.

    2006-01-01

    I compared 101 wolf (Canis lupus) pack territories formed in Wisconsin during 1993-2004 to the logistic regression predictive model of Mladenoff et al. (1995, 1997, 1999). Of these, 60% were located in areas with putative habitat suitabilities of less than 50%, while areas with suitabilities greater than 50% remained unoccupied by known packs after 24 years of recolonization. This model was a poor predictor of wolf re-colonizing locations in Wisconsin, apparently because it failed to consider the adaptability of wolves. Such models should be used cautiously in wolf-management or restoration plans.

  19. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ~1 µm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  20. Retrospective dosimetry with the MAX/EGS4 exposure model for the radiological accident in Nesvizh-Belarus

    NASA Astrophysics Data System (ADS)

    Santos, A. M.; Kramer, R.; Brayner, C. A.; Khoury, H. J.; Vieira, J. W.

    2007-09-01

    On October 26, 1991 a fatal radiological accident occurred in a 60Co irradiation facility in the town of Nesvizh in Belarus. Following a jam in the product transport system, the operator entered the facility to clear the fault. On entering the irradiation room the operator bypassed a number of safety features, which prevented him from perceiving that the source rack was in the irradiation position. After the accident, average whole-body absorbed doses between 8 and 16 Gy were determined by TLD measurements, by isodose rate distributions, by biological dosimetry and by ESR measurements of clothes and teeth. In an earlier investigation the MAX/EGS4 exposure model had been used to calculate absorbed dose distributions for the radiological accident in Yanango/Peru, which represented the simulation of exposure from a point source on the surface of the body. After updating the phantom as well as the Monte Carlo code, the MAX/EGS4 exposure model was used to calculate the absorbed dose distribution for the worker involved in the radiological accident in Nesvizh/Belarus. For this purpose, the arms of the MAX phantom had to be raised above the head, and a rectangular 60Co source was designed to represent the source rack used in the irradiation facility. Average organ absorbed doses, depth-absorbed doses, the maximum absorbed dose and the average whole-body absorbed dose have been calculated and compared with the corresponding data given in the IAEA report of the accident.

  1. Predicting freakish sea state with an operational third generation wave model

    NASA Astrophysics Data System (ADS)

    Waseda, T.; In, K.; Kiyomatsu, K.; Tamura, H.; Miyazawa, Y.; Iyama, K.

    2013-11-01

    Understanding of freak wave generation mechanisms has advanced and the community has reached a consensus that spectral geometry plays an important role. Numerous marine accident cases were studied and revealed that the narrowing of the directional spectrum is a good indicator of dangerous sea. However, the estimation of the directional spectrum depends on the performance of the third generation wave model. In this work, a well-studied marine accident case in Japan in 1980 (Onomichi-Maru incident) is revisited and the sea states are hindcast using both the DIA and SRIAM nonlinear source terms. The result indicates that the temporal evolution of the basic parameters (directional spreading and frequency bandwidth) agrees reasonably well between the two schemes and therefore the most commonly used DIA method is qualitatively sufficient to predict freakish sea state. The analyses revealed that in the case of Onomichi-Maru, a moving gale system caused the spectrum to grow in energy with limited down-shifting at the accident site. This conclusion contradicts the marine inquiry report speculating that two swell systems crossed at the accident site. The unimodal wave system grew under strong influence of local wind with a peculiar energy transfer.

  2. Predicting freakish sea state with an operational third-generation wave model

    NASA Astrophysics Data System (ADS)

    Waseda, T.; In, K.; Kiyomatsu, K.; Tamura, H.; Miyazawa, Y.; Iyama, K.

    2014-04-01

    The understanding of freak wave generation mechanisms has advanced and the community has reached a consensus that spectral geometry plays an important role. Numerous marine accident cases were studied and revealed that the narrowing of the directional spectrum is a good indicator of dangerous sea. However, the estimation of the directional spectrum depends on the performance of the third-generation wave model. In this work, a well-studied marine accident case in Japan in 1980 (Onomichi-Maru incident) is revisited and the sea states are hindcasted using both the DIA (discrete interaction approximation) and SRIAM (Simplified Research Institute of Applied Mechanics) nonlinear source terms. The result indicates that the temporal evolution of the basic parameters (directional spreading and frequency bandwidth) agree reasonably well between the two schemes and therefore the most commonly used DIA method is qualitatively sufficient to predict freakish sea state. The analyses revealed that in the case of Onomichi-Maru, a moving gale system caused the spectrum to grow in energy with limited downshifting at the accident's site. This conclusion contradicts the marine inquiry report speculating that the two swell systems crossed at the accident's site. The unimodal wave system grew under strong influence of local wind with a peculiar energy transfer.

  3. Modeling operator actions during a small break loss-of-coolant accident in a Babcock and Wilcox nuclear power plant

    SciTech Connect

    Ghan, L.S.; Ortiz, M.G.

    1991-12-31

    A small break loss-of-coolant accident (SBLOCA) in a typical Babcock and Wilcox (B&W) nuclear power plant was modeled using RELAP5/MOD3. This work was performed as part of the United States Nuclear Regulatory Commission's (USNRC) Code, Scaling, Applicability and Uncertainty (CSAU) study. The break was initiated by severing one high pressure injection (HPI) line at the cold leg. Thus, the small break was further aggravated by reduced HPI flow. Comparisons between scoping runs with minimal operator action and full operator action clearly showed that the operator plays a key role in recovering the plant. Operator actions were modeled based on the emergency operating procedures (EOPs) and the Technical Bases Document for the EOPs. The sequence of operator actions modeled here is only one of several possibilities. Different sequences of operator actions are possible for a given accident because of the subjective decisions the operator must make when determining the status of the plant and, hence, which branch of the EOP to follow. To assess the credibility of the modeled operator actions, these actions and the results of the simulated accident scenario were presented to operator examiners who are familiar with B&W nuclear power plants. They agreed that, in general, the modeled operator actions conform to the requirements set forth in the EOPs and are therefore plausible. This paper presents the method for modeling the operator actions and discusses the simulated accident scenario from the viewpoint of operator actions.

  4. Applying hierarchical loglinear models to nonfatal underground coal mine accidents for safety management.

    PubMed

    Onder, Mustafa; Onder, Seyhan; Adiguzel, Erhan

    2014-01-01

    Underground mining is considered to be one of the most dangerous industries and mining remains the most hazardous occupation. Categorical analysis of accident records may present valuable information for preventing accidents. In this study, hierarchical loglinear analysis was applied to occupational injuries that occurred in an underground coal mine. The main factors affecting the accidents were defined as occupation, area, reason, accident time and part of body affected. By considering subfactors of the main factors, multiway contingency tables were prepared and, thus, the probabilities that might affect nonfatal injuries were investigated. At the end of the study, important accident risk factors and job groups with a high probability of being exposed to those risk factors were determined. This article presents important information on decreasing the number of accidents in underground coal mines. PMID:24934420
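
    A minimal loglinear sketch on an invented two-factor injury table: the standard formulation is a Poisson GLM on cell counts, where the deviance drop from removing the interaction term tests the factor association. Counts below are illustrative, not the mine's records.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Toy occupation-by-body-part contingency table (invented counts).
        df = pd.DataFrame({
            "occupation": ["miner", "miner", "hauler", "hauler"],
            "body_part":  ["hand",  "foot",  "hand",   "foot"],
            "count":      [46,      18,      22,       31],
        })
        saturated = smf.glm("count ~ occupation * body_part", data=df,
                            family=sm.families.Poisson()).fit()
        independence = smf.glm("count ~ occupation + body_part", data=df,
                               family=sm.families.Poisson()).fit()
        # Deviance difference: is the occupation x body-part term needed?
        print("deviance drop:", round(independence.deviance - saturated.deviance, 2))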

  5. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation

    SciTech Connect

    Evans, J.S. (School of Public Health)

    1990-01-01

    This report describes dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. In addition, models are included for assessing the risks of several nonlethal early and continuing effects -- including prodromal vomiting and diarrhea, hypothyroidism and radiation thyroiditis, skin burns, reproductive effects, and pregnancy losses. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other." The category, "other" cancers, is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also developed. For most cancers, both incidence and mortality are addressed. The models of cancer risk are derived largely from information summarized in BEIR III -- with some adjustment to reflect more recent studies. 64 refs., 18 figs., 46 tabs.
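
    A sketch of a Weibull hazard dose-response of the kind recommended above for early effects, using one common parameterization, risk(D) = 1 - exp(-ln 2 (D/D50)^V); the D50 and shape values below are illustrative placeholders, not the report's fitted parameters.

        import numpy as np

        # Weibull hazard model: risk is 0.5 at D = D50 by construction.
        def weibull_risk(dose_gy, d50, shape):
            return 1.0 - np.exp(-np.log(2.0) * (dose_gy / d50) ** shape)

        for d in (1.0, 3.0, 5.0, 8.0):
            print(f"{d:.0f} Gy -> risk {weibull_risk(d, d50=3.8, shape=5.0):.3f}")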

  6. Progresses in tritium accident modelling in the frame of IAEA EMRAS II

    SciTech Connect

    Galeriu, D.; Melintescu, A.

    2015-03-15

    The assessment of the environmental impact of tritium released from nuclear facilities is a topic of interest in many countries. In the IAEA's Environmental Modelling for Radiation Safety (EMRAS I) programme, progress was made on routine releases, and in the EMRAS II programme a dedicated working group (WG 7 - Tritium Accidents) focused on potential accidental releases (liquid and atmospheric pathways). The progress achieved in WG 7 was compiled in a comprehensive report - an IAEA technical document covering the consequences of both liquid and atmospheric accidental releases. A brief description of the progress achieved in the frame of EMRAS II WG 7 is presented. Important results have been obtained concerning the washout rate, the deposition of HTO and HT on the soil, the HTO uptake by leaves and the subsequent conversion to OBT (organically bound tritium) during daylight. Further needs in process understanding and experimental effort are emphasised.

  7. Dynamic modeling of physical phenomena for probabilistic assessment of spent fuel accidents

    SciTech Connect

    Benjamin, A.S.

    1997-11-01

    If there should be an accident involving drainage of all the water from a spent fuel pool, the fuel elements will heat up until the heat produced by radioactive decay is balanced by that removed by natural convection to air, thermal radiation, and other means. If the temperatures become high enough for the cladding or other materials to ignite due to rapid oxidation, then some of the fuel might melt, leading to an undesirable release of radioactive materials. The amount of melting is dependent upon the fuel loading configuration and its age, the oxidation and melting characteristics of the materials, and the potential effectiveness of recovery actions. The authors have developed methods for modeling the pertinent physical phenomena and integrating the results with a probabilistic treatment of the uncertainty distributions. The net result is a set of complementary cumulative distribution functions for the amount of fuel melted.

  8. Predicting Consequences of Technological Disasters from Natural Hazard Events: Challenges and Opportunities Associated with Industrial Accident Data Sources

    NASA Astrophysics Data System (ADS)

    Wood, M.

    2009-04-01

    The increased focus on the possibility of technological accidents caused by natural events (Natech) is foreseen to continue for years to come. In this case, experts in prevention, mitigation and preparation activities associated with natural events will increasingly need to borrow data and expertise traditionally associated with the technological fields to carry out the work. An important question is how useful these data are for understanding the consequences of such Natech events. Data and case studies provided on major industrial accidents tend to focus on lessons learned for re-engineering the process. While consequence data are reported at least nominally in most reports, their precision, quality and completeness are often lacking. Consequence data that are often or sometimes available but not provided can include the severity and type of injuries, the distance of victims from the source, exposure measurements, the volume of the release, the population in potentially affected zones, and weather conditions. Yet these are precisely the types of data that will aid natural hazard experts in land-use planning and emergency response activities when a Natech event may be foreseen. This work discusses the results of a study of consequence data from accidents involving toxic releases reported in the EU's MARS accident database. The study analysed the precision, quality and completeness of three categories of consequence data reported: the description of health effects, consequence assessment and chemical risk assessment factors, and emergency response information. This work reports the findings from this study and discusses how natural hazards experts might interact with industrial accident experts to promote more consistent and accurate reporting of the data that will be useful in consequence-based activities.

  9. Urban daytime traffic noise prediction models.

    PubMed

    da Paz, Elaine Carvalho; Zannin, Paulo Henrique Trombetta

    2010-04-01

    An evaluation was made of the acoustic environment generated by an urban highway using in situ measurements. Based on the data collected, a mathematical model was designed for the main sound levels (L(eq), L(10), L(50), and L(90)) as a function of the correlation between sound levels and between the equivalent sound pressure level and traffic variables. Four valid groups of mathematical models were generated to calculate daytime sound levels, which were statistically validated. It was found that the new models can be considered as accurate as other models presented in the literature to assess and predict daytime traffic noise, and that they stand out and differ from the existing models described in the literature thanks to two characteristics, namely, their linearity and the application of class intervals. PMID:19353296
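
    A sketch of the classic traffic-noise regression form Leq = a + b log10(Q) fitted to invented flow/noise pairs; this is a generic illustration, not the paper's calibrated daytime models.

        import numpy as np
        from scipy.optimize import curve_fit

        def leq_model(q, a, b):
            return a + b * np.log10(q)

        q   = np.array([200, 400, 800, 1500, 2500, 4000])      # vehicles/h
        leq = np.array([61.5, 64.4, 67.6, 70.1, 72.4, 74.3])   # dB(A), invented
        (a, b), _ = curve_fit(leq_model, q, leq)
        print(f"Leq ~ {a:.1f} + {b:.1f} log10(Q); "
              f"Leq(1000 veh/h) = {leq_model(1000, a, b):.1f} dB(A)")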

  10. TRACE/PARCS Core Modeling of a BWR/5 for Accident Analysis of ATWS Events

    SciTech Connect

    Cuadra A.; Baek J.; Cheng, L.; Aronson, A.; Diamond, D.; Yarsky, P.

    2013-11-10

    The TRACE/PARCS computational package [1, 2] is designed to be applicable to the analysis of light water reactor operational transients and accidents where the coupling between the neutron kinetics (PARCS) and the thermal-hydraulics and thermal-mechanics (TRACE) is important. TRACE/PARCS has been assessed for its applicability to anticipated transients without scram (ATWS) [3]. The challenge, addressed in this study, is to develop a sufficiently rigorous input model that would be acceptable for use in ATWS analysis. Two types of ATWS events were of interest, a turbine trip and a closure of main steam isolation valves (MSIVs). In the first type, initiated by turbine trip, the concern is that the core will become unstable and large power oscillations will occur. In the second type, initiated by MSIV closure, the concern is the amount of energy being placed into containment and the resulting emergency depressurization. Two separate TRACE/PARCS models of a BWR/5 were developed to analyze these ATWS events at MELLLA+ (maximum extended load line limit plus) operating conditions. One model [4] was used for analysis of ATWS events leading to instability (ATWS-I); the other [5] for ATWS events leading to emergency depressurization (ATWS-ED). Both models included a large portion of the nuclear steam supply system and controls, and a detailed core model, presented henceforth.

  11. Disease Prediction Models and Operational Readiness

    SciTech Connect

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche-modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  12. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  13. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  14. Structural evaluation of electrosleeved tubes under severe accident transients.

    SciTech Connect

    Majumdar, S.

    1999-11-12

    A flow stress model was developed for predicting failure of Electrosleeved PWR steam generator tubing under severe accident transients. The Electrosleeve, which is nanocrystalline pure nickel, loses its strength at temperatures greater than 400 C during severe accidents because of grain growth. A grain growth model and the Hall-Petch relationship were used to calculate the loss of flow stress as a function of time and temperature during the accident. Available tensile test data as well as high temperature failure tests on notched Electrosleeved tube specimens were used to derive the basic parameters of the failure model. The model was used to predict the failure temperatures of Electrosleeved tubes with axial cracks in the parent tube during postulated severe accident transients.
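
    A minimal sketch of the model structure described above, combining Arrhenius-type grain growth with a Hall-Petch flow stress; every constant below is hypothetical, not the report's fitted parameter set.

      import math

      # Hypothetical constants (not the report's fitted values).
      K0, Q, R = 1.0e9, 2.5e5, 8.314     # growth prefactor (um^2/s), J/mol, J/(mol K)
      SIGMA0, K_HP = 80.0, 300.0         # friction stress (MPa), Hall-Petch slope (MPa um^0.5)

      def grow(d_um, T_k, dt_s):
          """Parabolic grain growth over one time step: d^2 -> d^2 + K(T) dt."""
          k = K0 * math.exp(-Q / (R * T_k))
          return math.sqrt(d_um ** 2 + k * dt_s)

      def flow_stress(d_um):
          """Hall-Petch: sigma = sigma0 + k_HP / sqrt(d); strength falls as grains grow."""
          return SIGMA0 + K_HP / math.sqrt(d_um)

      # Ramp a nanocrystalline sleeve (d = 0.1 um) upward from 400 C at ~5 C/min.
      d, T = 0.1, 673.15
      for minute in range(61):
          if minute % 12 == 0:
              print(f"t={minute:3d} min  T={T - 273.15:4.0f} C  "
                    f"d={d:5.2f} um  sigma={flow_stress(d):7.1f} MPa")
          d = grow(d, T, 60.0)
          T += 5.0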

  15. Probabilistic prediction models for aggregate quarry siting

    USGS Publications Warehouse

    Robinson, G.R., Jr.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
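
    A toy version of the logistic-regression route to prospectivity on synthetic grid cells; the layers below (road distance, population density, a geology flag) stand in for the paper's variables and are not its data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      dist_road = rng.exponential(5.0, n)        # km to nearest road (synthetic)
      pop_dens = rng.lognormal(3.0, 1.0, n)      # people per km^2 (synthetic)
      hard_rock = rng.integers(0, 2, n)          # favorable-geology indicator
      logit = 1.5 * hard_rock - 0.3 * dist_road + 0.4 * np.log(pop_dens) - 2.0
      quarry = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # "quarry present" label

      X = np.column_stack([dist_road, np.log(pop_dens), hard_rock])
      model = LogisticRegression().fit(X, quarry)
      # Prospectivity = posterior probability of quarry development per cell.
      print(model.predict_proba(X[:5])[:, 1].round(2))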

  16. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  17. Constructing predictive models of human running.

    PubMed

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-01

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. PMID:25505131
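
    For readers unfamiliar with SLIP, the sketch below integrates the planar stance-phase dynamics (a point mass riding a massless spring leg). The parameters and touchdown state are illustrative, and the paper's data-driven Floquet control layer is not reproduced here.

      import numpy as np
      from scipy.integrate import solve_ivp

      m, k, L0, g = 80.0, 20000.0, 1.0, 9.81   # mass, stiffness, rest leg length, gravity
      foot = np.array([0.0, 0.0])              # stance foot anchored at the origin

      def stance(t, s):
          """Planar SLIP stance phase for CoM state s = [x, y, vx, vy]."""
          x, y, vx, vy = s
          leg = np.array([x, y]) - foot
          L = np.hypot(leg[0], leg[1])
          f = k * (L0 - L) / L                 # radial spring force per unit leg vector
          return [vx, vy, f * leg[0] / m, f * leg[1] / m - g]

      def liftoff(t, s):                       # stance ends when the leg regains rest length
          return np.hypot(s[0], s[1]) - L0
      liftoff.terminal, liftoff.direction = True, 1

      # Touchdown with the CoM behind the foot and the leg slightly compressed.
      s0 = [-0.2, np.sqrt(L0**2 - 0.2**2) - 0.01, 3.0, -0.4]
      sol = solve_ivp(stance, (0.0, 1.0), s0, events=liftoff, max_step=1e-3)
      print(f"lift-off at t = {sol.t[-1]:.3f} s, CoM velocity = {sol.y[2:, -1]}")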

  18. Constructing predictive models of human running

    PubMed Central

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-01-01

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. PMID:25505131

  19. A predictive geologic model of radon occurrence

    SciTech Connect

    Gregg, L.T.

    1990-01-01

    Earlier work by LeGrand on predictive geologic models for radon focused on hydrogeologic aspects of radon transport from a given uranium/radium source in a fractured crystalline rock aquifer, and included submodels for bedrock lithology (uranium concentration), topographic slope, and water-table behavior and characteristics. LeGrand's basic geologic model has been modified and extended into a submodel for crystalline rocks (Blue Ridge and Piedmont Provinces) and a submodel for sedimentary rocks (Valley and Ridge and Coastal Plain Provinces). Each submodel assigns a ranking of 1 to 15 to the bedrock type, based on (a) known or supposed uranium/thorium content, (b) petrography/lithology, and (c) structural features such as faults, shear or breccia zones, diabase dikes, and jointing/fracturing. The bedrock ranking is coupled with a generalized soil/saprolite model which ranks soil/saprolite type and thickness from 1 to 10. A given site is thus assessed a ranking of 1 to 150 as a guide to its potential for high radon occurrence in the upper meter or so of soil. Field trials of the model are underway, comparing model predictions with measured soil-gas concentrations of radon.
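
    A minimal sketch of the multiplicative ranking scheme described above; the ranks assigned to particular rock and soil types are invented for illustration.

      # Hypothetical ranks following the 1-15 bedrock and 1-10 soil/saprolite scheme.
      BEDROCK_RANK = {"sheared granite": 14, "granite": 11, "sandstone": 5, "limestone": 3}
      SOIL_RANK = {"thin permeable saprolite": 9, "thick clay": 2}

      def radon_potential(bedrock, soil):
          """Site score from 1 to 150: bedrock rank (1-15) times soil rank (1-10)."""
          return BEDROCK_RANK[bedrock] * SOIL_RANK[soil]

      print(radon_potential("sheared granite", "thin permeable saprolite"))  # 126: high
      print(radon_potential("limestone", "thick clay"))                      # 6: low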

  20. Computer Model Predicts the Movement of Dust

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A new computer model of the atmosphere can now actually pinpoint where global dust events come from, and can project where they're going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short term day-to-day variations and long term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by the computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center

  1. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    SciTech Connect

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data.

  2. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of the loop-forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning, the model was able to accurately predict the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types, and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  3. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  4. Sensitivity study of the wet deposition schemes in the modelling of the Fukushima accident.

    NASA Astrophysics Data System (ADS)

    Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne; Kajino, Mizuo; Sekiyama, Thomas; Adachi, Kouji; Didier, Damien; Igarashi, Yasuhito

    2016-04-01

    The Fukushima-Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. In particular, the atmospheric deposition onto the ground may be studied through the map of measured Cs-137 established after the accident. The limits of detection were low enough to make the measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions in multiple weather conditions make it a real challenge. Moreover, these measurements are accumulated deposition of Cs-137 over the whole period and do not indicate which deposition mechanisms were involved: in-cloud, below-cloud, or dry deposition. A comprehensive sensitivity analysis is performed in order to understand the wet deposition mechanisms. It has been shown in a previous study (Quérel et al., 2016) that the choice of the wet deposition scheme has a strong impact on the assessment of the deposition patterns. Nevertheless, a "best" scheme could not be identified, as the ranking depends on the statistical indicators considered (correlation, figure of merit in space, and factor of 2). One possible explanation for the difficulty in discriminating between schemes is the uncertainty in the modelling, resulting for instance from the meteorological data: if the movement of the plume is not properly modelled, the deposition processes are applied to an inaccurate activity in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, allowing the previous study to be reconsidered. An updated study including these new meteorological data is presented. In addition, a focus was placed on several releases causing deposition in localized areas during known periods. This helps to better understand the mechanisms of deposition involved following the

  5. Applying STAMP in Accident Analysis

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Daouk, Mirna; Dulac, Nicolas; Marais, Karen

    2003-01-01

    Accident models play a critical role in accident investigation and analysis. Most traditional models are based on an underlying chain of events. These models, however, have serious limitations when used for complex, socio-technical systems. Previously, Leveson proposed a new accident model (STAMP) based on system theory. In STAMP, the basic concept is not an event but a constraint. This paper shows how STAMP can be applied to accident analysis using three different views or models of the accident process and proposes a notation for describing this process.

  6. Progress towards a PETN Lifetime Prediction Model

    SciTech Connect

    Burnham, A K; Overturf III, G E; Gee, R; Lewis, P; Qiu, R; Phillips, D; Weeks, B; Pitchimani, R; Maiti, A; Zepeda-Ruiz, L; Hrousis, C

    2006-09-11

    Dinegar (1) showed that decreases in PETN surface area cause EBW detonator function times to increase. Thermal aging causes PETN to agglomerate, shrink, and densify, indicating a "sintering" process. It has long been a concern that the formation of a gap between the PETN and the bridgewire may lead to EBW detonator failure. These concerns have led us to develop a model to predict the rate of coarsening that occurs with age for thermally driven PETN powder (50% TMD). To understand PETN contributions to detonator aging we need three things: (1) curves describing function time dependence on specific surface area, density, and gap; (2) a measurement of the critical gap distance for no-fire as a function of density and surface area for various wire configurations; and (3) a model describing how specific surface area, density, and gap change with time and temperature. We've had good success modeling high-temperature surface area reduction and function time increase using a phenomenological deceleratory kinetic model based on a distribution of parallel nth-order reactions having evenly spaced activation energies, where the weighting factors of the reactions follow a Gaussian distribution about the reaction with the mean activation energy (Figure 1). Unfortunately, the mean activation energy derived from this approach is high (typically ~75 kcal/mol), so that negligible sintering is predicted for temperatures below 40 C. To make more reliable predictions, we've established a three-part effort to understand PETN mobility. First, we've measured the rates of step movement and pit nucleation as a function of temperature from 30 to 50 C for single crystals. Second, we've measured the evaporation rate from single crystals and powders from 105 to 135 C to obtain an activation energy for evaporation. Third, we've pursued mechanistic kinetic modeling of surface mobility, evaporation, and ripening.
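
    A small sketch of the deceleratory kinetic model named above: parallel nth-order reactions with evenly spaced activation energies weighted by a Gaussian. The mean energy follows the abstract (~75 kcal/mol); the prefactor, spread, and reaction order are assumed for illustration.

      import numpy as np

      R = 1.987e-3                    # kcal/(mol K)
      E_MEAN, E_SIGMA = 75.0, 2.0     # mean from the abstract; spread assumed
      A, ORDER = 1.0e35, 1.5          # prefactor (1/s) and reaction order, both assumed

      # Evenly spaced activation energies, Gaussian-weighted about the mean.
      energies = E_MEAN + np.linspace(-3.0, 3.0, 13) * E_SIGMA
      weights = np.exp(-0.5 * ((energies - E_MEAN) / E_SIGMA) ** 2)
      weights /= weights.sum()

      def remaining(T_kelvin, t_s):
          """Unreacted fraction after isothermal aging, summed over the distribution."""
          k = A * np.exp(-energies / (R * T_kelvin))
          # nth-order solution: x = (1 + (n-1) k t)^(-1/(n-1)) for n != 1
          x = (1.0 + (ORDER - 1.0) * k * t_s) ** (-1.0 / (ORDER - 1.0))
          return float(weights @ x)

      YEAR = 3.15e7  # seconds
      for T_c in (40, 80, 120):
          print(f"{T_c:3d} C: fraction remaining after 10 y = "
                f"{remaining(T_c + 273.15, 10 * YEAR):.3f}")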

  7. Predictive value of primate models for AIDS.

    PubMed

    Haigwood, Nancy L

    2004-01-01

    A number of obstacles remain in the search for an animal model for HIV infection and pathogenesis that can serve to predict efficacy in humans. HIV-1 fails to replicate and cause disease except in humans or chimpanzees, thereby limiting our ability to evaluate compounds or vaccines prior to human testing. Despite this limitation, nonhuman primate lentivirus models have been established that recapitulate the modes of infection, disease course, and antiviral immunity that is seen in HIV infection of humans. These models have been utilized to understand key aspects of disease and to evaluate concepts in therapies and vaccine development. By necessity, animal models can only be validated after successful trials in humans and the determination of correlates of protection. Because the only vaccine product tested in phase III trials in humans failed to achieve the desired protective threshold, we are as yet unable to validate any of the currently used nonhuman primate models for vaccine research. In the absence of a validated model, many experts in the field have concluded that prophylactic vaccines and therapeutic concepts should bypass primate models, and rely solely upon the systematic testing of each individual and combined vaccine element in human phase I or I/II trials to determine their relative merits. Indeed, a large effort is underway to expand efforts to test all products as part of an international effort termed "The HIV Vaccine Enterprise", with major contributions from the Bill and Melinda Gates Foundation. This Herculean task could potentially be reduced if it were possible to utilize even partially validated nonhuman primate models as part of the screening efforts. The purpose of this article is to review the data from nonhuman primate models that have contributed to our understanding of lentivirus infection and pathogenesis, and to critically evaluate how well these models have predicted outcomes in humans. Key features of the models developed to date are

  8. Oil Spill Detection and Modelling: Preliminary Results for the Cercal Accident

    NASA Astrophysics Data System (ADS)

    da Costa, R. T.; Azevedo, A.; da Silva, J. C. B.; Oliveira, A.

    2013-03-01

    Oil spill research has significantly increased mainly as a result of the severe consequences experienced from industry accidents. Oil spill models are currently able to simulate the processes that determine the fate of oil slicks, playing an important role in disaster prevention, control and mitigation, and generating valuable information for decision makers and the population in general. On the other hand, satellite Synthetic Aperture Radar (SAR) imagery has demonstrated significant potential in accidental oil spill detection, when slicks are accurately differentiated from look-alikes. The combination of both tools can lead to breakthroughs, particularly in the development of Early Warning Systems (EWS). This paper presents a hindcast simulation of the oil slick resulting from the Motor Tanker (MT) Cercal oil spill, listed by the Portuguese Navy as one of the major oil spills on the Portuguese Atlantic Coast. The accident took place near Leixões Harbour, north of the Douro River, Porto (Portugal), on the 2nd of October 1994. The oil slick was segmented from available European Remote Sensing (ERS) satellite SAR images, using an algorithm based on a simplified version of the K-means clustering formulation. The image-acquired information, added to the initial conditions and forcings, provided the necessary inputs for the oil spill model. Simulations were made considering the three-dimensional hydrodynamics in a cross-scale domain, from the interior of the Douro River Estuary to the open ocean on the Iberian Atlantic shelf. Atmospheric forcings (from ECMWF - the European Centre for Medium-Range Weather Forecasts - and NOAA - the National Oceanic and Atmospheric Administration), river forcings (from SNIRH - the Portuguese National Information System of the Hydric Resources) and tidal forcings (from LNEC - the National Laboratory for Civil Engineering), including baroclinic gradients (NOAA), were considered. The lack of data for validation purposes only allowed the use of the
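
    A toy one-dimensional K-means segmentation in the spirit of the simplified formulation mentioned above, applied to synthetic backscatter intensities (slicks appear dark in SAR); this is not the authors' algorithm.

      import numpy as np

      def kmeans_1d(pixels, k=2, iters=20, seed=0):
          """Toy K-means on backscatter intensity; the darker cluster flags the slick."""
          rng = np.random.default_rng(seed)
          centers = rng.choice(pixels, size=k, replace=False)
          for _ in range(iters):
              labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
              centers = np.array([pixels[labels == j].mean() if np.any(labels == j)
                                  else centers[j] for j in range(k)])
          return labels, centers

      # Synthetic scene: low-backscatter slick (dampened capillary waves) on open sea.
      rng = np.random.default_rng(1)
      scene = np.concatenate([rng.normal(0.8, 0.10, 9000),   # sea clutter
                              rng.normal(0.3, 0.05, 1000)])  # slick
      labels, centers = kmeans_1d(scene)
      dark = int(np.argmin(centers))
      print(f"slick pixels flagged: {(labels == dark).sum()} (true count: 1000)")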

  9. Predictive model of radiative neutrino masses

    NASA Astrophysics Data System (ADS)

    Babu, K. S.; Julio, J.

    2014-03-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model, which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: the hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δCP = π; and the effective mass in neutrinoless double beta decay lies in a narrow range, mββ = (17.6-18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tanβ, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The nonstandard neutral Higgs bosons, if they are moderately heavy, would decay dominantly into μ and τ with prescribed branching ratios. Observable rates for the decays μ → eγ and τ → 3μ are predicted if these scalars have masses in the range of 150-500 GeV.

  10. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.

    1986-01-01

    A methodology is established to predict thermal barrier coating life in an environment similar to that experienced by gas turbine airfoils. Experiments were conducted to determine failure modes of the thermal barrier coating. Analytical studies were employed to derive a life prediction model. A review of experimental and flight service components as well as laboratory post evaluations indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the topologically complex metal ceramic interface. This mechanical failure mode clearly is influenced by thermal exposure effects as shown in experiments conducted to study thermal pre-exposure and thermal cycle-rate effects. The preliminary life prediction model developed focuses on the two major damage modes identified in the critical experiments tasks. The first of these involves a mechanical driving force, resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads. The second is an environmental driving force based on experimental results, and is believed to be related to bond coat oxidation. It is also believed that the growth of this oxide scale influences the intensity of the mechanical driving force.

  11. A prediction model for Clostridium difficile recurrence

    PubMed Central

    LaBarbera, Francis D.; Nikiforov, Ivan; Parvathenani, Arvin; Pramil, Varsha; Gorrepati, Subhash

    2015-01-01

    Background Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from January 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we had very impressive results. Conclusions We hope that in the future, machine learning algorithms, such as the RF, will see a wider application. PMID:25656667
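
    A hedged sketch of the Random Forest approach on synthetic chart-review-style data; the predictors and effect sizes are invented, so the printed AUC will not reproduce the study's figures.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 198                                   # cohort size matching the abstract
      X = np.column_stack([
          rng.integers(20, 95, n),              # age (hypothetical predictor)
          rng.integers(0, 2, n),                # continued non-CDI antibiotics
          rng.integers(0, 2, n),                # proton-pump inhibitor use
          rng.normal(10.0, 3.0, n),             # white-cell count
      ])
      risk = 0.005 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-(risk - 1.0)))   # recurrence label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      print(f"AUC = {roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]):.2f}")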

  12. Modeling and Prediction of Krueger Device Noise

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small-scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  13. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Cook, T. S.; Kim, K. S.

    1986-01-01

    This is the second annual report of the first 3-year phase of a 2-phase, 5-year program. The objectives of the first phase are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low pressure plasma sprayed NiCrAlY bond coat, and a René 80 substrate. Task I was to evaluate TBC failure mechanisms. Both bond coat oxidation and bond coat creep have been identified as contributors to TBC failure. Key property determinations have also been made for the bond coat and the top coat, including tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion. Task II is to develop TBC life prediction models for the predominant failure modes. These models will be developed based on the results of thermomechanical experiments and finite element analysis. The thermomechanical experiments have been defined and testing initiated. Finite element models have also been developed to handle TBCs and are being utilized to evaluate different TBC failure regimes.

  14. Ground Motion Prediction Models for Caucasus Region

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMPMs are obtained based on new data from the Georgian seismic network and also from neighboring countries. Estimation of the models is carried out in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical GMPMs require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) model that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
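
    A minimal regression sketch assuming one common GMPE functional form, ln PGA = c0 + c1 M + c2 ln R + c3 S (S a site-class dummy), fitted by ordinary least squares to synthetic records; the study's actual functional form and coefficients are not shown here.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 400
      M = rng.uniform(4.0, 7.0, n)             # magnitude
      R = rng.uniform(5.0, 150.0, n)           # distance (km)
      S = rng.integers(0, 2, n)                # 1 = soft-soil site
      ln_pga = -2.0 + 1.1 * M - 1.4 * np.log(R) + 0.5 * S + rng.normal(0.0, 0.6, n)

      # The "classical, statistical" route: least-squares regression.
      X = np.column_stack([np.ones(n), M, np.log(R), S])
      (c0, c1, c2, c3), *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
      print(f"ln PGA = {c0:.2f} + {c1:.2f} M {c2:+.2f} ln R {c3:+.2f} S")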

  15. Gamma-ray Pulsars: Models and Predictions

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.; White, Nicholas E. (Technical Monitor)

    2000-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^12 - 10^13 G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers of the primary curvature emission around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. Next-generation gamma-ray telescopes sensitive to GeV-TeV emission will provide critical tests of pulsar acceleration and emission mechanisms.

  16. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    PubMed Central

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207
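
    A minimal sketch of the deployment idea: an HTTP endpoint that accepts a highly simplified FHIR-style JSON payload and answers with a RiskAssessment-shaped score. The scoring function and expected fields are illustrative; this is not the authors' service.

      import json
      import math
      from http.server import BaseHTTPRequestHandler, HTTPServer

      def score(payload):
          """Toy logistic score from two hypothetical observation values."""
          z = 0.04 * payload.get("age", 50) + 0.8 * payload.get("lactate", 1.0) - 4.0
          return 1.0 / (1.0 + math.exp(-z))

      class ScoringHandler(BaseHTTPRequestHandler):
          def do_POST(self):
              body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
              result = {"resourceType": "RiskAssessment",
                        "prediction": [{"probabilityDecimal": score(body)}]}
              data = json.dumps(result).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/fhir+json")
              self.end_headers()
              self.wfile.write(data)

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), ScoringHandler).serve_forever()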

  17. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    PubMed

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207

  18. Decadal prediction with a high resolution model

    NASA Astrophysics Data System (ADS)

    Monerie, Paul-Arthur; Valcke, Sophie; Terray, Laurent; Moine, Marie-Pierre

    2016-04-01

    The ability of a high resolution coupled atmosphere-ocean general circulation model (with a horizontal resolution of a quarter degree in the ocean and of about 50 km in the atmosphere) to predict the annual means of temperature, precipitation, and sea-ice volume and extent is assessed. Reasonable skill in predicting sea surface temperature and surface air temperature is obtained, especially over the North Atlantic, the tropical Atlantic, and the Indian Ocean. The skill in predicting precipitation is weaker and not significant. Sea-ice extent and volume are also reasonably predicted in winter (March) and summer (September). It is, however, argued that this skill is mainly due to the well-mixed greenhouse gases fed to the atmosphere. The mid-90s subpolar gyre warming is then assessed. The model simulates a warming of the North Atlantic Ocean, associated with an increase of the meridional heat transport, a strengthening of the North Atlantic Current, and a deepening of the mixed layer over the Labrador Sea. The atmosphere plays a role in the warming through a modulation of the North Atlantic Oscillation and a shrinking of the subpolar gyre. At the 3-8 year lead time, a negative pressure anomaly located south of the subpolar gyre is associated with the decrease in wind speed over the gyre. This prevents oceanic heat loss and favors the northward movement, from the subtropical to the subpolar gyre, of anomalously warm and salty water, leading to the warming of the latter. We finally argue that the subpolar gyre warming is triggered by ocean dynamics but that the atmosphere can contribute to sustaining it. This work is realised in the framework of the EU FP7 SPECS Project.

  19. Deterioration Prediction Model of Irrigation Facilities by Markov Chain Model

    NASA Astrophysics Data System (ADS)

    Mori, Takehisa; Nishino, Noriyasu; Fujiwara, Tetsuro

    "Stock Management" launched in all over Japan is an activity to use irrigation facilities effectively and to reduce life cycle costs of theirs. Deterioration prediction of the irrigation facility condition is a vital process for the study of maintenance measures and the estimation of maintenance cost. It is important issue to establish the prediction technique with higher accuracy. Thereupon, we established a deterioration prediction model by a statistical method "Markov chain", and analyzed a function diagnosis data of irrigation facilities. As a result, we clarified the deterioration characteristics into each structure type and facilities.

  20. Lagrangian predictability characteristics of an Ocean Model

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia

    2014-11-01

    The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. To this end, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to particle pair dispersion, at the mesoscale, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to observed data, indicating that KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
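
    A sketch of a first-kind FSLE estimate under the usual definition: for each scale delta, average the time trajectory-pair separations take to grow from delta to r*delta, then lambda(delta) = ln(r) / <tau>. The toy data are synthetic, not MFS or drifter trajectories.

      import numpy as np

      def fsle(pair_seps, dt, deltas, r=np.sqrt(2.0)):
          """FSLE from pair separations sampled every dt (array: n_pairs x n_times)."""
          out = []
          for d in deltas:
              taus = []
              for sep in pair_seps:
                  i = np.argmax(sep >= d)            # first sample reaching delta
                  if sep[i] < d:
                      continue                       # this pair never reaches delta
                  j = np.argmax(sep[i:] >= r * d)    # first sample reaching r*delta
                  if sep[i + j] >= r * d:
                      taus.append(j * dt)
              out.append(np.log(r) / np.mean(taus) if taus else np.nan)
          return np.array(out)

      # Toy pairs separating exponentially at 0.5/day should give lambda ~ 0.5.
      t = np.arange(0.0, 30.0, 0.1)
      seps = np.array([0.5 * np.exp(0.5 * t) for _ in range(10)])
      print(fsle(seps, 0.1, deltas=[1.0, 2.0, 4.0]).round(2))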

  1. Predictions in multifield models of inflation

    SciTech Connect

    Frazer, Jonathan

    2014-01-01

    This paper presents a method for obtaining an analytic expression for the density function of observables in multifield models of inflation with sum-separable potentials. The most striking result is that the density function in general possesses a sharp peak and the location of this peak is only mildly sensitive to the distribution of initial conditions. A simple argument is given for why this result holds for a more general class of models than just those with sum-separable potentials and why for such models, it is possible to obtain robust predictions for observable quantities. As an example, the joint density function of the spectral index and running in double quadratic inflation is computed. For scales leaving the horizon 55 e-folds before the end of inflation, the density function peaks at n_s = 0.967 and α = 0.0006 for the spectral index and running respectively.

  2. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecast of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum (GFZ) Potsdam. We compute continuous predictand skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
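
    A sketch of the 2x2 contingency evaluation mentioned above, computing probability of detection, false-alarm ratio, and the Heidke skill score for a toy Kp threshold; the forecast and definitive values are synthetic.

      import numpy as np

      def contingency_scores(obs, pred, threshold):
          """POD, FAR and Heidke skill score for exceeding a Kp threshold."""
          hits = np.sum((pred >= threshold) & (obs >= threshold))
          misses = np.sum((pred < threshold) & (obs >= threshold))
          false_al = np.sum((pred >= threshold) & (obs < threshold))
          corr_neg = np.sum((pred < threshold) & (obs < threshold))
          n = hits + misses + false_al + corr_neg
          pod = hits / (hits + misses)
          far = false_al / (hits + false_al)
          expect = ((hits + misses) * (hits + false_al)
                    + (corr_neg + misses) * (corr_neg + false_al)) / n
          hss = (hits + corr_neg - expect) / (n - expect)
          return pod, far, hss

      rng = np.random.default_rng(0)
      obs = rng.integers(0, 10, 1000)                         # "definitive" Kp (toy)
      pred = np.clip(obs + rng.integers(-2, 3, 1000), 0, 9)   # noisy forecast
      print("POD=%.2f  FAR=%.2f  HSS=%.2f" % contingency_scores(obs, pred, 5))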

  3. World Meteorological Organization's model simulations of the radionuclide dispersion and deposition from the Fukushima Daiichi nuclear power plant accident.

    PubMed

    Draxler, Roland; Arnold, Dèlia; Chino, Masamichi; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, René; Shimbori, Toshiki; Solazzo, Efisio; Wotawa, Gerhard

    2015-01-01

    Five different atmospheric transport and dispersion models' (ATDM) deposition and air concentration results for atmospheric releases from the Fukushima Daiichi nuclear power plant accident were evaluated over Japan using regional (137)Cs deposition measurements and (137)Cs and (131)I air concentration time series at one location about 110 km from the plant. Some of the ATDMs used the same and others different meteorological data consistent with their normal operating practices. There were four global meteorological analysis data sets available and two regional high-resolution analyses. Not all of the ATDMs were able to use all of the meteorological data combinations. The ATDMs were configured identically as much as possible with respect to the release duration, release height, concentration grid size, and averaging time. However, each ATDM retained its unique treatment of the vertical velocity field and the wet and dry deposition, one of the largest uncertainties in these calculations. There were 18 ATDM-meteorology combinations available for evaluation. The deposition results showed that even when using the same meteorological analysis, each ATDM can produce quite different deposition patterns. The better calculations in terms of both deposition and air concentration were associated with the smoother ATDM deposition patterns. The best model with respect to the deposition was not always the best model with respect to air concentrations. The use of high-resolution mesoscale analyses improved ATDM performance; however, high-resolution precipitation analyses did not improve ATDM predictions. Although some ATDMs could be identified as better performers for either deposition or air concentration calculations, overall, the ensemble mean of a subset of better performing members provided more consistent results for both types of calculations. PMID:24182910

  4. Using Numerical Models in the Development of Software Tools for Risk Management of Accidents with Oil and Inert Spills

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.

    2012-04-01

    The increasing ship traffic and maritime transport of dangerous substances make it difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though security rules are becoming more restrictive (ships with double hulls, etc.) and surveillance systems more developed (VTS, AIS). In fact, the problems associated with spills are, and will always be, a main topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, but some of them (in much smaller numbers) become authentic media phenomena in this information era, due to their large dimensions and their environmental and socio-economic impacts on ecosystems and local communities, and also due to some of the spectacular or shocking pictures generated. Hence, the adverse consequences posed by this type of accident increase the preoccupation with avoiding such accidents in the future, or minimizing their impacts, using not only surveillance and monitoring tools but also an increased capacity to predict the fate and behaviour of bodies, objects, or substances in the hours following an accident - numerical models can now have a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis of oil, inert (ship debris or floating containers), and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency and planning issues associated with pollution risks, and in contingency and mitigation measures. Before a spill, in the planning stage, modelling simulations are used in environmental impact studies or risk maps, using historical data, reference situations, and typical scenarios. After a spill, the use of fast and simple modelling applications allows understanding of the fate and behaviour of the spilt

  5. Failure behavior of internally pressurized flawed and unflawed steam generator tubing at high temperatures -- Experiments and comparison with model predictions

    SciTech Connect

    Majumdar, S.; Shack, W.J.; Diercks, D.R.; Mruk, K.; Franklin, J.; Knoblich, L.

    1998-03-01

    This report summarizes experimental work performed at Argonne National Laboratory on the failure of internally pressurized steam generator tubing at high temperatures (≤ 700 C). A model was developed for predicting failure of flawed and unflawed steam generator tubes under internal pressure and temperature histories postulated to occur during severe accidents. The model was validated by failure tests on specimens with part-through-wall axial and circumferential flaws of various lengths and depths, conducted under various constant and ramped internal pressure and temperature conditions. The failure temperatures predicted by the model for two temperature and pressure histories, calculated for severe accidents initiated by a station blackout, agree very well with tests performed on both flawed and unflawed specimens.

  6. Permafrost, climate, and change: predictive modelling approach.

    NASA Astrophysics Data System (ADS)

    Anisimov, O.

    2003-04-01

    Enhanced warming of the Arctic predicted by GCMs will lead to discernible impacts on permafrost and the northern environment. Mathematical models of different complexity, forced by scenarios of climate change, may be used to predict such changes. Permafrost models currently in use may be divided into four groups: index-based models (e.g., the frost index model and the N-factor model); models of intermediate complexity based on an equilibrium simplified solution of the Stefan problem ("Koudriavtcev's" model and its modifications); full-scale comprehensive dynamical models; and the recently emerged approach of stochastic modelling, which has good prospects for the future. An important task is to compare the ability of models that differ in complexity, concept, and input data requirements to capture the major impacts of changing climate on permafrost. A progressive increase in the depth of seasonal thawing (often referred to as the active-layer thickness, ALT) could be a relatively short-term reaction to climatic warming. At regional and local scales, it may produce substantial effects on vegetation, soil hydrology, and runoff, as the water storage capacity of near-surface permafrost will be changed. Growing public concerns are associated with the impacts that warming of permafrost may have on engineered infrastructure built upon it. At the global scale, an increase of ALT could facilitate further climatic change if more greenhouse gases are released when the upper layer of the permafrost thaws. Since dynamic permafrost models require a complete set of forcing data that is not readily available at the circumpolar scale, they can be used most effectively in regional studies, while models of intermediate complexity are currently the best tools for circumpolar assessments. A set of five transient scenarios of climate change for the period 1980-2100 has been constructed using outputs from the GFDL, NCAR, CCC, HadCM, and ECHAM-4 models. These GCMs were selected in the course
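
    As a concrete example of the intermediate-complexity end of this spectrum, the sketch below evaluates the classical Stefan solution for active-layer thickness under a rising thaw index; the soil properties are illustrative, not calibrated values.

      import math

      def stefan_alt(thaw_index_c_days, k=1.0, w=0.3, rho_d=1200.0):
          """Stefan solution for active-layer thickness (m).

          thaw_index_c_days: seasonal sum of positive degree-days;
          k: thawed thermal conductivity (W/m/K); w: gravimetric water
          content; rho_d: dry bulk density (kg/m^3). Values illustrative.
          """
          latent = 334000.0 * rho_d * w    # volumetric latent heat of fusion (J/m^3)
          return math.sqrt(2.0 * k * thaw_index_c_days * 86400.0 / latent)

      # A warming scenario: thaw index rising from 800 to 1200 degree-days.
      for ti in (800, 1000, 1200):
          print(f"thaw index {ti:4d} C-days -> ALT = {stefan_alt(ti):.2f} m")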

  7. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.; Sheffler, K. D.

    1986-01-01

    The objective of this program is to establish a methodology to predict Thermal Barrier Coating (TBC) life on gas turbine engine components. The approach involves experimental life measurement coupled with analytical modeling of relevant degradation modes. The coating being studied is a flight qualified two layer system, designated PWA 264, consisting of a nominal ten mil layer of seven percent yttria partially stabilized zirconia plasma deposited over a nominal five mil layer of low pressure plasma deposited NiCoCrAlY. Thermal barrier coating degradation modes being investigated include: thermomechanical fatigue, oxidation, erosion, hot corrosion, and foreign object damage.

  8. Predictive modelling of boiler fouling. Final report.

    SciTech Connect

    Chatwani, A

    1990-12-31

    A spectral element method embodying Large Eddy Simulation based on Renormalization Group theory for the sub-grid-scale viscosity was chosen for this work. This method is embodied in a computer code called NEKTON. NEKTON solves the unsteady, 2D or 3D, incompressible Navier-Stokes equations by a spectral element method. The code was later extended to include variable density and multiple reactive species effects at low Mach numbers, and to compute transport of large particles governed by inertia. Transport of small particles is computed by treating them as trace species. Code computations were performed for a number of test conditions typical of flow past a deep tube bank in a boiler. Results indicate qualitatively correct behavior. Predictions of deposition rates and deposit shape evolution also show correct qualitative behavior. These simulations are the first attempts to compute flow field results at realistic flow Reynolds numbers of the order of 10^4. Code validation was not done; comparison with experiment also could not be made, as many phenomenological model parameters, e.g., sticking or erosion probabilities and their dependence on experimental conditions, were not known. The predictions nevertheless demonstrate the capability to predict fouling from first principles. Further work is needed: use of a large or massively parallel machine; code validation; parametric studies, etc.

  9. Estimating Loss-of-Coolant Accident Frequencies for the Standardized Plant Analysis Risk Models

    SciTech Connect

    S. A. Eide; D. M. Rasmuson; C. L. Atwood

    2008-09-01

    The U.S. Nuclear Regulatory Commission maintains a set of risk models covering the U.S. commercial nuclear power plants. These standardized plant analysis risk (SPAR) models include several loss-of-coolant accident (LOCA) initiating events such as small (SLOCA), medium (MLOCA), and large (LLOCA). All of these events involve a loss of coolant inventory from the reactor coolant system. In order to maintain a level of consistency across these models, initiating event frequencies generally are based on plant-type average performance, where the plant types are boiling water reactors and pressurized water reactors. For certain risk analyses, these plant-type initiating event frequencies may be replaced by plant-specific estimates. Frequencies for SPAR LOCA initiating events previously were based on results presented in NUREG/CR-5750, but the newest models use results documented in NUREG/CR-6928. The estimates in NUREG/CR-6928 are based on historical data from the initiating events database for pressurized water reactor SLOCA or an interpretation of results presented in the draft version of NUREG-1829. The information in NUREG-1829 can be used several ways, resulting in different estimates for the various LOCA frequencies. Various ways NUREG-1829 information can be used to estimate LOCA frequencies were investigated and this paper presents two methods for the SPAR model standard inputs, which differ from the method used in NUREG/CR-6928. In addition, results obtained from NUREG-1829 are compared with actual operating experience as contained in the initiating events database.
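
    A minimal sketch of a plant-specific Bayesian update of an initiating-event frequency, using the conjugate gamma-Poisson scheme common in PRA practice; the prior and the operating experience below are invented, not values from the NUREG reports.

      # Gamma(alpha, beta) prior on frequency; Poisson likelihood for observed events.
      alpha0, beta0 = 0.5, 1000.0   # hypothetical prior: mean 5e-4 per reactor-year
      events, ry = 1, 25            # hypothetical plant experience (events, reactor-years)

      alpha, beta = alpha0 + events, beta0 + ry
      print(f"prior mean     = {alpha0 / beta0:.2e} per reactor-year")
      print(f"posterior mean = {alpha / beta:.2e} per reactor-year")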

  10. Accident investigation

    NASA Technical Reports Server (NTRS)

    Laynor, William G. Bud

    1987-01-01

    The National Transportation Safety Board (NTSB) has identified wind shear as a cause or contributing factor in 15 accidents involving transport-category airplanes since 1970. Nine of these were nonfatal, but the other six accounted for 440 lives. Five of the fatal accidents and seven of the nonfatal accidents involved encounters with convective downbursts or microbursts. Of the other accidents, two which were nonfatal were encounters with a frontal system shear, and one which was fatal was the result of a terrain-induced wind shear. These accidents are discussed with reference to helping the pilot avoid the wind shear or, if that is impossible, to get through it.

  11. Predicting the long-term (137)Cs distribution in Fukushima after the Fukushima Dai-ichi nuclear power plant accident: a parameter sensitivity analysis.

    PubMed

    Yamaguchi, Masaaki; Kitamura, Akihiro; Oda, Yoshihiro; Onishi, Yasuo

    2014-09-01

    than those of the other rivers. Annual sediment outflows from the Abukuma River and the total from the other 13 river basins were calculated as 3.2 × 10^4 - 3.1 × 10^5 and 3.4 × 10^4 - 2.1 × 10^5 t y^-1, respectively. The values vary between calculation cases because of the critical shear stress, the rainfall factor, and other differences. On the other hand, contributions of those parameters were relatively small for the (137)Cs concentration within transported soil. This indicates that the total amount of (137)Cs outflow into the ocean would mainly be controlled by the amount of soil erosion and transport and the total amount of (137)Cs concentration remaining within the basin. Outflows of (137)Cs from the Abukuma River and the total from the other 13 river basins during the first year after the accident were calculated to be 2.3 × 10^11 - 3.7 × 10^12 and 4.6 × 10^11 - 6.5 × 10^12 Bq y^-1, respectively. The former results were compared with the field investigation results, and the order of magnitude was matched between the two, but the value of the investigation result was beyond the upper limit of model prediction. PMID:24836353

  12. Modeling Reef Hydrodynamics to Predict Coral Bleaching

    NASA Astrophysics Data System (ADS)

    Bird, James; Steinberg, Craig; Hardy, Tom

    2005-11-01

    The aim of this study is to use environmental physics to predict water temperatures around and within coral reefs. Anomalously warm water is the leading cause for mass coral bleaching; thus a clearer understanding of the oceanographic mechanisms that control reef water temperatures will enable better reef management. In March 1998 a major coral bleaching event occurred at Scott Reef, a 40 km-wide lagoon 300 km off the northwest coast of Australia. Meteorological and coral cover observations were collected before, during, and after the event. In this study, two hydrodynamic models are applied to Scott Reef and validated against oceanographic data collected between March and June 2003. The models are then used to hindcast the reef hydrodynamics that led up to the 1998 bleaching event. Results show a positive correlation between poorly mixed regions and bleaching severity.

  13. Audibility-based annoyance prediction modeling

    NASA Astrophysics Data System (ADS)

    Fidell, Sanford; Finegold, Lawrence S.

    1992-04-01

    The effects of rapid onset times and high absolute sound pressures near military training routes (MTRs), including possible startle effects and increased annoyance due to the unpredictable nature of these flights, have been of longstanding concern. A more recent concern is the possibility of increased annoyance due to low ambient noise levels near military flight training operations and differences in expectations about noise exposure in high and low population density areas. This paper describes progress in developing audibility-based methods for predicting the annoyance of noise produced at some distance from aircraft flight tracks. Audibility-based models that take into account near-ground acoustic propagation and ambient noise levels may be useful in assessing environmental impacts of MTRs and Military Operating Areas (MOAs) under some conditions. A prototype Single Event Annoyance Prediction Model (SEAPM) has been developed under USAF sponsorship as an initial effort to address these issues, and work has progressed on a geographic information system (GIS) to produce cartographically referenced representations of aircraft audibility.

  14. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J.; Sheffler, K.

    1984-01-01

    The objective of this program is to develop an integrated life prediction model accounting for all potential life-limiting Thermal Barrier Coating (TBC) degradation and failure modes, including spallation resulting from cyclic thermal stress, oxidative degradation, hot corrosion, erosion, and foreign object damage (FOD). The mechanisms and relative importance of the various degradation and failure modes will be determined, and the methodology to predict predominant-mode failure life in turbine airfoil applications will be developed and verified. An empirically based correlative model relating coating life to parametrically expressed driving forces such as temperature and stress will be employed. The two-layer TBC system being investigated, designated PWA264, is currently in commercial aircraft revenue service. It consists of an inner low-pressure-chamber plasma-sprayed NiCoCrAlY metallic bond coat underlayer (4 to 6 mils) and an outer air plasma-sprayed 7 w/o Y2O3-ZrO2 ceramic top layer (8 to 12 mils).

  15. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Sheffler, K. D.; Demasi, J. T.

    1985-01-01

    A methodology was established to predict thermal barrier coating life in an environment simulating that experienced by gas turbine airfoils. Specifically, work is being conducted to determine the failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction models for the important failure modes. An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of driving forces such as temperature, thermal cycle frequency, environment, and coating thickness on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low-cycle-rate furnace testing in air and in argon clearly shows a dramatic increase in spalling life in the non-oxidizing environment.

  16. A predictive fitness model for influenza

    NASA Astrophysics Data System (ADS)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
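
    As a hedged illustration of the prediction step such a fitness model implies, the sketch below propagates strain frequencies one season forward by reweighting them with an exponential fitness term built from epitope and non-epitope mutation counts; the coefficients and strain data are hypothetical, not the fitted values from the paper.

    ```python
    import math

    # Sketch of the core prediction step of a strain fitness model: next-year
    # frequencies are current frequencies reweighted by exp(fitness), then
    # renormalised. Coefficients and mutation counts are hypothetical.

    S_EP, S_NE = 0.5, 0.25   # assumed benefit per epitope change / cost per other change

    def predict_frequencies(strains):
        """strains: dict name -> (frequency, epitope_muts, non_epitope_muts)."""
        weighted = {name: x * math.exp(S_EP * ep - S_NE * ne)
                    for name, (x, ep, ne) in strains.items()}
        z = sum(weighted.values())
        return {name: w / z for name, w in weighted.items()}

    strains = {"A": (0.6, 1, 0), "B": (0.3, 3, 2), "C": (0.1, 0, 1)}
    print(predict_frequencies(strains))   # strain B gains at the others' expense
    ```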

  17. Key factors contributing to accident severity rate in construction industry in Iran: a regression modelling approach.

    PubMed

    Soltanzadeh, Ahmad; Mohammadfam, Iraj; Moghimbeigi, Abbas; Ghiasvand, Reza

    2016-03-01

    The construction industry involves the highest risk of occupational accidents and bodily injuries, which range from mild to very severe. The aim of this cross-sectional study was to identify the factors associated with the accident severity rate (ASR) in the largest Iranian construction companies, based on data from 500 occupational accidents recorded from 2009 to 2013. We also gathered data on safety and health risk management and training systems. Data were analysed using Pearson's chi-squared coefficient and multiple regression analysis. The median ASR (and interquartile range) was 107.50 (57.24-381.25). Fourteen of the 24 studied factors stood out as most affecting construction accident severity (p<0.05). These findings can be applied in the design and implementation of a comprehensive safety and health risk management system to reduce the ASR. PMID:27092639

  18. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  19. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    PubMed

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means of carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates, but limited attention has been directed to the practical consequences of these choices. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology. PMID:23687472

  20. Data driven propulsion system weight prediction model

    NASA Technical Reports Server (NTRS)

    Gerth, Richard J.

    1994-01-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since engine weight is known to vary with thrust level, a model is required that can discriminate between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available, and it matches the level of detail available for most paper engines, prior to detailed component design.
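
    A minimal sketch of this kind of statistical weight model is given below: a log-log least-squares regression of engine weight on thrust and one other component parameter, so that engines with equal thrust can still be discriminated. The engine data and the choice of regressors are invented for illustration.

    ```python
    import numpy as np

    # Sketch of a data-driven weight model: regress engine weight on performance
    # parameters across existing engines, then predict a paper engine.
    # Columns: thrust (kN), chamber pressure (bar); response: weight (kg).
    X = np.array([[1000, 70], [1700, 100], [2100, 200], [750, 40], [1500, 150]], float)
    w = np.array([3200, 4500, 3500, 2900, 3800], float)

    # Fit log(weight) = b0 + b1*log(thrust) + b2*log(pc) by ordinary least squares.
    A = np.column_stack([np.ones(len(X)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(w), rcond=None)

    paper_engine = np.array([1800, 120], float)   # hypothetical new design
    pred = np.exp(coef[0] + coef[1:] @ np.log(paper_engine))
    print(f"predicted weight: {pred:.0f} kg")
    ```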

  1. Modeling and sensitivity analysis of transport and deposition of radionuclides from the Fukushima Dai-ichi accident

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, D.; Huang, H.; Shen, S.; Bou-Zeid, E.

    2014-10-01

    The atmospheric transport and ground deposition of the radioactive isotopes ¹³¹I and ¹³⁷Cs during and after the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident (March 2011) are investigated using the Weather Research and Forecasting-Chemistry (WRF-Chem) model. The aim is to assess the skill of WRF in simulating these processes and the sensitivity of the model's performance to various parameterizations of unresolved physics. The WRF-Chem model is first upgraded by implementing a radioactive decay term in the advection-diffusion solver and adding three parameterizations for dry deposition and two for wet deposition. Different microphysics and horizontal turbulent diffusion schemes are then tested for their ability to reproduce observed meteorological conditions. Subsequently, the influence of emission characteristics (including the emission rate, the gas partitioning of ¹³¹I, and the size distribution of ¹³⁷Cs) on the simulated transport and deposition is examined. The results show that the model can predict the wind fields and rainfall realistically and that the ground deposition of the radionuclides can also be captured reasonably well. The modeled precipitation is largely influenced by the microphysics schemes, while the influence of the horizontal diffusion schemes on the wind fields is subtle. However, the ground deposition of radionuclides is sensitive to both the horizontal diffusion and microphysics schemes. Wet deposition dominated over dry deposition at most of the observation stations, but not at all locations in the simulated domain. To assess the sensitivity of the total daily deposition to all of the model physics and inputs, the averaged absolute value of the difference (AAD) is proposed. Based on AAD, the total deposition is mainly influenced by the emission rate for both ¹³¹I and ¹³⁷Cs, while it is not sensitive to the dry deposition parameterizations, since dry deposition is only a minor fraction of the total deposition.
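
    To illustrate the first code modification described (a radioactive decay term added to an advection-diffusion solver), the sketch below uses operator splitting: a finite-difference transport step followed by an exact exponential decay sub-step. The grid, wind, and diffusivity values are placeholders, not WRF-Chem internals.

    ```python
    import numpy as np

    HALF_LIFE_I131 = 8.02 * 24 * 3600.0        # seconds
    LAM = np.log(2.0) / HALF_LIFE_I131         # decay constant (1/s)

    def step(c, u, dx, dt, kdiff):
        """One upwind advection + diffusion + decay step on a periodic 1D grid."""
        adv = -u * (c - np.roll(c, 1)) / dx                          # upwind, u > 0
        diff = kdiff * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + diff)
        return c * np.exp(-LAM * dt)                                 # exact decay sub-step

    c = np.zeros(100); c[10] = 1.0             # initial point release
    for _ in range(500):
        c = step(c, u=5.0, dx=1000.0, dt=60.0, kdiff=50.0)
    print(f"remaining mass fraction: {c.sum():.4f}")
    ```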

  2. Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors

    NASA Astrophysics Data System (ADS)

    Seguin, Nicolas

    2014-05-01

    In view of the simulation of water flows in pressurized water reactors (PWR), many models are available in the literature, and their complexity depends greatly on the required accuracy; see for instance [1]. A loss-of-coolant accident (LOCA) may occur when a pipe breaks. The coolant is light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar); in a LOCA it flashes and becomes vapor almost instantaneously. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose models that are accurate for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Given the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes each phase separately by its own density, velocity, and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force, and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still; see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical
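
    For reference, a schematic 1D form of a Baer-Nunziato-type seven-equation model is written out below; the closure laws and the detailed source terms are omitted, and the notation is assumed rather than taken from the cited works.

    ```latex
    % Schematic 1D Baer--Nunziato-type model (phase k = 1,2), with interfacial
    % pressure p_I and velocity u_I; relaxation, drag, and mass-transfer terms
    % are gathered in the right-hand sides Gamma_k, S_k^u, S_k^E.
    \begin{align*}
    \partial_t \alpha_1 + u_I\,\partial_x \alpha_1 &= \mu\,(p_1 - p_2),\\
    \partial_t(\alpha_k\rho_k) + \partial_x(\alpha_k\rho_k u_k) &= \Gamma_k,\\
    \partial_t(\alpha_k\rho_k u_k) + \partial_x\!\left(\alpha_k\rho_k u_k^2 + \alpha_k p_k\right)
      &= p_I\,\partial_x\alpha_k + S_k^{u},\\
    \partial_t(\alpha_k\rho_k E_k) + \partial_x\!\left(\alpha_k u_k(\rho_k E_k + p_k)\right)
      &= p_I\,u_I\,\partial_x\alpha_k + S_k^{E},
    \end{align*}
    \text{with } \alpha_1 + \alpha_2 = 1.
    ```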

  3. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a lookup table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single one-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
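
    The sketch below mimics the described heuristic in code rather than a spreadsheet: a lookup table maps the solar flux index (and a ballistic-coefficient class) to maneuvers per month, and a simple engine model converts burns into fuel. All table entries and constants are invented for illustration, not TRMM values.

    ```python
    import bisect

    FLUX_EDGES = [100, 150, 200]                      # F10.7 bin edges (illustrative)
    BURNS_PER_MONTH = {"low_bc": [1, 2, 4, 7],        # one entry per flux bin
                       "high_bc": [2, 4, 8, 14]}
    FUEL_PER_BURN_KG = 1.8                            # simple engine model

    def fuel_used_kg(bc_class, monthly_flux):
        """Fuel burned over a sequence of monthly solar-flux predictions."""
        total = 0.0
        for flux in monthly_flux:
            i = bisect.bisect_right(FLUX_EDGES, flux)  # pick the flux bin
            total += BURNS_PER_MONTH[bc_class][i] * FUEL_PER_BURN_KG
        return total

    flux_forecast = [90, 120, 160, 210, 180, 140]      # six months of predictions
    print(f"fuel used: {fuel_used_kg('high_bc', flux_forecast):.1f} kg")
    ```

    Dividing the remaining fuel budget by such a monthly burn rate gives the lifetime estimate; revising the flux forecast only requires re-running the loop, which is the point of the heuristic approach.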

  4. Modeling and analysis of the unprotected loss-of-flow accident in the Clinch River Breeder Reactor

    SciTech Connect

    Morris, E.E.; Dunn, F.E.; Simms, R.; Gruber, E.E.

    1985-01-01

    The influence of fission-gas-driven fuel compaction on the energetics resulting from a loss-of-flow accident was estimated with the aid of the SAS3D accident analysis code. The analysis was carried out as part of the Clinch River Breeder Reactor licensing process. The TREAT tests L6, L7, and R8 were analyzed to assist in the modeling of fuel motion and the effects of plenum fission-gas release on coolant and clad dynamics. Special, conservative modeling was introduced to evaluate the effect of fission-gas pressure on the motion of the upper fuel pin segment following disruption. For the nominal sodium-void worth, fission-gas-driven fuel compaction did not adversely affect the outcome of the transient. When uncertainties in the sodium-void worth were considered, however, it was found that if fuel compaction occurs, loss-of-flow driven transient overpower phenomenology could not be precluded.

  5. Estimating the magnitude of prediction uncertainties for the APLE model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  6. WASTE-ACC: A computer model for analysis of waste management accidents

    SciTech Connect

    Nabelssi, B.K.; Folga, S.; Kohout, E.J.; Mueller, C.J.; Roglans-Ribas, J.

    1996-12-01

    In support of the U.S. Department of Energy's (DOE's) Waste Management Programmatic Environmental Impact Statement, Argonne National Laboratory has developed WASTE-ACC, a computational framework and integrated PC-based database system, to assess atmospheric releases from facility accidents. WASTE-ACC facilitates the many calculations for the accident analyses necessitated by the numerous combinations of waste types, waste management process technologies, facility locations, and site consolidation strategies in the waste management alternatives across the DOE complex. WASTE-ACC is a comprehensive tool that can effectively test future DOE waste management alternatives and assumptions. The computational framework can access several relational databases to calculate atmospheric releases. The databases contain throughput volumes, waste profiles, treatment process parameters, and accident data such as frequencies of initiators, conditional probabilities of subsequent events, and source term release parameters of the various waste forms under accident stresses. This report describes the computational framework and supporting databases used to conduct accident analyses and to develop source terms to assess potential health impacts that may affect on-site workers and off-site members of the public under various DOE waste management alternatives.

  7. Multi-scale approach to the modeling of fission gas discharge during hypothetical loss-of-flow accident in gen-IV sodium fast reactor

    SciTech Connect

    Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.; Jansen, K. E.; Antal, S. P.; Podowski, M. Z.

    2012-07-01

    The required technological and safety standards for future Gen IV reactors can only be achieved if advanced simulation capabilities become available that combine high-performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten-sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on two inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the Level Set method, have been performed with the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding-window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident. (authors)

  8. Model predictive control of a wind turbine modelled in Simpack

    NASA Astrophysics Data System (ADS)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are increasingly extended by load alleviation strategies. These additional control loops can be unified in a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention from the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings, with a planetary gearbox in which all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine to
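
    A minimal receding-horizon MPC sketch in the spirit described above is shown below, using a generic two-state linear model with a quadratic cost and an actuator constraint (solved here with the cvxpy modelling package); it is a stand-in illustration under assumed dynamics, not the IME6.0 turbine controller.

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy linear model x+ = A x + B u; values are illustrative, not a turbine.
    A = np.array([[1.0, 0.1], [0.0, 0.98]])
    B = np.array([[0.0], [0.1]])
    N = 20                                   # prediction horizon
    x0 = np.array([1.0, 0.0])                # current state estimate

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= 1.0]          # actuator limit
    cp.Problem(cp.Minimize(cost), constraints).solve()

    # Receding horizon: apply only the first move, then re-solve next step.
    print("first control move:", u.value[:, 0])
    ```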

  9. Molecular structures and thermodynamic properties of monohydrated gaseous iodine compounds: Modelling for severe accident simulation

    NASA Astrophysics Data System (ADS)

    Sudolská, Mária; Cantrel, Laurent; Budzák, Šimon; Černušák, Ivan

    2014-03-01

    Monohydrated complexes of iodine species (I, I2, HI, and HOI) have been studied by correlated ab initio calculations. The standard enthalpies of formation, Gibbs free energies, and the temperature dependence of the heat capacities at constant pressure were calculated. The values obtained have been implemented in the ASTEC nuclear accident simulation software to check the thermodynamic stability of hydrated iodine compounds in the reactor coolant system and in the nuclear containment building of a pressurised water reactor during a severe accident. It can be concluded that the iodine complexes are thermodynamically unstable, as indicated by their positive Gibbs free energies, and would be present only at trace concentrations in severe accident conditions; it is thus well justified to consider only pure iodine species and not hydrated forms.

  10. A two-stage optimization model for emergency material reserve layout planning under uncertainty in response to environmental accidents.

    PubMed

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Liu, Rentao; Wang, Peng

    2016-06-01

    In emergency management relevant to pollution accidents, the efficiency of emergency rescue can be strongly influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, a two-stage optimization framework is developed for emergency material reserve layout planning under uncertainty, to identify material warehouse locations and emergency material reserve schemes in the pre-accident phase in preparation for potential environmental accidents. The framework integrates a hierarchical clustering analysis-improved center of gravity (HCA-ICG) model with a material warehouse location-emergency material allocation (MWL-EMA) model. First, decision alternatives are generated using HCA-ICG to identify newly built emergency material warehouses for risk sources that cannot be served by existing ones in a time-effective manner. Second, an emergency material reserve plan is obtained using MWL-EMA so that emergency materials are prepared in advance in a cost-effective manner. The optimization framework is then applied to emergency management system planning in Jiangsu province, China. The results demonstrate that the developed framework not only facilitates material warehouse selection but also effectively provides emergency materials for a quick response in emergency operations. PMID:26897572

  11. Predictability of the Indian Ocean Dipole in the coupled models

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2016-06-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
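
    Since the skill threshold above is defined through the anomaly correlation coefficient, a short sketch of that measure is given below; the forecast and observation series are synthetic.

    ```python
    import numpy as np

    def acc(forecast, observed):
        """Anomaly correlation coefficient of two anomaly time series."""
        f = forecast - forecast.mean()
        o = observed - observed.mean()
        return (f @ o) / np.sqrt((f @ f) * (o @ o))

    rng = np.random.default_rng(0)
    obs = rng.standard_normal(120)                        # "observed" DMI anomalies
    fcst = 0.6 * obs + 0.8 * rng.standard_normal(120)     # imperfect forecast
    skill = acc(fcst, obs)
    print(f"ACC = {skill:.2f} -> {'useful' if skill > 0.5 else 'not useful'}")
    ```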

  12. Nonconvex model predictive control for commercial refrigeration

    NASA Astrophysics Data System (ADS)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor and are used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in five or fewer iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.

  13. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it has become useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of the popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller, and reliable ways of handling process constraints. Each of these is treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.

  14. Towards a Predictive Model of Elastomer seals

    NASA Astrophysics Data System (ADS)

    Khawaja, Musab; Mostofi, Arash; Sutton, Adrian; Stevens, John

    2014-03-01

    Elastomers are a highly versatile class of material. Their diversity of technological application is enabled by the fact that their properties may be tuned through manipulation of their constituent building blocks at multiple length-scales. These scales range from the chemical groups within individual monomers, to the overall morphology on the mesoscale, as well as through compounding with other materials. An important use of elastomers is in seals for mechanical components. Ideally, such seals should act as impermeable barriers to gases and liquids, preventing contamination and damage to equipment. Elastomer failure, therefore, can be extremely costly and is a matter of great importance to industry. The question at the centre of this work relates to the failure of elastomer seals via explosive decompression. This mechanism is a result of permeation of gas molecules through the seals at high pressures, and their subsequent rapid egress upon removal of the elevated pressures. The goal is to develop a model to better understand and predict the structure, porosity and transport of molecular species through elastomer seals, with a view to elucidating general design principles that will inform the development of higher performance materials.

  15. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.

    1985-01-01

    This is the first report on the first phase of a 3-year program. Its objectives are to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The first task (Task I) is to determine the major failure mechanisms. Presently, bond coat oxidation and bond coat creep are being evaluated as potential TBC failure mechanisms. The baseline TBC system consists of an air plasma-sprayed ZrO2-Y2O3 top coat, a low-pressure plasma-sprayed NiCrAlY bond coat, and a Rene' 80 substrate. Pre-exposures in air and argon combined with thermal cycle tests in air and argon are being used to evaluate bond coat oxidation as a failure mechanism. Unexpectedly, in subsequent thermal cycle testing in air, the specimens pre-exposed in argon failed before the specimens pre-exposed in air. Four bond coats with different creep strengths are being used to evaluate the effect of bond coat creep on TBC degradation. These bond coats received an aluminide overcoat prior to application of the top coat to reduce differences in bond coat oxidation behavior. Thermal cycle testing has been initiated. Methods have been selected for measuring the tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion of both the bond coat and top coat layers.

  16. Predictive models for moving contact line flows

    NASA Technical Reports Server (NTRS)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of a Newtonian fluid and the no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition needed to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible two-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low-capillary-number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows the parameter to be "transferred" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately, this prediction of the theory cannot be tested on Earth.

  17. Radiation risk models for all solid cancers other than those types of cancer requiring individual assessments after a nuclear accident.

    PubMed

    Walsh, Linda; Zhang, Wei

    2016-03-01

    In the assessment of health risks after nuclear accidents, some health consequences require special attention. For example, in their 2013 report on health risk assessment after the Fukushima nuclear accident, the World Health Organisation (WHO) panel of experts considered risks of breast cancer, thyroid cancer and leukaemia. For these specific cancer types, use was made of already published excess relative risk (ERR) and excess absolute risk (EAR) models for radiation-related cancer incidence fitted to the epidemiological data from the Japanese A-bomb Life Span Study (LSS). However, it was also considered important to assess all other types of solid cancer together and the WHO, in their above-mentioned report, stated "No model to calculate the risk for all other solid cancer excluding breast and thyroid cancer risks is available from the LSS data". Applying the LSS models for all solid cancers along with the models for the specific sites means that some cancers have an overlap in the risk evaluations. Thus, calculating the total solid cancer risk plus the breast cancer risk plus the thyroid cancer risk can overestimate the total risk by several per cent. Therefore, the purpose of this paper was to publish the required models for all other solid cancers, i.e. all solid cancers other than those types of cancer requiring special attention after a nuclear accident. The new models presented here have been fitted to the same LSS data set from which the risks provided by the WHO were derived. Although it is known already that the EAR and ERR effect modifications by sex are statistically significant for the outcome "all solid cancer", it is shown here that sex modification is not statistically significant for the outcome "all solid cancer other than thyroid and breast cancer". It is also shown here that the sex-averaged solid cancer risks with and without the sex modification are very similar once breast and thyroid cancers are factored out. Some other notable model

  18. Persistence of airline accidents.

    PubMed

    Barros, Carlos Pestana; Faria, Joao Ricardo; Gil-Alana, Luis Alberiko

    2010-10-01

    This paper expands on air travel accident research by examining the relationship between air travel accidents and airline traffic or volume in the period 1927-2006. The theoretical model is based on a representative airline company that aims to maximise its profits, and it utilises a fractional integration approach to determine whether there is a persistent pattern over time with respect to air accidents and air traffic. Furthermore, the paper analyses how airline accidents are related to traffic using a fractional cointegration approach. It finds that airline accidents are persistent and that a (non-stationary) fractional cointegration relationship exists between total airline accidents and airline passengers, airline miles and airline revenues, with shocks that affect the long-run equilibrium disappearing only in the very long term. Moreover, this relation is negative, which might be due to the fact that air travel is becoming safer and that there is greater competition in the airline industry. Policy implications are derived for countering accident events, based on competition and regulation. PMID:20618386
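
    Schematically, the fractional integration framework referred to above models the accident series with a fractional differencing operator; a sketch of the standard notation (assumed, not copied from the paper) is:

    ```latex
    % The accident series y_t is I(d), with L the lag operator and u_t a
    % short-memory disturbance; (1-L)^d is defined by its binomial expansion.
    (1 - L)^{d}\, y_t = u_t, \qquad
    (1 - L)^{d} = \sum_{j=0}^{\infty} \binom{d}{j} (-L)^{j}.
    % 0 < d < 1 gives long-memory persistence: shocks decay hyperbolically
    % rather than exponentially. Fractional cointegration means a linear
    % combination of the accident and traffic series has a lower integration
    % order than the individual series.
    ```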

  19. Modeling of long range transport pathways for radionuclides to Korea during the Fukushima Dai-ichi nuclear accident and their association with meteorological circulations.

    PubMed

    Lee, Kwan-Hee; Kim, Ki-Hyun; Lee, Jin-Hong; Yun, Ju-Yong; Kim, Cheol-Hee

    2015-10-01

    The Lagrangian FLEXible PARTicle (FLEXPART) dispersion model and National Centers for Environmental Prediction/Global Forecast System (NCEP/GFS) meteorological data were used to simulate the long-range transport pathways of three artificial radionuclides, ¹³¹I, ¹³⁷Cs, and ¹³³Xe, arriving at the Korean Peninsula during the Fukushima Dai-ichi nuclear accident. Using emission rates of these radionuclides estimated in previous studies, three distinct transport routes toward the Korean Peninsula for the period from 10 March to 20 April 2011 were identified, at three spatial scales: 1) intercontinental scale - a plume released from mid-March 2011 and transported northward, arriving in Korea on 23 March 2011; 2) global (hemispheric) scale - a plume traveling over the whole northern hemisphere, passing over the Pacific Ocean and Europe to reach the Korean Peninsula at relatively low concentrations in late March 2011; and 3) regional scale - a plume released in early April 2011 that arrived at the Korean Peninsula via the sea southwest of Japan, directly influenced by veering mesoscale wind circulations. Our identification of these transport routes at three different scales of meteorological circulation suggests the feasibility of a multi-scale approach for more accurate prediction of radionuclide transport in the study area. Given that the observed arrival times and durations of the peaks were explained well by the FLEXPART model coupled with NCEP/GFS input data, our approach can serve as a decision support model for radiation emergency situations. PMID:26149179

  20. Mapping and modelling of radionuclide distribution on the ground due to the Fukushima accident.

    PubMed

    Saito, Kimiaki

    2014-08-01

    A large-scale environmental monitoring effort, construction of detailed contamination maps based on the monitoring data, studies on radiocaesium migration in natural environments, construction of a prediction model for the air dose rate distribution in the 80 km zone, and construction of a database to preserve the obtained data and keep them openly available have been implemented as national projects. Temporal changes in contamination conditions were analysed. It was found that air dose rates above roads have decreased much faster than those above undisturbed flat fields. Further, the decreasing tendency was found to depend on land use, the magnitude of the initial dose rate, and some other factors. PMID:24695555

  1. The myth of science-based predictive modeling.

    SciTech Connect

    Hemez, F. M.

    2004-01-01

    A key aspect of science-based predictive modeling is the assessment of prediction credibility. This publication argues that the credibility of a family of models and their predictions must combine three components: (1) the fidelity of predictions to test data; (2) the robustness of predictions to variability, uncertainty, and lack-of-knowledge; and (3) the prediction accuracy of models in cases where measurements are not available. Unfortunately, these three objectives are antagonistic. A recently published Theorem that demonstrates the irrevocable trade-offs between fidelity-to-data, robustness-to-uncertainty, and confidence in prediction is summarized. High-fidelity models cannot be made increasingly robust to uncertainty and lack-of-knowledge. Similarly, robustness-to-uncertainty can only be improved at the cost of reducing the confidence in prediction. The concept of confidence in prediction relies on a metric for total uncertainty, capable of aggregating different representations of uncertainty (probabilistic or not). The discussion is illustrated with an engineering application where a family of models is developed to predict the acceleration levels obtained when impacts of varying levels propagate through layers of crushable hyper-foam material of varying thicknesses. Convex modeling is invoked to represent a severe lack-of-knowledge about the constitutive material behavior. The analysis produces intervals of performance metrics from which the total uncertainty and confidence levels are estimated. Finally, performance, robustness and confidence are extrapolated throughout the validation domain to assess the predictive power of the family of models away from tested configurations.

  2. Health effects models for nuclear power plant accident consequence analysis. Part 1, Introduction, integration, and summary: Revision 2

    SciTech Connect

    Evans, J.S.; Abrahmson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-10-01

    This report is a revision of NUREG/CR-4214, Rev. 1, Part 1 (1990), Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. The revision incorporates changes to the health effects models recommended in two addenda to the NUREG/CR-4214, Rev. 1, Part II (1989) report. The first of these addenda provided recommended changes to the health effects models for low-LET radiations based on recent reports from UNSCEAR, ICRP and NAS/NRC (BEIR V). The second addendum presented changes needed to incorporate alpha-emitting radionuclides into the accident exposure source term. As in the earlier version of this report, models are provided for early and continuing effects, cancers and thyroid nodules, and genetic effects. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating the risks of seven types of cancer in adults: leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other". For most cancers, both incidence and mortality are addressed. Five classes of genetic diseases -- dominant, x-linked, aneuploidy, unbalanced translocations, and multifactorial diseases -- are also considered. Data are provided that should enable analysts to consider the timing and severity of each type of health risk.
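
    As a sketch of the recommended functional form for early and continuing effects, a common two-parameter Weibull dose-response parameterisation is written below; the symbols follow the usual D50/shape convention and are an assumption here, not necessarily the report's exact notation.

    ```latex
    % Weibull risk function for an early effect (schematic):
    R(d) = 1 - \exp\!\left[-\ln 2 \left(\frac{d}{D_{50}}\right)^{V}\right],
    % where d is the (dose-rate-adjusted) dose, D_{50} the dose producing the
    % effect in half of an exposed population, and V a shape factor controlling
    % the steepness of the dose-response curve. R(D_{50}) = 1/2 by construction.
    ```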

  3. Predicting Tenure Dynamics: Models Help Manage Tenure System.

    ERIC Educational Resources Information Center

    Strauss, Jon C.

    1997-01-01

    Presents three different, complementary statistical models for predicting faculty tenure dynamics, using data from Worcester Polytechnic Institute (Massachusetts). The difference equation model exactly describes future behavior but requires complete specification. The Markov-chain model can predict the full life-cycle of tenure from initial age…

  4. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  5. Source term estimation using air concentration measurements and a Lagrangian dispersion model - Experiments with pseudo and real cesium-137 observations from the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Chai, Tianfeng; Draxler, Roland; Stein, Ariel

    2015-04-01

    A transfer coefficient matrix (TCM) was created in a previous study using a Lagrangian dispersion model to provide plume predictions under different emission scenarios. The TCM estimates the contribution of each emission period to all sampling locations and can be used to estimate source terms by adjusting emission rates to match the model prediction with the measurements. In this paper, the TCM is used to formulate a cost functional that measures the differences between the model predictions and the actual air concentration measurements. The cost functional also includes a background term which adds the differences between a first guess and the updated emission estimates. Uncertainties of the measurements, as well as those of the first guess of the source terms, are both considered in the cost functional. In addition, a penalty term is added to enforce a smooth temporal change in the release rate. The method is first tested with pseudo observations generated using the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model at the same locations and times as the actual observations. The inverse estimation system is able to accurately recover the release rates and performs better than a direct solution using singular value decomposition (SVD). It is found that computing ln(c) differences between model and observations is better than using the original concentration c differences in the cost functional. The inverse estimation results are not sensitive to artificially introduced observational errors or different first guesses. To further test the method, daily average cesium-137 air concentration measurements around the globe from the Fukushima nuclear accident are used to estimate the release of the radionuclide. Compared with the latest estimates by Katata et al. (2014), the recovered release rates successfully capture the main temporal variations. When using subsets of the measured data, the inverse estimation method still manages to identify most of the
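
    A schematic of the cost functional described above, with assumed notation (x the emission-rate vector, x_b the first guess, y the measured concentrations, H the TCM mapping emissions to predicted concentrations, B and R the respective uncertainty covariances), is:

    ```latex
    J(x) = \underbrace{(x - x_b)^{\mathsf T} \mathbf{B}^{-1} (x - x_b)}_{\text{background term}}
         + \underbrace{\big(\ln Hx - \ln y\big)^{\mathsf T} \mathbf{R}^{-1} \big(\ln Hx - \ln y\big)}_{\text{model--data misfit (log concentrations)}}
         + \underbrace{\epsilon \sum_{t} \big(x_{t+1} - x_t\big)^2}_{\text{temporal smoothness penalty}}
    ```

    The log-concentration misfit reflects the paper's finding that ln(c) differences work better than raw concentration differences; the penalty weight shown here as a single scalar is a simplifying assumption.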

  6. Status report of advanced cladding modeling work to assess cladding performance under accident conditions

    SciTech Connect

    B.J. Merrill; Shannon M. Bragg-Sitton

    2013-09-01

    Scoping simulations performed using a severe accident code can be applied to investigate the influence of advanced materials on beyond-design-basis accident progression and to identify any existing code limitations. In 2012 an effort was initiated to develop a numerical capability for understanding the potential safety advantages that might be realized during severe accident conditions by replacing Zircaloy components in light water reactors (LWRs) with silicon carbide (SiC) components. To this end, a version of the MELCOR code, under development at Sandia National Laboratories in New Mexico (SNL/NM), was modified by substituting SiC for Zircaloy in the MELCOR reactor core oxidation and material properties routines. The modified version of MELCOR was benchmarked against available experimental data to ensure that present SiC oxidation theory in air and steam was correctly implemented in the code. Additional modifications were implemented in the code in 2013 to improve the specificity in defining components fabricated from non-standard materials. An overview of these modifications and the status of their implementation are summarized below.

  7. From Predictive Models to Instructional Policies

    ERIC Educational Resources Information Center

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
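
    As a concrete illustration of the mastery threshold policy mentioned above, the sketch below pairs it with one standard student model, Bayesian knowledge tracing; the parameter values and the threshold are illustrative assumptions, not values from the paper.

    ```python
    # Mastery threshold policy on top of Bayesian knowledge tracing (BKT).
    # All parameters are illustrative.
    P_LEARN, P_SLIP, P_GUESS = 0.15, 0.10, 0.20
    MASTERY_THRESHOLD = 0.95

    def bkt_update(p_mastery, correct):
        """Posterior P(mastered) after one observed response, then a learning step."""
        if correct:
            obs = p_mastery * (1 - P_SLIP)
            post = obs / (obs + (1 - p_mastery) * P_GUESS)
        else:
            obs = p_mastery * P_SLIP
            post = obs / (obs + (1 - p_mastery) * (1 - P_GUESS))
        return post + (1 - post) * P_LEARN

    p = 0.3                                  # prior probability of mastery
    for correct in [True, True, False, True, True]:
        p = bkt_update(p, correct)
        action = "advance" if p > MASTERY_THRESHOLD else "keep practicing"
        print(f"P(mastery) = {p:.3f} -> {action}")
    ```

    The policy consumes only the model's mastery estimate, which is the sense in which different policies require different properties from the student model.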

  8. Uncertainties propagation in the framework of a Rod Ejection Accident modeling based on a multi-physics approach

    SciTech Connect

    Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C.

    2012-07-01

    The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations that include uncertainty analysis. The present paper presents and applies this methodology to the analysis of an accidental situation such as a rod ejection accident (REA). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior, and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response from the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high-order sensitivity indices and thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)

  9. Physical model to predict the ball-burnishing forces

    NASA Astrophysics Data System (ADS)

    González-Rojas, H. A.; Travieso-Rodríguez, J. A.

    2012-04-01

    In this paper, we develop a physical model to predict the forces in ball burnishing. The model is constructed on the basis of plasticity theory. During the model development we identified a dimensionless number B that characterizes the problem of plastic deformation in ball burnishing. Experiments performed on steel and aluminium validate the model and confirm that it correctly predicts the observed patterns of behavior.

  10. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  11. NUMERICAL MODELS FOR PREDICTING WATERSHED ACIDIFICATION

    EPA Science Inventory

    Three numerical models of watershed acidification, including the MAGIC II, ETD, and ILWAS models, are reviewed, and a comparative study is made of the specific process formulations that are incorporated in the models to represent hydrological, geochemical, and biogeochemical proc...

  12. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  13. Sweat loss prediction using a multi-model approach

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojiang; Santee, William R.

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of the sweat loss predicted by two existing thermoregulation models: the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of the mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, whereas differences between observations and SCENARIO or HSDA predictions were significant for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will evaluate MMA using additional physiological data to expand the scope of populations and conditions.
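
    The core of the MMA is simple enough to sketch directly: average the two model predictions and compare RMSD against observations. The prediction and observation values below are placeholders, not outputs of SCENARIO or HSDA.

    ```python
    import numpy as np

    def rmsd(pred, obs):
        return float(np.sqrt(np.mean((pred - obs) ** 2)))

    obs = np.array([450.0, 610.0, 380.0, 720.0])        # observed sweat loss (g/h)
    scenario = np.array([500.0, 560.0, 430.0, 650.0])   # rational-model predictions
    hsda = np.array([410.0, 680.0, 330.0, 790.0])       # empirical-model predictions

    mma = (scenario + hsda) / 2.0                       # multi-model average
    for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
        print(f"{name:8s} RMSD = {rmsd(pred, obs):.1f} g/h")
    ```

    Averaging tends to cancel the two models' opposite-signed errors, which is the intuition behind the reported 30-39% RMSD reduction.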

  14. Inference on biological mechanisms using an integrated phenotype prediction model.

    PubMed

    Enomoto, Yumi; Ushijima, Masaru; Miyata, Satoshi; Matsuura, Masaaki; Ohtaki, Megu

    2008-03-01

    We propose a methodology for constructing an integrated phenotype prediction model that accounts for multiple pathways regulating a targeted phenotype. The method uses multiple prediction models, each expressing a particular pattern of gene-to-gene interrelationship, such as epistasis. We also propose a methodology using Gene Ontology annotations to infer a biological mechanism from the integrated phenotype prediction model. To construct the integrated models, we employed multiple logistic regression models using a two-step learning approach to examine a number of patterns of gene-to-gene interrelationships. We first selected individual prediction models with acceptable goodness of fit, and then combined the models. The resulting integrated model predicts phenotype as a logical sum of predicted results from the individual models. We used published microarray data on neuroblastoma from Ohira et al (2005) for illustration, constructing an integrated model to predict prognosis and infer the biological mechanisms controlling prognosis. Although the resulting integrated model comprised a small number of genes compared to a previously reported analysis of these data, the model demonstrated excellent performance, with an error rate of 0.12 in a validation analysis. Gene Ontology analysis suggested that prognosis of patients with neuroblastoma may be influenced by biological processes such as cell growth, G-protein signaling, phosphoinositide-mediated signaling, alcohol metabolism, glycolysis, neurophysiological processes, and catecholamine catabolism. PMID:18578362

  15. Autonomous formation flight of helicopters: Model predictive control approach

    NASA Astrophysics Data System (ADS)

    Chung, Hoam

    Formation flight is the primary movement technique for teams of helicopters. However, the potential for accidents is greatly increased when helicopter teams are required to fly in tight formations and under harsh conditions. This dissertation proposes that the automation of helicopter formations is a realistic solution capable of alleviating these risks. Helicopter formation flight operations in battlefield situations are highly dynamic and dangerous, and we therefore maintain that both a high-level formation management system and a distributed coordinated control algorithm should be implemented to help ensure safe formations. The starting point for safe autonomous formation flight is to design a distributed control law attenuating external disturbances coming into a formation, so that each vehicle can safely maintain sufficient clearance between itself and all other vehicles. While conventional methods are limited to homogeneous formations, our decentralized model predictive control (MPC) approach allows for heterogeneity in a formation. In order to avoid the conservative nature inherent in distributed MPC algorithms, we begin by designing a stable MPC for individual vehicles and then introduce carefully designed inter-agent coupling terms in a performance index. Thus the proposed algorithm works in a decentralized manner and can be applied to the problem of helicopter formations comprised of heterogeneous vehicles. Individual vehicles in a team may be confronted by various emerging situations that require the capability for in-flight reconfiguration. We propose the concept of a formation manager to manage separation, joining, and synchronization of flight course changes. The formation manager accepts an operator's commands, information from neighboring vehicles, and its own vehicle states. Inside the formation manager, there are multiple modes and complex mode switchings represented as a finite state machine (FSM). Based on the current mode and collected

  16. Comparing prediction models for radiographic exposures

    NASA Astrophysics Data System (ADS)

    Ching, W.; Robinson, J.; McEntee, M. F.

    2015-03-01

    During radiographic exposures the milliampere-seconds (mAs), kilovoltage peak (kVp) and source-to-image distance (SID) can be adjusted for variations in patient thickness. Several exposure adjustment systems have been developed to assist with this selection. This study compares the accuracy of four systems in predicting the required mAs for pelvic radiographs taken on a direct digital radiography (DDR) system. Sixty radiographs were obtained by adjusting the mAs to compensate for varying combinations of SID, kVp and patient thickness. The 25% rule, the DuPont Bit System and the DigiBit system were compared to determine which of these three most accurately predicted the mAs required for an increase in patient thickness. Similarly, the 15% rule, the DuPont Bit System and the DigiBit system were compared for an increase in kVp. The exposure index (EI) was used as an indication of exposure to the DDR. For each exposure combination the mAs was adjusted until an EI of 1500 ± 2% was achieved. The 25% rule was the most accurate at predicting the mAs required for an increase in patient thickness, with 53% of the mAs predictions correct. The DigiBit system was the most accurate at predicting the mAs needed for changes in kVp, with 33% of predictions correct. This study demonstrated that the 25% rule and the DigiBit system were the most accurate predictors of the mAs required for an increase in patient thickness and kVp, respectively. The DigiBit system worked well in both scenarios as it is a single exposure adjustment system that considers a variety of exposure factors.
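
    The exposure-adjustment rules compared here have simple closed forms; a sketch of the standard 25% and 15% rules under the usual textbook assumptions (values illustrative):

```python
import math

def mas_for_thickness(mas, delta_cm):
    """25% rule: raise mAs by 25% for each 1 cm increase in part thickness."""
    return mas * 1.25 ** delta_cm

def mas_for_kvp(mas, old_kvp, new_kvp):
    """15% rule: a 15% rise in kVp roughly doubles receptor exposure,
    so the mAs is halved to keep exposure constant (and vice versa)."""
    steps = math.log(new_kvp / old_kvp, 1.15)  # number of 15% steps
    return mas / 2.0 ** steps

print(mas_for_thickness(20.0, 4.0))   # part 4 cm thicker -> ~48.8 mAs
print(mas_for_kvp(20.0, 70.0, 80.5))  # one 15% kVp step -> 10 mAs
```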

  17. Supercomputer predictive modeling for ensuring space flight safety

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Smirnov, N. N.; Nikitin, V. F.

    2015-04-01

    Development of new types of rocket engines, as well as upgrading of existing engines, needs computer-aided design and mathematical tools for supercomputer modeling of all the basic processes of mixing, ignition, combustion and outflow through the nozzle. Even small upgrades and changes introduced into existing rocket engines without proper simulation have caused severe accidents at launch sites, as witnessed recently. The paper presents the results of developing, verifying and validating a computer code that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  18. Predicting Historical Droughts in the US With a Multi-model Seasonal Hydrologic Prediction System

    NASA Astrophysics Data System (ADS)

    Luo, L.; Wood, E.; Sheffield, J.; Li, H.

    2008-12-01

    Droughts are as much a part of weather and climate extremes as floods, hurricanes and tornadoes, but they are the most costly extremes among all natural disasters in the U.S. The estimated annual direct losses to the U.S. economy due to droughts are about $6-8 billion, with the drought of 1988 estimated to have caused over $39 billion in damages. A seasonal drought prediction system that can accurately predict the onset, development and recovery of drought episodes would significantly help to reduce losses due to drought. In this study, a seasonal hydrologic ensemble prediction system developed for the eastern United States is used to predict historical droughts in the US retrospectively. The system uses a hydrologic model (the Variable Infiltration Capacity model) as the central element for producing ensemble predictions of soil moisture, snow, and streamflow with lead times of up to six months. One unique feature of this system is its method for generating ensemble atmospheric forcings for the forecast period. It merges seasonal climate forecasts from multiple climate models with observed climatology in a Bayesian framework, such that the uncertainties related to the atmospheric forcings can be better quantified while the signals from individual models are combined. Simultaneously, climate model forecasts are downscaled to a spatial scale appropriate for hydrologic predictions. When generating daily meteorological forcing, the system uses the rank structures of selected historical forcing records to ensure reasonable weather patterns in space and time. The system is applied to different regions of the US to predict historical drought episodes. These forecasts use seasonal climate forecasts from a combination of the NCEP CFS and seven climate models in the European Union's Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (CFS+DEMETER). This study validates the approach of using seasonal climate predictions from
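
    The Bayesian merging step can be illustrated with a simple precision-weighted Gaussian combination of climatology and model forecasts; this is a simplified stand-in for the system's actual framework, with made-up anomaly values:

```python
import numpy as np

def bayesian_merge(clim_mean, clim_var, model_means, model_vars):
    """Precision-weighted (Gaussian) merge of observed climatology with
    several climate-model forecasts; a simplified stand-in for the
    Bayesian merging used in the forecast system."""
    model_means, model_vars = np.asarray(model_means), np.asarray(model_vars)
    prec = 1.0 / clim_var + np.sum(1.0 / model_vars)
    mean = (clim_mean / clim_var + np.sum(model_means / model_vars)) / prec
    return mean, 1.0 / prec  # posterior mean and variance

# Hypothetical monthly precipitation anomaly (mm): climatology prior plus
# forecasts from CFS and two DEMETER-style models.
mean, var = bayesian_merge(0.0, 25.0, [-12.0, -8.0, -15.0], [36.0, 49.0, 64.0])
print(f"merged anomaly: {mean:.1f} +/- {var ** 0.5:.1f} mm")
```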

  19. Predictive modeling and reducing cyclic variability in autoignition engines

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
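
    As a rough illustration of the predict-then-correct loop, a sketch using a generic autoregressive predictor on a combustion metric; the AR form, the CA50 values and the variation threshold are assumptions for illustration, not the patent's model:

```python
import numpy as np

def predict_next(cycles, order=3):
    """Least-squares AR fit to recent combustion metrics (e.g. CA50),
    used to predict the next cycle; a generic stand-in for the
    predictive model described above."""
    y = np.asarray(cycles, dtype=float)
    n = len(y)
    X = np.column_stack([y[i:n - order + i] for i in range(order)])  # lagged rows
    coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return y[-order:] @ coef  # one-step-ahead prediction from the last lags

history = [8.1, 8.4, 7.9, 8.6, 7.2, 9.1, 6.8]  # hypothetical CA50 values (deg)
pred = predict_next(history)
if abs(pred - np.mean(history)) > 1.0:          # hypothetical variation threshold
    print(f"predicted CA50 {pred:.1f}: apply corrective fuel/spark adjustment")
```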

  20. A thermodynamic model for noble metal alloy inclusions in nuclear fuel rods and application to the study of loss-of-coolant accidents

    NASA Astrophysics Data System (ADS)

    Kaye, Matthew Haigh

    Metal alloy inclusions comprised of Mo, Pd, Rh, Ru, and Tc (the so-called "noble" metals) develop in CANDU fuel pellets as a result of fission. The thermochemical behaviour of this alloy system during severe accident conditions is of interest in connection with computations of the loss of volatile compounds of these elements by reaction with the steam-hydrogen gas mixtures that develop in the system as a result of water reacting with the Zircaloy cladding. This treatment focuses on the development of thermodynamic models for the Mo-Pd-Rh-Ru-Tc quinary system. A reasonable prediction was made by modelling the ten binary phase diagrams, five of these evaluations being original to this work. This process provides a complete treatment of the five solution phases (vapour, liquid, bcc solid, fcc solid, and cph solid) in this alloy system, as well as self-consistent Gibbs energies of formation for the Mo5Ru3 intermetallic phase and two intermediate phases in the Mo-Tc system. The resulting collection of properties, when treated by Gibbs energy minimization, permits phase equilibria to be computed for specified temperatures and compositions. Experimental work in support of this treatment has been performed. Measurements of the solidus and liquidus temperatures for Pd-Rh alloys were made using differential thermal analysis. These measurements confirm that the liquid solution exhibits positive deviation from Raoult's law. Experimental work as a visiting research engineer at AECL (Chalk River) was performed using a custom-developed Knudsen cell/mass spectrometer. The Pd partial pressure was measured above multi-component alloys of known composition over a range of temperatures. These measurements are correlated with the activities of Pd predicted by the thermodynamic model for the multi-component alloy. The thermodynamic treatment developed for the noble metal alloy inclusions has been combined with considerable other data and applied to selected loss-of-coolant-accident scenarios to

  1. Predicting Career Advancement with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  2. A Prediction Model of the Capillary Pressure J-Function.

    PubMed

    Xu, W S; Luo, P Y; Sun, L; Lin, N

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived on the basis of a capillary bundle model. However, the dependence of the J-function on the water saturation Sw is not well understood. A prediction model for this dependence is presented based on a capillary pressure model, and the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
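
    For context, the classical Leverett J-function and a power-law fit of the kind the paper proposes can be sketched as follows; the Pc-Sw data are hypothetical, and the fitted form J = a*Sw**b reflects the paper's suggested shape, not its coefficients:

```python
import numpy as np

def leverett_j(pc, sigma, theta, k, phi):
    """Classical Leverett J-function: J(Sw) = Pc * sqrt(k/phi) / (sigma*cos(theta))."""
    return pc * np.sqrt(k / phi) / (sigma * np.cos(theta))

# Hypothetical capillary pressure data versus water saturation Sw.
sw = np.array([0.2, 0.3, 0.45, 0.6, 0.8])
pc = np.array([95e3, 60e3, 38e3, 27e3, 18e3])  # Pa
j = leverett_j(pc, sigma=0.03, theta=0.0, k=1e-13, phi=0.2)

# Power-function model J = a * Sw**b, fitted in log space.
b, log_a = np.polyfit(np.log(sw), np.log(j), 1)
print(f"J(Sw) ~= {np.exp(log_a):.3f} * Sw**({b:.2f})")
```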

  3. A model to predict the power output from wind farms

    SciTech Connect

    Landberg, L.

    1997-12-31

    This paper describes a model that can predict the power output from wind farms. To illustrate the required input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given; however, similar results from Europe will be presented.
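
    The forecast chain (coarse NWP wind, site-specific correction, farm power curve) can be caricatured in a few lines; the speed-up/turning correction stands in for the WAsP matrix and the lumped power curve for the PARK wake calculation, with all numbers invented:

```python
import numpy as np

def farm_power(nwp_speed, nwp_dir, speedup, turning, power_curve):
    """Chain a coarse NWP wind forecast through a site correction
    (standing in for the WAsP matrix) and a farm power curve
    (standing in for the PARK wake calculation)."""
    local_speed = nwp_speed * speedup
    local_dir = (nwp_dir + turning) % 360.0
    return np.interp(local_speed, *power_curve), local_dir

# Hypothetical power curve for the whole farm (wind speed m/s -> MW).
curve = (np.array([3, 6, 9, 12, 15, 25]),
         np.array([0.0, 4.0, 18.0, 30.0, 32.0, 32.0]))
power, direction = farm_power(8.5, 240.0, speedup=1.12, turning=-5.0, power_curve=curve)
print(f"predicted output: {power:.1f} MW at {direction:.0f} deg")
```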

  4. Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool

    EPA Science Inventory

    The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...

  5. Econometric models for predicting confusion crop ratios

    NASA Technical Reports Server (NTRS)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  6. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  7. Demonstrating the improvement of predictive maturity of a computational model

    SciTech Connect

    Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S

    2010-01-01

    We demonstrate an improvement in the predictive capability of a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined here as the accuracy of the model in predicting multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis for defining a Predictive Maturity Index (PMI). It was shown that predictive maturity can be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to beryllium metal. We demonstrate that our framework tracks the evolution of the maturity of the PTW model. The robustness of the PMI with respect to the selection of the coefficients needed in its definition is also studied.
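
    A toy version of a maturity index built from discrepancy and coverage is sketched below; the combination rule here is an assumption for illustration only, not the authors' PMI definition:

```python
import numpy as np

def predictive_maturity(pred, meas, settings, domain_lo, domain_hi):
    """Illustrative maturity index combining discrepancy (RMS disagreement
    between predictions and measurements) with coverage (fraction of the
    regime of applicability spanned by the calibration settings).
    The combination rule is an assumption, not the paper's PMI."""
    discrepancy = np.sqrt(np.mean((np.asarray(pred) - np.asarray(meas)) ** 2))
    coverage = np.ptp(settings) / (domain_hi - domain_lo)
    return coverage / (1.0 + discrepancy)  # higher = more mature (assumed form)

# Hypothetical Hopkinson-bar stresses (GPa) at several strain rates (1/s).
print(predictive_maturity([1.1, 1.4, 1.8], [1.0, 1.5, 1.7],
                          settings=[1e3, 5e3, 2e4], domain_lo=1e2, domain_hi=5e4))
```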

  8. A predictive model for Dengue Hemorrhagic Fever epidemics.

    PubMed

    Halide, Halmar; Ridd, Peter

    2008-08-01

    A statistical model for predicting monthly Dengue Hemorrhagic Fever (DHF) cases in the city of Makassar is developed and tested. The model uses past and present DHF cases and climate and meteorological observations as inputs. These inputs are selected using a stepwise regression method to predict future DHF cases. The model is tested independently and its skill assessed using two skill measures. Using the selected variables as inputs, the model is capable of predicting a moderately severe epidemic at lead times of up to six months. The most important input variable in the prediction is the present number of DHF cases, followed by the relative humidity three to four months previously. A prediction 1-6 months in advance is sufficient to initiate various activities to combat a DHF epidemic. The model is suitable for early warning and can easily become an operational tool owing to its modest data requirements and computational effort. PMID:18668414
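
    The stepwise input selection can be illustrated with a greedy forward search minimising the residual sum of squares; a simplified stand-in for the paper's stepwise regression, run on synthetic lagged inputs:

```python
import numpy as np

def forward_select(X, y, names, max_vars=3):
    """Greedy forward stepwise selection minimising residual sum of
    squares; a simplified stand-in for stepwise regression."""
    chosen = []
    for _ in range(max_vars):
        best = None
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(len(y)), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if best is None or rss < best[0]:
                best = (rss, j)
        chosen.append(best[1])
    return [names[j] for j in chosen]

rng = np.random.default_rng(0)
# Hypothetical monthly inputs: current DHF cases, humidity lagged 3 and 4
# months, rainfall lagged 1 month.
X = rng.normal(size=(60, 4))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=60)
print(forward_select(X, y, ["cases_t", "humidity_t-3", "humidity_t-4", "rain_t-1"]))
```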

  9. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript, which shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by the fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed-recurrence and fixed-slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
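
    The competing recurrence models reduce to simple arithmetic on an event catalog; a sketch with an invented repeating sequence, where the time-predictable rule projects the next event time from the previous slip and the loading rate:

```python
import numpy as np

# Event history for one repeating sequence: occurrence times (yr) and
# slips (cm); values are illustrative only.
times = np.array([0.0, 2.1, 4.0, 6.2, 8.1])
slips = np.array([3.0, 2.8, 3.2, 2.9, 3.1])
rate = slips[:-1].sum() / (times[-1] - times[0])  # long-term slip rate (cm/yr)

t_pred_time  = times[:-1] + slips[:-1] / rate        # time-predictable model
t_pred_fixed = times[:-1] + np.mean(np.diff(times))  # fixed-recurrence model

print("observed  :", times[1:])
print("time-pred :", t_pred_time.round(2))
print("fixed-rec :", t_pred_fixed.round(2))
```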

  10. Life prediction modeling based on strainrange partitioning

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.

    1988-01-01

    Strainrange partitioning (SRP) is an integrated low-cycle-fatigue life prediction system. It was created specifically for calculating cyclic crack initiation life under severe high-temperature fatigue conditions. The key feature of the SRP system is its recognition of the interacting mechanisms of cyclic inelastic deformation that govern cyclic life at high temperatures. The SRP system bridges the gap between the mechanistic level of understanding that breeds new and better materials and the phenomenological level at which workable engineering life prediction methods are in great demand. The system was recently expanded to address engineering fatigue problems in the low-strain, long-life, nominally elastic regime. This breakthrough, along with other advances in material behavior and testing technology, has permitted the system to also encompass low-strain thermomechanical loading conditions. Other important refinements of the originally proposed method include procedures for dealing with the life-reducing effects of multiaxial loading, ratcheting, mean stresses, nonrepetitive (cumulative) loading, and environmental and long-time exposure. Procedures were also developed for partitioning creep and plastic strain and for estimating strainrange-versus-life relations from tensile and creep-rupture properties. Each of the important engineering features of the SRP system is discussed, and examples are shown of how they help predict high-temperature fatigue life under practical, although complex, loading conditions.

  11. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    PubMed Central

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
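
    The proposed meta-model is an ensemble of existing predictors; a minimal sketch that linearly stacks a whole-genome predictor with a summary-statistic risk score on simulated data (not the cohorts used in the paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical out-of-sample predictions for a quantitative trait:
# one from a whole-genome (e.g. ridge/LASSO) model, one a GWAMA risk score.
truth = rng.normal(size=n)
wg_pred = truth + rng.normal(scale=0.9, size=n)   # whole-genome predictor
prs_pred = truth + rng.normal(scale=1.1, size=n)  # summary-statistic score

# Meta-model: combine the two predictors with a simple linear stack,
# fitted on a held-out tuning split (first half) and tested on the rest.
stack = LinearRegression().fit(
    np.column_stack([wg_pred[:250], prs_pred[:250]]), truth[:250])
meta = stack.predict(np.column_stack([wg_pred[250:], prs_pred[250:]]))

for name, pred in [("whole-genome", wg_pred[250:]),
                   ("GWAMA score", prs_pred[250:]), ("meta-model", meta)]:
    print(name, np.corrcoef(pred, truth[250:])[0, 1].round(3))
```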

  12. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  13. Models Predicting Success of Infertility Treatment: A Systematic Review

    PubMed Central

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming, and is sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because predicting treatment success is a new need for infertile couples, this paper reviewed previous studies to form a general picture of the applicability of these models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered the years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians' and couples' estimates of the treatment success rate should be based on history, examination and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are decreased cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of an approach for designing and using these models is inevitable. PMID:27141461

  14. GASFLOW: The theoretical model to analyze accidents in nuclear containments, confinements, and facility buildings

    SciTech Connect

    Travis, J.R.; Wilson, T.L.

    1993-05-01

    This report documents the governing physical equations for GASFLOW, a finite-volume computer code for solving the transient, three-dimensional, compressible Navier-Stokes equations for multiple gas species. The code is designed to be a best-estimate tool for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments, confinements, and other facility buildings. An analysis with GASFLOW results in time-dependent gas-species concentrations throughout the structure analyzed and, in the event of combustion, the pressure and temperature loadings on the walls and internal structures. GASFLOW can model geometrically complex containment systems with multiple compartments and internal structures. It can calculate gas behavior in low-speed buoyancy-driven flows, in diffusion-dominated flows, and during deflagrations. The code can model condensation heat transfer to walls and internal structures by natural and forced convection, the chemical kinetics of combustion of hydrogen or hydrocarbon fuels, and fluid turbulence. Heat conduction within walls and structures is treated as one-dimensional.

  15. Examining the nonparametric effect of drivers' age in rear-end accidents through an additive logistic regression model.

    PubMed

    Ma, Lu; Yan, Xuedong

    2014-06-01

    This study seeks to inspect the nonparametric characteristics connecting the age of the driver to the relative risk of being an at-fault vehicle, in order to discover a more precise and smooth pattern of age impact, which has commonly been neglected in past studies. Records of drivers in two-vehicle rear-end collisions are selected from the General Estimates System (GES) 2011 dataset. These extracted observations in fact constitute inherently matched driver pairs under certain matching variables, including weather conditions, pavement conditions and road geometry design characteristics, that are shared by pairs of drivers in rear-end accidents. The introduced data structure guarantees that the variance of the response variable does not depend on the matching variables and hence provides high statistical modeling power. The estimation results exhibit a smooth cubic spline function for examining the nonlinear relationship between the age of the driver and the log odds of being at fault in a rear-end accident. The results are presented with respect to the main effect of age, the interaction effect between age and sex, and the effects of age under different scenarios of pre-crash actions by the leading vehicle. Compared to the conventional specification, in which age is categorized into several predefined groups, the proposed method is more flexible and able to produce quantitatively explicit results. First, it confirms the U-shaped pattern of the age effect, and further shows that the risks of young and old drivers change rapidly with age. Second, the interaction effects between age and sex show that female and male drivers behave differently in rear-end accidents. Third, it is found that the pattern of age impact varies according to the type of pre-crash actions exhibited by the leading vehicle. PMID:24642249
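
    The additive-logistic idea (a smooth spline term for age inside a logistic model) can be sketched with off-the-shelf tools; this uses scikit-learn's spline basis rather than the paper's exact formulation, on synthetic U-shaped data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
age = rng.uniform(16, 90, size=2000)
# Hypothetical U-shaped risk: young and old drivers more likely at fault.
logit = 0.002 * (age - 45) ** 2 - 1.0
at_fault = rng.random(2000) < 1 / (1 + np.exp(-logit))

# Smooth (spline) logistic model of at-fault odds versus driver age.
model = make_pipeline(SplineTransformer(degree=3, n_knots=6),
                      LogisticRegression(max_iter=1000))
model.fit(age.reshape(-1, 1), at_fault)

grid = np.linspace(18, 88, 8).reshape(-1, 1)
for a, p in zip(grid.ravel(), model.predict_proba(grid)[:, 1]):
    print(f"age {a:4.0f}: P(at fault) = {p:.2f}")
```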

  16. Gaussian mixture models as flux prediction method for central receivers

    NASA Astrophysics Data System (ADS)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
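
    Fitting a Gaussian mixture to a flux profile is straightforward once the profile is expressed as samples; a sketch with an invented two-lobe profile that a single circular Gaussian would fit poorly:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical heliostat flux profile: (x, y) hit points on the receiver,
# e.g. from ray tracing; here two overlapping lobes.
pts = np.vstack([rng.normal([-0.10, 0.00], 0.05, size=(4000, 2)),
                 rng.normal([0.12, 0.02], [0.08, 0.04], size=(2000, 2))])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(pts)

# Reconstruct the normalised flux density on a receiver grid.
x, y = np.meshgrid(np.linspace(-0.3, 0.3, 50), np.linspace(-0.2, 0.2, 40))
density = np.exp(gmm.score_samples(np.column_stack([x.ravel(), y.ravel()])))
print("peak density:", density.max().round(1), "per unit area")
```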

  17. Comparison of Predictive Models for the Early Diagnosis of Diabetes

    PubMed Central

    Jahani, Meysam

    2016-01-01

    Objectives This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods We used memetic algorithms to update the weights and to improve the prediction accuracy of the models. In the first step, the optimal values for neural network parameters such as the momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, these optimal parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks is 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and receiver operating characteristic (ROC) curve. Results The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm had higher accuracy than the model from the genetic algorithm and the regression model; among the models, the regression model had the lowest accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, negative predictive value, and area under the ROC curve are 96.2%, 95.3%, 93.8%, 92.4%, and 0.958, respectively. Conclusions The results of this study provide a basis for designing a Decision Support System for risk management and planning of care for individuals at risk of diabetes. PMID:27200219

  18. A blind test of the MOIRA lake model for radiocesium for Lake Uruskul, Russia, contaminated by fallout from the Kyshtym accident in 1957.

    PubMed

    Håkanson, L; Sazykina, T

    2001-01-01

    This paper presents the results of a model test carried out within the framework of the COMETES project (EU). The tested model is a new lake model for radiocesium to be used within the MOIRA decision support system (DSS; MOIRA and COMETES are acronyms for EU projects). This model has previously been validated against independent data from many lakes covering a wide domain of lake characteristics and has been demonstrated to yield excellent predictive power (see Håkanson, Modelling Radiocesium in Lakes and Coastal Areas. Kluwer, Dordrecht, 2000, 215 pp). However, the model had not previously been tested for cases other than those related to the Chernobyl fallout in 1986, nor for lakes from this part of the world (the Southern Urals), nor for situations with such heavy fallout. The aims of this work were: (1) to carry out a blind test of the model for the case of continental Lake Uruskul, heavily contaminated with 90Sr and 137Cs as a result of the Kyshtym radiation accident (29 September 1957) in the Southern Urals, Russia, and (2) if these tests gave satisfactory results, to reconstruct the radiocesium dynamics for fish, water and sediments in the lake. Can the model provide meaningful predictions in a situation such as this? The answer is yes, although there are reservations due to the scarcity of reliable empirical data. From the modelling calculations, it may be noted that the maximum levels of 137Cs in fish (here 400 g ww goldfish), water and sediments were about 100,000 Bq/kg ww, 600 Bq/l and 30,000 Bq/kg dw, respectively. The values in fish are comparable to or higher than the levels in fish in the cooling pond of the Chernobyl NPP. The model also predicts an interesting seasonal pattern in 137Cs levels in sediments. There is also a characteristic "three-phase" development of the 137Cs levels in fish: first an initial stage when the 137Cs concentrations in fish approach a maximum value, then a phase with relatively short ecological half-lives, followed by a final

  19. A model for prediction of STOVL ejector dynamics

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1989-01-01

    A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off and Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust-augmenting ejector operation. The proposed ejector model suggests that transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.

  20. LHC diphoton Higgs signal predicted by little Higgs models

    SciTech Connect

    Wang Lei; Yang Jinmin

    2011-10-01

    Little Higgs theory naturally predicts a light Higgs boson whose most important discovery channel at the LHC is the diphoton signal pp → h → γγ. In this work, we perform a comparative study of this signal in some typical little Higgs models, namely, the littlest Higgs model, two littlest Higgs models with T-parity (named LHT-I and LHT-II), and the simplest little Higgs model. We find that, compared with the standard model prediction, the diphoton signal rate is always suppressed, and the extent of suppression can be quite different for different models. The suppression is mild (≲10%) in the littlest Higgs model but can be quite severe (≈90%) in the other three models. This means that discovering the light Higgs boson predicted by little Higgs theory through the diphoton channel at the LHC will be more difficult than discovering the standard model Higgs boson.

  1. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it to future use. PMID:25773127
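
    The difference model is just logistic regression on the amount and delay differences between the two options; a sketch on a simulated 100-trial session (the choice-generation parameters are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 100  # one 100-trial binary-decision session
ss_amt, ll_amt = rng.uniform(5, 40, n), rng.uniform(40, 80, n)
ss_del, ll_del = np.zeros(n), rng.uniform(10, 365, n)

# Hypothetical chooser who weighs amount against delay, with noise.
utility_gap = (ll_amt - ss_amt) - 0.15 * (ll_del - ss_del)
chose_ll = rng.random(n) < 1 / (1 + np.exp(-0.3 * utility_gap))

# "Difference model": logistic regression on amount and delay differences.
X = np.column_stack([ll_amt - ss_amt, ll_del - ss_del])
acc = cross_val_score(LogisticRegression(), X, chose_ll, cv=5, scoring="accuracy")
print(f"cross-validated predictive accuracy: {acc.mean():.2f}")
```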

  2. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    SciTech Connect

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-03

    The public, stakeholders and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the underlying ergonomic risk factors behind human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers involved in bus accidents should lead to the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analysed to determine which of them most strongly led to accidents. The research was performed on the east coast of peninsular Malaysia using variance-based structural equation modelling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns on the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, the ergonomic workplace, physiological factors, stress at the workplace, physical fatigue and near-miss accidents. The correlations and significance values between latent constructs (near-miss accidents) were analysed using SEM SmartPLS 3M. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at workplace, t = 2.81; physiological factor, t = 2.08) are significant for physical fatigue, and that physical fatigue acts as the mediator to near-miss accidents at t = 2.14, at p < 0.05 and t > 1.96. The results show that the effects of physical fatigue due to ergonomic risk factors influence human error as a cause of express bus accidents.

  3. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-01

    The public, stakeholders and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the underlying ergonomic risk factors behind human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers involved in bus accidents should lead to the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analysed to determine which of them most strongly led to accidents. The research was performed on the east coast of peninsular Malaysia using variance-based structural equation modelling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns on the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, the ergonomic workplace, physiological factors, stress at the workplace, physical fatigue and near-miss accidents. The correlations and significance values between latent constructs (near-miss accidents) were analysed using SEM SmartPLS 3M. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at workplace, t = 2.81; physiological factor, t = 2.08) are significant for physical fatigue, and that physical fatigue acts as the mediator to near-miss accidents at t = 2.14, at p < 0.05 and t > 1.96. The results show that the effects of physical fatigue due to ergonomic risk factors influence human error as a cause of express bus accidents.

  4. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    NASA Astrophysics Data System (ADS)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  5. Long term radiocesium contamination of fruit trees following the Chernobyl accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1996-12-01

    Radiocesium contamination from the Chernobyl accident in fruits and leaves from various fruit trees was systematically studied from 1990 to 1995 on two agricultural experimentation farms in Northern Greece. The results are discussed in the framework of a previously published model describing the long-term radiocesium contamination mechanism of deciduous fruit trees after a nuclear accident. The results of the present work qualitatively verify the model predictions. 11 refs., 5 figs., 1 tab.

  6. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms, based on 14556 drug discovery compounds from Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of the error bars that can be computed by Gaussian Process models, and ensemble and distance-based techniques for the other modelling approaches.
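
    Gaussian Process regression yields per-compound error bars directly as the predictive standard deviation; a generic sketch with synthetic descriptors (not the Bayer Schering data or descriptor set):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
# Hypothetical descriptors (two computed properties) and log D values.
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:150], y[:150])

# The GP returns a per-compound predictive standard deviation ("error bar").
mean, std = gp.predict(X[150:155], return_std=True)
for m, s, t in zip(mean, std, y[150:155]):
    print(f"predicted {m:5.2f} +/- {s:4.2f}   measured {t:5.2f}")
```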

  7. Predicting Error Bars for QSAR Models

    SciTech Connect

    Schroeter, Timon; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms, based on 14556 drug discovery compounds from Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of the error bars that can be computed by Gaussian Process models, and ensemble and distance-based techniques for the other modelling approaches.

  8. Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models

    NASA Technical Reports Server (NTRS)

    Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.

    1996-01-01

    An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.

  9. A predictive ocean oil spill model

    SciTech Connect

    Sanderson, J.; Barnette, D.; Papodopoulos, P.; Schaudt, K.; Szabo, D.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Initially, the project focused on creating an ocean oil spill model and working with the major oil companies to compare their data with the Los Alamos global ocean model. As a result of this initial effort, Los Alamos worked closely with the Eddy Joint Industry Project (EJIP), a consortium of oil and gas producing companies in the US. The central theme of the project was to use output produced by LANL's global ocean model to look in detail at ocean currents in selected geographic areas of the world of interest to consortium members. Once ocean currents are well understood, this information can be used to create oil spill models, improve offshore exploration and drilling equipment, and aid in the design of semi-permanent offshore production platforms.

  10. Cyclic Oxidation Modeling and Life Prediction

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2004-01-01

    The cyclic oxidation process can be described as an iterative scale growth and spallation sequence by a number of similar models. Model input variables include the oxide scale type and growth parameters, spalling geometry, spall constant, and cycle duration. Outputs include the net weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. All models and their variations produce a number of similar characteristic features. In general, spalling and material consumption increase to a steady-state rate, at which point the retained scale approaches a constant and the rate of weight loss becomes linear. For one model, this regularity was demonstrated as dimensionless, universal expressions, obtained by normalizing the variables by critical performance factors. These insights were enabled through the use of the COSP for Windows cyclic oxidation spalling program.
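
    The iterative grow-then-spall sequence is easy to reproduce; a minimal loop in the spirit of such models, with illustrative parameter values (not COSP's defaults), showing the retained scale approaching a constant while spalled oxide accumulates:

```python
import numpy as np

# Minimal cyclic-oxidation loop: parabolic scale growth during each cycle
# followed by spallation of a fixed fraction of the retained oxide.
kp = 0.01   # parabolic rate constant (mg^2 cm^-4 h^-1), illustrative
dt = 1.0    # cycle duration (h)
q = 0.02    # spall fraction per cycle, illustrative

retained, spalled = 0.0, 0.0
for cycle in range(500):
    grown = np.sqrt(retained ** 2 + kp * dt)  # parabolic growth from current scale
    loss = q * grown                          # oxide lost on cooling
    spalled += loss
    retained = grown - loss

# At steady state the per-cycle growth balances the per-cycle spall.
print(f"retained oxide {retained:.2f}, cumulative spalled {spalled:.2f} mg/cm^2")
```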

  11. Aggregate driver model to enable predictable behaviour

    NASA Astrophysics Data System (ADS)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

    The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event-based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensities of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based application for data collection by a group of drivers. Data were primarily collected from GPS measurements, including position and speed on a second-by-second basis, for a number of trips over a two-month period. Analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are an important parameter and identifier of journey quality. It is also observed that the kurtosis of the acceleration profile carries major information about driving style. These observations lead to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
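
    Computing the aggregate statistics is a one-liner once second-by-second speeds are in hand; a sketch contrasting two synthetic drivers via the kurtosis of their acceleration profiles:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(6)
# Hypothetical second-by-second GPS speeds (m/s) for two drivers.
t = np.arange(1800)
smooth = 15 + 3 * np.sin(t / 120) + rng.normal(scale=0.2, size=t.size)
jerky = 15 + 3 * np.sin(t / 120) + rng.laplace(scale=0.6, size=t.size)

for name, speed in [("smooth driver", smooth), ("aggressive driver", jerky)]:
    accel = np.diff(speed)  # finite-difference acceleration (m/s^2)
    print(f"{name}: accel std {accel.std():.2f}, kurtosis {kurtosis(accel):.1f}")
```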

  12. Reconstruction of (131)I radioactive contamination in Ukraine caused by the Chernobyl accident using atmospheric transport modelling.

    PubMed

    Talerko, Nikolai

    2005-01-01

    The evaluation of (131)I air and ground contamination field formation in the territory of Ukraine was made using the model of atmospheric transport LEDI (Lagrangian-Eulerian DIffusion model). The (131)I atmospheric transport over the territory of Ukraine was simulated during the first 12 days after the accident (from 26 April to 7 May 1986) using real aerological information and rain measurement network data. The airborne (131)I concentration and ground deposition fields were calculated as the database for subsequent thyroid dose reconstruction for inhabitants of radioactive contaminated regions. The small-scale deposition field variability is assessed using data of (137)Cs detailed measurements in the territory of Ukraine. The obtained results are compared with available data of radioiodine daily deposition measurements made at the network of meteorological stations in Ukraine and data of the assessments of (131)I soil contamination obtained from the (129)I measurements. PMID:16024139

  13. Templeton prediction model underestimates IVF success in an external validation.

    PubMed

    van Loendersloot, L L; van Wely, M; Repping, S; van der Veen, F; Bossuyt, P M M

    2011-06-01

    Prediction models for IVF can be used to identify couples that will benefit from IVF treatment. Currently there is only one prediction model with good predictive performance that can be used for predicting pregnancy chances after IVF. That model was developed almost 15 years ago, and since IVF has progressed substantially during the last two decades, it is questionable whether the model is still valid in current clinical practice. The objective of this study was to validate the prediction model of Templeton for calculating pregnancy chances after IVF. The performance of the prediction model was assessed in terms of discrimination, i.e. the area under the receiver operating characteristic (ROC) curve, and calibration. Likely causes of miscalibration were evaluated by refitting the Templeton model to the study data. The area under the ROC curve for the Templeton model was 0.61. Calibration showed a significant and systematic underestimation of success in IVF. Although the Templeton model can distinguish somewhat between women with high and low success rates in IVF, it systematically underestimates pregnancy chances and therefore has no real value for current IVF practice. PMID:21493154
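
    External validation of this kind boils down to discrimination (AUC) plus calibration; a sketch on simulated predictions where the model systematically underestimates, mirroring the reported finding (all values invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# Hypothetical external-validation set: model-predicted pregnancy
# probabilities and observed outcomes.
p_model = rng.uniform(0.05, 0.35, size=1000)
outcome = rng.random(1000) < np.clip(p_model + 0.10, 0, 1)  # true rate is higher

auc = roc_auc_score(outcome, p_model)       # discrimination
oe_ratio = outcome.mean() / p_model.mean()  # calibration-in-the-large
print(f"AUC = {auc:.2f}, observed/expected = {oe_ratio:.2f} (>1: underestimation)")
```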

  14. Improved analytical model for residual stress prediction in orthogonal cutting

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoxu; Li, Bin; Xiong, Liangshan

    2014-09-01

    The analytical model of residual stress in orthogonal cutting proposed by Jiann is an important tool for residual stress prediction in orthogonal cutting. In applying the model, a problem of low precision in the surface residual stress prediction is found. Through theoretical analysis, several shortcomings of Jiann's model are identified, including inappropriate boundary conditions, an unreasonable calculation method for thermal stress, ignorance of the stress constraint, and the cyclic loading algorithm. These shortcomings may directly lead to the low precision of the surface residual stress prediction. To eliminate these shortcomings and make the prediction more accurate, an improved model is proposed. In this model, a new contact boundary condition between tool and workpiece is used to bring it into accord with the real cutting process; an improved calculation method for thermal stress is adopted; a stress constraint is added according to the volume constancy of plastic deformation; and the accumulative effect of the stresses during cyclic loading is considered. Finally, an experiment measuring residual stress in cutting AISI 1045 steel is conducted. Jiann's model and the improved model are also simulated under the same conditions as the cutting experiment. The comparisons show that the surface residual stresses predicted by the improved model are closer to the experimental results than those predicted by Jiann's model.

  16. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Strangman, T. E.; Neumann, J. F.; Liu, A.

    1986-01-01

    Thermal barrier coatings (TBCs) for turbine airfoils in high-performance engines represent an advanced materials technology with both performance and durability benefits. The foremost benefit is the reduction of heat transferred into air-cooled components. This program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant TBC systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or argon-shrouded plasma-spray (ASPS) applied oxidation-resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria (8 percent) partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal.

  17. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
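
    A minimal sketch of the parameterized runup model referenced here, assuming it is the widely cited Stockdon et al. (2006) formulation by the same lead author; the coefficients below are the commonly quoted ones and should be checked against the paper itself:

    import math

    def stockdon_runup(H0, T0, beta_f, g=9.81):
        """2% exceedance runup from the Stockdon et al. (2006) parameterization.
        H0: deep-water significant wave height (m); T0: peak period (s);
        beta_f: foreshore beach slope. Returns (R2, setup, S_inc, S_ig) in m."""
        L0 = g * T0**2 / (2.0 * math.pi)       # deep-water wavelength
        HL = H0 * L0
        setup = 0.35 * beta_f * math.sqrt(HL)  # wave-induced setup
        S_inc = 0.75 * beta_f * math.sqrt(HL)  # incident-band swash
        S_ig = 0.06 * math.sqrt(HL)            # infragravity swash
        swash = math.sqrt(S_inc**2 + S_ig**2)
        R2 = 1.1 * (setup + swash / 2.0)
        return R2, setup, S_inc, S_ig

    # Example: 3 m, 12 s storm waves on a 0.05 foreshore slope
    print(stockdon_runup(3.0, 12.0, 0.05))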

  18. MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping

    2016-09-01

    This study evaluates performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill showed a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence with higher skill for predictions initiated from phases 2-4. The results of the MJO predictability analysis showed that the upper bounds of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibited an initial amplitude and phase dependence. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied with the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convection forcing. Overall, BCC_AGCM2.2 displayed a significant ability to predict the MJO and its teleconnections without interacting with the ocean, which provided a useful tool for fully extracting the predictability source of subseasonal prediction.
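
    The skill metrics named here, the bivariate anomaly correlation and root-mean-square error of the RMM indices, are standard in MJO verification; a minimal sketch, assuming the conventional definitions (as in Lin et al., 2008), follows:

    import numpy as np

    def bivariate_acc(f1, f2, o1, o2):
        """Bivariate anomaly correlation of forecast (f1, f2) vs observed
        (o1, o2) RMM1/RMM2 indices over all verification times."""
        num = np.sum(f1 * o1 + f2 * o2)
        den = np.sqrt(np.sum(f1**2 + f2**2)) * np.sqrt(np.sum(o1**2 + o2**2))
        return num / den

    def bivariate_rmse(f1, f2, o1, o2):
        """Bivariate RMSE of the RMM indices; skill is conventionally lost
        when ACC drops below 0.5 or RMSE reaches the climatological level."""
        return np.sqrt(np.mean((f1 - o1)**2 + (f2 - o2)**2))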

  19. Prediction of High-Lift Flows using Turbulent Closure Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild

    1997-01-01

    The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.

  20. Multikernel linear mixed models for complex phenotype prediction.

    PubMed

    Weissbrod, Omer; Geiger, Dan; Rosset, Saharon

    2016-07-01

    Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
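
    The abstract does not give the estimator's details; as an illustration only, a kernel-ridge-style sketch of combining a linear (genomic-relationship) kernel with an RBF kernel is shown below. The kernel weights and the regularization parameter lam are hypothetical placeholders; the actual MKLMM fits variance components rather than using fixed weights.

    import numpy as np

    def linear_kernel(G):
        """Genomic-relationship (linear) kernel from a standardized
        genotype matrix G of shape (n_samples, n_snps)."""
        return G @ G.T / G.shape[1]

    def rbf_kernel(G, gamma=0.01):
        """RBF kernel, able to capture local non-additive structure."""
        sq = np.sum(G**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (G @ G.T)
        return np.exp(-gamma * d2)

    def multikernel_predict(G_tr, y_tr, G_te, w=(0.5, 0.5), lam=1.0):
        """Kernel-ridge prediction with a weighted sum of kernels."""
        G = np.vstack([G_tr, G_te])
        K = w[0] * linear_kernel(G) + w[1] * rbf_kernel(G)
        n = len(y_tr)
        alpha = np.linalg.solve(K[:n, :n] + lam * np.eye(n), y_tr - y_tr.mean())
        return y_tr.mean() + K[n:, :n] @ alpha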

  1. Evaluation of battery models for prediction of electric vehicle range

    NASA Technical Reports Server (NTRS)

    Frank, H. A.; Phillips, A. M.

    1977-01-01

    Three analytical models for predicting electric vehicle battery output and the corresponding electric vehicle range for various driving cycles were evaluated. The models were used to predict output and range, and the predictions were then compared with values determined experimentally in laboratory tests on batteries, using discharge cycles identical to those encountered by an actual electric vehicle on SAE cycles. Results indicate that the modified Hoxie model gave the best predictions, with an accuracy of about 97 to 98% in the best cases and 86% in the worst case. A computer program was written to perform the lengthy iterative calculations required. The program and the hardware used to automatically discharge the battery are described.

  2. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four simulations are presented in this work: the first uses the regular grid over 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, with low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), as well as the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for

  3. A color prediction model for imagery analysis

    NASA Technical Reports Server (NTRS)

    Skaley, J. E.; Fisher, J. R.; Hardy, E. E.

    1977-01-01

    A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.

  4. Development of operational models for space weather prediction

    NASA Astrophysics Data System (ADS)

    Liu, Siqing; Gong, Jiancun

    Since space weather prediction is currently in transition from human experience to objective forecasting methods, developing operational forecasting models has become an important way to improve the capabilities of space weather services. As the existing theoretical models are not fully operational for space weather prediction, we carried out research on developing operational models, considering user needs for prediction of key elements of the space environment that have vital impacts on the security of space assets. We focused on solar activities, geomagnetic activities, high-energy particles, atmospheric density, the plasma environment, and so forth. Great progress has been made in developing a 3D dynamic asymmetric magnetopause model, a plasma sheet energetic electron flux forecasting model, and a 400 km atmospheric density forecasting model, and also in the prediction of high-speed solar-wind streams from coronal holes and geomagnetic AE indices. Some of these models are already running in the operational system of the Space Environment Prediction Center, National Space Science Center (SEPC/NSSC). This presentation introduces the research plans for space weather prediction in China and the current progress in developing operational models and their applications in daily space weather services at SEPC/NSSC.

  5. Micro-mechanical studies on graphite strength prediction models

    NASA Astrophysics Data System (ADS)

    Kanse, Deepak; Khan, I. A.; Bhasin, V.; Vaze, K. K.

    2013-06-01

    The influence of the type of loading and of size effects on the failure strength of graphite was studied using the Weibull model. It was observed that this model over-predicts the size effect in tension. However, incorporating the grain size effect in the Weibull model allows a more realistic simulation of size effects. A numerical prediction of the strength of a four-point bend specimen was made using the Weibull parameters obtained from tensile test data. Effective volume calculations were carried out, and the predicted strength was subsequently compared with experimental data. It was found that the Weibull model can predict mean flexural strength with reasonable accuracy even when the grain size effect is not incorporated. In addition, the effects of microstructural parameters on failure strength were analyzed using the Rose and Tucker model. Uni-axial tensile, three-point bend, and four-point bend strengths were predicted using this model and compared with the experimental data. It was found that this model predicts flexural strength within 10%. For uni-axial tensile strength, the difference was 22%, which can be attributed to the smaller number of tests on tensile specimens. In order to develop a failure surface for graphite under a multi-axial state of stress, an open-ended hollow graphite tube was subjected to internal pressure and axial load, and the Batdorf model was employed to calculate the failure probability of the tube. A bi-axial failure surface was generated in the first and fourth quadrants for 50% failure probability by varying both internal pressure and axial load.
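
    For reference, the weakest-link Weibull model underlying these strength predictions reduces, for a uniformly stressed volume, to the expressions sketched below (the grain-size modification studied in the paper is not included):

    import numpy as np

    def weibull_failure_prob(sigma, V, sigma0, m, V0=1.0):
        """Weakest-link failure probability of a uniformly stressed volume:
        Pf = 1 - exp[-(V / V0) * (sigma / sigma0)**m]."""
        return 1.0 - np.exp(-(V / V0) * (sigma / sigma0) ** m)

    def scaled_strength(sigma1, V1, V2, m):
        """Size effect: strength of effective volume V2 at the same failure
        probability as a specimen of effective volume V1."""
        return sigma1 * (V1 / V2) ** (1.0 / m)

    # Example: with Weibull modulus m = 10, an 8x larger effective volume
    # lowers the predicted strength by about 19%
    print(scaled_strength(30.0, 1.0, 8.0, 10.0))  # ~24.4 (same units as input)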

  6. Radiological dose assessment for bounding accident scenarios at the Critical Experiment Facility, TA-18, Los Alamos National Laboratory

    SciTech Connect

    1991-09-01

    A computer modeling code, CRIT8, was written to allow prediction of the radiological doses to workers and members of the public resulting from postulated maximum-effect accidents. The code accounts for the relationship between the initial parent radionuclide inventory at the time of the accident and the growth of radioactive daughter products, and considers the atmospheric conditions at the time of release. The code then calculates a dose at chosen receptor locations for the sum of radionuclides produced as a result of the accident. Both criticality and non-criticality accidents are examined.
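
    The parent-to-daughter ingrowth that CRIT8 accounts for is governed by the Bateman equations; a minimal two-member sketch (not the CRIT8 implementation itself) is shown below:

    import numpy as np

    def daughter_activity(A1_0, lam1, lam2, t):
        """Two-member Bateman solution: activity of a daughter grown in from
        a parent of initial activity A1_0, with no daughter present at t=0:
        A2(t) = A1_0 * lam2 / (lam2 - lam1) * (exp(-lam1*t) - exp(-lam2*t))."""
        return A1_0 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

    # Example: 140Ba (half-life 12.75 d) decaying to 140La (half-life 1.68 d),
    # a fission-product pair relevant to criticality accident source terms
    lam_Ba = np.log(2) / 12.75   # per day
    lam_La = np.log(2) / 1.68    # per day
    print(daughter_activity(1.0, lam_Ba, lam_La, t=5.0))  # ~0.73 of parent A(0)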

  7. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty of including the effects of numerous small-effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and the estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and, by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross-validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383

  8. A Predictive Model of High Shear Thrombus Growth.

    PubMed

    Mehrabadi, Marmar; Casa, Lauren D C; Aidun, Cyrus K; Ku, David N

    2016-08-01

    The ability to predict the timescale of thrombotic occlusion in stenotic vessels may improve patient risk assessment for thrombotic events. In blood-contacting devices, thrombosis predictions can lead to improved designs that minimize thrombotic risks. We have developed and validated a model of high-shear thrombosis based on empirical correlations between thrombus growth and shear rate. A mathematical model was developed to predict the growth of thrombus based on the hemodynamic shear rate. The model predicts thrombus deposition based on initial geometric and fluid mechanic conditions, which are updated throughout the simulation to reflect the changing lumen dimensions. The model was validated by comparing predictions against actual thrombus growth in six separate in vitro experiments: stenotic glass capillary tubes (diameter = 345 µm) at three shear rates, the PFA-100® system, two microfluidic channel dimensions (heights = 300 and 82 µm), and a stenotic aortic graft (diameter = 5.5 mm). Comparison of the predicted occlusion times to experimental results shows excellent agreement. The model is also applied to a clinical angiography image to illustrate the time course of thrombosis in a stenotic carotid artery after plaque cap rupture. Our model can accurately predict thrombotic occlusion time over a wide range of hemodynamic conditions. PMID:26795978
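
    The paper's empirical growth correlation is not reproduced in the abstract; the sketch below only illustrates the stated structure of the model: update the lumen radius from a shear-dependent deposition rate, recompute the wall shear rate, and repeat until occlusion. The growth_rate function and the constant-flow-rate assumption are hypothetical placeholders.

    import math

    def simulate_occlusion(r0, Q, growth_rate, dt=1.0, t_max=7200.0):
        """Minimal sketch of shear-driven thrombus growth in a tube.
        r0: initial lumen radius (m); Q: volumetric flow rate (m^3/s), held
        constant here for simplicity; growth_rate(shear) -> radial deposition
        speed (m/s), a placeholder for the paper's empirical correlation.
        Returns (time to near-occlusion or t_max, final radius)."""
        r, t = r0, 0.0
        while t < t_max and r > 0.05 * r0:       # stop at 95% lumen loss
            shear = 4.0 * Q / (math.pi * r**3)   # Poiseuille wall shear rate
            r -= growth_rate(shear) * dt         # lumen narrows as platelets deposit
            t += dt
        return t, r

    # Hypothetical linear correlation, for illustration only
    print(simulate_occlusion(r0=172.5e-6, Q=1.0e-9,
                             growth_rate=lambda g: 2.0e-10 * g))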

  9. Ensemble Learning of QTL Models Improves Prediction of Complex Traits.

    PubMed

    Bian, Yang; Holland, James B

    2015-10-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty of including the effects of numerous small-effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and the estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and, by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross-validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383

  10. Predictive modeling of pedestal structure in KSTAR using EPED model

    SciTech Connect

    Han, Hyunsun; Kim, J. Y.; Kwon, Ohjin

    2013-10-15

    A predictive calculation is given for the structure of the edge pedestal in H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. In particular, the dependence of pedestal width and height on various plasma parameters is studied in detail. Two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results: the pedestal slope and height depend strongly on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality initially gives a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to destabilization, reducing the pedestal width and height. Among the plasma shape parameters, triangularity has the most dominant effect, rapidly increasing the pedestal width and height, while the effects of elongation and squareness appear to be relatively weak. The implications of these edge results, particularly in relation to global plasma performance, are discussed.

  11. The SAM software system for modeling severe accidents at nuclear power plants equipped with VVER reactors on full-scale and analytic training simulators

    NASA Astrophysics Data System (ADS)

    Osadchaya, D. Yu.; Fuks, R. L.

    2014-04-01

    The architecture of the SAM software package, intended for modeling beyond-design-basis accidents at nuclear power plants equipped with VVER reactors that evolve into a severe stage with core melting and failure of the reactor pressure vessel, is presented. The SAM software package makes it possible to model the entire emergency process comprehensively, from the initiating failure event to the severe accident stage involving meltdown of nuclear fuel, failure of the reactor pressure vessel, and escape of corium onto the concrete basement or into the corium catcher with retention of the molten products in it.

  12. Evaluation of prediction intervals for expressing uncertainties in groundwater flow model predictions

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    We tested the accuracy of 95% individual prediction intervals for hydraulic heads, streamflow gains, and effective transmissivities computed by groundwater models of two Danish aquifers. To compute the intervals, we assumed that each predicted value can be written as the sum of a computed dependent variable and a random error. Testing was accomplished by using a cross-validation method and by using new field measurements of hydraulic heads and transmissivities that were not used to develop or calibrate the models. The tested null hypotheses are that the coverage probability of the prediction intervals is not significantly smaller than the assumed probability (95%) and that each tail probability is not significantly different from the assumed probability (2.5%). In all cases tested, these hypotheses were accepted at the 5% level of significance. We therefore conclude that for the groundwater models of two real aquifers the individual prediction intervals appear to be accurate.
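
    A minimal sketch of the stated construction: each predicted value is a computed dependent variable plus a random error, so an individual interval combines the two variance sources. The actual study derives the prediction variance from nonlinear regression theory; here it is simply taken as an input.

    import numpy as np
    from scipy import stats

    def individual_prediction_interval(y_pred, var_model, var_error, dof,
                                       alpha=0.05):
        """(1 - alpha) individual prediction interval for a model output
        written as computed dependent variable + random error: the two
        variance contributions simply add."""
        se = np.sqrt(var_model + var_error)
        t = stats.t.ppf(1.0 - alpha / 2.0, dof)
        return y_pred - t * se, y_pred + t * se

    # Example: head prediction of 25.3 m, model variance 0.4 m^2,
    # measurement-error variance 0.1 m^2, 30 degrees of freedom
    print(individual_prediction_interval(25.3, 0.4, 0.1, 30))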

  13. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost; it cannot, however, be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, a Bayesian neural network, a Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, on four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  14. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost; it cannot, however, be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, a Bayesian neural network, a Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, on four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235
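
    As an illustration of the approach, regressors of the kinds named above are available in scikit-learn; the sketch below trains them on synthetic stand-in data (the real study used Bloomberg transaction data, and the variable definitions here are hypothetical).

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Synthetic stand-ins for three transaction-level inputs
    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(500, 3))
    y = 0.3 * X[:, 0] ** 0.6 * X[:, 1] + 0.05 * rng.normal(size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = {
        "SVR": SVR(C=10.0),
        "Gaussian process": GaussianProcessRegressor(),
        "Neural network": MLPRegressor(hidden_layer_sizes=(32,),
                                       max_iter=2000, random_state=0),
    }
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        print(name, mean_squared_error(y_te, m.predict(X_te)))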

  15. Validation of a tuber blight (Phytophthora infestans) prediction model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  16. A Model for Prediction of Heat Stability of Photosynthetic Membranes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A previous study has revealed a positive correlation between heat-induced damage to photosynthetic membranes (thylakoid membranes) and chlorophyll loss. In this study, we exploited this correlation and developed a model for prediction of thermal damage to thylakoids. Prediction is based on estimat...

  17. Keeping Juvenile Delinquents in School: A Prediction Model.

    ERIC Educational Resources Information Center

    Dunham, Roger G.; Alpert, Geoffrey P.

    1987-01-01

    Tested an empirically based prediction model of school dropout on juvenile delinquents (N=137). Identified four factors yielding a high level of prediction: misbehavior in school, disliking school, the negative influence of peers with respect to dropping out and getting into trouble, and a marginal or weak relationship with parents. (Author/ABB)

  18. Geospatial application of the Water Erosion Prediction Project (WEPP) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltrat...

  19. Predictive models for circulating fluidized bed combustors

    SciTech Connect

    Gidaspow, D.

    1989-11-01

    The overall objective of this investigation is to develop experimentally verified models for circulating fluidized bed (CFB) combustors. The purpose of these models is to help American industry, such as Combustion Engineering, design and scale-up CFB combustors that are capable of burning US Eastern high sulfur coals with low SO{sub x} and NO{sub x} emissions. In this report, presented as a technical paper, solids distributions and velocities were computed for a PYROFLOW circulating fluidized bed system. To illustrate the capability of the computer code an example of coal-pyrite separation is included, which was done earlier for a State of Illinois project. 24 refs., 20 figs., 2 tabs.

  20. Reconnection in NIMROD: Model, Predictions, Remedies

    SciTech Connect

    Fowler, T K; Bulmer, R H; Cohen, B I; Hau, D D

    2003-06-25

    It is shown that in NIMROD the formation of closed current configurations, occurring only after the voltage is turned off, is due to the faster resistive decay of nonsymmetric modes compared to the symmetric projection of the 3D steady state achieved by gun injection. Implementing Spitzer resistivity is required to make a definitive comparison with experiment, using two experimental signatures of the model discussed in the paper. If there are serious disagreements, it is suggested that a phenomenological hyper-resistivity be added to the n = 0 component of Ohm's law, similar to hyper-resistive Corsica models that appear to fit experiments. Hyper-resistivity might capture physics at small scale missed by NIMROD. Encouraging results would motivate coupling NIMROD to SPICE with edge physics inspired by UEDGE, as a tool for experimental data analysis.

  1. Predictive Modeling for Comfortable Death Outcome Using Electronic Health Records

    PubMed Central

    Lodhi, Muhammad Kamran; Ansari, Rashid; Yao, Yingwei; Keenan, Gail M.; Wilkie, Diana J.; Khokhar, Ashfaq A.

    2016-01-01

    Electronic health record (EHR) systems are used in the healthcare industry to observe the progress of patients. With the fast growth of these data, EHR data analysis has become a big data problem. Most EHRs are sparse, multi-dimensional datasets, and mining them is a challenging task for a number of reasons. In this paper, we have used a nursing EHR system to build predictive models to determine what factors impact death anxiety, a significant problem for dying patients. Different existing modeling techniques have been used to develop coarse-grained as well as fine-grained models to predict patient outcomes. The coarse-grained models help in predicting the outcome at the end of each hospitalization, whereas the fine-grained models help in predicting the outcome at the end of each shift, thereby providing a trajectory of predicted outcomes. Across the different modeling techniques, our results show significantly accurate predictions, owing to relatively noise-free data. These models can help in determining effective treatments, lowering healthcare costs, and improving the quality of end-of-life (EOL) care.

  2. Downscaling surface wind predictions from numerical weather prediction models in complex terrain with WindNinja

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, Natalie S.; Forthofer, Jason M.; Lamb, Brian K.; Shannon, Kyle S.; Butler, Bret W.

    2016-04-01

    Wind predictions in complex terrain are important for a number of applications. Dynamic downscaling of numerical weather prediction (NWP) model winds with a high-resolution wind model is one way to obtain a wind forecast that accounts for local terrain effects, such as wind speed-up over ridges, flow channeling in valleys, flow separation around terrain obstacles, and flows induced by local surface heating and cooling. In this paper we investigate the ability of a mass-consistent wind model to downscale near-surface wind predictions from four NWP models in complex terrain. Model predictions are compared with surface observations from a tall, isolated mountain. Downscaling improved near-surface wind forecasts under high-wind (near-neutral atmospheric stability) conditions. Results were mixed during upslope and downslope (non-neutral atmospheric stability) flow periods, although wind direction predictions generally improved with downscaling. This work constitutes an evaluation of a diagnostic wind model at unprecedentedly high spatial resolution in terrain with topographical ruggedness approaching that of typical landscapes in the western US susceptible to wildland fire.

  3. Testing the Predictions of the Central Capacity Sharing Model

    ERIC Educational Resources Information Center

    Tombu, Michael; Jolicoeur, Pierre

    2005-01-01

    The divergent predictions of 2 models of dual-task performance are investigated. The central bottleneck and central capacity sharing models argue that a central stage of information processing is capacity limited, whereas stages before and after are capacity free. The models disagree about the nature of this central capacity limitation. The…

  4. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  5. EFFECTS OF PHOTOCHEMICAL KINETIC MECHANISMS ON OXIDANT MODEL PREDICTIONS

    EPA Science Inventory

    The comparative effects of kinetic mechanisms on oxidant model predictions have been tested using two different mechanisms (the Carbon-Bond Mechanism II (CBM-II) and the Demerjian Photochemical Box Model (DPBM) mechanism) in three air quality models (the OZIPM/EKMA, the Urban Air...

  6. Propagating uncertainties in statistical model based shape prediction

    NASA Astrophysics Data System (ADS)

    Syrkina, Ekaterina; Blanc, Rémi; Székely, Gàbor

    2011-03-01

    This paper addresses the question of accuracy assessment and confidence regions estimation in statistical model based shape prediction. Shape prediction consists in estimating the shape of an organ based on a partial observation, due e.g. to a limited field of view or poorly contrasted images, and generally requires a statistical model. However, such predictions can be impaired by several sources of uncertainty, in particular the presence of noise in the observation, limited correlations between the predictors and the shape to predict, as well as limitations of the statistical shape model - in particular the number of training samples. We propose a framework which takes these into account and derives confidence regions around the predicted shape. Our method relies on the construction of two separate statistical shape models, for the predictors and for the unseen parts, and exploits the correlations between them assuming a joint Gaussian distribution. Limitations of the models are taken into account by jointly optimizing the prediction and minimizing the shape reconstruction error through cross-validation. An application to the prediction of the shape of the proximal part of the human tibia given the shape of the distal femur is proposed, as well as the evaluation of the reliability of the estimated confidence regions, using a database of 184 samples. Potential applications are reconstructive surgery, e.g. to assess whether an implant fits in a range of acceptable shapes, or functional neurosurgery when the target's position is not directly visible and needs to be inferred from nearby visible structures.
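
    The core computation implied here, predicting unseen shape coefficients from observed ones under a joint Gaussian assumption, is the standard conditional-Gaussian formula; a minimal sketch follows (the paper's cross-validated corrections for model limitations are not included):

    import numpy as np

    def conditional_gaussian_predict(x_obs, mu, cov, n_obs):
        """Predict unseen shape coefficients x_u from observed ones x_o
        under a joint Gaussian model:
        x_u | x_o ~ N(mu_u + S_uo S_oo^-1 (x_o - mu_o),
                      S_uu - S_uo S_oo^-1 S_ou).
        The returned covariance defines the confidence region around the
        predicted shape."""
        mu_o, mu_u = mu[:n_obs], mu[n_obs:]
        S_oo = cov[:n_obs, :n_obs]
        S_uo = cov[n_obs:, :n_obs]
        S_uu = cov[n_obs:, n_obs:]
        mean_u = mu_u + S_uo @ np.linalg.solve(S_oo, x_obs - mu_o)
        cov_u = S_uu - S_uo @ np.linalg.solve(S_oo, S_uo.T)
        return mean_u, cov_u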

  7. Predictive Modeling: A New Paradigm for Managing Endometrial Cancer.

    PubMed

    Bendifallah, Sofiane; Daraï, Emile; Ballester, Marcos

    2016-03-01

    With the abundance of new options in diagnostic and treatment modalities, a shift in the medical decision process for endometrial cancer (EC) has been observed. The emergence of individualized medicine and the increasing complexity of available medical data have led to the development of several prediction models. In EC, such clinical models (algorithms, nomograms, and risk scoring systems) have been reported, especially for stratifying and subgrouping patients, with various unanswered questions regarding such things as the optimal surgical staging for lymph node metastasis as well as the assessment of recurrence and survival outcomes. In this review, we highlight existing prognostic and predictive models in EC, with a specific focus on their clinical applicability. We also discuss the methodologic aspects of the development of such predictive models and the steps required to integrate these tools into clinical decision making. In the future, the emerging field of molecular or biochemical marker research may substantially improve predictive and treatment approaches. PMID:26577116

  8. New Model Predicts Fire Activity in South America

    NASA Video Gallery

    UC Irvine scientist Jim Randerson discusses a new model that is able to predict fire activity in South America using sea surface temperature observations of the Pacific and Atlantic Ocean. The find...

  9. Submission Form for Peer-Reviewed Cancer Risk Prediction Models

    Cancer.gov

    If you have information about a peer-reviewed cancer risk prediction model that you would like to be considered for inclusion on this list, submit as much information as possible through the form on this page.

  10. Using Pareto points for model identification in predictive toxicology

    PubMed Central

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
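
    The paper's exact criteria are not given in the abstract; the sketch below shows generic Pareto (non-dominated) filtering of candidate models, with hypothetical two-criterion scores oriented so that larger is better.

    def pareto_front(models):
        """Return the names of non-dominated models. Each entry is
        (name, criteria), with every criterion oriented so that larger is
        better (e.g. predictive accuracy, similarity of the query compound
        to the model's training domain)."""
        front = []
        for name, c in models:
            dominated = any(
                all(o >= v for o, v in zip(oc, c))
                and any(o > v for o, v in zip(oc, c))
                for _, oc in models
            )
            if not dominated:
                front.append(name)
        return front

    candidates = [("M1", (0.80, 0.30)), ("M2", (0.75, 0.60)), ("M3", (0.70, 0.20))]
    print(pareto_front(candidates))  # ['M1', 'M2'] -- M3 is dominated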

  11. Atmospheric drag model calibrations for spacecraft lifetime prediction

    NASA Technical Reports Server (NTRS)

    Binebrink, A. L.; Radomski, M. S.; Samii, M. V.

    1989-01-01

    Although solar activity prediction uncertainty normally dominates the decay prediction error budget for near-Earth spacecraft, the effect of drag force modeling errors for given levels of solar activity also needs to be considered. The ability of two atmospheric density models, the modified Harris-Priester model and the Jacchia-Roberts model, to reproduce the decay histories of the Solar Mesosphere Explorer (SME) and Solar Maximum Mission (SMM) spacecraft in the 490- to 540-kilometer altitude range was analyzed. Historical solar activity data were used as input to the density computations. For each spacecraft and atmospheric model, a drag scaling adjustment factor was determined for a high-solar-activity year such that the observed annual decay in the mean semimajor axis was reproduced by an averaged variation-of-parameters (VOP) orbit propagation. The SME (SMM) calibration was performed using calendar year 1983 (1982). The resulting calibration factors differ by 20 to 40 percent from the predictions of the prelaunch ballistic coefficients. The orbit propagations for each spacecraft were extended to the middle of 1988 using the calibrated drag models. For the Jacchia-Roberts density model, the observed decay in the mean semimajor axis of SME (SMM) over the 4.5-year (5.5-year) predictive period was reproduced to within 1.5 (4.4) percent. The corresponding figures for the Harris-Priester model were 8.6 (20.6) percent. Detailed results and conclusions regarding the importance of accurate drag force modeling for lifetime predictions are presented.

  12. A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction

    PubMed Central

    Song, Jingwei; He, Jiaying

    2014-01-01

    In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts for daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long history data, this forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as artificial neural network (ANN) and partial least square–support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method has demonstrated better prediction accuracy from 1-step ahead prediction to 14-step ahead prediction assessed by both mean absolute percentage error (MAPE) and root mean square error (RMSE). Max error, MAPE, and RMSE show that chaotic models were more reliable than the other three models. As chaotic models do not involve random walk, their performance does not vary while ANN and PLS-SVM make different forecasts in each trial. Moreover, this chaotic model was less time consuming than ANN and PLS-SVM models. PMID:25125942

  13. A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction.

    PubMed

    Song, Jingwei; He, Jiaying

    2014-08-01

    In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts for daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long history data, this forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as artificial neural network (ANN) and partial least square-support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method has demonstrated better prediction accuracy from 1-step ahead prediction to 14-step ahead prediction assessed by both mean absolute percentage error (MAPE) and root mean square error (RMSE). Max error, MAPE, and RMSE show that chaotic models were more reliable than the other three models. As chaotic models do not involve random walk, their performance does not vary while ANN and PLS-SVM make different forecasts in each trial. Moreover, this chaotic model was less time consuming than ANN and PLS-SVM models. PMID:25125942
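
    Phase-space reconstruction with local nearest-neighbour prediction, as described, can be sketched as follows; the embedding dimension, delay, and neighbour count are hypothetical choices, not the paper's fitted values.

    import numpy as np

    def delay_embed(x, dim, tau):
        """Takens time-delay embedding of a scalar series."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def local_predict(x, dim=5, tau=1, k=4, steps=1):
        """Local (nearest-neighbour) prediction in reconstructed phase
        space: find the k historical states closest to the current state
        and average their one-step-ahead images."""
        x = np.asarray(x, float)
        preds = []
        for _ in range(steps):
            E = delay_embed(x, dim, tau)
            current, history = E[-1], E[:-1]
            d = np.linalg.norm(history - current, axis=1)
            nn = np.argsort(d)[:k]
            preds.append(x[nn + (dim - 1) * tau + 1].mean())
            x = np.append(x, preds[-1])
        return preds

    # Example: two-step forecast of a chaotic logistic-map series
    x = [0.4]
    for _ in range(499):
        x.append(3.9 * x[-1] * (1 - x[-1]))
    print(local_predict(x, steps=2))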

  14. Modeling Seizure Self-Prediction: An E-Diary Study

    PubMed Central

    Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.

    2013-01-01

    Purpose: A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods: Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, "How likely are you to experience a seizure [time frame]?" Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log-normal random effects to adjust for multiple observations. Key Findings: Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hrs was as high as 9.31 (1.92, 45.23) for "almost certain". Prediction was most robust within 6 hrs of diary entry, and remained significant up to 12 hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99), and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance: Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898

  15. Measures and models for predicting cognitive fatigue

    NASA Astrophysics Data System (ADS)

    Trejo, Leonard J.; Kochavi, Rebekah; Kubitz, Karla; Montgomery, Leslie D.; Rosipal, Roman; Matthews, Bryan

    2005-05-01

    We measured multichannel EEG spectra during a continuous mental arithmetic task and created statistical learning models of cognitive fatigue for single subjects. Sixteen subjects (4 F, 18-38 y) viewed 4-digit problems on a computer, solved the problems, and pressed keys to respond (inter-trial interval = 1 s). Subjects performed until either they felt exhausted or three hours had elapsed. Pre- and post-task measures of mood (Activation Deactivation Adjective Checklist, Visual Analogue Mood Scale) confirmed that fatigue increased and energy decreased over time. We examined response times (RT); the amplitudes of the ERP components N1, P2, and P300; readiness potentials; and the power of frontal theta and parietal alpha rhythms for change as a function of time. Mean RT rose from 6.7 s to 7.9 s over time. After controlling for or rejecting sources of artifact such as EOG, EMG, motion, bad electrodes, and electrical interference, we found that frontal theta power rose by 29% and alpha power rose by 44% over the course of the task. We used 30-channel EEG frequency spectra to model the effects of time in single subjects using a kernel partial least squares (KPLS) classifier. We classified 13-s-long EEG segments as being from the first or last 15 minutes of the task, using random sub-samples of each class. Test set accuracies ranged from 91% to 100% correct. We conclude that a KPLS classifier of multichannel spectral measures provides a highly accurate model of EEG-fatigue relationships and is suitable for on-line applications to neurological monitoring.
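
    scikit-learn has no kernel PLS estimator, so the sketch below approximates one with an explicit RBF feature map (Nystroem) followed by linear PLS; the data, labels, and hyperparameters are synthetic stand-ins for the study's 30-channel spectral features.

    import numpy as np
    from sklearn.kernel_approximation import Nystroem
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-ins: rows are EEG segments described by channel x
    # frequency-bin spectral power; +1 = last 15 min of task, -1 = first.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 30 * 16))
    y = np.repeat([-1.0, 1.0], 100)
    X[y > 0, :16] += 0.5          # mimic the reported rise in theta power

    # Approximate kernel PLS: explicit RBF feature map, then linear PLS
    model = make_pipeline(
        Nystroem(gamma=1e-3, n_components=100, random_state=0),
        PLSRegression(n_components=3),
    )
    model.fit(X, y)
    acc = np.mean(np.sign(model.predict(X).ravel()) == y)
    print(f"training accuracy: {acc:.2f}")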

  16. Prediction of cloud droplet number in a general circulation model

    SciTech Connect

    Ghan, S.J.; Leung, L.R.

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). RAMS predicts the mass concentrations of cloud water, cloud ice, rain, and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  17. Developing a predictive tropospheric ozone model for Tabriz

    NASA Astrophysics Data System (ADS)

    Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi

    2013-04-01

    Predictive ozone models are becoming indispensable tools, providing a pollution-alert capability that serves people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), which implements chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA). The regression-type strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, while the auto-regression-type strategies relate present ozone values to their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN, and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.

  18. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
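
    The testing logic described reduces to a Monte Carlo significance estimate: score the prediction method on many simulated catalogs and ask how often chance does as well as the real catalog. A minimal sketch, with the catalog simulator left abstract, follows.

    import numpy as np

    def prediction_significance(n_hits_real, score, catalogs):
        """Monte Carlo significance: the fraction of simulated catalogs on
        which the prediction method scores at least as many hits as it
        does on the real catalog (smaller = more significant)."""
        sim = np.array([score(c) for c in catalogs])
        return np.mean(sim >= n_hits_real)

    As the abstract stresses, the catalog simulator must reproduce the observed clustering; running the same test with a plain Poisson simulator inflates the apparent significance.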

  19. Toward a predictive model for elastomer seals

    NASA Astrophysics Data System (ADS)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash

    Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  20. Empirical Model for Predicting Rockfall Trajectory Direction

    NASA Astrophysics Data System (ADS)

    Asteriou, Pavlos; Tsiambaos, George

    2016-03-01

    A methodology for the experimental investigation of rockfall in three-dimensional space is presented in this paper, aiming to assist on-going research of the complexity of a block's response to impact during a rockfall. An extended laboratory investigation was conducted, consisting of 590 tests with cubical and spherical blocks made of an artificial material. The effects of shape, slope angle and the deviation of the post-impact trajectory are examined as a function of the pre-impact trajectory direction. Additionally, an empirical model is proposed that estimates the deviation of the post-impact trajectory as a function of the pre-impact trajectory with respect to the slope surface and the slope angle. This empirical model is validated by 192 small-scale field tests, which are also presented in this paper. Some important aspects of the three-dimensional nature of rockfall phenomena are highlighted that have been hitherto neglected. The 3D space data provided in this study are suitable for the calibration and verification of rockfall analysis software that has become increasingly popular in design practice.

  1. Predictive models of circulating fluidized bed combustors

    SciTech Connect

    Gidaspow, D.

    1992-07-01

    Steady flows influenced by walls cannot be described by inviscid models. Flows in circulating fluidized beds have significant wall effects; particles in the form of clusters or layers can be seen to run down the walls. Hence, modeling of circulating fluidized beds (CFBs) without a viscosity is not possible. However, in interpreting Equations (8-1) and (8-2), it must be kept in mind that CFBs, like most other two-phase flows, are never in a true steady state. The viscosity in Equations (8-1) and (8-2) may then not be the true fluid viscosity discussed next, but an eddy-type viscosity caused by two-phase flow oscillations usually referred to as turbulence. In view of the transient nature of two-phase flow, the drag and the boundary layer thickness may not be proportional to the square root of the intrinsic viscosity but may depend upon it to a much smaller extent. As another example, in liquid-solid flow and the settling of colloidal particles in a lamella electrosettler, the settling process is only moderately affected by viscosity; inviscid flow with settling is a good first approximation to this electric-field-driven process. The physical meaning of the particulate-phase viscosity is described in detail in the chapter on kinetic theory. Here the conventional derivation presented in single-phase fluid mechanics is generalized to multiphase flow.

  2. Modelling of Ceres: Predictions for Dawn

    NASA Astrophysics Data System (ADS)

    Neumann, Wladimir; Breuer, Doris; Spohn, Tilman

    2014-05-01

    Introduction: The asteroid Ceres is the largest body in the asteroid belt. It can be seen as one of the remaining examples of the intermediate stages of planetary accretion, and it is additionally substantially different from most asteroids. Studies of protoplanetary objects like Ceres and Vesta provide insight into the history of the formation of Earth and the other rocky planets. One of Ceres' remarkable properties is its relatively low average bulk density of 2077±36 kg m-3 (see [1]). Assuming a nearly chondritic composition, this low value can be explained either by a relatively high average porosity [2] or by the presence of a low-density phase [3,4]. Based on numerical modelling [3,4], it has been proposed that this low-density phase, which may have been represented initially by water ice or by hydrated silicates, differentiated from the silicates, forming an icy mantle overlying a rocky core. However, the shape and the moment of inertia of Ceres seem to be consistent with both a porous and a differentiated structure. In the first case, Ceres would be just a large version of a common asteroid. In the second case, however, this body could exhibit properties characteristic of a planet rather than an asteroid: the presence of a core, mantle and crust, as well as a liquid ocean in the past and maybe still a thin basal ocean today. This issue is still under debate, but will be resolved (at least partially) once Dawn orbits Ceres. We study the thermal evolution of a Ceres-like body via numerical modelling in order to draw conclusions about the thermal metamorphism of the interior and its present-day state. In particular, we investigate the evolution of the interior assuming an initially porous structure. We adopted the numerical code from [5], which computes the thermal and structural evolution of planetesimals, including compaction of the initially porous primordial material, which is modeled using a creep law. Our model is suited to prioritise between the two possible

  3. CRAFFT: An Activity Prediction Model based on Bayesian Networks

    PubMed Central

    Nazerfard, Ehsan; Cook, Diane J.

    2014-01-01

    Recent advances in the areas of pervasive computing, data mining, and machine learning offer unique opportunities to provide health monitoring and assistance for individuals who have difficulty living independently in their homes. Several components have to work together to provide health monitoring for smart home residents, including, but not limited to, activity recognition, activity discovery, activity prediction, and prompting systems. Compared to the significant research done on discovering and recognizing activities, less attention has been given to predicting the future activities that the resident is likely to perform. Activity prediction components can play a major role in the design of a smart home. For instance, by taking advantage of an activity prediction module, a smart home can learn context-aware rules to prompt individuals to initiate important activities. In this paper, we propose an activity prediction model using Bayesian networks together with a novel two-step inference process to predict both the next activity features and the next activity label. We also propose an approach to predict the start time of the next activity, based on modeling the relative start time of the predicted activity using the continuous normal distribution and outlier detection. To validate our proposed models, we used real data collected from physical smart environments. PMID:25937847
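
    The start-time component lends itself to a compact illustration. The following Python sketch fits a normal distribution to historical relative start times and rejects outliers before predicting; the data, the z-score cut-off, and the function name are illustrative assumptions rather than details taken from the paper.

      # Sketch: predict the start time of the next activity from the relative
      # start times (offsets from the previous activity) seen in training data.
      # All values and names are illustrative, not the paper's actual features.
      import numpy as np

      def predict_start_time(relative_starts, z_cut=2.0):
          """Fit a normal distribution to historical relative start times,
          discard outliers beyond z_cut standard deviations, and return the
          refit mean and spread as the predicted offset."""
          x = np.asarray(relative_starts, dtype=float)
          mu, sigma = x.mean(), x.std()
          if sigma > 0:
              x = x[np.abs(x - mu) <= z_cut * sigma]  # outlier rejection
          return x.mean(), x.std()

      # Example: offsets (minutes) between "breakfast" and the next activity
      mu, sigma = predict_start_time([32, 35, 30, 33, 180, 31])
      print(f"predicted offset {mu:.1f} min (spread {sigma:.1f})")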

  4. Predicting adverse drug events using pharmacological network models.

    PubMed

    Cami, Aurel; Arnold, Alana; Manzi, Shannon; Reis, Ben

    2011-12-21

    Early and accurate identification of adverse drug events (ADEs) is critically important for public health. We have developed a novel approach for predicting ADEs, called predictive pharmacosafety networks (PPNs). PPNs integrate the network structure formed by known drug-ADE relationships with information on specific drugs and adverse events to predict likely unknown ADEs. Rather than waiting for sufficient post-market evidence to accumulate for a given ADE, this predictive approach relies on leveraging existing, contextual drug safety information, thereby having the potential to identify certain ADEs earlier. We constructed a network representation of drug-ADE associations for 809 drugs and 852 ADEs on the basis of a snapshot of a widely used drug safety database from 2005 and supplemented these data with additional pharmacological information. We trained a logistic regression model to predict unknown drug-ADE associations that were not listed in the 2005 snapshot. We evaluated the model's performance by comparing these predictions with the new drug-ADE associations that appeared in a 2010 snapshot of the same drug safety database. The proposed model achieved an AUROC (area under the receiver operating characteristic curve) statistic of 0.87, with a sensitivity of 0.42 given a specificity of 0.95. These findings suggest that predictive network methods can be useful for predicting unknown ADEs. PMID:22190238
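
    As a rough illustration of the network-based idea (not the published PPN covariates), the sketch below derives simple degree features from a toy drug-ADE bipartite network and trains a logistic regression to score candidate pairs; all drug and event names are invented.

      # Toy sketch: score unknown drug-ADE pairs from simple graph features
      # (here, node degrees in the known drug-ADE bipartite network), then
      # train a logistic regression on the known pairs. Feature choice is
      # illustrative only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      known = {("drugA", "nausea"), ("drugA", "rash"), ("drugB", "nausea")}
      drugs, ades = ["drugA", "drugB", "drugC"], ["nausea", "rash"]

      def features(d, a):
          d_deg = sum(1 for (x, y) in known if x == d)   # drug degree
          a_deg = sum(1 for (x, y) in known if y == a)   # ADE degree
          return [d_deg, a_deg]

      pairs = [(d, a) for d in drugs for a in ades]
      X = np.array([features(d, a) for d, a in pairs])
      y = np.array([1 if p in known else 0 for p in pairs])
      model = LogisticRegression().fit(X, y)
      print(model.predict_proba(X)[:, 1])  # ranked candidate ADE scores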

  5. Criteria for deviation from predictions by the concentration addition model.

    PubMed

    Takeshita, Jun-Ichi; Seki, Masanori; Kamo, Masashi

    2016-07-01

    Loewe's additivity (concentration addition) is a well-known model for predicting the toxic effects of chemical mixtures under the assumption that toxicities are additive. However, from the perspective of chemical risk assessment and/or management, it is important to identify chemicals whose toxicities are additive when present concurrently; that is, it should be established whether there are chemical mixtures to which the concentration addition predictive model can be applied. The objective of the present study was to develop criteria for judging test results that deviate from the predictions of the concentration addition mixture model. These criteria were based on the confidence interval of the concentration addition model's prediction and on estimates of the errors of the predicted concentration-effect curves derived from toxicity tests of the single chemicals. A log-logit model with 2 parameters was assumed for the concentration-effect curve of each individual chemical. These parameters were determined by the maximum-likelihood method, and the criteria were defined using the variances and the covariance of the parameters. In addition, the criteria were applied to a toxicity test of a binary mixture of p-n-nonylphenol and p-n-octylphenol using the Japanese killifish, medaka (Oryzias latipes). The concentration addition model with the confidence interval was capable of predicting the test results at any effect level, and no reason for rejecting concentration addition was found. Environ Toxicol Chem 2016;35:1806-1814. © 2015 SETAC. PMID:26660330
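
    The prediction step of concentration addition can be sketched directly from the definitions above: assuming a 2-parameter log-logit curve p(c) = 1/(1 + exp(-(a + b log10 c))) for each chemical, the predicted mixture effect level x solves sum_i c_i / EC_x,i = 1. The parameter values below are invented for illustration.

      # Sketch of the concentration-addition prediction for a binary mixture,
      # assuming 2-parameter log-logit curves for the single chemicals.
      import numpy as np
      from scipy.optimize import brentq

      def ec(p, a, b):
          """Concentration producing effect level p under the log-logit model."""
          return 10 ** ((np.log(p / (1 - p)) - a) / b)

      def ca_effect(conc, params):
          """Effect level x solving sum_i c_i / EC_x,i = 1 (Loewe additivity)."""
          g = lambda p: sum(c / ec(p, a, b) for c, (a, b) in zip(conc, params)) - 1.0
          return brentq(g, 1e-6, 1 - 1e-6)

      params = [(-12.0, 6.0), (-10.0, 5.0)]    # (a, b) per chemical (illustrative)
      print(ca_effect([50.0, 30.0], params))   # predicted mixture effect level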

  6. Residual bias in a multiphase flow model calibration and prediction

    USGS Publications Warehouse

    Poeter, E.P.; Johnson, R.H.

    2002-01-01

    When calibrated models produce biased residuals, we assume it is due to an inaccurate conceptual model and revise the model, choosing the most representative model as the one with the best-fit and least biased residuals. However, if the calibration data are biased, we may fail to identify an acceptable model or choose an incorrect model. Conceptual model revision could not eliminate biased residuals during inversion of simulated DNAPL migration under controlled conditions at the Borden Site near Ontario Canada. This paper delineates hypotheses for the source of bias, and explains the evolution of the calibration and resulting model predictions.

  7. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative
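
    Stripped of the full WSPEEDI-II coupling, the core of such reverse estimation is a scaling step: run the dispersion model with a unit release rate, then scale by the ratio of observed to simulated concentrations at the monitoring points. The sketch below shows only that ratio step, with invented numbers.

      # Minimal sketch of the reverse-estimation idea; the actual method in
      # the paper is far more elaborate (time-varying rates, dose rates,
      # deposition). Here: least-squares scale between observations and the
      # simulated response to a unit release.
      import numpy as np

      def release_rate(observed, simulated_unit, unit_rate=1.0):
          """Scale factor between the observation vector and the simulated
          concentrations produced by a unit release rate."""
          obs, sim = np.asarray(observed), np.asarray(simulated_unit)
          return unit_rate * (sim @ obs) / (sim @ sim)

      # Observed vs unit-release-simulated concentrations at three stations
      print(release_rate([4.2e2, 1.1e3, 6.0e2], [2.0e-13, 5.5e-13, 3.1e-13]))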

  8. Models for predicting recreational water quality at Lake Erie beaches

    USGS Publications Warehouse

    Francy, Donna S.; Darner, Robert A.; Bertke, Erin E.

    2006-01-01

    Data collected from four Lake Erie beaches during the recreational seasons of 2004-05 and from one Lake Erie beach during 2000-2005 were used to develop predictive models for recreational water quality by means of multiple linear regression. The best model for each beach was based on a unique combination of environmental and water-quality explanatory variables including turbidity, rainfall, wave height, water temperature, day of the year, wind direction, and lake level. Two types of outputs were produced from the models: the predicted Escherichia coli concentration and the probability that the bathing-water standard will be exceeded. The model for one of the beaches, Huntington Reservation (Huntington), was validated in 2005. For 2005, the Huntington model yielded more correct responses and better predicted exceedance of the standard than did current methods for assessing recreational water quality, which are based on the previous day's E. coli concentration. Predictions based on the Huntington model have been available to the public through an Internet-based 'nowcasting' system since May 30, 2006. The other beach models are being validated for the first time in 2006. The methods used in this study to develop and test predictive models can be applied at other similar coastal beaches.
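
    A minimal sketch of the two model outputs follows: a multiple linear regression on log10-transformed E. coli counts gives the predicted concentration, and the normal distribution of the regression residuals gives the probability of exceeding a bathing-water standard (235 CFU/100 mL is used here as an example threshold). The predictor variables and training values are invented.

      # Sketch: MLR on log10 E. coli plus an exceedance probability from the
      # normal distribution of residuals. Data are toy values.
      import numpy as np
      from scipy import stats

      # columns: turbidity, rainfall, wave height (illustrative predictors)
      X = np.array([[5, 0, 0.2], [40, 12, 0.9], [18, 3, 0.5],
                    [60, 20, 1.4], [10, 1, 0.3], [33, 6, 1.1]])
      y = np.log10([30, 800, 150, 2400, 60, 500])   # E. coli, CFU/100 mL

      A = np.column_stack([np.ones(len(X)), X])
      beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # MLR fit
      resid_sd = np.std(y - A @ beta, ddof=A.shape[1])

      x_new = np.array([1, 25, 8, 0.7])             # tomorrow's conditions
      log_pred = x_new @ beta
      p_exceed = stats.norm.sf(np.log10(235), loc=log_pred, scale=resid_sd)
      print(10 ** log_pred, p_exceed)               # concentration, P(>235)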

  9. Using a Prediction Model to Manage Cyber Security Threats

    PubMed Central

    Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization. PMID:26065024

  10. Using a Prediction Model to Manage Cyber Security Threats.

    PubMed

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization. PMID:26065024

  11. Life prediction and constitutive models for engine hot section

    NASA Technical Reports Server (NTRS)

    Swanson, G. A.; Meyer, T. G.; Nissley, D. M.

    1986-01-01

    The purpose of this program is to develop life prediction models for coated anisotropic materials used in gas turbine airfoils. In the program, two single crystal alloys and two coatings are being tested. These include PWA 1480, Alloy 185, overlay coating (PWA 286), and aluminide coating (PWA 273). Constitutive models are also being developed for these materials to predict the time independent (plastic) and time dependent (creep) strain histories of the materials in the lab tests and for actual design conditions. This nonlinear material behavior is particularly important for high temperature gas turbine applications and is basic to any life prediction system. Some of the accomplishments of the program are highlighted.

  12. Implementation of a model for census prediction and control.

    PubMed Central

    Swain, R W; Kilpatrick, K E; Marsh, J J

    1977-01-01

    A model is described that predicts hospital census and computes, for each day, the number of elective admissions that will maximize the census over the short run, subject to constraints on the probability of overflow. Where a computer is available the model provides detailed predictions of census in units as small as 10 beds; used with manual computation the model allows production of tables of the recommended numbers of elective admissions to the hospital as a whole. The model has been tested in five hospitals and is part of the admissions system in two of them; implementation is described, and the results obtained are discussed. PMID:591350
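
    A minimal sketch of the control step, assuming a normal census forecast rather than the paper's actual model: admit the largest number of electives for which the predicted overflow probability stays below a chosen limit.

      # Sketch: largest elective-admission count subject to an overflow
      # probability constraint; the normal forecast is a stand-in.
      from scipy import stats

      def max_electives(census_mean, census_sd, beds, p_overflow=0.05):
          n = 0
          # each admitted elective shifts the forecast census up by one
          while stats.norm.sf(beds + 0.5, loc=census_mean + n + 1,
                              scale=census_sd) <= p_overflow:
              n += 1
          return n

      print(max_electives(census_mean=180.0, census_sd=6.0, beds=200))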

  13. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine whether the covariance among the observed variables could yield a descriptive equation-based model or, better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger stickings required to monitor blood sugar levels. The personal history and analysis with the resulting models are presented.

  14. Development and Application of Chronic Disease Risk Prediction Models

    PubMed Central

    Oh, Sun Min; Stefani, Katherine M.

    2014-01-01

    Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although few chronic disease prediction tools have been suggested in the Korean population, their clinical utility is not as high as expected. This article reviews methodologies that are commonly used for developing and evaluating a chronic disease prediction model and discusses the current status of chronic disease prediction in Korea. PMID:24954311

  15. Predictive modeling of neuroanatomic structures for brain atrophy detection

    NASA Astrophysics Data System (ADS)

    Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming

    2010-03-01

    In this paper, we present an approach to predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling to atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROIs) on it are predicted from other, reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within an abnormal region is expected to be larger than that for vertices in normal brain regions. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.

  16. Time dependent patient no-show predictive modelling development.

    PubMed

    Huang, Yu-Li; Hanauer, David A

    2016-05-01

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models, using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined by minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows. PMID:27142954
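
    A minimal sketch of the time-dependent idea, with invented data: each of the k most recent appointment statuses enters the logistic regression as its own predictor, and the decision threshold is chosen to minimise misclassification on the training data.

      # Sketch: past statuses (1 = no-show) as separate predictors, threshold
      # chosen to minimise misclassification. Data are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(500, 3))        # last 3 statuses per patient
      p_true = 0.1 + 0.2 * X.mean(axis=1)          # more past no-shows -> riskier
      y = rng.random(500) < p_true

      model = LogisticRegression().fit(X, y)
      prob = model.predict_proba(X)[:, 1]
      thresholds = np.linspace(0.05, 0.5, 50)
      errors = [np.mean((prob >= t) != y) for t in thresholds]
      best = thresholds[int(np.argmin(errors))]
      print(f"no-show threshold minimising misclassification: {best:.2f}")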

  17. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    PubMed Central

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

    BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine several predictors in a non-linear manner and are better able to predict prostate cancer (PC), but these fail to help the clinician distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or to reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without either of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348

  18. Model predictive torque control with an extended prediction horizon for electrical drive systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengxiang; Zhang, Zhenbin; Kennel, Ralph; Rodríguez, José

    2015-07-01

    This paper presents a model predictive torque control method for electrical drive systems. A two-step prediction horizon is adopted with a view to reducing torque ripple. The errors between the predicted electromagnetic torque and stator flux and their references, together with an over-current protection term, are considered in the cost function design. The best voltage vector is selected by minimising the value of the cost function, which aims to achieve low torque ripple over the two intervals. The study is carried out experimentally. The results show that the proposed method achieves good performance in both steady and transient states.
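
    Schematically, the selection step can be written as below: each candidate voltage vector is evaluated over a two-step horizon against a cost combining torque error, flux error, and an over-current penalty. The one-step plant model and all constants are placeholders for the machine's discrete-time equations, which the paper uses.

      # Schematic cost-function minimisation over candidate voltage vectors.
      import numpy as np
      from collections import namedtuple

      State = namedtuple("State", "T psi i")       # torque, stator flux, current

      def predict(s, v, dt=1e-4):
          """Placeholder one-step plant model: each voltage vector nudges
          torque, flux, and current; stands in for the machine equations."""
          return State(s.T + 50 * v * dt, s.psi + 0.2 * v * dt,
                       s.i + 80 * abs(v) * dt)

      def cost(s, T_ref, psi_ref, i_max, k_psi=10.0):
          penalty = np.inf if abs(s.i) > i_max else 0.0   # over-current protection
          return abs(T_ref - s.T) + k_psi * abs(psi_ref - s.psi) + penalty

      def choose_vector(s0, vectors, T_ref=5.0, psi_ref=0.9, i_max=12.0):
          # two-step horizon: accumulate the cost over both predicted intervals
          J = []
          for v in vectors:
              s1 = predict(s0, v)
              s2 = predict(s1, v)
              J.append(cost(s1, T_ref, psi_ref, i_max)
                       + cost(s2, T_ref, psi_ref, i_max))
          return int(np.argmin(J))

      print(choose_vector(State(4.0, 0.85, 10.0), [-1.0, -0.5, 0.0, 0.5, 1.0]))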

  19. A new indirect multi-step-ahead prediction model for a long-term hydrologic prediction

    NASA Astrophysics Data System (ADS)

    Cheng, Chun-Tian; Xie, Jing-Xin; Chau, Kwok-Wing; Layeghifard, Mehdi

    2008-10-01

    A dependable long-term hydrologic prediction is essential to the planning, design and management activities of water resources. A three-stage indirect multi-step-ahead prediction model, which combines dynamic spline interpolation with a multilayer adaptive time-delay neural network (ATNN), is proposed in this study for long-term hydrologic prediction. In the first two stages, a group of spline interpolation and dynamic extraction units are utilized to amplify the effect of observations in order to decrease the error accumulation and propagation caused by previous predictions. In the last stage, variable time delays and weights are dynamically regulated by the ATNN, and the output of the ATNN can be obtained as a multi-step-ahead prediction. We use two examples to illustrate the effectiveness of the proposed model. One example is the sunspot time series, a well-known nonlinear and non-Gaussian benchmark time series that is often used to evaluate the effectiveness of nonlinear models. The other example is a case study of long-term hydrologic prediction using the monthly discharge data from the Manwan Hydropower Plant in Yunnan Province, China. Application results show that the proposed method is feasible and effective.

  20. Groundwater Level Prediction using M5 Model Trees

    NASA Astrophysics Data System (ADS)

    Nalarajan, Nitha Ayinippully; Mohandas, C.

    2015-01-01

    Groundwater is an important resource, readily available and having high economic value and social benefit. Until recently, it was considered a dependable source of uncontaminated water. During the past two decades, however, increased rates of extraction and other harmful human actions have resulted in a groundwater crisis, both qualitative and quantitative. Under these circumstances, the availability of predicted groundwater levels increases the value of this resource as an aid in the planning of groundwater management. For this purpose, data-driven prediction models are widely used today. The M5 model tree (MT) is a popular soft computing method emerging as a promising technique for numeric prediction that produces understandable models. The present study discusses groundwater level prediction using MT, employing only the historical groundwater levels from a groundwater monitoring well. The results showed that MT can be successfully used for forecasting groundwater levels.

  1. Predicting waste stabilization pond performance using an ecological simulation model

    SciTech Connect

    New, G.R.

    1987-01-01

    Waste stabilization ponds (lagoons) are often favored in small communities because of their low cost and ease of operation. Most models currently used to predict performance are empirical or fail to address the primary lagoon cell. Empirical methods for predicting lagoon performance have been found to be off by as much as 248 percent when used on a system other than the one for which they were developed. Also, the present models developed for the primary cell lack the ability to predict parameters other than biochemical oxygen demand (BOD) and nitrogen; oxygen consumption is usually estimated from BOD utilization. LAGOON is a Fortran program which models the biogeochemical processes characteristic of the primary cell of facultative lagoons. Model parameters can be measured from lagoons in the vicinity of a proposed lagoon or estimated from laboratory studies. The model was calibrated using one subset of the Corinne, Utah, lagoon data and then validated using a further subset of the same data.

  2. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shootzone CO2 concentration (600 to 1500 micromoles mol-1), photosynthetic photon flux (PPF) (600 to 1100 micromoles m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to determine relatively long-range Pn predictions (> or = 6 days into the future).
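
    The statistical model's form is easy to reproduce in outline: a third-order polynomial in the three inputs, fit by least squares. The sketch below uses synthetic training values; the study's coefficients are not reproduced.

      # Sketch: third-order polynomial regression in CO2, PPF, and canopy age.
      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      X = np.column_stack([rng.uniform(600, 1500, 40),   # CO2, micromol/mol
                           rng.uniform(600, 1100, 40),   # PPF, micromol/m2/s
                           rng.uniform(10, 20, 40)])     # age, days
      Pn = 0.01 * X[:, 0] + 0.02 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 1, 40)

      model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
      model.fit(X, Pn)
      print(model.predict([[1000, 800, 15]]))            # predicted canopy Pn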

  3. Predicting lettuce canopy photosynthesis with statistical and neural network models.

    PubMed

    Frick, J; Precetti, C; Mitchell, C A

    1998-11-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shootzone CO2 concentration (600 to 1500 micromoles mol-1), photosynthetic photon flux (PPF) (600 to 1100 micromoles m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to determine relatively long-range Pn predictions (> or = 6 days into the future). PMID:11542672

  4. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  5. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  6. Model Building Strategies for Predicting Multiple Landslide Events

    NASA Astrophysics Data System (ADS)

    Lombardo, L.; Cama, M.; Märker, M.; Parisi, L.; Rotigliano, E.

    2013-12-01

    A model-building strategy is tested for assessing the susceptibility to landslides driven by extreme climatic events. Extreme climatic inputs such as storms are typically very local phenomena in Mediterranean areas, so that, with the exception of recently stricken areas, the landslide inventories required to train any stochastic model are usually unavailable. A solution is proposed here: a susceptibility model, implemented using the binary logistic regression technique, is trained in a source catchment, and its predicting function (the regressed coefficients of the selected predictors) is exported to a target catchment to predict its landslide distribution. To test the method we exploit the disaster that occurred in the Messina area (southern Italy) on 1 October 2009 when, following a 250 mm/8 h storm, approximately 2000 debris flows/debris avalanches were triggered in an area of 21 km2, killing thirty-seven people, injuring more than one hundred, and causing 0.5M euro worth of structural damage. The debris flow and debris avalanche phenomena involved the thin weathered mantle of the Variscan low- to high-grade metamorphic rocks that outcrop on the eastern slopes of the Peloritan Belt. Two 10 km2 stream catchments located inside the storm core area were exploited: susceptibility models trained in the Briga catchment were tested by exporting them to predict the landslide distribution in the Giampilieri catchment. The prediction performance (based on goodness of fit, prediction skill, accuracy and precision assessment) of the exported model was then compared with that of a model prepared in the Giampilieri catchment exploiting its own landslide inventory. The results demonstrate that the landslide scenario observed in the Giampilieri catchment can be predicted with the same high performance without knowing its landslide distribution: in fact we obtained only a very small decrease in predictive performance when
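
    The export step can be pictured in miniature, as below: coefficients regressed in the source catchment are applied unchanged, through the logistic function, to the target catchment's predictor grid. Predictor names and values are invented.

      # Sketch: apply an exported logistic-regression function to new terrain.
      import numpy as np

      def susceptibility(X, intercept, coef):
          """Susceptibility of each target cell under the exported function."""
          z = intercept + X @ coef
          return 1.0 / (1.0 + np.exp(-z))

      coef = np.array([0.8, 1.5, -0.6])      # slope, rainfall, vegetation
                                             # (fitted in the source catchment)
      target_cells = np.array([[0.9, 1.2, 0.1],
                               [0.2, 0.4, 0.8]])   # standardised predictors
      print(susceptibility(target_cells, intercept=-2.0, coef=coef))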

  7. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multiple-regression model of the response surface equation (RSE) type. Data obtained from a major airline's operations of a passenger transport aircraft type into Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network models' errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the standard deviation of the baseline predicted landing-speed error. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.

  8. A Simple Model Predicting Individual Weight Change in Humans.

    PubMed

    Thomas, Diana M; Martin, Corby K; Heymsfield, Steven; Redman, Leanne M; Schoeller, Dale A; Levine, James A

    2011-11-01

    Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large, nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates with final weight data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions with other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
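
    In the same spirit, though not the authors' model (which additionally couples fat and fat-free mass), a generic energy-balance sketch integrates dW/dt = (EI - EE(W))/rho with rough textbook constants:

      # Generic energy-balance sketch; constants are approximate, not the
      # paper's fitted values.
      RHO = 7700.0                  # approx. kcal per kg of body tissue

      def simulate_weight(w0, intake_kcal, days, k=22.0):
          """Euler steps of dW/dt = (EI - k*W) / RHO, where k*W is a crude
          energy expenditure proportional to body weight (kcal/kg/day)."""
          w = w0
          for _ in range(days):
              w += (intake_kcal - k * w) / RHO
          return w

      # 80 kg adult holding a roughly 500 kcal/day deficit for six months
      print(simulate_weight(80.0, intake_kcal=1260.0, days=180))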

  9. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

    Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large, nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates with final weight data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions with other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319

  10. Predictive Modeling With Big Data: Is Bigger Really Better?

    PubMed

    Junqué de Fortuny, Enric; Martens, David; Provost, Foster

    2013-12-01

    With the increasingly widespread collection and processing of "big data," there is natural interest in using these data assets to improve decision making. One of the best understood ways to use data to improve decision making is via predictive analytics. An important, open question is: to what extent do larger data actually lead to better predictive models? In this article we empirically demonstrate that when predictive models are built from sparse, fine-grained data, such as data on low-level human behavior, we continue to see marginal increases in predictive performance even at very large scale. The empirical results are based on data drawn from nine different predictive modeling applications, from book reviews to banking transactions. This study provides a clear illustration that larger data indeed can be more valuable assets for predictive analytics. This implies that institutions with larger data assets, plus the skill to take advantage of them, potentially can obtain substantial competitive advantage over institutions without such access or skill. Moreover, the results suggest that it is worthwhile for companies with access to such fine-grained data, in the context of a key predictive task, to gather both more data instances and more possible data features. As an additional contribution, we introduce an implementation of the multivariate Bernoulli Naïve Bayes algorithm that can scale to massive, sparse data. PMID:27447254
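
    For illustration, scikit-learn's BernoulliNB accepts sparse binary matrices directly and can stand in for the scalable implementation the authors describe; the data below are random.

      # Multivariate Bernoulli naive Bayes on sparse binary features.
      import numpy as np
      from scipy.sparse import random as sparse_random
      from sklearn.naive_bayes import BernoulliNB

      rng = np.random.default_rng(0)
      X = sparse_random(10_000, 50_000, density=1e-4, format="csr",
                        random_state=0, data_rvs=lambda n: np.ones(n))
      y = rng.integers(0, 2, size=10_000)

      clf = BernoulliNB().fit(X, y)     # handles sparse input directly
      print(clf.predict_proba(X[:3])[:, 1])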

  11. Predictions of Geospace Drivers By the Probability Distribution Function Model

    NASA Astrophysics Data System (ADS)

    Bussy-Virat, C.; Ridley, A. J.

    2014-12-01

    Geospace drivers like the solar wind speed, interplanetary magnetic field (IMF), and solar irradiance have a strong influence on the density of the thermosphere and the near-Earth space environment. This has important consequences for the drag on satellites in low orbit and therefore for their positions. One of the basic problems with space weather prediction is that these drivers can only be measured about one hour before they affect the environment. To allow adequate planning for some members of the commercial, military, or civilian communities, reliable long-term space weather forecasts are needed. This study presents a model for predicting geospace drivers up to five days in advance. The model uses the same general technique to predict the solar wind speed, the three components of the IMF, and the solar irradiance F10.7. For instance, it uses probability distribution functions (PDFs) to relate the current solar wind speed and slope to the future solar wind speed, as well as the solar wind speed to the solar wind speed one solar rotation in the future. The PDF model has been compared with other models for predictions of the speed. It has been found to be better than using the current solar wind speed (i.e., persistence) and better than the Wang-Sheeley-Arge model for prediction horizons of 24 hours. Once the drivers are predicted and the uncertainties on the drivers are specified, the density in the thermosphere can be derived using various models of the thermosphere, such as the Global Ionosphere Thermosphere Model. In addition, uncertainties on the densities can be estimated based on ensembles of simulations. From the density and uncertainty predictions, satellite positions, as well as the uncertainties in those positions, can be estimated. These can assist operators in determining the probability of collisions between objects in low Earth orbit.
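
    A toy version of the PDF technique, with synthetic data: bin the current speed, collect the empirical distribution of the speed observed a fixed horizon later, and predict from that conditional distribution (here, its median).

      # Sketch: conditional-PDF prediction of a future solar wind speed.
      import numpy as np

      def build_pdf(speed, horizon, bins):
          """Map each current-speed bin to the sample of future speeds."""
          now, future = speed[:-horizon], speed[horizon:]
          idx = np.digitize(now, bins)
          return {b: future[idx == b] for b in np.unique(idx)}

      rng = np.random.default_rng(2)
      speed = 400 + 100 * np.sin(np.arange(2000) / 50) + rng.normal(0, 30, 2000)
      bins = np.arange(250, 651, 50)               # km/s bin edges
      pdf = build_pdf(speed, horizon=24, bins=bins)

      current = 430.0
      sample = pdf[np.digitize([current], bins)[0]]
      print(np.median(sample))                     # predicted speed 24 steps ahead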

  12. An empirical model for probabilistic decadal prediction: A global analysis

    NASA Astrophysics Data System (ADS)

    Suckling, Emma; Hawkins, Ed; Eden, Jonathan; van Oldenborgh, Geert Jan

    2016-04-01

    Empirical models, designed to predict land-based surface variables over seasons to decades ahead, provide useful benchmarks for comparison against the performance of dynamical forecast systems; they may also be employable as predictive tools for use by climate services in their own right. A new global empirical decadal prediction system is presented, based on a multiple linear regression approach designed to produce probabilistic output for comparison against dynamical models. Its performance is evaluated for surface air temperature over a set of historical hindcast experiments under a series of different prediction 'modes'. The modes include a real-time setting, a scenario in which future volcanic forcings are prescribed during the hindcasts, and an approach which exploits knowledge of the forced trend. A two-tier prediction system, which uses knowledge of future sea surface temperatures in the Pacific and Atlantic Oceans, is also tested, but within a perfect knowledge framework. Each mode is designed to identify sources of predictability and uncertainty, as well as investigate different approaches to the design of decadal prediction systems for operational use. It is found that the empirical model shows skill above that of persistence hindcasts for annual means at lead times of up to ten years ahead in all of the prediction modes investigated. Small improvements in skill are found at all lead times when including future volcanic forcings in the hindcasts. It is also suggested that hindcasts which exploit full knowledge of the forced trend due to increasing greenhouse gases throughout the hindcast period can provide more robust estimates of model bias for the calibration of the empirical model in an operational setting. The two-tier system shows potential for improved real-time prediction, given the assumption that skilful predictions of large-scale modes of variability are available. The empirical model framework has been designed with enough flexibility to

  13. A SCOPING STUDY: Development of Probabilistic Risk Assessment Models for Reactivity Insertion Accidents During Shutdown In U.S. Commercial Light Water Reactors

    SciTech Connect

    S. Khericha

    2011-06-01

    This report documents a scoping study on the development of generic, simplified fuel damage risk models for the quantitative analysis of inadvertent reactivity insertion events during shutdown (SD) in light water pressurized and boiling water reactors. In the past, nuclear fuel reactivity accidents have been analyzed both deterministically and probabilistically for at-power and SD operations of nuclear power plants (NPPs). Since then, many NPPs have had power up-rates and longer refueling intervals, which have resulted in fuel configurations that may potentially respond differently (in an undesirable way) to reactivity accidents. Also, as shown in a recent event, several inadvertent operator actions caused a potential nuclear fuel reactivity insertion accident during SD operations. The set of inadvertent operator actions is likely to be plant- and operation-state specific and could lead to accident sequences. This study is an outcome of the concern which arose after the inadvertent withdrawal of control rods at Dresden Unit 3 in 2008, when operator actions in the plant inadvertently withdrew three control rods from the reactor without the knowledge of the main control room operator. The purpose of this Standardized Plant Analysis Risk (SPAR) model development project is to develop simplified SPAR models that can be used by staff analysts to perform risk analyses of operating events and/or conditions occurring during SD operation. These types of accident scenarios are dominated by operator actions (e.g., misalignment of valves, failure to follow procedures, and errors of commission). Human error probabilities specific to this model were assessed using the methodology developed for SPAR model human error evaluations. The event trees, fault trees, basic event data, and data sources for the model are provided in the report. The end state is defined as the reactor becoming critical. The scoping study includes a brief literature search/review of historical events, developments of

  14. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    PubMed

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ), using current standards. The purpose of this study was to cross-validate prediction models from the PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, the prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones. PMID:26186536

  15. Predictive modeling of coral disease distribution within a reef system.

    PubMed

    Williams, Gareth J; Aeby, Greta S; Cowie, Rebecca O M; Davy, Simon K

    2010-01-01

    Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: 1) coral diseases show distinct associations with multiple environmental factors, 2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and 3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. The environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance, respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend that coral diseases be modeled separately, unless they are known to have etiologies that respond in a similar manner to particular
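
    A boosted-regression-tree disease model can be sketched with standard tooling; trees deeper than one split let the model capture the predictor interactions the study emphasises. The environmental data below are synthetic, and GradientBoostingRegressor stands in for the BRT implementation used in the paper.

      # Sketch: BRT-style model of disease prevalence vs environment.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(3)
      turbidity = rng.uniform(0, 10, 300)
      host_cover = rng.uniform(0, 60, 300)
      fish = rng.uniform(0, 50, 300)
      prevalence = (0.3 - 0.02 * turbidity + 0.004 * host_cover
                    - 0.003 * fish + rng.normal(0, 0.02, 300)).clip(0, 1)

      X = np.column_stack([turbidity, host_cover, fish])
      brt = GradientBoostingRegressor(max_depth=3, learning_rate=0.05,
                                      n_estimators=300).fit(X, prevalence)
      print(brt.predict([[2.0, 35.0, 20.0]]))      # predicted prevalence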

  16. Predictive performance of a model of anaesthetic uptake with desflurane.

    PubMed

    Kennedy, R

    2006-04-01

    We have previously shown that a model of anaesthetic uptake and distribution, developed for use as a teaching tool, is able to predict end-tidal isoflurane and sevoflurane concentrations at least as well as commonly used propofol models predict blood levels of propofol. Models with good predictive performance may be useful as part of real-time prediction systems. The aim of this study was to assess the performance of this model with desflurane. Twenty adult patients undergoing routine anaesthesia were studied. The total fresh gas flow and vaporizer settings were collected at 10-second intervals from the anaesthetic machine. These data were used as inputs to the model, which had been initialized for patient weight and desflurane. The output of the model is a predicted end-tidal value at each point in time. These values were compared with the measured end-tidal desflurane using the standard statistical technique of Varvel and colleagues. Data were analysed for 19 patients. The median performance error was 78% (95% CI 8-147), the median absolute performance error 77% (6-149), divergence 10.6%/h (-80-101), and wobble 8.9% (-6-24). The predictive performance of this model with desflurane was poor, with considerable variability between patients. The reasons for the difference between desflurane and our previous results with isoflurane and sevoflurane are not obvious, but may provide important clues to the necessary components of such models. The data collected in this study may assist in the development and evaluation of improved models. PMID:16617640
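
    The Varvel metrics referred to above are simple to compute: performance error PE = 100*(measured - predicted)/predicted, median PE (bias), median absolute PE (inaccuracy), wobble (median deviation of PE from its median), and divergence (the slope of |PE| against time). The numbers below are invented.

      # Varvel-style predictive-performance metrics.
      import numpy as np

      def varvel(measured, predicted, t_hours):
          m, p = np.asarray(measured), np.asarray(predicted)
          pe = 100.0 * (m - p) / p
          mdpe = np.median(pe)                      # bias, %
          mdape = np.median(np.abs(pe))             # inaccuracy, %
          wobble = np.median(np.abs(pe - mdpe))     # variability, %
          divergence = np.polyfit(t_hours, np.abs(pe), 1)[0]  # %/h
          return mdpe, mdape, wobble, divergence

      t = np.array([0.25, 0.5, 1.0, 1.5, 2.0])      # hours since induction
      print(varvel([5.2, 5.8, 6.1, 6.4, 6.0], [4.0, 4.1, 4.2, 4.0, 3.9], t))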

  17. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.

  18. Risk prediction models for hepatocellular carcinoma in different populations

    PubMed Central

    Ma, Xiao; Yang, Yang; Tu, Hong; Gao, Jing; Tan, Yu-Ting; Zheng, Jia-Li; Bray, Freddie; Xiang, Yong-Bing

    2016-01-01

    Hepatocellular carcinoma (HCC) is a malignant disease with limited therapeutic options due to its aggressive progression. Treating HCC patients places a heavy burden on most low- and middle-income countries. Accurate HCC risk predictions can now help in making decisions on the need for HCC surveillance and antiviral therapy. HCC risk prediction models based on the major risk factors of HCC are useful in providing adequate surveillance strategies for individuals at different risk levels. Several risk prediction models for estimating HCC incidence in cohorts of different populations have been presented recently, using simple, efficient, and ready-to-use parameters. Moreover, using predictive scoring systems to assess HCC development can provide suggestions to improve clinical and public health approaches, making them more cost-effective and effort-effective, and can support personalized surveillance programs according to risk stratification. In this review, the features of HCC risk prediction models across different populations were summarized, and the perspectives of HCC risk prediction models were discussed as well. PMID:27199512

  19. COMPASS: A Framework for Automated Performance Modeling and Prediction

    SciTech Connect

    Lee, Seyong; Meredith, Jeremy S; Vetter, Jeffrey S

    2015-01-01

    Flexible, accurate performance predictions offer numerous benefits, such as gaining insight into and optimizing applications and architectures. However, the development and evaluation of such performance predictions have been a major research challenge, due to architectural complexity. To address this challenge, we have designed and implemented a prototype system, named COMPASS, for automated performance model generation and prediction. COMPASS generates a structured performance model from the target application's source code using automated static analysis, and then evaluates this model using various performance prediction techniques. As we demonstrate on several applications, the results of these predictions can be used for a variety of purposes, such as design space exploration, identifying performance tradeoffs for applications, and understanding sensitivities of important parameters. COMPASS can generate these predictions across several types of applications, from traditional, sequential CPU applications to GPU-based, heterogeneous, parallel applications. Our empirical evaluation demonstrates a maximum overhead of 4%, flexibility to generate models for 9 applications, speed, ease of creation, and very low relative errors across a diverse set of architectures.
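
    COMPASS derives its models from static analysis of source code; purely as an illustration of what a minimal structured analytical performance model looks like, here is a roofline-style time estimate. The kernel's flop and byte counts and the machine parameters are hypothetical, and this is not COMPASS's actual model format:

```python
def predicted_time_s(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline-style bound: runtime is limited either by compute
    throughput or by memory traffic, whichever is slower."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# hypothetical kernel: daxpy over 1e8 elements (2 flops, 24 bytes per element)
n = 10**8
t = predicted_time_s(flops=2 * n, bytes_moved=24 * n,
                     peak_flops=1.0e12, peak_bw=200.0e9)
print(f"predicted runtime: {t * 1e3:.1f} ms")   # memory-bound in this example
```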

  1. Predicting ICME Magnetic Fields with a Numerical Flux Rope Model

    NASA Astrophysics Data System (ADS)

    Manchester, W.; van der Holst, B.; Sokolov, I.

    2014-12-01

    Coronal mass ejections (CMEs) are a dramatic manifestation of solar activity that release vast amounts of plasma into the heliosphere, affect the interplanetary medium and planetary atmospheres, and are the major driver of space weather. CMEs occur with the formation and expulsion of large-scale flux ropes from the solar corona, which are routinely observed in interplanetary space. Simulating and predicting the structure and dynamics of these ICME magnetic fields is essential to the progress of heliospheric science and space weather prediction. We combine observations of CME events made with different observing techniques to develop a numerical model capable of predicting the magnetic field of interplanetary coronal mass ejections (ICMEs). Photospheric magnetic field measurements from SOHO/MDI and SDO/HMI are used to specify a coronal magnetic flux rope that drives the CMEs. We examine halo CME events that produced clearly observed magnetic clouds at Earth and present our model predictions of these events, with an emphasis placed on the z component of the magnetic field. Comparison of the MHD model predictions with coronagraph observations and in-situ data allows us to robustly determine the parameters that define the initial state of the driving flux rope, thus providing a predictive model.
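
    The abstract's flux rope is specified numerically from photospheric data, but the in-situ field structure such models aim to predict is often illustrated with the analytic force-free (Lundquist) flux rope. A sketch with assumed field strength, rope radius, and handedness (all placeholder values, not the paper's model):

```python
import numpy as np
from scipy.special import j0, j1

def lundquist_field(r, B0=20e-9, R=0.05, H=+1):
    """Force-free (Lundquist) flux-rope field in rope-centered cylindrical
    coordinates: B_axial = B0*J0(alpha*r), B_azimuthal = H*B0*J1(alpha*r).
    alpha is set so the axial field vanishes at the rope boundary r = R."""
    alpha = 2.405 / R                      # 2.405 ~ first zero of J0
    return B0 * j0(alpha * r), H * B0 * j1(alpha * r)

r = np.linspace(0.0, 0.05, 6)              # a radial cut through the rope (AU)
B_ax, B_az = lundquist_field(r)
print(B_ax * 1e9, B_az * 1e9)              # field components in nT
```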

  2. Modeling of leachable 137Cs in throughfall and stemflow for Japanese forest canopies after Fukushima Daiichi Nuclear Power Plant accident.

    PubMed

    Loffredo, Nicolas; Onda, Yuichi; Kawamori, Ayumi; Kato, Hiroaki

    2014-09-15

    The Fukushima accident dispersed significant amounts of radioactive cesium (Cs) in the landscape. Our research investigated, from June 2011 to November 2013, the mobility of leachable Cs in forest canopies. In particular, (137)Cs and (134)Cs activity concentrations were measured in rainfall, throughfall, and stemflow in broad-leaf and cedar forests in an area located 40 km from the power plant. Leachable (137)Cs loss was modeled by a double exponential (DE) model. This model could not reproduce the observed variation in activity concentration. To refine the DE model, the main measurable physical parameters (rainfall intensity, wind velocity, and snowfall occurrence) were assessed, and rainfall was identified as the dominant factor controlling the observed variation. A corrective factor was then developed to incorporate rainfall intensity into an improved DE model. With the original DE model, we estimated total (137)Cs loss by leaching from canopies to be 72 ± 4%, 67 ± 4%, and 48 ± 2% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. In contrast, with the improved DE model, the total (137)Cs loss by leaching was estimated to be 34 ± 2%, 34 ± 2%, and 16 ± 1%, respectively. The improved DE model agrees better with observations reported in the literature. Understanding (137)Cs and (134)Cs forest dynamics is important for forecasting future contamination of forest soils around the FDNPP. It also provides a basis for understanding forest transfers in future potential nuclear disasters. PMID:24995637
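
    The double-exponential depletion form and the rainfall correction can be sketched as follows; the pool fraction, rate constants, reference intensity, and the scaling form are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def de_fraction_remaining(t_days, A=0.4, k_fast=0.05, k_slow=0.002):
    """Original double-exponential (DE) model: a fast-leaching pool A and a
    slow pool (1 - A), each depleted at a first-order rate. The pool
    fraction and rate constants are illustrative, not fitted values."""
    return A * np.exp(-k_fast * t_days) + (1 - A) * np.exp(-k_slow * t_days)

def rainfall_corrected_fraction(t_days, rain_mm_day, r_ref=10.0,
                                A=0.4, k_fast=0.05, k_slow=0.002):
    """Improved DE model in the spirit of the abstract: leaching in each
    interval is scaled by rainfall intensity relative to a reference
    intensity r_ref (the scaling form is an assumption)."""
    fast, slow, out = A, 1.0 - A, []
    for dt, rain in zip(np.diff(t_days, prepend=t_days[0]), rain_mm_day):
        f = rain / r_ref                   # no rain -> essentially no leaching
        fast *= np.exp(-k_fast * f * dt)
        slow *= np.exp(-k_slow * f * dt)
        out.append(fast + slow)
    return np.array(out)

t = np.arange(0.0, 365.0)
rain = np.random.default_rng(0).gamma(0.5, 8.0, size=t.size)  # synthetic rainfall
print(de_fraction_remaining(t)[-1], rainfall_corrected_fraction(t, rain)[-1])
```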

  3. Internal Flow Thermal/Fluid Modeling of STS-107 Port Wing in Support of the Columbia Accident Investigation Board

    NASA Technical Reports Server (NTRS)

    Sharp, John R.; Kittredge, Ken; Schunk, Richard G.

    2003-01-01

    As part of the aero-thermodynamics team supporting the Columbia Accident Investigation Board (CAIB), the Marshall Space Flight Center was asked to perform engineering analyses of internal flows in the port wing. The aero-thermodynamics team was split into internal flow and external flow teams, with the support divided between shorter-timeframe engineering methods and more complex computational fluid dynamics. To gain a rough-order-of-magnitude understanding of the internal flow in the port wing for various breach locations and sizes (theorized by the CAIB to have caused the Columbia re-entry failure), a bulk venting model was required to supply boundary flow rates and pressures to the computational fluid dynamics (CFD) analyses. This paper summarizes the modeling that was done by MSFC in Thermal Desktop. A venting model of the entire Orbiter was constructed in FloCAD based on Rockwell International's flight substantiation analyses and the STS-107 re-entry trajectory. Chemical equilibrium air thermodynamic properties were generated for SINDA/FLUINT's fluid property routines from a code provided by Langley Research Center. In parallel, a simplified thermal mathematical model of the port wing, including the Thermal Protection System (TPS), was based on more detailed Shuttle re-entry modeling previously done by the Dryden Flight Research Center. Once the venting model was coupled with the thermal model of the wing structure and the chemical equilibrium air properties, various breach scenarios were assessed in support of the aero-thermodynamics team. The construction of the coupled model and results are presented herein.

  4. Katz model prediction of Caenorhabditis elegans mutagenesis on STS-42

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Badhwar, Gautam D.

    1992-01-01

    Response parameters that describe the production of recessive lethal mutations in C. elegans from ionizing radiation are obtained with the Katz track structure model. The authors used models of the space radiation environment and radiation transport to predict and discuss mutation rates for C. elegans on the IML-1 experiment aboard STS-42.

  5. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...

  6. COMPARISONS OF MODELS PREDICTING AMBIENT LAKE PHOSPHORUS CONCENTRATIONS

    EPA Science Inventory

    The Vollenweider, Dillon, and Larsen/Mercier models for predicting ambient lake phosphorus concentrations and classifying lakes by trophic state are compared in this report. The Dillon and Larsen/Mercier models gave comparable results in ranking 39 lakes relative to known ambient...

  7. COMPARISONS OF SPATIAL PATTERNS OF WET DEPOSITION TO MODEL PREDICTIONS

    EPA Science Inventory

    The Community Multiscale Air Quality model, (CMAQ), is a "one-atmosphere" model, in that it uses a consistent set of chemical reactions and physical principles to predict concentrations of primary pollutants, photochemical smog, and fine aerosols, as well as wet and dry depositi...

  8. Implementing a Resource Requirements Prediction Model in Community Colleges.

    ERIC Educational Resources Information Center

    Rice, Gary Alan

    The purposes of this study were to determine what characterizes a useful cost estimating model at the community college level, to implement at a community college the Resource Requirements Prediction Model 1.6 (RRPM) developed by the National Center for Higher Education Management Systems, to identify problems associated with implementation and…

  9. Comparison of model predictions with LDEF satellite radiation measurements

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Harmon, B. A.; Parnell, T. A.; Watts, J. W., Jr.; Benton, E. V.

    1994-01-01

    Some early results are summarized from a program under way to utilize Long Duration Exposure Facility (LDEF) satellite data for evaluating and improving current models of the space radiation environment in low Earth orbit. Reported here are predictions and comparisons with some of the LDEF dose and induced radioactivity data, which are used to check the accuracy of current models describing the magnitude and directionality of the trapped proton environment. Preliminary findings are that the environment models underestimate both dose and activation from trapped protons by a factor of about two, and the observed anisotropy is higher than predicted.

  10. Prediction horizon effects on stochastic modelling hints for neural networks

    SciTech Connect

    Drossu, R.; Obradovic, Z.

    1995-12-31

    The objective of this paper is to investigate the relationship between stochastic models and neural network (NN) approaches to time series modelling. Experiments on a complex real-life prediction problem (entertainment video traffic) indicate that prior knowledge can be obtained through stochastic analysis, both with respect to an appropriate NN architecture and to an appropriate sampling rate, in the case of a prediction horizon larger than one. An improvement of the obtained NN predictor is also proposed through a bias-removal post-processing step, resulting in much better performance than the best stochastic model.

  11. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and for both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, in both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were mostly within the ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.

  12. Stellar coronae - What can be predicted with minimum flux models?

    NASA Technical Reports Server (NTRS)

    Hammer, R.; Endler, F.; Ulmschneider, P.

    1983-01-01

    In order to determine the possible errors of various minimum flux corona (MFC) predictions, MFC models are compared with a grid of detailed coronal models covering a range of two orders of magnitude in coronal heating and damping length values. The MFC concept is totally unreliable in the prediction of mass loss and the relative importance of various kinds of energy losses, and MFC predictions for the mass loss rate and energy losses due to stellar wind can be wrong by many orders of magnitude. It is suggested that for future applications, the unreliable MFC formulas should be replaced by a grid of related models accounting for the coronal dependence on damping length, such as the models underlying the present study.

  13. Predictive Models for Fast and Effective Profiling of Kinase Inhibitors.

    PubMed

    Bora, Alina; Avram, Sorin; Ciucanu, Ionel; Raica, Marius; Avram, Stefana

    2016-05-23

    In this study we developed two-dimensional pharmacophore-based random forest models for the effective profiling of kinase inhibitors. One hundred seven prediction models were developed to address distinct kinases spanning all kinase groups. Rigorous external validation demonstrates excellent virtual screening and classification potential of the predictors and, more importantly, the capacity to prioritize novel chemical scaffolds in large chemical libraries. The models built upon more diverse and more potent compounds tend to exhibit the highest predictive power. The analysis of ColBioS-FlavRC (Collection of Bioselective Flavonoids and Related Compounds) highlighted several potentially promiscuous derivatives with undesirable selectivity against kinases. The prediction models can be downloaded from www.chembioinf.ro. PMID:27064988
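
    A minimal sketch of the modeling strategy described (descriptor vectors feeding a random forest, with held-out compounds as external validation), using scikit-learn and synthetic data in place of the kinase bioactivity sets:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.random((2000, 128))                  # stand-in for 2D pharmacophore descriptors
y = (X[:, :4].sum(axis=1) > 2).astype(int)   # synthetic active/inactive labels

# external validation: hold out compounds never seen during training
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)

# rank an external library by predicted probability of activity
scores = clf.predict_proba(X_te)[:, 1]
print("external AUC:", roc_auc_score(y_te, scores))
```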

  14. Three-model ensemble wind prediction in southern Italy

    NASA Astrophysics Data System (ADS)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance, since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at the National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy, and the measurements used for forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30%, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
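
    The core of a multi-model ensemble such as the TME can be sketched as bias-correcting each member against training observations and then averaging the corrected forecasts; the synthetic series below stand in for station data and model output:

```python
import numpy as np

def tme_forecast(model_fcsts, obs_train, fcst_train):
    """Bias-correct each model against training observations,
    then average the corrected forecasts (equal weights)."""
    biases = [np.mean(f - obs_train) for f in fcst_train]
    return np.mean([f - b for f, b in zip(model_fcsts, biases)], axis=0)

rng = np.random.default_rng(1)
truth = 5 + 2 * rng.standard_normal(500)
# three synthetic models with different biases and error levels
specs = [(0.8, 1.0), (-0.5, 1.2), (0.3, 0.9)]
train = [truth + b + e * rng.standard_normal(500) for b, e in specs]
new = [truth + b + e * rng.standard_normal(500) for b, e in specs]

ens = tme_forecast(new, truth, train)
rmse = lambda f: np.sqrt(np.mean((f - truth) ** 2))
print([round(rmse(f), 2) for f in new], "->", round(rmse(ens), 2))
```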

  15. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
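
    A simple nonparametric local regression of the kind referred to can be sketched as a kernel-weighted linear fit around each untested site; the bandwidth, coordinates, and synthetic "recoverable volume" surface below are illustrative assumptions:

```python
import numpy as np

def local_linear_predict(x0, X, y, bandwidth=0.1):
    """Kernel-weighted (local linear) regression at location x0.
    X: (n, d) drilled-site coordinates; y: observed volumes."""
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)           # Gaussian kernel weights
    A = np.hstack([np.ones((len(X), 1)), X - x0])    # local intercept + slope
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return beta[0]                                   # fitted value at x0

rng = np.random.default_rng(3)
X = rng.random((300, 2))                             # drilled-site locations
y = np.sin(3 * X[:, 0]) + X[:, 1] + 0.1 * rng.standard_normal(300)
print(local_linear_predict(np.array([0.5, 0.5]), X, y))
```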

  16. The PRIMROSE cardiovascular risk prediction models for people with severe mental illness

    PubMed Central

    Osborn, David PJ; Hardoon, Sarah; Omar, Rumana Z; Holt, Richard IG; King, Michael; Larsen, John; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Petersen, Irene

    2015-01-01

    Importance People with Severe Mental Illness (SMI), including schizophrenia and bipolar disorder, have excess cardiovascular disease (CVD). Risk prediction models validated for the general population may not accurately estimate cardiovascular risk in this group. Objectives To develop and validate a risk model exclusively for predicting CVD events in people with SMI, using established cardiovascular risk factors and additional variables. Design Prospective cohort and risk score development study. Setting UK primary care. Participants 38,824 people with a diagnosis of SMI (schizophrenia, bipolar disorder or other non-organic psychosis) aged 30-90 years. Median follow-up was 5.6 years, with 2,324 CVD events (6%). Main outcomes and measures Ten-year risk of a first cardiovascular event (myocardial infarction, angina pectoris, cerebrovascular accident or major coronary surgery). Predictors included age, gender, height, weight, systolic blood pressure, diabetes, smoking, body mass index (BMI), lipid profile, social deprivation, SMI diagnosis, prescriptions of antidepressants and antipsychotics, and reports of heavy alcohol use. Results We developed two risk models for people with SMI: the PRIMROSE BMI model and a lipid model; the BMI model excludes lipids and the lipid model excludes BMI. From cross-validations, in terms of discrimination, for men the PRIMROSE lipid model D statistic was 1.92 (1.80-2.03) and C statistic was 0.80 (0.76-0.83), compared to 1.74 (1.54-1.86) and 0.78 (0.75-0.82) for published Framingham risk scores; in women the corresponding results were 1.87 (1.76-1.98) and 0.80 (0.76-0.83) for the PRIMROSE lipid model and 1.58 (1.48-1.68) and 0.76 (0.72-0.80) for Framingham. Discrimination statistics for the PRIMROSE BMI model were comparable to those for the PRIMROSE lipid model. Calibration plots suggested that both PRIMROSE models were superior to the Framingham models. Conclusion and relevance The PRIMROSE BMI and lipid CVD risk prediction models performed better in SMI than models which only

  17. Review of the chronic exposure pathways models in MACCS (MELCOR Accident Consequence Code System) and several other well-known probabilistic risk assessment models

    SciTech Connect

    Tveten, U.

    1990-06-01

    The purpose of this report is to document the results of the work performed by the author in connection with the following task, performed for the US Nuclear Regulatory Commission (USNRC), Office of Nuclear Regulatory Research, Division of Systems Research: MACCS Chronic Exposure Pathway Models: review the chronic exposure pathway models implemented in the MELCOR Accident Consequence Code System (MACCS) and compare those models to the chronic exposure pathway models implemented in similar codes developed in countries that are members of the OECD. The chronic exposures concerned are via the terrestrial food pathways, the water pathways, the long-term groundshine pathway, and the inhalation of resuspended radionuclides. The USNRC indicated during discussions of the task that the major effort should be spent on the terrestrial food pathways. There is one chapter for each of the categories of chronic exposure pathways listed above.

  18. Efficient Modelling and Prediction of Meshing Noise from Chain Drives

    NASA Astrophysics Data System (ADS)

    ZHENG, H.; WANG, Y. Y.; LIU, G. R.; LAM, K. Y.; QUEK, K. P.; ITO, T.; NOGUCHI, Y.

    2001-08-01

    This paper presents a practical approach for predicting the meshing noise due to the impact of chain rollers against the sprocket of chain drives. An acoustical model relating dynamic response of rollers and its induced sound pressure is developed based on the fact that the acoustic field is mainly created by oscillating rigid cylindrical rollers. Finite element techniques and numerical software codes are employed to model and simulate the acceleration response of each chain roller which is necessary for noise level prediction of a chain drive under varying operation conditions and different sprocket configurations. The predicted acoustic pressure levels of meshing noise are compared with the available experimental measurements. It is shown that the predictions are in reasonable agreement with the experiments and the approach enables designers to obtain required information on the noise level of a selected chain drive in a time- and cost-efficient manner.

  19. Modeling system for predicting enterococci levels at Holly Beach.

    PubMed

    Zhang, Zaihong; Deng, Zhiqiang; Rusch, Kelly A; Walker, Nan D

    2015-08-01

    This paper presents a new modeling system for nowcasting and forecasting enterococci levels in coastal recreation waters at any time during the day. The modeling system consists of (1) an artificial neural network (ANN) model for predicting the enterococci level at sunrise time, (2) a clear-sky solar radiation and turbidity correction to the ANN model, (3) remote sensing algorithms for turbidity, and (4) nowcasting/forecasting data. The first three components are also unique features of the new modeling system. While the component (1) is useful to beach monitoring programs requiring enterococci levels in early morning, the component (2) in combination with the component (1) makes it possible to predict the bacterial level in beach waters at any time during the day if the data from the components (3) and (4) are available. Therefore, predictions from the component (2) are of primary interest to beachgoers. The modeling system was developed using three years of swimming season data and validated using additional four years of independent data. Testing results showed that (1) the sunrise-time model correctly reproduced 82.63% of the advisories issued in seven years with a false positive rate of 2.65% and a false negative rate of 14.72%, and (2) the new modeling system was capable of predicting the temporal variability in enterococci levels in beach waters, ranging from hourly changes to daily cycles. The results demonstrate the efficacy of the new modeling system in predicting enterococci levels in coastal beach waters. Applications of the modeling system will improve the management of recreational beaches and protection of public health. PMID:26186681
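
    As a rough illustration of component (1), here is a small ANN regression mapping environmental inputs to a (log-scale) sunrise enterococci level, using scikit-learn; the predictor variables and the synthetic data are placeholders, not the study's actual inputs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# placeholder predictors, e.g. rainfall, wind speed, tide level, turbidity
X = rng.random((600, 4))
log_entero = 1.5 + 2.0 * X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.standard_normal(600)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))
ann.fit(X[:400], log_entero[:400])            # train on earlier seasons
pred = ann.predict(X[400:])                   # validate on later, independent data
print("RMSE (log units):", np.sqrt(np.mean((pred - log_entero[400:]) ** 2)))
```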

  20. Testable polarization predictions for models of CMB isotropy anomalies

    SciTech Connect

    Dvorkin, Cora; Peiris, Hiranya V.; Hu, Wayne

    2008-03-15

    Anomalies in the large-scale cosmic microwave background (CMB) temperature sky measured by the Wilkinson Microwave Anisotropy Probe have been suggested as possible evidence for a violation of statistical isotropy on large scales. In any physical model for broken isotropy, there are testable consequences for the CMB polarization field. We develop simulation tools for predicting the polarization field in models that break statistical isotropy locally through a modulation field. We study two different models: dipolar modulation, invoked to explain the asymmetry in power between northern and southern ecliptic hemispheres, and quadrupolar modulation, posited to explain the alignments between the quadrupole and octopole. For the dipolar case, we show that predictions for the correlation between the first 10 multipoles of the temperature and polarization fields can typically be tested at better than the 98% CL. For the quadrupolar case, we show that the polarization quadrupole and octopole should be moderately aligned. Such an alignment is a generic prediction of explanations which involve the temperature field at recombination and thus discriminate against explanations involving foregrounds or local secondary anisotropy. Predicted correlations between temperature and polarization multipoles out to l=5 provide tests at the ~99% CL or stronger for quadrupolar models that make the temperature alignment more than a few percent likely. As predictions of anomaly models, polarization statistics move beyond the a posteriori inferences that currently dominate the field.

  1. Short communication: Accounting for new mutations in genomic prediction models.

    PubMed

    Casellas, Joaquim; Esquivelzeta, Cecilia; Legarra, Andrés

    2013-08-01

    Genomic evaluation models so far do not allow for accounting of newly generated genetic variation due to mutation. The main target of this research was to extend current genomic BLUP models with mutational relationships (model AM), and compare them against standard genomic BLUP models (model A) by analyzing simulated data. Model performance and precision of the predicted breeding values were evaluated under different population structures and heritabilities. The deviance information criterion (DIC) clearly favored the mutational relationship model under large heritabilities or populations with moderate-to-deep pedigrees contributing phenotypic data (i.e., differences equal to or larger than 10 DIC units); this model provided slightly higher correlation coefficients between simulated and predicted genomic breeding values. On the other hand, null DIC differences, or even relevant advantages for the standard genomic BLUP model, were reported under small heritabilities and shallow pedigrees, although the precision of the genomic breeding values did not differ across models at a significant level. This method allows for slightly more accurate genomic predictions and the handling of newly created variation; moreover, this approach does not require additional genotyping or phenotyping efforts, but rather a more accurate handling of available data. PMID:23746579

  2. Gaussian predictive process models for large spatial data sets

    PubMed Central

    Banerjee, Sudipto; Gelfand, Alan E.; Finley, Andrew O.; Sang, Huiyan

    2009-01-01

    Summary With scientific data available at geocoded locations, investigators are increasingly turning to spatial process models for carrying out statistical inference. Over the last decade, hierarchical models implemented through Markov chain Monte Carlo methods have become especially popular for spatial modelling, given their flexibility and power to fit models that would be infeasible with classical methods as well as their avoidance of possibly inappropriate asymptotics. However, fitting hierarchical spatial models often involves expensive matrix decompositions whose computational complexity increases in cubic order with the number of spatial locations, rendering such models infeasible for large spatial data sets. This computational burden is exacerbated in multivariate settings with several spatially dependent response variables. It is also aggravated when data are collected at frequent time points and spatiotemporal process models are used. With regard to this challenge, our contribution is to work with what we call predictive process models for spatial and spatiotemporal data. Every spatial (or spatiotemporal) process induces a predictive process model (in fact, arbitrarily many of them). The latter models project process realizations of the former to a lower dimensional subspace, thereby reducing the computational burden. Hence, we achieve the flexibility to accommodate non-stationary, non-Gaussian, possibly multivariate, possibly spatiotemporal processes in the context of large data sets. We discuss attractive theoretical properties of these predictive processes. We also provide a computational template encompassing these diverse settings. Finally, we illustrate the approach with simulated and real data sets. PMID:19750209
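
    The dimension reduction at the heart of the predictive process is a projection of the parent Gaussian process onto a modest set of knots: w_tilde(s) = c(s)' C*^{-1} w*, where w* lives at the knots. A sketch, assuming an exponential covariance and synthetic locations:

```python
import numpy as np

def exp_cov(A, B, sigma2=1.0, phi=3.0):
    """Exponential covariance between location sets A (n, 2) and B (m, 2)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

rng = np.random.default_rng(5)
locs = rng.random((5000, 2))           # n observation locations
knots = rng.random((64, 2))            # m << n knots

C_star = exp_cov(knots, knots) + 1e-8 * np.eye(len(knots))  # jitter for stability
c = exp_cov(locs, knots)                                    # cross-covariance (n, m)
w_star = np.linalg.cholesky(C_star) @ rng.standard_normal(len(knots))
w_tilde = c @ np.linalg.solve(C_star, w_star)  # projected realization, O(n*m) work
print(w_tilde[:5])
```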

  3. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    PubMed Central

    Vukićević, Milan

    2014-01-01

    Rapid growth and storage of biomedical data has enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, the analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data. PMID:24892101

  4. Predictive Models of Li-ion Battery Lifetime

    SciTech Connect

    Smith, Kandler; Wood, Eric; Santhanagopalan, Shriram; Kim, Gi-heon; Shi, Ying; Pesaran, Ahmad

    2015-06-15

    It remains an open question how best to predict real-world battery lifetime based on accelerated calendar and cycle aging data from the laboratory. Multiple degradation mechanisms due to (electro)chemical, thermal, and mechanical coupled phenomena influence Li-ion battery lifetime, each with different dependence on time, cycling and thermal environment. The standardization of life predictive models would benefit the industry by reducing test time and streamlining development of system controls.

  5. Strains at the myotendinous junction predicted by a micromechanical model

    PubMed Central

    Sharafi, Bahar; Ames, Elizabeth G.; Holmes, Jeffrey W.; Blemker, Silvia S.

    2011-01-01

    The goal of this work was to create a finite element micromechanical model of the myotendinous junction (MTJ) to examine how the structure and mechanics of the MTJ affect the local micro-scale strains experienced by muscle fibers. We validated the model through comparisons with histological longitudinal sections of muscles fixed in slack and stretched positions. The model predicted deformations of the A-bands within the fiber near the MTJ that were similar to those measured from the histological sections. We then used the model to predict the dependence of local fiber strains on activation and the mechanical properties of the endomysium. The model predicted that peak micro-scale strains increase with activation and as the compliance of the endomysium decreases. Analysis of the models revealed that, in passive stretch, local fiber strains are governed by the difference of the mechanical properties between the fibers and the endomysium. In active stretch, strain distributions are governed by the difference in cross-sectional area along the length of the tapered region of the fiber near the MTJ. The endomysium provides passive resistance that balances the active forces and prevents the tapered region of the fiber from undergoing excessive strain. These model predictions lead to the following hypotheses: (i) the increased likelihood of injury during active lengthening of muscle fibers may be due to the increase in peak strain with activation and (ii) endomysium may play a role in protecting fibers from injury by reducing the strains within the fiber at the MTJ. PMID:21945569

  6. Rotor Broadband Noise Prediction with Comparison to Model Data

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Burley, Casey L.

    2001-01-01

    This paper reports an analysis and prediction development of rotor broadband noise. The two primary components of this noise are Blade-Wake Interaction (BWI) noise, due to the blades' interaction with the turbulent wakes of the preceding blades, and "Self" noise, due to the development and shedding of turbulence within the blades' boundary layers. Emphasized in this report is the new code development for Self noise. The analysis and validation employ data from the HART program, a model BO-105 rotor wind tunnel test conducted in the German-Dutch Wind Tunnel (DNW). The BWI noise predictions are based on measured pressure response coherence functions using cross-spectral methods. The Self noise predictions are based on previously reported semiempirical modeling of Self noise obtained from isolated airfoil sections and the use of CAMRAD.Mod1 to define rotor performance and local blade segment flow conditions. Both BWI and Self noise from individual blade segments are Doppler shifted and summed at the observer positions. Prediction comparisons with measurements show good agreement for a range of rotor operating conditions from climb to steep descent. The broadband noise predictions, along with those of harmonic and impulsive Blade-Vortex Interaction (BVI) noise, demonstrate a significant advance in predictive capability for main rotor noise.

  7. Orbit Modelling for Satellites Using the NASA Prediction Bulletins

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Koch, D. W.; Maslyar, G. A.; Foreman, J. C.

    1976-01-01

    For some satellites the NASA Prediction Bulletins are the only means available to the general user for obtaining orbital information. A computational interface between the information given in the NASA Prediction Bulletins and standard orbit determination programs is provided. Such an interface is necessary to obtain accurate orbit predictions. The theoretical considerations and their computational verification in this interface modelling are presented. This analysis was performed in conjunction with satellite aided search and rescue position location experiments where accurate orbits of the Amateur Satellite Corporation (AMSAT) OSCAR-6 and OSCAR-7 spacecraft are a prerequisite.

  8. Model Predictions to the 2005 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin; Park, Joon-Soo

    2006-03-01

    The World Federation of NDE Centers (WFNDEC) has addressed the 2005 ultrasonic benchmark problems, including linear scanning of the side-drilled hole (SDH) specimen with oblique incidence, with an emphasis on further study of SV-wave responses of the SDH at angles around 60 degrees and on responses of a circular crack. To solve these problems, we adopted the multi-Gaussian beam model as the beam model, and the Kirchhoff approximation and the separation-of-variables method as far-field scattering models. By integrating the beam and scattering models, together with the system efficiency factor obtained from the reference experimental setups provided by the Center for Nondestructive Evaluation, into our ultrasonic measurement models, we predicted the responses of the SDH and the circular cracks (pill-box, crack-like flaws). This paper summarizes our models and the predicted results for the 2005 ultrasonic benchmark problems.

  9. Atmospheric analysis and prediction model development, volume 1

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.; Wellck, R. E.; Langland, R. A.; Lewit, H. L.

    1976-01-01

    A set of hemispheric atmospheric analysis and prediction models was designed and tested. All programs were executed on either a 63 x 63 or 187 x 187 polar stereographic grid of the Northern Hemisphere. Parameters for objective analysis included sea surface temperature, sea level pressure, and twelve levels (from 1,000 to 100 millibars) of temperatures, heights, and winds. Stratospheric extensions (up to 10 millibars) were also provided. Four versions of a complex atmospheric prediction model, based on primitive equations, were programmed and tested. These models were executed on either the 63 x 63 or 187 x 187 grid, using either five or ten computational layers. The coarse-mesh (63 x 63) models were tested using real data for the period 21-23 April 1976. The fine-mesh (187 x 187) models were debugged, but insufficient computer resources precluded production tests. Preliminary test results for the 63 x 63 models are provided. Problem areas and proposed solutions are discussed.

  10. Predictive modeling of particle-laden, turbulent flows

    SciTech Connect

    Sinclair, J.L.

    1992-01-01

    The successful prediction of particle-laden, turbulent flows relies heavily on the representation of turbulence in the gas phase. Several types of turbulence models for single-phase gas flow have been developed which compare reasonably well with experimental data. In the present work, a "low-Reynolds" k-ε closure model is chosen to describe the Reynolds stresses associated with gas-phase turbulence. This closure scheme, which involves transport equations for the turbulent kinetic energy and its dissipation rate, is valid in the turbulent core as well as the viscous sublayer. Several versions of the low-Reynolds k-ε closure are documented in the literature. However, even those models which are similar in theory often differ considerably in their quantitative and qualitative predictions, making the selection of such a model a difficult task. The purpose of this progress report is to document our findings on the performance of ten different versions of the low-Reynolds k-ε model in predicting fully developed pipe flow. The predictions are compared with the experimental data of Schildknecht, et al. (1979). With the exception of the model put forth by Hoffman (1975), the predictions of all the closures show reasonable agreement for the mean velocity profile. However, important quantitative differences exist for the turbulent kinetic energy profile. In addition, the predicted eddy viscosity profile and the wall-region profile of the turbulent kinetic energy dissipation rate exhibit both quantitative and qualitative differences. An effort to extend the present comparisons to include experimental measurements of other researchers is recommended in order to further evaluate the performance of the models.
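
    All k-ε closures share the eddy-viscosity construction; the low-Reynolds variants differ mainly in the near-wall damping function applied to it. A minimal sketch of the standard relation, with the usual model constant:

```python
def eddy_viscosity(k, eps, c_mu=0.09, f_mu=1.0):
    """k-epsilon eddy viscosity: nu_t = C_mu * f_mu * k^2 / eps.
    C_mu = 0.09 is the standard model constant; low-Reynolds variants
    differ chiefly in the near-wall damping function f_mu (= 1 in the core)."""
    return c_mu * f_mu * k**2 / eps

# e.g. illustrative pipe-core values: k = 0.05 m^2/s^2, eps = 0.1 m^2/s^3
print(eddy_viscosity(0.05, 0.1))   # -> 2.25e-3 m^2/s
```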

  11. Modeling the prediction of business intelligence system effectiveness.

    PubMed

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, and effectively managing the critical attributes that determine BISE, are necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important to forecasting BI performance, and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These enable enterprises to improve BISE while effectively managing BI solution implementation, and cater to academics for whom theory is important. PMID:27376005

  12. Prediction of Standard Enthalpy of Formation by a QSPR Model

    PubMed Central

    Vatani, Ali; Mehrpooya, Mehdi; Gharagheizi, Farhad

    2007-01-01

    The standard enthalpies of formation of 1115 compounds from all chemical groups were predicted using genetic algorithm-based multivariate linear regression (GA-MLR). The five-descriptor multivariate linear model obtained by GA-MLR has a squared correlation coefficient of R2 = 0.9830. All molecular descriptors entering this model are calculated from the chemical structure of the molecule. As a result, application of this model to any compound is easy and accurate.

  13. A model of the human in a cognitive prediction task.

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1973-01-01

    The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.

  14. Investigation of models for large scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1982-01-01

    Long-range numerical prediction and climate simulation experiments with various global atmospheric general circulation models are reported. A chronological listing of the titles of all publications and technical reports already distributed is presented, together with an account of the most recent research. Several reports on a series of perpetual January climate simulations with the GISS coarse mesh climate model are listed. A set of perpetual July climate simulations with the same model is presented and the results are described.

  15. MJO empirical modeling and improved prediction by "Past Noise Forecasting"

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.; Robertson, A. W.; Ghil, M.

    2011-12-01

    The Madden-Julian oscillation (MJO) is the dominant mode of intraseasonal variability in the tropics and plays an important role in global climate. Here we present a modeling and prediction study of the MJO using Empirical Model Reduction (EMR). EMR is a methodology for constructing stochastic models based on the observed evolution of selected climate fields; these models represent unresolved processes as multivariate, spatially correlated stochastic forcing. In EMR, multiple polynomial regression is used to estimate the nonlinear, deterministic propagator of the dynamics, as well as the multi-level additive stochastic forcing ("noise"), directly from the observational dataset. The EMR approach has been successfully applied on the seasonal-to-interannual time scale for real-time ENSO prediction (Kondrashov et al. 2005), as well as to atmospheric midlatitude intraseasonal variability (Kondrashov et al. 2006, 2010). In this study, a nonlinear (quadratic), three-level EMR model with an annual cycle was developed to model and predict the leading pair of real-time multivariate Madden-Julian oscillation (RMM1, RMM2) daily indices (June 1974 - January 2009, http://cawcr.gov.au/staff/mwheeler/maproom/RMM/). The EMR model captures essential MJO statistical features, such as seasonal dependence, RMM1,2 autocorrelations and spectra. By using the "Past Noise Forecasting" (PNF) approach developed and successfully applied to improve long-term ENSO prediction in Chekroun et al. (2011), we are able to notably improve the cross-validated prediction skill of the RMM indices, especially at lead times of 15-to-30 days. The EMR/PNF method has two steps: (i) select noise samples, or "snippets", from the past noise that have forced the EMR model to yield an MJO phase resembling the currently observed state; and (ii) use these noise snippets to create an ensemble forecast with the EMR model. The MJO phase identification is based on Singular Spectrum Analysis reconstruction of the 30-60 day MJO cycle.
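
    The full EMR used here is quadratic with an annual cycle; the multi-level idea alone can be sketched with a linear, two-level toy model on a synthetic bivariate index: fit a propagator at the main level, then fit the residuals' own dynamics from the extended state. Everything below is an illustrative toy, not the study's model:

```python
import numpy as np

def fit_linear(X, Y):
    """Least-squares propagator A such that Y ~ X @ A.T."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B.T

rng = np.random.default_rng(11)
x = np.zeros((3000, 2))
rot = np.array([[0.99, -0.05], [0.05, 0.99]])      # damped oscillatory dynamics
for t in range(1, 3000):                           # synthetic (RMM1, RMM2)-like series
    x[t] = rot @ x[t - 1] + 0.1 * rng.standard_normal(2)

# level 0: deterministic propagator of the indices
A = fit_linear(x[:-1], x[1:])
r = x[1:] - x[:-1] @ A.T                           # level-0 residuals ("noise")

# level 1: model the residuals' memory from the extended state [x, r]
Z = np.hstack([x[:-2], r[:-1]])
L = fit_linear(Z, r[1:])
print(A.round(2), L.shape)                         # (2, 2) propagator, (2, 4) noise model
```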

  16. Charge transport model to predict intrinsic reliability for dielectric materials

    SciTech Connect

    Ogden, Sean P.; Borja, Juan; Plawsky, Joel L.; Gill, William N.; Lu, T.-M.; Yeap, Kong Boon

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  17. In silico modeling to predict drug-induced phospholipidosis

    SciTech Connect

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze the properties and structures of drugs with histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs, but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies; in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80-81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.
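
    The reported performance characteristics (concordance, sensitivity, specificity, negative and positive predictivity) all derive from the external-validation confusion matrix; a sketch of computing them, with a hypothetical test set:

```python
import numpy as np

def qsar_performance(y_true, y_pred):
    """2x2 classification statistics of the kind reported for QSAR models."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "concordance": (tp + tn) / len(y_true),   # overall agreement
        "sensitivity": tp / (tp + fn),            # DIPL-positives caught
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),                    # positive predictivity
        "NPV": tn / (tn + fn),                    # negative predictivity
    }

# hypothetical external test set of 100 drugs, ~85% of calls correct
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 100)
pred = np.where(rng.random(100) < 0.85, y, 1 - y)
print(qsar_performance(y, pred))
```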

  19. Cardiopulmonary Circuit Models for Predicting Injury to the Heart

    NASA Astrophysics Data System (ADS)

    Ward, Richard; Wing, Sarah; Bassingthwaighte, James; Neal, Maxwell

    2004-11-01

    Circuit models have been used extensively in physiology to describe cardiopulmonary function. Such models are being used in the DARPA Virtual Soldier (VS) Project* to predict the response to injury or physiological stress. The most complex model consists of systemic circulation, pulmonary circulation, and a four-chamber heart sub-model. This model also includes baroreceptor feedback, airway mechanics, gas exchange, and pleural pressure influence on the circulation. As part of the VS Project, Oak Ridge National Laboratory has been evaluating various cardiopulmonary circuit models for predicting the effects of injury to the heart. We describe, from a physicist's perspective, the concept of building circuit models, discuss both unstressed and stressed models, and show how the stressed models are used to predict effects of specific wounds. *This work was supported by a grant from the DARPA, executed by the U.S. Army Medical Research and Materiel Command/TATRC Cooperative Agreement, Contract # W81XWH-04-2-0012. The submitted manuscript has been authored by the U.S. Department of Energy, Office of Science of the Oak Ridge National Laboratory, managed for the U.S. DOE by UT-Battelle, LLC, under contract No. DE-AC05-00OR22725. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purpose.
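
    The simplest circuit element used in such cardiopulmonary models is the two-element windkessel, a resistor-capacitor analogue relating arterial pressure to inflow. A sketch with typical textbook parameter magnitudes (this is an illustration of the circuit-model concept, not the VS Project model):

```python
import numpy as np

def windkessel(q_in, dt=1e-3, R=1.0, C=1.5, p0=80.0):
    """Two-element windkessel: C dP/dt = Q_in(t) - P/R.
    R in mmHg.s/mL, C in mL/mmHg, pressure in mmHg, flow in mL/s."""
    p = np.empty(len(q_in))
    p[0] = p0
    for i in range(1, len(q_in)):
        dpdt = (q_in[i - 1] - p[i - 1] / R) / C
        p[i] = p[i - 1] + dt * dpdt          # forward-Euler integration
    return p

t = np.arange(0, 5, 1e-3)                    # five 1-s cardiac cycles
q = np.where((t % 1.0) < 0.3, 400.0, 0.0)    # crude systolic ejection pulse
p = windkessel(q)
print(p.min(), p.max())                      # "diastolic" / "systolic" pressures
```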