Sample records for model predictions include

  1. Multivariate Analysis of Seismic Field Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, M. Kathleen

    1999-06-01

    This report details the model-building procedure and the prediction of seismic field data. Principal Components Regression, a multivariate analysis technique, was used to model seismic data collected as two pieces of equipment were cycled on and off. Models that included only the two pieces of equipment of interest had trouble predicting data containing signals not included in the model. Evidence for poor predictions came from the prediction curves as well as spectral F-ratio plots. Once the extraneous signals were included in the model, predictions improved dramatically. While Principal Components Regression performed well for the present data sets, the analysis suggests further work will be needed to develop more robust modeling methods as the data become more complex.
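    As a rough sketch of the Principal Components Regression procedure described above (on synthetic data with an invented low-dimensional structure, not the report's seismic records), one could write:

```python
import numpy as np

# Principal Components Regression: project the predictors onto the leading
# principal components, then regress the response on those component scores.
def pcr_fit(X, y, n_components):
    Xc = X - X.mean(axis=0)                       # center predictors
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                       # loadings of leading PCs
    scores = Xc @ V                               # PC scores
    coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return X.mean(axis=0), y.mean(), V, coef

def pcr_predict(model, X):
    x_mean, y_mean, V, coef = model
    return y_mean + (X - x_mean) @ V @ coef

# synthetic data whose signal lives in a 2-D latent subspace
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
W = rng.normal(size=(2, 6))
X = latent @ W + 0.05 * rng.normal(size=(200, 6))
y = 3.0 * latent[:, 0] - 2.0 * latent[:, 1] + 0.1 * rng.normal(size=200)

model = pcr_fit(X, y, n_components=2)
rmse = np.sqrt(np.mean((y - pcr_predict(model, X)) ** 2))
```

    Because the simulated signal lies in the top two components, two components suffice here; on real field data the number of retained components is itself a modeling choice.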

  2. Comparison of Two Predictive Models for Short-Term Mortality in Patients after Severe Traumatic Brain Injury.

    PubMed

    Kesmarky, Klara; Delhumeau, Cecile; Zenobi, Marie; Walder, Bernhard

    2017-07-15

    The Glasgow Coma Scale (GCS) and the Abbreviated Injury Score of the head region (HAIS) are validated prognostic factors in traumatic brain injury (TBI). The aim of this study was to compare the prognostic performance of an alternative predictive model including motor GCS, pupillary reactivity, age, HAIS, and presence of multi-trauma for short-term mortality with a reference predictive model including motor GCS, pupillary reactivity, and age (IMPACT core model). A secondary analysis of a prospective epidemiological cohort study in Switzerland including patients after severe TBI (HAIS >3) with the outcome death at 14 days was performed. Performance of prediction, accuracy of discrimination (area under the receiver operating characteristic curve [AUROC]), calibration, and validity of the two predictive models were investigated. The cohort included 808 patients (median age, 56; interquartile range, 33-71), with a median GCS at hospital admission of 3 (3-14), abnormal pupil reaction in 29%, and a death rate of 29.7% at 14 days. The alternative predictive model had a higher accuracy of discrimination for predicting death at 14 days than the reference predictive model (AUROC 0.852, 95% confidence interval [CI] 0.824-0.880 vs. AUROC 0.826, 95% CI 0.795-0.857; p < 0.0001). The alternative predictive model had calibration equivalent to that of the reference predictive model (Hosmer-Lemeshow chi-square 8.52, p = 0.345 vs. chi-square 8.66, p = 0.372). The optimism-corrected AUROC for the alternative predictive model was 0.845. After severe TBI, higher predictive performance for short-term mortality was observed with the alternative predictive model than with the reference predictive model.
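    The AUROC comparison underlying this study can be illustrated on simulated data; the predictors, coefficients, and outcome below are invented, and no TBI data are used:

```python
import numpy as np

def auroc(y_true, score):
    # AUROC equals the Mann-Whitney probability that a randomly chosen
    # positive case outranks a randomly chosen negative case.
    pos = score[y_true == 1]
    neg = score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(20, 90, n)
motor_gcs = rng.integers(1, 7, n)
# hypothetical outcome model: older age and lower motor score raise risk
logit = 0.05 * (age - 55) - 0.6 * (motor_gcs - 3.5)
death = rng.random(n) < 1 / (1 + np.exp(-logit))

full = 0.05 * (age - 55) - 0.6 * (motor_gcs - 3.5)   # uses both predictors
reduced = -0.6 * (motor_gcs - 3.5)                    # drops age
auc_full = auroc(death.astype(int), full)
auc_reduced = auroc(death.astype(int), reduced)
```

    The richer risk score discriminates better, mirroring the alternative-vs-reference comparison in the abstract; in practice the difference would be tested formally (e.g. with a DeLong-type test).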

  3. University of North Carolina Caries Risk Assessment Study: comparisons of high risk prediction, any risk prediction, and any risk etiologic models.

    PubMed

    Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M

    1992-12-01

    The purpose of this analysis is to compare three different statistical models for predicting children likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none or any 3-yr increment) was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent for the Etiologic models than for the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.

  4. Natural analogs in the petroleum industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, J.R.

    1995-09-01

    This article describes the use of natural analogs in petroleum exploration and includes descriptions of numerous geologic models that have historically been used to predict the geometry and location of oil and gas accumulations. These geologic models have been passed down to and used by succeeding generations of petroleum geologists. Examples include the Allan fault-plane model, porosity prediction, basin modelling, prediction of basin compartmentalization, and diagenesis.

  5. Predictions of Cockpit Simulator Experimental Outcome Using System Models

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1984-01-01

    This study involved predicting the outcome of a cockpit simulator experiment where pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Outcome included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.

  6. Random glucose is useful for individual prediction of type 2 diabetes: results of the Study of Health in Pomerania (SHIP).

    PubMed

    Kowall, Bernd; Rathmann, Wolfgang; Giani, Guido; Schipf, Sabine; Baumeister, Sebastian; Wallaschofski, Henri; Nauck, Matthias; Völzke, Henry

    2013-04-01

    Random glucose is widely used in routine clinical practice. We investigated whether this non-standardized glycemic measure is useful for individual diabetes prediction. The Study of Health in Pomerania (SHIP), a population-based cohort study in north-east Germany, included 3107 diabetes-free persons aged 31-81 years at baseline in 1997-2001. 2475 persons participated at 5-year follow-up and gave self-reports of incident diabetes. For the total sample and for subjects aged ≥50 years, statistical properties of prediction models with and without random glucose were compared. A basic model (including age, sex, diabetes of parents, hypertension and waist circumference) and a comprehensive model (additionally including various lifestyle variables and blood parameters, but not HbA1c) performed statistically significantly better after adding random glucose (e.g., the area under the receiver-operating curve (AROC) increased from 0.824 to 0.856 after adding random glucose to the comprehensive model in the total sample). Likewise, adding random glucose to prediction models which included HbA1c led to significant improvements of predictive ability (e.g., for subjects ≥50 years, AROC increased from 0.824 to 0.849 after adding random glucose to the comprehensive model+HbA1c). Random glucose is useful for individual diabetes prediction, and improves prediction models including HbA1c.

  7. A Field Synopsis of Sex in Clinical Prediction Models for Cardiovascular Disease

    PubMed Central

    Paulus, Jessica K.; Wessler, Benjamin S.; Lundquist, Christine; Lai, Lana L.Y.; Raman, Gowri; Lutz, Jennifer S.; Kent, David M.

    2017-01-01

    Background: Several widely-used risk scores for cardiovascular disease (CVD) incorporate sex effects, yet there has been no systematic summary of the role of sex in clinical prediction models (CPMs). To better understand the potential of these models to support sex-specific care, we conducted a field synopsis of sex effects in CPMs for CVD. Methods and Results: We identified CPMs in the Tufts Predictive Analytics and Comparative Effectiveness (PACE) CPM Registry, a comprehensive database of CVD CPMs published from 1/1990–5/2012. We report the proportion of models including sex effects on CVD incidence or prognosis, summarize the directionality of the predictive effects of sex, and explore factors influencing the inclusion of sex. Of 592 CVD-related CPMs, 193 (33%) included sex as a predictor or presented sex-stratified models. Sex effects were included in 78% (53/68) of models predicting incidence of CVD in a general population, versus only 35% (59/171), 21% (12/58) and 17% (12/72) of models predicting outcomes in patients with coronary artery disease (CAD), stroke, and heart failure, respectively. Among sex-including CPMs, women with heart failure were at lower mortality risk in 8/8 models; women undergoing revascularization for CAD were at higher mortality risk in 10/12 models. Factors associated with the inclusion of sex effects included the number of outcome events and using cohorts at-risk for CVD (rather than with established CVD). Conclusions: While CPMs hold promise for supporting sex-specific decision making in CVD clinical care, sex effects are included in only one third of published CPMs. PMID:26908865

  8. Prediction of the birch pollen season characteristics in Cracow, Poland using an 18-year data series.

    PubMed

Myszkowska, Dorota

    2013-03-01

    The aim of the study was to construct a model forecasting the birch pollen season characteristics in Cracow on the basis of an 18-year data series. The study was performed using the volumetric method (Lanzoni/Burkard trap). The 98/95 % method was used to delimit the pollen season. Spearman's correlation test was applied to find relationships between meteorological parameters and pollen season characteristics. To construct the predictive model, backward stepwise multiple regression analysis was used, accounting for the multi-collinearity of variables. The predictive models best fitted the pollen season start and end, especially models containing two independent variables. The peak concentration value was predicted with a higher prediction error. The accuracy of the models predicting the pollen season characteristics was also higher for 2009 than for 2010. Both the multi-variable model and the one-variable model for the beginning of the pollen season included air temperature during the last 10 days of February, while the multi-variable model also included humidity at the beginning of April. The models forecasting the end of the pollen season were based on temperature in March-April, while the peak day was predicted using temperature during the last 10 days of March.

  9. Does information available at admission for delivery improve prediction of vaginal birth after cesarean?

    PubMed Central

    Grobman, William A.; Lai, Yinglei; Landon, Mark B.; Spong, Catherine Y.; Leveno, Kenneth J.; Rouse, Dwight J.; Varner, Michael W.; Moawad, Atef H.; Simhan, Hyagriv N.; Harper, Margaret; Wapner, Ronald J.; Sorokin, Yoram; Miodovnik, Menachem; Carpenter, Marshall; O'Sullivan, Mary J.; Sibai, Baha M.; Langer, Oded; Thorp, John M.; Ramin, Susan M.; Mercer, Brian M.

    2010-01-01

    Objective: To construct a predictive model for vaginal birth after cesarean (VBAC) that combines factors that can be ascertained only as the pregnancy progresses with those known at initiation of prenatal care. Study design: Using multivariable modeling, we constructed a predictive model for VBAC that included patient factors known at the initial prenatal visit as well as those that only became evident as the pregnancy progressed to the admission for delivery. Results: 9616 women were analyzed. The regression equation for VBAC success included multiple factors that could not be known at the first prenatal visit. The area under the curve for this model was significantly greater (P < .001) than that of a model that included only factors available at the first prenatal visit. Conclusion: A prediction model for VBAC success that incorporates factors that can be ascertained only as the pregnancy progresses adds to the predictive accuracy of a model that uses only factors available at a first prenatal visit. PMID:19813165

  10. Developing Risk Prediction Models for Postoperative Pancreatic Fistula: a Systematic Review of Methodology and Reporting Quality.

    PubMed

    Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao

    2016-04-01

    Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements in surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of the PubMed and EMBASE databases to identify articles published before January 1, 2015, that described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on the development of each prediction model, including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42 %), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71 %) reported using univariate screening, which is not recommended in building a multivariate model, to reduce the number of risk predictors. Six risk prediction models (86 %) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could undermine model development, including univariate pre-screening of variables, categorization of continuous risk predictors, and inadequate model validation. The use of inappropriate methods affects the reliability and accuracy of the probability estimates for predicting postoperative pancreatic fistula.

  11. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  12. Predicting Upwelling Radiance on the West Florida Shelf

    DTIC Science & Technology

    2006-03-31

    ... National Science Foundation. The chemical and biological model includes the ability to simulate multiple groups of phytoplankton, multiple limiting nutrients, spectral light harvesting by phytoplankton, multiple particulate and dissolved degradational pools of organic material, and non-stoichiometric carbon, nitrogen, phosphorus, silica, and iron dynamics. It also includes a complete spectral light model for the prediction of Inherent Optical Properties (IOPs). The coupling of the predicted IOP model (Ecosim 2.0) with a robust radiative transfer model (Ecolight) ...

  13. The evaluation of different forest structural indices to predict the stand aboveground biomass of even-aged Scotch pine (Pinus sylvestris L.) forests in Kunduz, Northern Turkey.

    PubMed

    Ercanli, İlker; Kahriman, Aydın

    2015-03-01

    We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for the stand AGB values, including stand structural diversity indices and some stand attributes. The AGB prediction model including only stand attributes accounted for 85 % of the total variance in AGB (R²), with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of the stand structural diversity indices in the model structure, considerable improvement was observed in statistical accuracy, with the model accounting for 97.5 % of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing stand structural diversity can be employed as significant independent variables to predict the AGB production of Scotch pine stands. Further, including the stand diversity indices in the AGB prediction model alongside the stand attributes provided important predictive contributions in estimating the total variance in AGB.
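    A minimal sketch of this approach, assuming hypothetical plot data: compute a Shannon diversity index per plot and add it as a covariate to a least-squares AGB model alongside a stand attribute:

```python
import numpy as np

# Shannon index H' = -sum p_i ln(p_i) over the proportions p_i of stems
# falling in each class (species, diameter class, etc.).
def shannon_index(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Simulated plots: AGB driven by a stand attribute (basal area) plus
# diversity; all values below are invented, not the Kunduz data.
rng = np.random.default_rng(2)
n = 50
basal_area = rng.uniform(10, 50, n)            # m^2/ha (simulated)
H = rng.uniform(0.5, 1.5, n)                   # plot Shannon index (simulated)
agb = 5.0 * basal_area + 40.0 * H + rng.normal(0.0, 5.0, n)

X = np.column_stack([np.ones(n), basal_area, H])
beta, *_ = np.linalg.lstsq(X, agb, rcond=None)
r2 = 1.0 - ((agb - X @ beta) ** 2).sum() / ((agb - agb.mean()) ** 2).sum()
```

    For example, `shannon_index([10, 20, 30, 40])` ≈ 1.28, and equal class counts give the maximum ln(number of classes).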

  14. Methods for using groundwater model predictions to guide hydrogeologic data collection, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.

    2003-01-01

    Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.
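    The idea behind prediction scaled sensitivities can be sketched with a finite-difference approximation on a toy two-parameter model (the model and uncertainties below are hypothetical, not the Death Valley parameterization):

```python
import numpy as np

# Prediction scaled sensitivity sketch: the percent change in a prediction z
# per one standard deviation of each parameter. Parameters with large |PSS|
# are the ones most worth constraining with new field data.
def pss(predict, params, sd, eps=1e-6):
    z0 = predict(params)
    out = []
    for j in range(len(params)):
        p = params.copy()
        p[j] += eps
        dz = (predict(p) - z0) / eps           # finite-difference dz/db_j
        out.append(dz * sd[j] * 100.0 / z0)    # scale by sd, express as % of z
    return np.array(out)

# toy travel-time-like prediction driven by two conductivity-like parameters
predict = lambda b: 10.0 / b[0] + 2.0 / b[1]
params = np.array([1.0, 4.0])
sd = np.array([0.2, 0.2])
scores = pss(predict, params, sd)
```

    Here the first parameter dominates the prediction, so data collection would focus there; the VOII method additionally accounts for parameter correlations, which this sketch ignores.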

  15. Seasonal Drought Prediction: Advances, Challenges, and Future Prospects

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Singh, Vijay P.; Xia, Youlong

    2018-03-01

    Drought prediction is of critical importance to early warning and drought management. This review provides a synthesis of drought prediction based on statistical, dynamical, and hybrid methods. Statistical drought prediction is achieved by modeling the relationship between drought indices of interest and a suite of potential predictors, including large-scale climate indices, local climate variables, and land initial conditions. Dynamical meteorological drought prediction relies on seasonal climate forecast from general circulation models (GCMs), which can be employed to drive hydrological models for agricultural and hydrological drought prediction with the predictability determined by both climate forcings and initial conditions. Challenges still exist in drought prediction at long lead time and under a changing environment resulting from natural and anthropogenic factors. Future research prospects to improve drought prediction include, but are not limited to, high-quality data assimilation, improved model development with key processes related to drought occurrence, optimal ensemble forecast to select or weight ensembles, and hybrid drought prediction to merge statistical and dynamical forecasts.

  16. Evaluation of an ensemble of genetic models for prediction of a quantitative trait.

    PubMed

    Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola

    2014-01-01

    Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
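    A hedged sketch of the genetic-score approach the abstract describes, on simulated genotypes (allele counts, effect sizes, and the trait are all invented):

```python
import numpy as np

# Genetic score: count risk alleles across markers, then use that single
# score as a covariate in a linear model of the quantitative trait, instead
# of fitting one regression coefficient per marker.
rng = np.random.default_rng(3)
n, m = 500, 20
geno = rng.integers(0, 3, size=(n, m))        # 0/1/2 risk-allele counts
effects = rng.normal(0.1, 0.02, m)            # small positive per-marker effects
trait = geno @ effects + rng.normal(0.0, 0.5, n)

score = geno.sum(axis=1)                      # unweighted genetic score
X = np.column_stack([np.ones(n), score])
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
corr = np.corrcoef(X @ beta, trait)[0, 1]     # predictive correlation
```

    Choosing how many markers enter the score is the selection problem the ensemble method in the abstract is designed to make less brittle.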

  17. A mathematical model of a large open fire

    NASA Technical Reports Server (NTRS)

    Harsha, P. T.; Bragg, W. N.; Edelman, R. B.

    1981-01-01

    A mathematical model capable of predicting the detailed characteristics of large, liquid fuel, axisymmetric, pool fires is described. The predicted characteristics include spatial distributions of flame gas velocity, soot concentration and chemical specie concentrations including carbon monoxide, carbon dioxide, water, unreacted oxygen, unreacted fuel and nitrogen. Comparisons of the predictions with experimental values are also given.

  18. Model for Predicting Passage of Invasive Fish Species Through Culverts

    NASA Astrophysics Data System (ADS)

    Neary, V.

    2010-12-01

    Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
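    A minimal sketch of the exhaustion-threshold (ET) idea for the uniform-flow case, under an assumed log-linear fatigue curve (the parameters below are hypothetical, not measured species data):

```python
import numpy as np

# A fish swimming at speed u_s against a uniform current u_f gains ground at
# (u_s - u_f) until it fatigues at time T(u_s). It passes a culvert of length
# L only if its best achievable ascent distance reaches L.
def max_ascent(u_f, a=20.0, b=1.5):
    u_s = np.linspace(u_f + 0.01, 5.0, 500)        # candidate swim speeds, m/s
    endurance = a * np.exp(-b * u_s)               # fatigue curve T(u_s), s
    return float(((u_s - u_f) * endurance).max())  # best ascent distance, m

def can_pass(culvert_length, u_f):
    return max_ascent(u_f) >= culvert_length

# a faster current shortens the maximum ascent distance
d_slow = max_ascent(0.5)
d_fast = max_ascent(1.5)
```

    The mean-current velocity at which `max_ascent` first drops below the culvert length is the exhaustion threshold; the nonuniform-flow model in the abstract replaces the constant `u_f` with a water-surface profile.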

  19. Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.

    DOT National Transportation Integrated Search

    2015-10-01

    This report contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive model found in the Highway ...

  20. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
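    The likelihood-based scoring used to compare such forecasts can be sketched as a joint Poisson log-likelihood over spatial bins (the bins and rates below are invented for illustration, not CSEP test data):

```python
import numpy as np
from math import lgamma

# CSEP-style consistency score sketch: each model forecasts an expected
# aftershock count per spatial bin; the score is the joint Poisson
# log-likelihood of the observed counts (higher = more consistent).
def poisson_loglike(forecast, observed):
    f = np.asarray(forecast, dtype=float)
    n = np.asarray(observed, dtype=float)
    # log L = sum_i ( -f_i + n_i log f_i - log n_i! )
    return float(np.sum(-f + n * np.log(f)) -
                 sum(lgamma(k + 1.0) for k in n))

observed = np.array([0, 3, 1, 7, 2])             # counts in five bins
model_a = np.array([0.5, 2.5, 1.0, 6.0, 2.0])    # tracks the observation
model_b = np.array([2.0, 2.0, 2.0, 2.0, 2.0])    # spatially uniform rate
score_a = poisson_loglike(model_a, observed)
score_b = poisson_loglike(model_b, observed)
```

    The spatially informed forecast scores higher; the actual CSEP tests compare such scores against distributions simulated from the forecast itself.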

  21. Modeling to predict pilot performance during CDTI-based in-trail following experiments

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1984-01-01

    A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. Lessons learned in this modeling and prediction study are summarized.

  22. Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.

  23. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, along with an accuracy of 75% and a positive predictive value of 78%, for northern Pakistan.

  4. Predictability of the Indian Ocean Dipole in the coupled models

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, commonly defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because false alarms become more frequent as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with improvements in model development and initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.

  5. An Operational Model for the Prediction of Jet Blast

    DOT National Transportation Integrated Search

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model, and an aircraft motion model. The final analysis was compared with d...

  6. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Treesearch

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  7. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
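    As a hedged illustration of one of the four measures above, composite scaled sensitivities can be computed directly from a model's Jacobian; a common form is css_j = sqrt((1/ND) * sum_i ((dy_i/db_j) * b_j * sqrt(w_i))^2). The Jacobian entries, parameter values, and observation weights below are invented toy numbers, not DVRFS model values.

```python
# Sketch of composite scaled sensitivities (CSS) from toy sensitivities.
import math

def composite_scaled_sensitivity(jacobian, params, weights):
    """css_j = sqrt(mean over observations of (dss_ij)^2),
    with dss_ij = (dy_i/db_j) * b_j * sqrt(w_i)."""
    nd = len(jacobian)  # number of observations
    css = []
    for j, b in enumerate(params):
        total = sum((row[j] * b * math.sqrt(w)) ** 2
                    for row, w in zip(jacobian, weights))
        css.append(math.sqrt(total / nd))
    return css

# Two observations, two parameters (e.g. a conductivity and a recharge rate).
jac     = [[0.5, 2.0],
           [1.0, 0.1]]
params  = [10.0, 1.0]
weights = [1.0, 4.0]
css = composite_scaled_sensitivity(jac, params, weights)
```

    A parameter with a large CSS is well informed by the observations; small CSS values flag parameters that calibration cannot constrain.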

  8. Multi-model blending

    DOEpatents

    Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran

    2016-10-18

    A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set including designating the one or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
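    A minimal sketch of the blending idea, assuming an ordinary least-squares fit stands in for the unspecified machine-learning algorithm: historical predictions from two models serve as predictor variables and measured conditions as the response. All numbers are illustrative toy data.

```python
# Toy multi-model blend: fit weights w_a, w_b minimizing
# ||w_a * preds_a + w_b * preds_b - actual||^2 via 2x2 normal equations.

def fit_blend_weights(preds_a, preds_b, actual):
    saa = sum(a * a for a in preds_a)
    sbb = sum(b * b for b in preds_b)
    sab = sum(a * b for a, b in zip(preds_a, preds_b))
    say = sum(a * y for a, y in zip(preds_a, actual))
    sby = sum(b * y for b, y in zip(preds_b, actual))
    det = saa * sbb - sab * sab
    return ((sbb * say - sab * sby) / det,
            (saa * sby - sab * say) / det)

# Historical (time T) predictions from two models and the measured truth.
model_a = [10.0, 12.0, 15.0]
model_b = [11.0, 13.0, 13.0]
actual  = [10.5, 12.5, 14.0]

w_a, w_b = fit_blend_weights(model_a, model_b, actual)
blended = [w_a * a + w_b * b for a, b in zip(model_a, model_b)]
```

    Here the truth was constructed as an equal mix of the two models, so the fitted weights recover 0.5 each; with real data the weights reflect each model's historical accuracy.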

  9. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  10. Analysis of Predicted Aircraft Wake Vortex Transport and Comparison with Experiment Volume I -- Wake Vortex Predictive System Study

    DOT National Transportation Integrated Search

    1974-04-01

    A unifying wake vortex transport model is developed and applied to a wake vortex predictive system concept. The fundamentals of vortex motion underlying the predictive model are discussed including vortex decay, bursting and instability phenomena. A ...

  11. Airport Noise Prediction Model -- MOD 7

    DOT National Transportation Integrated Search

    1978-07-01

    The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...

  12. Combustion of Nitramine Propellants

    DTIC Science & Technology

    1983-03-01

    through development of a comprehensive analytical model. The ultimate goals are to enable prediction of deflagration rate over a wide pressure range... superior in burn rate prediction, both simple models fail in correlating existing temperature-sensitivity data. (2) In the second part, a... auxiliary condition to enable independent burn rate prediction; improved melt phase model including decomposition-gas bubbles; model for far-field

  13. Prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy: a systematic review and external validation study.

    PubMed

    Hilkens, N A; Algra, A; Greving, J P

    2016-01-01

    ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. Background: Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of the included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage, and one studied gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64, and poor calibration. A limited number of prediction models are available that predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied, but was generally low. Predictive performance in patients with cerebral ischemia was poor.
In order to reliably predict the risk of bleeding in patients with cerebral ischemia, development of a prediction model according to current methodological standards is needed. © 2015 International Society on Thrombosis and Haemostasis.
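    The c-statistic behind the poor external validation reported above (0.53-0.64) is the probability that a randomly chosen patient with the outcome receives a higher predicted risk than a randomly chosen patient without it. A minimal sketch with hypothetical risk scores:

```python
# Toy c-statistic (concordance) calculation; ties count one half.

def c_statistic(scores, outcomes):
    """P(score_event > score_nonevent) over all event/non-event pairs."""
    events    = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for ne in nonevents:
            pairs += 1
            if e > ne:
                concordant += 1
            elif e == ne:
                concordant += 0.5
    return concordant / pairs

scores   = [0.9, 0.8, 0.3, 0.7, 0.2, 0.4]   # hypothetical predicted risks
outcomes = [1,   1,   0,   0,   1,   0]     # hypothetical observed bleeding
auc = c_statistic(scores, outcomes)
```

    A value of 0.5 corresponds to chance-level discrimination, which is why c-statistics of 0.53-0.64 indicate models of little clinical use.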

  14. The Theory of Planned Behavior as a Model of Heavy Episodic Drinking Among College Students

    PubMed Central

    Collins, Susan E.; Carey, Kate B.

    2008-01-01

    This study provided a simultaneous, confirmatory test of the theory of planned behavior (TPB) in predicting heavy episodic drinking (HED) among college students. It was hypothesized that past HED, drinking attitudes, subjective norms and drinking refusal self-efficacy would predict intention, which would in turn predict future HED. Participants consisted of 131 college drinkers (63% female) who reported having engaged in HED in the previous two weeks. Participants were recruited and completed questionnaires within the context of a larger intervention study (see Collins & Carey, 2005). Latent factor structural equation modeling was used to test the ability of the TPB to predict HED. Chi-square tests and fit indices indicated good fit for the final structural models. Self-efficacy and attitudes but not subjective norms significantly predicted baseline intention, and intention and past HED predicted future HED. Contrary to hypotheses, however, a structural model excluding past HED provided a better fit than a model including it. Although further studies must be conducted before a definitive conclusion is reached, a TPB model excluding past behavior, which is arguably more parsimonious and theory driven, may provide better prediction of HED among college drinkers than a model including past behavior. PMID:18072832

  15. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    PubMed Central

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519

  16. Predicting outcomes in patients with perforated gastroduodenal ulcers: artificial neural network modelling indicates a highly complex disease.

    PubMed

    Søreide, K; Thorsen, K; Søreide, J A

    2015-02-01

    Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, which has many non-linear associations with outcomes, we explored artificial neural networks (ANNs) to predict the complex interactions between the risk factors of PPU and death among patients with this condition. ANN modelling using a standard feed-forward, back-propagation neural network with three layers (i.e., an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (30%) were used for the test set. The accuracy, as evaluated by the area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CI 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognoses are likely difficult. The incorporation of computerised learning systems might enhance clinical judgments to improve decision making and outcome prediction.

  17. Lung Cancer Risk Prediction Model Incorporating Lung Function: Development and Validation in the UK Biobank Prospective Cohort Study.

    PubMed

    Muller, David C; Johansson, Mattias; Brennan, Paul

    2017-03-10

    Purpose: Several lung cancer risk prediction models have been developed, but none to date have assessed the predictive ability of lung function in a population-based cohort. We sought to develop and internally validate a model incorporating lung function using data from the UK Biobank prospective cohort study. Methods: This analysis included 502,321 participants without a previous diagnosis of lung cancer, predominantly between 40 and 70 years of age. We used flexible parametric survival models to estimate the 2-year probability of lung cancer, accounting for the competing risk of death. Models included predictors previously shown to be associated with lung cancer risk, including sex, variables related to smoking history and nicotine addiction, medical history, family history of lung cancer, and lung function (forced expiratory volume in 1 second [FEV1]). Results: During accumulated follow-up of 1,469,518 person-years, there were 738 lung cancer diagnoses. A model incorporating all predictors had excellent discrimination (concordance (c)-statistic [95% CI] = 0.85 [0.82 to 0.87]). Internal validation suggested that the model will discriminate well when applied to new data (optimism-corrected c-statistic = 0.84). The full model, including FEV1, also had modestly superior discriminatory power compared with one designed solely on the basis of questionnaire variables (c-statistic = 0.84 [0.82 to 0.86]; optimism-corrected c-statistic = 0.83; p for FEV1 = 3.4 × 10^-13). The full model had better discrimination than standard lung cancer screening eligibility criteria (c-statistic = 0.66 [0.64 to 0.69]). Conclusion: A risk prediction model that includes lung function has strong predictive ability, which could improve eligibility criteria for lung cancer screening programs.

  18. Predictive value of the DASH tool for predicting return to work of injured workers with musculoskeletal disorders of the upper extremity.

    PubMed

    Armijo-Olivo, Susan; Woodhouse, Linda J; Steenstra, Ivan A; Gross, Douglas P

    2016-12-01

    To determine whether the Disabilities of the Arm, Shoulder, and Hand (DASH) tool added to the predictive ability of established prognostic factors, including patient demographic and clinical outcomes, to predict return to work (RTW) in injured workers with musculoskeletal (MSK) disorders of the upper extremity. A retrospective cohort study using a population-based database from the Workers' Compensation Board of Alberta (WCB-Alberta) that focused on claimants with upper extremity injuries was used. Besides the DASH, potential predictors included demographic, occupational, clinical and health usage variables. Outcome was receipt of compensation benefits after 3 months. To identify RTW predictors, a purposeful logistic modelling strategy was used. A series of receiver operating curve analyses were performed to determine which model provided the best discriminative ability. The sample included 3036 claimants with upper extremity injuries. The final model for predicting RTW included the total DASH score in addition to other established predictors. The area under the curve for this model was 0.77, which is interpreted as fair discrimination. This model was statistically significantly different than the model of established predictors alone (p<0.001). When comparing the DASH total score versus DASH item 23, a non-significant difference was obtained between the models (p=0.34). The DASH tool together with other established predictors significantly helped predict RTW after 3 months in participants with upper extremity MSK disorders. An appealing result for clinicians and busy researchers is that DASH item 23 has equal predictive ability to the total DASH score. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  19. PredictABEL: an R package for the assessment of risk prediction models.

    PubMed

    Kundu, Suman; Aulchenko, Yurii S; van Duijn, Cornelia M; Janssens, A Cecile J W

    2011-04-01

    The rapid identification of genetic markers for multifactorial diseases from genome-wide association studies is fuelling interest in investigating the predictive ability and health care utility of genetic risk models. Various measures are available for the assessment of risk prediction models, each addressing a different aspect of performance and utility. We developed PredictABEL, a package in R that covers descriptive tables, measures and figures that are used in the analysis of risk prediction studies such as measures of model fit, predictive ability and clinical utility, and risk distributions, calibration plot and the receiver operating characteristic plot. Tables and figures are saved as separate files in a user-specified format, which include publication-quality EPS and TIFF formats. All figures are available in a ready-made layout, but they can be customized to the preferences of the user. The package has been developed for the analysis of genetic risk prediction studies, but can also be used for studies that only include non-genetic risk factors. PredictABEL is freely available at the websites of GenABEL ( http://www.genabel.org ) and CRAN ( http://cran.r-project.org/).

  20. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    PubMed

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have changed the trend toward new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impacts on human life. The proposed predictive model improves the predictive performance for TBI. The TBI dataset was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.

  1. Assessment of Turbulent Shock-Boundary Layer Interaction Computations Using the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Oliver, A. B.; Lillard, R. P.; Schwing, A. M.; Blaisdell, G. A.; Lyrintzis, A. S.

    2007-01-01

    The performance of two popular turbulence models, the Spalart-Allmaras model and Menter's SST model, and one relatively new model, Olsen & Coakley's Lag model, is evaluated using the OVERFLOW code. Turbulent shock-boundary layer interaction predictions are evaluated with three different experimental datasets: a series of 2D compression ramps at Mach 2.87, a series of 2D compression ramps at Mach 2.94, and an axisymmetric cone-flare at Mach 11. The experimental datasets include flows with no separation, moderate separation, and significant separation, and use several different experimental measurement techniques (including laser Doppler velocimetry (LDV), pitot-probe measurement, inclined hot-wire probe measurement, Preston tube skin friction measurement, and surface pressure measurement). Additionally, the OVERFLOW solutions are compared to the solutions of a second CFD code, DPLR. The predictions for weak shock-boundary layer interactions are in reasonable agreement with the experimental data. For strong shock-boundary layer interactions, all of the turbulence models overpredict the separation size and fail to predict the correct skin friction recovery distribution. In most cases, surface pressure predictions show too much upstream influence; however, including the tunnel side-wall boundary layers in the computation improves the separation predictions.

  2. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    PubMed

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Predicting Barrett's Esophagus in Families: An Esophagus Translational Research Network (BETRNet) Model Fitting Clinical Data to a Familial Paradigm.

    PubMed

    Sun, Xiangqing; Elston, Robert C; Barnholtz-Sloan, Jill S; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Tian, Ye D; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford D; Chandar, Apoorva; Warfe, James M; Brock, Wendy; Chak, Amitabh

    2016-05-01

    Barrett's esophagus is often asymptomatic and only a small portion of Barrett's esophagus patients are currently diagnosed and under surveillance. Therefore, it is important to develop risk prediction models to identify high-risk individuals with Barrett's esophagus. Familial aggregation of Barrett's esophagus and esophageal adenocarcinoma, and the increased risk of esophageal adenocarcinoma for individuals with a family history, raise the necessity of including genetic factors in the prediction model. Methods to determine risk prediction models using both risk covariates and ascertained family data are not well developed. We developed a Barrett's Esophagus Translational Research Network (BETRNet) risk prediction model from 787 singly ascertained Barrett's esophagus pedigrees and 92 multiplex Barrett's esophagus pedigrees, fitting a multivariate logistic model that incorporates family history and clinical risk factors. The eight risk factors, age, sex, education level, parental status, smoking, heartburn frequency, regurgitation frequency, and use of acid suppressant, were included in the model. The prediction accuracy was evaluated on the training dataset and an independent validation dataset of 643 multiplex Barrett's esophagus pedigrees. Our results indicate family information helps to predict Barrett's esophagus risk, and predicting in families improves both prediction calibration and discrimination accuracy. Our model can predict Barrett's esophagus risk for anyone with family members known to have, or not have, had Barrett's esophagus. It can predict risk for unrelated individuals without knowing any relatives' information. Our prediction model will shed light on effectively identifying high-risk individuals for Barrett's esophagus screening and surveillance, consequently allowing intervention at an early stage, and reducing mortality from esophageal adenocarcinoma. Cancer Epidemiol Biomarkers Prev; 25(5); 727-35. ©2016 AACR. 
©2016 American Association for Cancer Research.

  4. Multimodel predictive system for carbon dioxide solubility in saline formation waters.

    PubMed

    Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K

    2013-02-05

    The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO2 solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304-433 K, pressure range 74-500 bar, and salt concentration range 0-7 m (NaCl equivalent), using 173 published CO2 solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO2 solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO2 solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
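    The per-subrange model-selection step can be sketched with the least-squares form of the AIC named in the abstract, AIC = n·ln(RSS/n) + 2k. The residual sums of squares, parameter counts, and model names below are invented for illustration, not fits from the study.

```python
# Toy AIC-based selection of the best solubility model for one T-P-X subrange.
import math

def aic(rss, n_obs, k_params):
    """Least-squares AIC: n * ln(RSS/n) + 2k."""
    return n_obs * math.log(rss / n_obs) + 2 * k_params

def best_model(candidates, n_obs):
    """candidates: {name: (rss, k_params)} -> name with the lowest AIC."""
    return min(candidates,
               key=lambda m: aic(candidates[m][0], n_obs, candidates[m][1]))

# Hypothetical fits of two models on one subrange (n = 40 data points).
fits = {"complex_model": (1.2, 5),   # lower RSS, more parameters
        "simple_model":  (2.0, 2)}   # higher RSS, fewer parameters
choice = best_model(fits, n_obs=40)
```

    Repeating this over many T-P-X subranges yields the labels a classification tree can learn, which is the core of the multimodel selection scheme.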

  5. BehavePlus fire modeling system: Past, present, and future

    Treesearch

    Patricia L. Andrews

    2007-01-01

    Use of mathematical fire models to predict fire behavior and fire effects plays an important supporting role in wildland fire management. When used in conjunction with personal fire experience and a basic understanding of the fire models, predictions can be successfully applied to a range of fire management activities including wildfire behavior prediction, prescribed...

  6. Statistical Models for Predicting Automobile Driving Postures for Men and Women Including Effects of Age.

    PubMed

    Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J

    2016-03-01

    Previously published statistical models of driving posture have been effective for vehicle design but have not taken into account the effects of age. The present study developed new statistical models for predicting driving posture. Driving postures of 90 U.S. drivers with a wide range of age and body size were measured in laboratory mockup in nine package conditions. Posture-prediction models for female and male drivers were separately developed by employing a stepwise regression technique using age, body dimensions, vehicle package conditions, and two-way interactions, among other variables. Driving posture was significantly associated with age, and the effects of other variables depended on age. A set of posture-prediction models is presented for women and men. The results are compared with a previously developed model. The present study is the first study of driver posture to include a large cohort of older drivers and the first to report a significant effect of age. The posture-prediction models can be used to position computational human models or crash-test dummies for vehicle design and assessment. © 2015, Human Factors and Ergonomics Society.

  7. An etiologic prediction model incorporating biomarkers to predict the bladder cancer risk associated with occupational exposure to aromatic amines: a pilot study.

    PubMed

    Mastrangelo, Giuseppe; Carta, Angela; Arici, Cecilia; Pavanello, Sofia; Porru, Stefano

    2017-01-01

    No etiological prediction model incorporating biomarkers is available to predict the bladder cancer risk associated with occupational exposure to aromatic amines. Cases were 199 bladder cancer patients. Clinical, laboratory and genetic data were predictors in logistic regression models (full and short) in which the dependent variable was 1 for the 15 patients with aromatic amine-related bladder cancer and 0 otherwise. The receiver operating characteristic (ROC) approach was adopted; the area under the curve (AUC) was used to evaluate the discriminatory ability of the models. The AUC was 0.93 for the full model (including age, smoking and coffee habits, DNA adducts, and 12 genotypes) and 0.86 for the short model (including smoking, DNA adducts, and 3 genotypes). Using the "best cut-off" of the predicted probability of a positive outcome, the percentage of cases correctly classified was 92% (full model) against 75% (short model). Cancers classified as "positive outcome" are those to be referred for evaluation by an occupational physician for etiological diagnosis; these patients numbered 28 (full model) or 60 (short model). Using 3 genotypes instead of 12 can double the number of patients with suspected aromatic amine-related cancer, thus increasing the costs of etiologic appraisal. Integrating clinical, laboratory and genetic factors, we developed the first etiologic prediction model for aromatic amine-related bladder cancer. Discriminatory ability was excellent, particularly for the full model, allowing individualized predictions. Validation of our model in external populations is essential for practical use in the clinical setting.
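
    The ROC machinery used here is standard; a small sketch (with made-up predicted probabilities, not the study's data) of computing AUC and of picking a "best cut-off" via Youden's J, one common choice:

```python
def auc(pos, neg):
    # AUC = probability that a random positive case outranks a random negative
    wins = ties = 0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def best_cutoff(pos, neg, grid):
    # "best cut-off" taken here as the maximiser of Youden's J = sens + spec - 1
    def j(c):
        sens = sum(p >= c for p in pos) / len(pos)
        spec = sum(q < c for q in neg) / len(neg)
        return sens + spec - 1
    return max(grid, key=j)

# hypothetical predicted probabilities of aromatic amine-related cancer
pos = [0.9, 0.8, 0.7, 0.35]          # exposure-related cases
neg = [0.6, 0.4, 0.3, 0.2, 0.1]      # other cases
area = auc(pos, neg)
cut = best_cutoff(pos, neg, [i / 10 for i in range(1, 10)])
```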

  8. In silico modeling to predict drug-induced phospholipidosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs, but that dataset included unconfirmed data. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies; in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.
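
    The reported performance characteristics all derive from the 2x2 confusion matrix of an external validation set; a compact sketch (the counts below are invented to land near the reported ranges, not taken from the study):

```python
def performance(tp, fn, tn, fp):
    # standard 2x2 validation measures for a binary QSAR classifier
    total = tp + fn + tn + fp
    return {
        "concordance": (tp + tn) / total,     # overall agreement
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                # positive predictivity
        "npv": tn / (tn + fn),                # negative predictivity
    }

# hypothetical external-validation counts for a DIPL model
stats = performance(tp=40, fn=10, tn=41, fp=9)
```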

  9. Systems biology as a conceptual framework for research in family medicine; use in predicting response to influenza vaccination.

    PubMed

    Majnarić-Trtica, Ljiljana; Vitale, Branko

    2011-10-01

    To introduce systems biology as a conceptual framework for research in family medicine, based on empirical data from a case study on the prediction of influenza vaccination outcomes. This concept is primarily oriented towards planning preventive interventions and includes systematic data recording, a multi-step research protocol and predictive modelling. Factors known to affect responses to influenza vaccination include older age, past exposure to influenza viruses, and chronic diseases; however, constructing useful prediction models remains a challenge, because of the need to identify health parameters that are appropriate for general use in modelling patients' responses. The sample consisted of 93 patients aged 50-89 years (median 69), with multiple medical conditions, who were vaccinated against influenza. Literature searches identified potentially predictive health-related parameters, including age, gender, diagnoses of the main chronic ageing diseases, anthropometric measures, and haematological and biochemical tests. By applying data mining algorithms, patterns were identified in the data set. Candidate health parameters, selected in this way, were then combined with information on past influenza virus exposure to build the prediction model using logistic regression. A highly significant prediction model was obtained, indicating that by using a systems biology approach it is possible to answer unresolved complex medical uncertainties. Adopting this systems biology approach can be expected to be useful in identifying the most appropriate target groups for other preventive programmes.

  10. Systematic review and retrospective validation of prediction models for weight loss after bariatric surgery.

    PubMed

    Sharples, Alistair J; Mahawar, Kamal; Cheruvu, Chandra V N

    2017-11-01

    Patients often have less than realistic expectations of the weight loss they are likely to achieve after bariatric surgery. It would be useful to have a well-validated prediction tool that could give patients a realistic estimate of their expected weight loss. To perform a systematic review of the literature to identify existing prediction models and attempt to validate these models. University hospital, United Kingdom. A systematic review was performed. All English language studies were included if they used data to create a prediction model for postoperative weight loss after bariatric surgery. These models were then tested on patients undergoing bariatric surgery between January 1, 2013 and December 31, 2014 within our unit. An initial literature search produced 446 results, of which only 4 were included in the final review. Our study population included 317 patients. Mean preoperative body mass index was 46.1 ± 7.1 kg/m². For 257 (81.1%) patients, 12-month follow-up was available, and mean body mass index and percentage excess weight loss at 12 months were 33.0 ± 6.7 kg/m² and 66.1% ± 23.7%, respectively. All 4 of the prediction models significantly overestimated the amount of weight loss achieved by patients. The best performing prediction model in our series produced a correlation coefficient (R²) of 0.61 and an area under the curve of 0.71 on receiver operating characteristic (ROC) curve analysis. All prediction models overestimated weight loss after bariatric surgery in our cohort. There is a need to develop better procedures and patient-specific models for better patient counselling. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
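
    Overestimation of this kind shows up as a positive mean bias even when a model ranks patients well (high correlation); a sketch with invented percentage-excess-weight-loss values:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def mean_bias(predicted, observed):
    # positive bias = systematic over-prediction of weight loss
    return sum(p - o for p, o in zip(predicted, observed)) / len(observed)

# hypothetical 12-month %EWL: model predictions vs. observed outcomes
pred = [75.0, 80.0, 70.0, 85.0, 78.0]
obs  = [66.0, 72.0, 60.0, 75.0, 68.0]
bias = mean_bias(pred, obs)          # systematic overestimate
r_sq = pearson(pred, obs) ** 2       # ranking ability can still be high
```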

  11. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    NASA Astrophysics Data System (ADS)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  12. Analysis of Predicted Aircraft Wake Vortex Transport and Comparison with Experiment Volume II -- Appendixes

    DOT National Transportation Integrated Search

    1974-04-01

    A unifying wake vortex transport model is developed and applied to a wake vortex predictive system concept. The fundamentals of vortex motion underlying the predictive model are discussed including vortex decay, bursting and instability phenomena. A ...

  13. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat ( L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved by 70%, on average, from multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities slightly varied for MT, RR, or SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was the best in severe drought, and (iii) the RR model was slightly better than SR and MT models under drought environment. Copyright © 2017 Crop Science Society of America.
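
    The second stage above compares predicted and observed grain yield line by line; a tiny sketch of that predictive-ability calculation (Pearson correlation between predicted and observed values, with invented numbers):

```python
def predictive_ability(predicted, observed):
    # predictive ability as the Pearson correlation between genomic
    # predictions and observed grain yield for the test lines
    n = len(predicted)
    mp, mo = sum(predicted) / n, sum(observed) / n
    sxy = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sxx = sum((p - mp) ** 2 for p in predicted)
    syy = sum((o - mo) ** 2 for o in observed)
    return sxy / (sxx * syy) ** 0.5

# hypothetical grain-yield predictions (t/ha) for six test lines
pred = [5.1, 4.8, 5.6, 4.4, 5.9, 5.0]
obs  = [5.0, 4.9, 5.4, 4.2, 6.1, 5.2]
r = predictive_ability(pred, obs)
```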

  14. A study of material damping in large space structures

    NASA Technical Reports Server (NTRS)

    Highsmith, A. L.; Allen, D. H.

    1989-01-01

    A constitutive model was developed for predicting damping as a function of damage in continuous fiber reinforced laminated composites. The damage model is a continuum formulation, and uses internal state variables to quantify damage and its subsequent effect on material response. The model is sensitive to the stacking sequence of the laminate. Given appropriate baseline data from unidirectional material, and damping as a function of damage in one crossply laminate, damage can be predicted as a function of damage in other crossply laminates. Agreement between theory and experiment was quite good. A micromechanics model was also developed for examining the influence of damage on damping. This model explicitly includes crack surfaces. The model provides reasonable predictions of bending stiffness as a function of damage. Damping predictions are not in agreement with the experiment. This is thought to be a result of dissipation mechanisms such as friction, which are not presently included in the analysis.

  15. Individualized Prediction of Heat Stress in Firefighters: A Data-Driven Approach Using Classification and Regression Trees.

    PubMed

    Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit

    2015-01-01

    The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy to use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. First tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
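
    The regression-tree route described above predicts a core temperature and then compares it with the 38°C threshold; a hand-built two-split stump illustrates the idea (the split points and leaf values are invented, not fitted to the study's data):

```python
HYPERTHERMIA_C = 38.0

def stump_predict_temp(baseline_temp_c, duration_min):
    # illustrative hand-built "regression tree"; a real tree would be
    # learned with CART from the training scenarios
    if baseline_temp_c >= 37.3:
        return 38.3 if duration_min > 20 else 37.9
    return 38.0 if duration_min > 25 else 37.5

def will_cross(baseline_temp_c, duration_min):
    # predict end-of-scenario core temperature, then threshold it
    return stump_predict_temp(baseline_temp_c, duration_min) >= HYPERTHERMIA_C
```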

  16. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
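
    The abstract does not specify the weighting used in the assimilated prediction; one standard choice, assumed here, is inverse-variance weighting, which minimizes the error variance of the combination when the two model errors are unbiased and independent:

```python
def assimilate(pred_a, var_a, pred_b, var_b):
    # inverse-variance weighted average of two runup predictions;
    # the combined error variance is 1 / (1/var_a + 1/var_b), which is
    # smaller than either input variance
    w = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w * pred_a + (1 - w) * pred_b

# hypothetical setup predictions (m) with assumed error variances:
# parameterized model (1.2 m, var 0.04) vs. numerical model (1.5 m, var 0.09)
combined = assimilate(1.2, 0.04, 1.5, 0.09)
```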

  17. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    PubMed

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which differed only in the set of candidate predicting variables: (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, including only the self-reported factors and software-recorded computer usage patterns, which are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R² and root mean squared (RMS) error values and, for the practical models only, exposure classification agreement into low-, medium-, and high-exposure categories. The full models had R² values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. 
Interquartile ranges were similar for the two models, indicating that the full models performed better only for some physical exposures. Relative RMS errors ranged between 5% and 19% for the full models, and between 10% and 19% for the practical models. When the predicted physical exposures were classified into low, medium, and high categories, classification agreement ranged from 26% to 71%. The full prediction models, based on self-reported factors, software-recorded computer usage patterns, and additional measurements of anthropometrics and workstation set-up, show better predictive quality than the practical models based on self-reported factors and recorded computer usage patterns only. However, predictive quality varied largely across different arm-wrist-hand exposure parameters. Future exploration of the relation between predicted physical exposure and symptoms is therefore only recommended for physical exposures that can be reasonably well predicted. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
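
    Exposure classification agreement of the kind reported here can be computed by cutting both predicted and measured exposures into tertiles; a sketch with made-up exposure values:

```python
def tertile_class(values):
    # assign each value to low (0), medium (1), or high (2) by tertile cuts
    s = sorted(values)
    lo, hi = s[len(s) // 3], s[2 * len(s) // 3]
    return [0 if v < lo else (2 if v >= hi else 1) for v in values]

def classification_agreement(predicted, measured):
    # fraction of subjects placed in the same exposure category by both
    cp, cm = tertile_class(predicted), tertile_class(measured)
    return sum(a == b for a, b in zip(cp, cm)) / len(cp)

# hypothetical muscle-activity exposures: predicted vs. measured
predicted = [1, 2, 3, 4, 5, 6, 7, 8, 9]
measured  = [1, 2, 4, 3, 5, 6, 7, 9, 8]
agreement = classification_agreement(predicted, measured)
```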

  18. Predicting outcome in severe traumatic brain injury using a simple prognostic model.

    PubMed

    Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie

    2014-06-17

    Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI) making it beneficial as an aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. To develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted at two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6 - 8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective predictions of outcome in severe TBI.
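
    The reported odds ratios translate directly into logistic-regression coefficients (beta = ln OR); with a hypothetical intercept (the abstract does not report one), predicted probabilities of a good outcome can be sketched as:

```python
import math

# log-odds coefficients recovered from the reported odds ratios;
# the intercept is an illustrative assumption, not from the study
INTERCEPT = -3.0
BETAS = {
    "spo2_ge_90": math.log(3.148),       # SpO2 >= 90%
    "gcs_6_to_8": math.log(5.108),       # GCS 6 - 8
    "pupils_reactive": math.log(4.405),  # bilaterally reactive pupils
}

def p_good_outcome(**flags):
    # binary logistic model: p = 1 / (1 + exp(-(a + sum b_i * x_i)))
    z = INTERCEPT + sum(b for name, b in BETAS.items() if flags.get(name))
    return 1 / (1 + math.exp(-z))

p_best = p_good_outcome(spo2_ge_90=True, gcs_6_to_8=True, pupils_reactive=True)
p_worst = p_good_outcome()  # none of the favourable predictors present
```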

  19. The importance of different frequency bands in predicting subcutaneous glucose concentration in type 1 diabetic patients.

    PubMed

    Lu, Yinghui; Gribok, Andrei V; Ward, W Kenneth; Reifman, Jaques

    2010-08-01

    We investigated the relative importance and predictive power of different frequency bands of subcutaneous glucose signals for the short-term (0-50 min) forecasting of glucose concentrations in type 1 diabetic patients with data-driven autoregressive (AR) models. The study data consisted of minute-by-minute glucose signals collected from nine deidentified patients over a five-day period using continuous glucose monitoring devices. AR models were developed using single and pairwise combinations of frequency bands of the glucose signal and compared with a reference model including all bands. The results suggest that: for open-loop applications, there is no need to explicitly represent exogenous inputs, such as meals and insulin intake, in AR models; models based on a single-frequency band, with periods between 60-120 min and 150-500 min, yield good predictive power (error <3 mg/dL) for prediction horizons of up to 25 min; models based on pairs of bands produce predictions that are indistinguishable from those of the reference model as long as the 60-120 min period band is included; and AR models can be developed on signals of short length (approximately 300 min), i.e., ignoring long circadian rhythms, without any detriment in prediction accuracy. Together, these findings provide insights into efficient development of more effective and parsimonious data-driven models for short-term prediction of glucose concentrations in diabetic patients.
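
    A minimal sketch of the recursive multi-step forecasting such AR models perform (the coefficients and glucose values are illustrative; real models are fitted to each patient's signal, typically after mean removal):

```python
def ar_forecast(history, coeffs, steps):
    # y[t] = c1*y[t-1] + c2*y[t-2] + ...; each prediction is fed back
    # into the buffer to extend the horizon one minute at a time
    buf = list(history)
    out = []
    for _ in range(steps):
        nxt = sum(c * buf[-i - 1] for i, c in enumerate(coeffs))
        out.append(nxt)
        buf.append(nxt)
    return out

# hypothetical AR(2) coefficients applied to a 1-min glucose series (mg/dL)
preds = ar_forecast([110.0, 112.0], coeffs=[1.6, -0.62], steps=5)
```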

  20. Erratum: Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    PubMed

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-10-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. © 2010 SETAC.
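
    How sequential pathway efficiencies and input uncertainty combine can be sketched with a toy Monte Carlo (the triangular distributions below are invented, and the toy considers only two pathways, so it does not reproduce the paper's 84% to 92% overall figure):

```python
import random

def overall_removal(f_sorption, f_biodeg):
    # sorption in primary treatment, then biodegradation of the remainder
    return 1 - (1 - f_sorption) * (1 - f_biodeg)

random.seed(1)
# hypothetical triangular(low, high, mode) inputs for the two pathways
samples = sorted(
    overall_removal(random.triangular(0.2, 0.45, 0.31),
                    random.triangular(0.3, 0.6, 0.44))
    for _ in range(10000)
)
median_removal = samples[len(samples) // 2]
```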

  1. Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    PubMed

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-07-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. (c) 2010 SETAC.

  2. Connectome-based predictive modeling of attention: Comparing different functional connectivity features and prediction methods across datasets.

    PubMed

    Yoo, Kwangsun; Rosenberg, Monica D; Hsu, Wei-Ting; Zhang, Sheng; Li, Chiang-Shan R; Scheinost, Dustin; Constable, R Todd; Chun, Marvin M

    2018-02-15

    Connectome-based predictive modeling (CPM; Finn et al., 2015; Shen et al., 2017) was recently developed to predict individual differences in traits and behaviors, including fluid intelligence (Finn et al., 2015) and sustained attention (Rosenberg et al., 2016a), from functional brain connectivity (FC) measured with fMRI. Here, using the CPM framework, we compared the predictive power of three different measures of FC (Pearson's correlation, accordance, and discordance) and two different prediction algorithms (linear and partial least square [PLS] regression) for attention function. Accordance and discordance are recently proposed FC measures that respectively track in-phase synchronization and out-of-phase anti-correlation (Meskaldji et al., 2015). We defined connectome-based models using task-based or resting-state FC data, and tested the effects of (1) functional connectivity measure and (2) feature-selection/prediction algorithm on individualized attention predictions. Models were internally validated in a training dataset using leave-one-subject-out cross-validation, and externally validated with three independent datasets. The training dataset included fMRI data collected while participants performed a sustained attention task and rested (N = 25; Rosenberg et al., 2016a). The validation datasets included: 1) data collected during performance of a stop-signal task and at rest (N = 83, including 19 participants who were administered methylphenidate prior to scanning; Farr et al., 2014a; Rosenberg et al., 2016b), 2) data collected during Attention Network Task performance and rest (N = 41, Rosenberg et al., in press), and 3) resting-state data and ADHD symptom severity from the ADHD-200 Consortium (N = 113; Rosenberg et al., 2016a). 
Models defined using all combinations of functional connectivity measure (Pearson's correlation, accordance, and discordance) and prediction algorithm (linear and PLS regression) predicted attentional abilities, with correlations between predicted and observed measures of attention as high as 0.9 for internal validation, and 0.6 for external validation (all p's < 0.05). Models trained on task data outperformed models trained on rest data. Pearson's correlation and accordance features generally showed a small numerical advantage over discordance features, while PLS regression models were usually better than linear regression models. Overall, in addition to correlation features combined with linear models (Rosenberg et al., 2016a), it is useful to consider accordance features and PLS regression for CPM. Copyright © 2017 Elsevier Inc. All rights reserved.
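
    A stripped-down sketch of the CPM pipeline with Pearson-correlation features and linear regression, run with leave-one-subject-out cross-validation on synthetic data (the edge counts, threshold, and data are illustrative, not from these datasets):

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def cpm_loocv(edges, behavior, r_thresh=0.5):
    # leave-one-subject-out CPM: select edges correlated with behavior in
    # the training subjects, sum them into one "network strength" feature,
    # fit y = a + b*strength, and predict the held-out subject
    n = len(behavior)
    preds = []
    for test in range(n):
        train = [i for i in range(n) if i != test]
        sel = [e for e in range(len(edges[0]))
               if abs(pearson([edges[i][e] for i in train],
                              [behavior[i] for i in train])) > r_thresh]
        strength = [sum(edges[i][e] for e in sel) for i in range(n)]
        xs = [strength[i] for i in train]
        ys = [behavior[i] for i in train]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        preds.append(a + b * strength[test])
    return preds

# synthetic data: 3 of 10 "edges" carry signal about the behavioral score
random.seed(0)
n_sub, n_edge = 20, 10
behavior = [random.gauss(0, 1) for _ in range(n_sub)]
edges = [[behavior[i] + random.gauss(0, 0.5) if e < 3 else random.gauss(0, 1)
          for e in range(n_edge)] for i in range(n_sub)]
preds = cpm_loocv(edges, behavior)
r = pearson(preds, behavior)
```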

  3. Stall flutter analysis of propfans

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.

    1988-01-01

    Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, easy implementation, and feasibility in developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction of the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above static stall angles, and the time history effects are accounted for through the Wagner function.

  4. Thermal cut-off response modelling of universal motors

    NASA Astrophysics Data System (ADS)

    Thangaveloo, Kashveen; Chin, Yung Shin

    2017-04-01

    This paper presents a model to predict the thermal cut-off (TCO) response behaviour in universal motors. The mathematical model includes the calculation of heat losses in the universal motor and the flow characteristics around the TCO component, which together are the main inputs for TCO response prediction. In order to accurately predict the TCO component temperature, factors such as the TCO component resistance, the ambient conditions, and the flow conditions through the motor are taken into account to improve the prediction accuracy of the model.
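
    A lumped first-order (thermal RC) step response is the simplest version of such a prediction; the resistance, capacitance, and loss values below are invented for illustration and are not the paper's model:

```python
import math

def tco_temperature(t_s, t_amb, p_loss, r_th, c_th):
    # step response of a lumped thermal model:
    # T(t) = T_amb + P * R_th * (1 - exp(-t / (R_th * C_th)))
    return t_amb + p_loss * r_th * (1 - math.exp(-t_s / (r_th * c_th)))

def time_to_trip(trip_temp, t_amb, p_loss, r_th, c_th):
    # invert the step response; None if steady state never reaches the trip point
    steady = t_amb + p_loss * r_th
    if trip_temp >= steady:
        return None
    frac = (trip_temp - t_amb) / (p_loss * r_th)
    return -r_th * c_th * math.log(1 - frac)

# hypothetical values: 8 W dissipated near the TCO, R_th = 15 K/W, C_th = 4 J/K
trip_s = time_to_trip(trip_temp=120.0, t_amb=25.0, p_loss=8.0, r_th=15.0, c_th=4.0)
```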

  5. Estimating Additive and Non-Additive Genetic Variances and Predicting Genetic Merits Using Genome-Wide Dense Single Nucleotide Polymorphism Markers

    PubMed Central

    Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.

    2012-01-01

Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may contribute substantially to the total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which the additive and non-additive genetic relationship matrices were constructed from genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study proposed, for the first time, a method to construct a dominance relationship matrix from SNP markers and demonstrated it in detail. The proposed model was used to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive-by-additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive-by-additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions. PMID:23028912
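For readers unfamiliar with the relationship matrices this abstract refers to, here is a hedged sketch: the additive matrix follows VanRaden's well-known construction, and the dominance matrix uses one common coding; the paper's exact dominance construction may differ in detail, and the random genotypes are purely illustrative.

```python
import numpy as np

def genomic_matrices(M):
    """Additive (VanRaden) and dominance genomic relationship matrices from a
    (n_individuals x n_snps) genotype matrix coded 0/1/2.
    The dominance coding below is one common parameterisation; the paper's
    exact construction may differ in detail."""
    p = M.mean(axis=0) / 2.0                      # observed allele frequencies
    q = 1.0 - p
    Z = M - 2.0 * p                               # centred additive covariates
    G = Z @ Z.T / np.sum(2.0 * p * q)
    # dominance covariates: -2q^2, 2pq, -2p^2 for genotypes 0, 1, 2
    W = np.select([M == 0, M == 1, M == 2], [-2 * q**2, 2 * p * q, -2 * p**2])
    D = W @ W.T / np.sum((2.0 * p * q) ** 2)
    return G, D

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(50, 500)).astype(float)  # 50 animals, 500 SNPs
G, D = genomic_matrices(M)
```

Both matrices are symmetric and positive semi-definite by construction, which is what lets them serve as covariance structures for the random additive and dominance effects in a genomic BLUP model.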

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

Computer models are helping to accelerate the design and validation of next-generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting the electrochemical performance and the thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification (including under aging), and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  7. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with the model assumptions of linear regression methods. In an attempt to alleviate potential discrepancies between the assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait-by-line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction in two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best similar accuracy to linear models. In some cases, when a distantly related line was added, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait-by-line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line is not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
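The linear-versus-RBF-kernel comparison in this record can be illustrated with ridge regression and its kernelised counterpart. This sketch is not the authors' implementation; the one-dimensional data and the hyperparameters are invented to make the nonlinearity visible.

```python
import numpy as np

def ridge_fit_predict(X, y, X_new, lam=1e-3):
    """Linear ridge regression (closed form, with intercept)."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])
    beta = np.linalg.solve(Xb.T @ Xb + lam * np.eye(p + 1), Xb.T @ y)
    return np.hstack([np.ones((len(X_new), 1)), X_new]) @ beta

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit_predict(X, y, X_new, lam=1e-3, gamma=1.0):
    """Kernel ridge regression with an RBF kernel (dual / 'alpha' form)."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf_kernel(X_new, X, gamma) @ alpha

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(100)   # nonlinear signal
lin_mse = np.mean((ridge_fit_predict(X, y, X) - y) ** 2)
rbf_mse = np.mean((kernel_ridge_fit_predict(X, y, X) - y) ** 2)
```

On a genuinely nonlinear signal the RBF model fits far better than the linear one; the record's finding is that on their (largely additive) genomic data this gap mostly disappeared.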

  8. The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eden, H.F.; Mooers, C.N.K.

    1990-06-01

The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.

  9. Galaxy Formation At Extreme Redshifts: Semi-Analytic Model Predictions And Challenges For Observations

    NASA Astrophysics Data System (ADS)

    Yung, L. Y. Aaron; Somerville, Rachel S.

    2017-06-01

The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas-based recipe. We then use the revised model to explore the high-redshift Universe and make predictions up to z = 15. Although the model is calibrated only to observations of the local Universe, its predictions match remarkably well with the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties across a wide range of redshifts (z = 4 - 15), including objects that are too distant or too faint to be detected with current facilities. Using these predictions, we also provide forecast luminosity functions and other observables for upcoming studies with JWST.

  10. Risk Prediction for Epithelial Ovarian Cancer in 11 United States–Based Case-Control Studies: Incorporation of Epidemiologic Risk Factors and 17 Confirmed Genetic Loci

    PubMed Central

    Clyde, Merlise A.; Palmieri Weber, Rachel; Iversen, Edwin S.; Poole, Elizabeth M.; Doherty, Jennifer A.; Goodman, Marc T.; Ness, Roberta B.; Risch, Harvey A.; Rossing, Mary Anne; Terry, Kathryn L.; Wentzensen, Nicolas; Whittemore, Alice S.; Anton-Culver, Hoda; Bandera, Elisa V.; Berchuck, Andrew; Carney, Michael E.; Cramer, Daniel W.; Cunningham, Julie M.; Cushing-Haugen, Kara L.; Edwards, Robert P.; Fridley, Brooke L.; Goode, Ellen L.; Lurie, Galina; McGuire, Valerie; Modugno, Francesmary; Moysich, Kirsten B.; Olson, Sara H.; Pearce, Celeste Leigh; Pike, Malcolm C.; Rothstein, Joseph H.; Sellers, Thomas A.; Sieh, Weiva; Stram, Daniel; Thompson, Pamela J.; Vierkant, Robert A.; Wicklund, Kristine G.; Wu, Anna H.; Ziogas, Argyrios; Tworoger, Shelley S.; Schildkraut, Joellen M.

    2016-01-01

    Previously developed models for predicting absolute risk of invasive epithelial ovarian cancer have included a limited number of risk factors and have had low discriminatory power (area under the receiver operating characteristic curve (AUC) < 0.60). Because of this, we developed and internally validated a relative risk prediction model that incorporates 17 established epidemiologic risk factors and 17 genome-wide significant single nucleotide polymorphisms (SNPs) using data from 11 case-control studies in the United States (5,793 cases; 9,512 controls) from the Ovarian Cancer Association Consortium (data accrued from 1992 to 2010). We developed a hierarchical logistic regression model for predicting case-control status that included imputation of missing data. We randomly divided the data into an 80% training sample and used the remaining 20% for model evaluation. The AUC for the full model was 0.664. A reduced model without SNPs performed similarly (AUC = 0.649). Both models performed better than a baseline model that included age and study site only (AUC = 0.563). The best predictive power was obtained in the full model among women younger than 50 years of age (AUC = 0.714); however, the addition of SNPs increased the AUC the most for women older than 50 years of age (AUC = 0.638 vs. 0.616). Adapting this improved model to estimate absolute risk and evaluating it in prospective data sets is warranted. PMID:27698005
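The AUC figures quoted in this record (and in several others on this page) are equivalent to the Mann-Whitney probability that a randomly chosen case scores above a randomly chosen control, with ties counting one half. A compact, tie-aware implementation:

```python
def auc_mann_whitney(case_scores, control_scores):
    """AUC = P(random case scores higher than random control), ties count 0.5;
    i.e. the Mann-Whitney U statistic rescaled to [0, 1]."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

perfect = auc_mann_whitney([0.9, 0.8], [0.1, 0.2])   # 1.0: perfect separation
chance  = auc_mann_whitney([0.5, 0.5], [0.5, 0.5])   # 0.5: no discrimination
mixed   = auc_mann_whitney([0.9, 0.2], [0.1, 0.8])   # 0.75: partial separation
```

On this scale, the abstract's jump from 0.563 (age and site only) to 0.664 (full model) means the full model correctly ranks a random case above a random control about 10 percentage points more often.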

  11. Improving CSF biomarker accuracy in predicting prevalent and incident Alzheimer disease

    PubMed Central

    Fagan, A.M.; Williams, M.M.; Ghoshal, N.; Aeschleman, M.; Grant, E.A.; Marcus, D.S.; Mintun, M.A.; Holtzman, D.M.; Morris, J.C.

    2011-01-01

Objective: To investigate factors, including cognitive and brain reserve, that may independently predict prevalent and incident dementia of the Alzheimer type (DAT) and to determine whether inclusion of the identified factors increases the predictive accuracy of the CSF biomarkers Aβ42, tau, ptau181, tau/Aβ42, and ptau181/Aβ42. Methods: Logistic regression identified variables that predicted prevalent DAT when considered together with each CSF biomarker in a cross-sectional sample of 201 participants with normal cognition and 46 with DAT. The area under the receiver operating characteristic curve (AUC) from the resulting model was compared with the AUC generated using the biomarker alone. In a second sample with normal cognition at baseline and longitudinal data available (n = 213), Cox proportional hazards models identified variables that predicted incident DAT together with each biomarker; each model's concordance probability estimate (CPE) was compared with the CPE generated using the biomarker alone. Results: APOE genotype including an ε4 allele, male gender, and smaller normalized whole-brain volume (nWBV) were cross-sectionally associated with DAT when considered together with every biomarker. In the longitudinal sample (mean follow-up = 3.2 years), 14 participants (6.6%) developed DAT. Older age predicted a faster time to DAT in every model, and greater education predicted a slower time in 4 of 5 models. Inclusion of the ancillary variables resulted in better cross-sectional prediction of DAT for all biomarkers (p < 0.0021), and better longitudinal prediction for 4 of 5 biomarkers (p < 0.0022). Conclusions: The predictive accuracy of CSF biomarkers is improved by including age, education, and nWBV in analyses. PMID:21228296
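The concordance probability estimate used in this record is closely related to Harrell's c-index for right-censored survival data. Below is a minimal quadratic-time sketch with made-up example data; production code would use an optimised library routine instead.

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable when the subject with the shorter time had an
    event (events[i] == 1); it is concordant when that subject also carries
    the higher predicted risk. Tied risks count one half."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# higher risk -> earlier event: perfectly concordant (last subject censored)
perfect = c_index([1, 2, 3, 4], [1, 1, 1, 0], [4.0, 3.0, 2.0, 1.0])
# risks exactly reversed: fully discordant
reversed_ = c_index([1, 2, 3], [1, 1, 1], [1.0, 2.0, 3.0])
```

A value of 0.5 corresponds to random ranking of event times, 1.0 to perfect ranking, mirroring the AUC interpretation for binary outcomes.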

  12. Genomic Prediction with Pedigree and Genotype × Environment Interaction in Spring Wheat Grown in South and West Asia, North Africa, and Mexico.

    PubMed

    Sukumaran, Sivakumar; Crossa, Jose; Jarquin, Diego; Lopes, Marta; Reynolds, Matthew P

    2017-02-09

    Developing genomic selection (GS) models is an important step in applying GS to accelerate the rate of genetic gain in grain yield in plant breeding. In this study, seven genomic prediction models under two cross-validation (CV) scenarios were tested on 287 advanced elite spring wheat lines phenotyped for grain yield (GY), thousand-grain weight (GW), grain number (GN), and thermal time for flowering (TTF) in 18 international environments (year-location combinations) in major wheat-producing countries in 2010 and 2011. Prediction models with genomic and pedigree information included main effects and interaction with environments. Two random CV schemes were applied to predict a subset of lines that were not observed in any of the 18 environments (CV1), and a subset of lines that were not observed in a set of the environments, but were observed in other environments (CV2). Genomic prediction models, including genotype × environment (G×E) interaction, had the highest average prediction ability under the CV1 scenario for GY (0.31), GN (0.32), GW (0.45), and TTF (0.27). For CV2, the average prediction ability of the model including the interaction terms was generally high for GY (0.38), GN (0.43), GW (0.63), and TTF (0.53). Wheat lines in site-year combinations in Mexico and India had relatively high prediction ability for GY and GW. Results indicated that prediction ability of lines not observed in certain environments could be relatively high for genomic selection when predicting G×E interaction in multi-environment trials. Copyright © 2017 Sukumaran et al.
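The CV1/CV2 distinction in this record is easiest to see as two masking patterns over the line × environment table: CV1 hides a line everywhere, while CV2 hides it only in some environments. A small illustrative sketch (the line and environment counts are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_lines, n_envs = 20, 6
held_out_lines = rng.choice(n_lines, size=5, replace=False)

# CV1: held-out lines are never observed -- masked in every environment.
cv1_mask = np.zeros((n_lines, n_envs), dtype=bool)
cv1_mask[held_out_lines, :] = True

# CV2: each held-out line is masked in some environments but observed in
# others, so its own records in the remaining environments aid prediction.
cv2_mask = np.zeros((n_lines, n_envs), dtype=bool)
for line in held_out_lines:
    cv2_mask[line, rng.choice(n_envs, size=3, replace=False)] = True
```

Because CV2 leaves part of each line's own data in the training set, its prediction abilities (0.38-0.63 here) are systematically higher than the CV1 ones (0.27-0.45).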

  13. Independent external validation of predictive models for urinary dysfunction following external beam radiotherapy of the prostate: Issues in model development and reporting.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W

    2016-08-01

Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunction following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models, and overall performance, calibration and discrimination were assessed. Fourteen models from four publications were validated. The discrimination of the predictive models in the independent external validation cohort, measured as the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation; four models had an AUC above 0.6. Shrinkage was required for all predictive models' coefficients, with shrinkage factors ranging from -0.309 (predicted probability was inverse to the observed proportion) to 0.823. Predictive models that included baseline symptoms as a feature produced the highest discrimination, while two models produced a predicted probability of 0 or 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
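The shrinkage coefficients reported in this record correspond to the calibration slope obtained by regressing observed outcomes on a published model's linear predictor in the new cohort. A Newton-Raphson sketch of that logistic recalibration (synthetic data, not the trial dataset):

```python
import numpy as np

def calibration_slope(lp, y, n_iter=25):
    """Logistic recalibration: fit y ~ intercept + slope * lp by Newton-Raphson.
    A slope below 1 means the published model's coefficients need shrinkage;
    a negative slope means predictions run inverse to observed outcomes."""
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                      # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])  # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta[1]

rng = np.random.default_rng(4)
lp = rng.normal(0.0, 1.5, size=4000)              # external cohort's linear predictor
y = (rng.uniform(size=4000) < 1.0 / (1.0 + np.exp(-lp))).astype(float)
slope = calibration_slope(lp, y)                  # close to 1: no shrinkage needed
```

Here the outcomes are generated from the linear predictor itself, so the slope lands near 1; in the study above the external slopes ran as low as -0.309.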

  14. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    PubMed Central

    2018-01-01

Financial distress prediction is an important and challenging research topic in finance. Many methods exist for predicting firm bankruptcy and financial crises, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical ones. Financial statements are issued quarterly, so corporate financial distress is seasonal time-series data, and the attributes affecting financial distress are nonlinear, nonstationary time series with fluctuations. This study therefore employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages: (i) unlike previous models, it incorporates the concept of time series; (ii) the integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the model can generate rules and mathematical formulas of financial distress that provide references for investors and decision makers. The results show that the proposed method outperforms the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  15. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    PubMed

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

Financial distress prediction is an important and challenging research topic in finance. Many methods exist for predicting firm bankruptcy and financial crises, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical ones. Financial statements are issued quarterly, so corporate financial distress is seasonal time-series data, and the attributes affecting financial distress are nonlinear, nonstationary time series with fluctuations. This study therefore employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages: (i) unlike previous models, it incorporates the concept of time series; (ii) the integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the model can generate rules and mathematical formulas of financial distress that provide references for investors and decision makers. The results show that the proposed method outperforms the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  16. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by blown-powder directed energy deposition as well as by other additive manufacturing processes. The critical governing equations of each model and the connections between the modules are illustrated. Illustrative results, along with corresponding experimental validation, demonstrate the capabilities and fidelity of the models. The good agreement with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.

  17. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by blown-powder directed energy deposition as well as by other additive manufacturing processes. The critical governing equations of each model and the connections between the modules are illustrated. Illustrative results, along with corresponding experimental validation, demonstrate the capabilities and fidelity of the models. The good agreement with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.

  18. Endometrial cancer risk prediction including serum-based biomarkers: results from the EPIC cohort.

    PubMed

    Fortner, Renée T; Hüsing, Anika; Kühn, Tilman; Konar, Meric; Overvad, Kim; Tjønneland, Anne; Hansen, Louise; Boutron-Ruault, Marie-Christine; Severi, Gianluca; Fournier, Agnès; Boeing, Heiner; Trichopoulou, Antonia; Benetou, Vasiliki; Orfanos, Philippos; Masala, Giovanna; Agnoli, Claudia; Mattiello, Amalia; Tumino, Rosario; Sacerdote, Carlotta; Bueno-de-Mesquita, H B As; Peeters, Petra H M; Weiderpass, Elisabete; Gram, Inger T; Gavrilyuk, Oxana; Quirós, J Ramón; Maria Huerta, José; Ardanaz, Eva; Larrañaga, Nerea; Lujan-Barroso, Leila; Sánchez-Cantalejo, Emilio; Butt, Salma Tunå; Borgquist, Signe; Idahl, Annika; Lundin, Eva; Khaw, Kay-Tee; Allen, Naomi E; Rinaldi, Sabina; Dossus, Laure; Gunter, Marc; Merritt, Melissa A; Tzoulaki, Ioanna; Riboli, Elio; Kaaks, Rudolf

    2017-03-15

    Endometrial cancer risk prediction models including lifestyle, anthropometric and reproductive factors have limited discrimination. Adding biomarker data to these models may improve predictive capacity; to our knowledge, this has not been investigated for endometrial cancer. Using a nested case-control study within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort, we investigated the improvement in discrimination gained by adding serum biomarker concentrations to risk estimates derived from an existing risk prediction model based on epidemiologic factors. Serum concentrations of sex steroid hormones, metabolic markers, growth factors, adipokines and cytokines were evaluated in a step-wise backward selection process; biomarkers were retained at p < 0.157 indicating improvement in the Akaike information criterion (AIC). Improvement in discrimination was assessed using the C-statistic for all biomarkers alone, and change in C-statistic from addition of biomarkers to preexisting absolute risk estimates. We used internal validation with bootstrapping (1000-fold) to adjust for over-fitting. Adiponectin, estrone, interleukin-1 receptor antagonist, tumor necrosis factor-alpha and triglycerides were selected into the model. After accounting for over-fitting, discrimination was improved by 2.0 percentage points when all evaluated biomarkers were included and 1.7 percentage points in the model including the selected biomarkers. Models including etiologic markers on independent pathways and genetic markers may further improve discrimination. © 2016 UICC.

  19. Issues and Importance of "Good" Starting Points for Nonlinear Regression for Mathematical Modeling with Maple: Basic Model Fitting to Make Predictions with Oscillating Data

    ERIC Educational Resources Information Center

    Fox, William

    2012-01-01

    The purpose of our modeling effort is to predict future outcomes. We assume the data collected are both accurate and relatively precise. For our oscillating data, we examined several mathematical modeling forms for predictions. We also examined both ignoring the oscillations as an important feature and including the oscillations as an important…

  20. Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System

    DTIC Science & Technology

    2010-09-13

model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type...vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use...building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of

  1. Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach

    DTIC Science & Technology

    2012-09-30

    characterization of extratropical storms and extremes and link these to LFV modes. Mingfang Ting, Yochanan Kushnir, Andrew W. Robertson...simulating and predicting a wide range of climate phenomena including ENSO, tropical Atlantic sea surface temperatures (SSTs), storm track variability...into empirical prediction models. Use observations to improve low-order dynamical MJO models. Adam Sobel, Daehyun Kim. Extratropical variability

  2. A review of logistic regression models used to predict post-fire tree mortality of western North American conifers

    Treesearch

    Travis Woolley; David C. Shaw; Lisa M. Ganio; Stephen Fitzgerald

    2012-01-01

Logistic regression models used to predict tree mortality are critical to post-fire management, planning prescribed burns and understanding disturbance ecology. We review literature concerning post-fire mortality prediction using logistic regression models for coniferous tree species in the western USA. We include synthesis and review of: methods to develop, evaluate...

  3. Clinical prediction models for mortality and functional outcome following ischemic stroke: A systematic review and meta-analysis

    PubMed Central

    Crayton, Elise; Wolfe, Charles; Douiri, Abdel

    2018-01-01

Objective We aim to identify and critically appraise clinical prediction models of mortality and function following ischaemic stroke. Methods Electronic databases, reference lists and citations were searched from inception to September 2015. Studies were selected for inclusion according to pre-specified criteria and critically appraised by independent, blinded reviewers. The discrimination of the prediction models was measured by the area under the receiver operating characteristic curve (AUC, or c-statistic) in random-effects meta-analysis. Heterogeneity was measured using I2. Appropriate appraisal tools and reporting guidelines were used in this review. Results 31,395 references were screened, of which 109 articles were included in the review. These articles described 66 different predictive risk models. Appraisal identified poor methodological quality and a high risk of bias for most models; however, all models precede the development of reporting guidelines for prediction modelling studies. Generalisability of the models could be improved: fewer than half of the included models have been externally validated (n = 27/66). 152 predictors of mortality and 192 predictors of functional outcome were identified. No studies assessing the ability to improve patient outcomes (model impact studies) were identified. Conclusions Further external validation and model impact studies are required to confirm the utility of existing models in supporting decision-making. Existing models have much potential. Those wishing to predict stroke outcome are advised to build on previous work, updating and adapting validated models to their specific contexts as opposed to designing new ones. PMID:29377923
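The random-effects pooling of discrimination estimates described in this record's methods is typically done with the DerSimonian-Laird estimator, which also yields the I² heterogeneity statistic. A compact sketch with made-up AUC inputs (not the review's data):

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate, tau^2 and I^2 (DerSimonian-Laird)."""
    est, v = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)            # Cochran's Q
    df = len(est) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    return pooled, tau2, i2

# illustrative AUC estimates and variances from four validation studies
pooled, tau2, i2 = dersimonian_laird([0.72, 0.78, 0.81, 0.69],
                                     [0.001, 0.002, 0.0015, 0.001])
```

When tau² comes out above zero, the random-effects weights down-weight the most precise studies relative to a fixed-effect analysis, widening the pooled interval to reflect between-study heterogeneity.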

  4. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd

    2017-01-01

Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently to or better than most existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
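The four CPM steps listed in this record can be compressed into a leave-one-out sketch. This is a simplified stand-in, not the published protocol code: an |r| threshold replaces the p-value cut for edge selection, positive and negative edges are combined into a single signed summary score, and the "connectivity" data are synthetic.

```python
import numpy as np

def cpm_loo(conn, behav, r_thresh=0.25):
    """Minimal connectome-based predictive modeling with leave-one-out CV:
    1) select edges correlated with behaviour in the training fold,
    2) summarise selected edges into one signed score per subject,
    3) fit a linear model, 4) predict the left-out subject."""
    n = len(behav)
    preds = np.empty(n)
    for i in range(n):
        train = np.arange(n) != i
        Xt, yt = conn[train], behav[train]
        # edgewise Pearson correlation with behaviour (step 1)
        Xc, yc = Xt - Xt.mean(0), yt - yt.mean()
        r = (Xc * yc[:, None]).sum(0) / (
            np.sqrt((Xc ** 2).sum(0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
        sel = np.abs(r) > r_thresh            # crude stand-in for the p-value cut
        sign = np.sign(r[sel])
        score = (Xt[:, sel] * sign).sum(1)    # step 2: signed edge sum
        a, b = np.polyfit(score, yt, 1)       # step 3: one-parameter linear model
        preds[i] = a * (conn[i, sel] * sign).sum() + b   # step 4
    return preds

rng = np.random.default_rng(5)
conn = rng.standard_normal((60, 300))          # subjects x edges
behav = conn[:, :5].sum(1) + 0.5 * rng.standard_normal(60)
preds = cpm_loo(conn, behav)
r_pred = np.corrcoef(preds, behav)[0, 1]       # cross-validated prediction r
```

Significance assessment (step 4 of the protocol) would repeat this with permuted `behav` to build a null distribution for `r_pred`.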

  5. Risk Prediction Models in Psychiatry: Toward a New Frontier for the Prevention of Mental Illnesses.

    PubMed

    Bernardini, Francesco; Attademo, Luigi; Cleary, Sean D; Luther, Charles; Shim, Ruth S; Quartesan, Roberto; Compton, Michael T

    2017-05-01

    We conducted a systematic, qualitative review of risk prediction models designed and tested for depression, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, and psychotic disorders. Our aim was to understand the current state of research on risk prediction models for these 5 disorders and thus future directions as our field moves toward embracing prediction and prevention. Systematic searches of the entire MEDLINE electronic database were conducted independently by 2 of the authors (from 1960 through 2013) in July 2014 using defined search criteria. Search terms included risk prediction, predictive model, or prediction model combined with depression, bipolar, manic depressive, generalized anxiety, posttraumatic, PTSD, schizophrenia, or psychosis. We identified 268 articles based on the search terms and 3 criteria: published in English, provided empirical data (as opposed to review articles), and presented results pertaining to developing or validating a risk prediction model in which the outcome was the diagnosis of 1 of the 5 aforementioned mental illnesses. We selected 43 original research reports as a final set of articles to be qualitatively reviewed. The 2 independent reviewers abstracted 3 types of data (sample characteristics, variables included in the model, and reported model statistics) and reached consensus regarding any discrepant abstracted information. Twelve reports described models developed for prediction of major depressive disorder, 1 for bipolar disorder, 2 for generalized anxiety disorder, 4 for posttraumatic stress disorder, and 24 for psychotic disorders. Most studies reported on sensitivity, specificity, positive predictive value, negative predictive value, and area under the (receiver operating characteristic) curve. Recent studies demonstrate the feasibility of developing risk prediction models for psychiatric disorders (especially psychotic disorders). 
The field must now advance by (1) conducting more large-scale, longitudinal studies pertaining to depression, bipolar disorder, anxiety disorders, and other psychiatric illnesses; (2) replicating and carrying out external validations of proposed models; (3) further testing potential selective and indicated preventive interventions; and (4) evaluating effectiveness of such interventions in the context of risk stratification using risk prediction models. © Copyright 2017 Physicians Postgraduate Press, Inc.
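Most of the reviewed models report the same screening statistics (sensitivity, specificity, PPV, NPV). As a reminder of how these relate to a confusion matrix, here is a minimal sketch; the counts are invented for illustration, not drawn from any reviewed study.

```python
# Hedged sketch: standard confusion-matrix statistics for a binary
# risk-prediction screen. Counts below are hypothetical.

def classification_metrics(tp, fp, tn, fn):
    """Return the four screening statistics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical screen of 1000 people, 100 of whom develop the disorder.
m = classification_metrics(tp=80, fp=120, tn=780, fn=20)
```

Note how PPV (here 0.4) can be far below sensitivity when the outcome is rare, which is one reason external validation of these models matters.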

  6. GetReal in mathematical modelling: a review of studies predicting drug effectiveness in the real world.

    PubMed

    Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi

    2016-09-01

The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE, EMBASE from inception to March 2014, the Cochrane Methodology Register, and websites of key journals and organisations and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.
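Of the four approaches identified, the multi-state (Markov) model is the simplest to illustrate: transition probabilities estimated from trial follow-up are iterated to extrapolate outcomes over a longer horizon. A minimal sketch with an invented transition probability, not taken from any reviewed study:

```python
# Hedged sketch of a two-state Markov extrapolation: states are
# "stable" and "progressed"; the monthly probability of remaining
# stable (0.97) is invented for illustration.

def extrapolate(p_stay_stable, months, start_stable=1.0):
    """Probability of still being in the stable state after `months`."""
    stable = start_stable
    for _ in range(months):
        stable *= p_stay_stable
    return stable

# Trial follow-up of 12 months extended to a 5-year horizon:
p12 = extrapolate(0.97, 12)
p60 = extrapolate(0.97, 60)
```

The gap between the 12-month and 60-month figures is exactly the extrapolation step whose external validation the review found lacking.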

  7. A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults

    PubMed Central

    Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni

    2015-01-01

Background Because the accurate measure of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF will be compared. These models will be also compared with the CUN-BAE equation. For all the analyses a sample including all the participants and another one including only the overweight and obese subjects will be considered. The BF reference measure was made using Bioelectrical Impedance Analysis. Results The simplest models including only BMI or BAI as independent variables showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than the BAI. For both the whole group of participants and the group of overweight and obese participants, using simple models (BMI, age and sex as variables) allowed obtaining similar correlations with BF as when the more complex CUN-BAE was used (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being the one for CUN-BAE). Conclusions There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does. Therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, the BAI, as the only variable, predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960
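The comparison above rests on Spearman rank correlations (ρ) between predicted and reference body fat. A minimal sketch of that statistic, on made-up numbers rather than the study's sample (no tie correction, for simplicity):

```python
# Hedged sketch: Spearman's rho as Pearson correlation of ranks.
# Data are invented; a perfectly monotone relationship gives rho = 1.

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no ties, for illustration)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy reference BF values and predictions from a hypothetical model:
bf   = [18.0, 22.5, 25.1, 30.4, 35.2]
pred = [17.0, 23.0, 26.0, 29.5, 36.0]
rho = spearman_rho(bf, pred)
```

Because ρ depends only on rank order, two models can differ in calibration yet report near-identical correlations, which is consistent with the simple and CUN-BAE models scoring so closely here.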

  9. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrography Dataset) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. 
Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.

  10. Evaluating model structure adequacy: The case of the Maggia Valley groundwater system, southern Switzerland

    USGS Publications Warehouse

    Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,

    2013-01-01

Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments, and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests a need to reconsider the utility of model selection criteria and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that the difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
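For readers unfamiliar with the criteria being compared: AICc and BIC have simple closed forms for least-squares fits (KIC additionally requires the parameter sensitivity matrix and is omitted here). A sketch under those common least-squares forms; the study itself computes the criteria with UCODE_2005 and MMA, and the numbers below are purely illustrative.

```python
# Hedged sketch: AICc and BIC for least-squares regression, where
# rss is the residual sum of squares, n the number of observations,
# and k the number of estimated parameters.
import math

def aicc_bic(rss, n, k):
    """Return (AICc, BIC) in their common least-squares forms."""
    aic = n * math.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    bic = n * math.log(rss / n) + k * math.log(n)
    return aicc, bic

# A more complex model must reduce RSS enough to justify its extra
# parameters; here the illustrative RSS gain (12.0 -> 11.5) does not.
simple = aicc_bic(rss=12.0, n=50, k=3)
complex_ = aicc_bic(rss=11.5, n=50, k=8)
```

Both criteria penalize parameters, BIC more heavily for large n, which is why they can rank the same model set differently.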

  11. REVIEW: Widespread access to predictive models in the motor system: a short review

    NASA Astrophysics Data System (ADS)

    Davidson, Paul R.; Wolpert, Daniel M.

    2005-09-01

    Recent behavioural and computational studies suggest that access to internal predictive models of arm and object dynamics is widespread in the sensorimotor system. Several systems, including those responsible for oculomotor and skeletomotor control, perceptual processing, postural control and mental imagery, are able to access predictions of the motion of the arm. A capacity to make and use predictions of object dynamics is similarly widespread. Here, we review recent studies looking at the predictive capacity of the central nervous system which reveal pervasive access to forward models of the environment.

  12. A crack-closure model for predicting fatigue-crack growth under aircraft spectrum loading

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1981-01-01

This paper presents the development and application of an analytical model of cyclic crack growth that includes the effects of crack closure. The model was used to correlate crack growth rates under constant amplitude loading and to predict crack growth under aircraft spectrum loading on 2219-T851 aluminum alloy sheet material. The predicted crack growth lives agreed well with experimental data. The ratio of predicted to experimental lives ranged from 0.66 to 1.48. These predictions were made using data from an ASTM E24.06.01 Round Robin.
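The life predictions described above come from integrating a crack-growth-rate law over the crack length. A bare Paris-type integration is sketched below as an illustration of the mechanics; the paper's actual model additionally applies a crack-closure correction (an effective stress range), and the constants here are round placeholders, not values calibrated to 2219-T851.

```python
# Hedged sketch: cycle counting by integrating a Paris-type law,
# da/dN = C * (dK)^m, with dK = dS * sqrt(pi * a).
# C, m, and the loading are illustrative placeholders.
import math

def cycles_to_grow(a0, af, stress_range, C=1e-11, m=3.0, da=1e-5):
    """Constant-amplitude cycles to grow a crack from a0 to af (metres)."""
    a, n = a0, 0.0
    while a < af:
        dk = stress_range * math.sqrt(math.pi * a)  # stress intensity range
        dadn = C * dk ** m                          # growth rate, m/cycle
        n += da / dadn                              # cycles for this step
        a += da
    return n

life = cycles_to_grow(a0=0.001, af=0.01, stress_range=100.0)
```

Because dK grows with crack length, most of the life is consumed while the crack is short, which is why closure effects near the threshold matter so much for spectrum-loading predictions.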

  13. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  14. Population Pharmacokinetics of Intravenous Paracetamol (Acetaminophen) in Preterm and Term Neonates: Model Development and External Evaluation.

    PubMed

    Cook, Sarah F; Roberts, Jessica K; Samiee-Zafarghandy, Samira; Stockmann, Chris; King, Amber D; Deutsch, Nina; Williams, Elaine F; Allegaert, Karel; Wilkins, Diana G; Sherwin, Catherine M T; van den Anker, John N

    2016-01-01

    The aims of this study were to develop a population pharmacokinetic model for intravenous paracetamol in preterm and term neonates and to assess the generalizability of the model by testing its predictive performance in an external dataset. Nonlinear mixed-effects models were constructed from paracetamol concentration-time data in NONMEM 7.2. Potential covariates included body weight, gestational age, postnatal age, postmenstrual age, sex, race, total bilirubin, and estimated glomerular filtration rate. An external dataset was used to test the predictive performance of the model through calculation of bias, precision, and normalized prediction distribution errors. The model-building dataset included 260 observations from 35 neonates with a mean gestational age of 33.6 weeks [standard deviation (SD) 6.6]. Data were well-described by a one-compartment model with first-order elimination. Weight predicted paracetamol clearance and volume of distribution, which were estimated as 0.348 L/h (5.5 % relative standard error; 30.8 % coefficient of variation) and 2.46 L (3.5 % relative standard error; 14.3 % coefficient of variation), respectively, at the mean subject weight of 2.30 kg. An external evaluation was performed on an independent dataset that included 436 observations from 60 neonates with a mean gestational age of 35.6 weeks (SD 4.3). The median prediction error was 10.1 % [95 % confidence interval (CI) 6.1-14.3] and the median absolute prediction error was 25.3 % (95 % CI 23.1-28.1). Weight predicted intravenous paracetamol pharmacokinetics in neonates ranging from extreme preterm to full-term gestational status. External evaluation suggested that these findings should be generalizable to other similar patient populations.
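The external-evaluation metrics reported above (median prediction error for bias, median absolute prediction error for precision, each as a percentage of the observed value) are straightforward to sketch. The concentrations below are invented for illustration, not the study's observations.

```python
# Hedged sketch: percentage prediction-error summaries used in
# population-PK external evaluation. Data are made up.

def median(v):
    s = sorted(v)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def prediction_errors(observed, predicted):
    """Return (median PE %, median absolute PE %)."""
    pe = [100.0 * (p - o) / o for o, p in zip(observed, predicted)]
    return median(pe), median(abs(e) for e in pe)

# Hypothetical observed vs. model-predicted concentrations (mg/L):
obs = [10.0, 8.0, 12.0, 20.0]
pred = [11.0, 8.0, 15.0, 18.0]
mpe, mape = prediction_errors(obs, pred)
```

A median PE near zero with a larger median absolute PE, as in the study (10.1 % vs. 25.3 %), indicates modest bias but appreciable scatter in individual predictions.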

  16. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

Aeolus is an efficient three-dimensional computational fluid dynamics code, based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equations on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature using the Boussinesq approximation. The model includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  17. Vibrational kinetics in CO electric discharge lasers - Modeling and experiments

    NASA Technical Reports Server (NTRS)

    Stanton, A. C.; Hanson, R. K.; Mitchner, M.

    1980-01-01

    A model of CO laser vibrational kinetics is developed, and predicted vibrational distributions are compared with measurements. The experimental distributions were obtained at various flow locations in a transverse CW discharge in supersonic (M = 3) flow. Good qualitative agreement is obtained in the comparisons, including the prediction of a total inversion at low discharge current densities. The major area of discrepancy is an observed loss in vibrational energy downstream of the discharge which is not predicted by the model. This discrepancy may be due to three-dimensional effects in the experiment which are not included in the model. Possible kinetic effects which may contribute to vibrational energy loss are also examined.

  18. Predicting Time to Hospital Discharge for Extremely Preterm Infants

    PubMed Central

    Hintz, Susan R.; Bann, Carla M.; Ambalavanan, Namasivayam; Cotten, C. Michael; Das, Abhik; Higgins, Rosemary D.

    2010-01-01

As extremely preterm infant mortality rates have decreased, concerns regarding resource utilization have intensified. Accurate models to predict time to hospital discharge could aid in resource planning and family counseling, and perhaps stimulate quality improvement initiatives. Objectives For infants <27 weeks estimated gestational age (EGA), to develop, validate and compare several models to predict time to hospital discharge based on time-dependent covariates, and based on the presence of 5 key risk factors as predictors. Patients and Methods This was a retrospective analysis of infants <27 weeks EGA, born 7/2002-12/2005 and surviving to discharge from a NICHD Neonatal Research Network site. Time to discharge was modeled as a continuous variable (postmenstrual age at discharge, PMAD), and as categorical variables (“Early” and “Late” discharge). Three linear and logistic regression models with time-dependent covariate inclusion were developed (perinatal factors only, perinatal+early neonatal factors, perinatal+early+later factors). Models for Early and Late discharge using the cumulative presence of 5 key risk factors as predictors were also evaluated. Predictive capabilities were compared using the coefficient of determination (R2) for linear models, and the AUC of the ROC curve for logistic models. Results Data from 2254 infants were included. Prediction of PMAD was poor, with only 38% of variation explained by linear models. However, models incorporating later clinical characteristics were more accurate in predicting “Early” or “Late” discharge (full models: AUC 0.76-0.83 vs. perinatal factor models: AUC 0.56-0.69). In simplified key risk factor models, predicted probabilities for Early and Late discharge compared favorably with observed rates. Furthermore, the AUCs (0.75-0.77) were similar to those of models including the full factor set. 
Conclusions Prediction of Early or Late discharge is poor if only perinatal factors are considered, but improves substantially with knowledge of later-occurring morbidities. Prediction using a few key risk factors is comparable to full models, and may offer a clinically applicable strategy. PMID:20008430

  19. Construction of a model for predicting creatinine clearance in Japanese patients treated with Cisplatin therapy.

    PubMed

    Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime

    2015-05-01

    There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
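The study's own regression coefficients (body surface area, serum creatinine, albumin) are not given in the abstract, but the reference model it builds on, the Cockcroft-Gault equation, is standard and can be sketched directly:

```python
# The Cockcroft-Gault estimate of creatinine clearance:
# CLcr (mL/min) = (140 - age) * weight / (72 * Scr), * 0.85 if female.
# Example inputs are hypothetical.

def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """Estimated creatinine clearance in mL/min."""
    clcr = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
    return clcr * 0.85 if female else clcr

clcr = cockcroft_gault(age_years=60, weight_kg=72, scr_mg_dl=1.0)
```

Because serum creatinine can shift during cisplatin therapy, a formula driven almost entirely by Scr, age and weight can misestimate renal function in these patients, which is the gap the study's model targets.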

  20. Runoff as a factor in USLE/RUSLE technology

    NASA Astrophysics Data System (ADS)

    Kinnell, Peter

    2014-05-01

Modelling erosion for prediction purposes started with the development of the Universal Soil Loss Equation (USLE), the focus of which was the prediction of long-term (~20 year) average annual soil loss from field-sized areas. That purpose has been maintained in the subsequent revision, RUSLE, the most widely used erosion prediction model in the world. The lack of ability to predict short-term soil loss saw the development of so-called process-based models like WEPP and EUROSEM, which focussed on predicting event erosion but failed to improve the prediction of long-term erosion, where the RUSLE worked well. One of the features of erosion recognised in the so-called process-based models is the fact that runoff is a primary factor in rainfall erosion, and some proposed modifications of the USLE/RUSLE model have included runoff as an independent factor in determining event erosivity. However, these models have ignored fundamental mathematical rules. The USLE-M, which replaces the EI30 index by the product of the runoff ratio and EI30, was developed from the concept that soil loss is the product of runoff and sediment concentration, and it operates in a way that obeys the mathematical rules upon which the USLE/RUSLE model was based. It accounts for event soil loss better than the EI30 index where runoff values are known or predicted adequately. RUSLE2 now includes a capacity to model runoff-driven erosion.
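The USLE-M modification described above is simple to state: the event erosivity index becomes the runoff ratio multiplied by EI30, so events that produce little runoff contribute little erosivity. A sketch with illustrative numbers (not taken from any USLE-M calibration):

```python
# Hedged sketch of the USLE-M event erosivity term:
# erosivity = (event runoff / event rainfall) * EI30.
# The EI30 value and depths below are invented for illustration.

def usle_m_erosivity(ei30, rainfall_mm, runoff_mm):
    """USLE-M event erosivity: runoff ratio times the EI30 index."""
    qr = runoff_mm / rainfall_mm   # event runoff ratio, 0..1
    return qr * ei30

e = usle_m_erosivity(ei30=250.0, rainfall_mm=40.0, runoff_mm=10.0)
```

With a runoff ratio of 0.25, this event carries a quarter of the erosivity the plain EI30 index would assign it, which is exactly how USLE-M down-weights low-runoff storms.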

  1. Numerical models of laser fusion of intestinal tissues.

    PubMed

    Pearce, John A

    2009-01-01

    Numerical models of continuous wave Tm:YAG thermal fusion in rat intestinal tissues were compared to experiment. Optical and thermal FDM models that included tissue damage based on Arrhenius kinetics were used to predict birefringence loss in collagen as the standard of comparison. The models also predicted collagen shrinkage, jellification and water loss. The inclusion of variable optical and thermal properties is essential to achieve favorable agreement between predicted and measured damage boundaries.
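The Arrhenius damage kinetics used in these models accumulate a damage parameter Ω as the time integral of A·exp(−Ea/(R·T)) over the heating history, with a damage boundary (such as birefringence loss) conventionally drawn where Ω reaches 1. A sketch on a sampled temperature history; the A and Ea values below are generic placeholder coefficients of the kind used for thermal damage, not the coefficients fitted in the paper.

```python
# Hedged sketch: Arrhenius thermal-damage integral over a sampled
# temperature history. A (1/s) and Ea (J/mol) are placeholders.
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_damage(temps_K, dt, A=3.1e98, Ea=6.28e5):
    """Accumulated damage Omega for temperatures sampled every dt seconds."""
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_K)

omega_hot = arrhenius_damage([333.0] * 10, dt=0.1)   # ~1 s near 60 C
omega_body = arrhenius_damage([310.0] * 10, dt=0.1)  # 1 s at body temp
```

The steep exponential means a few degrees separate negligible damage from Ω > 1 in seconds, which is why temperature-dependent optical and thermal properties shift the predicted damage boundary so strongly.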

  2. Per Aspera ad Astra: Through Complex Population Modeling to Predictive Theory.

    PubMed

    Topping, Christopher J; Alrøe, Hugo Fjelsted; Farrell, Katharine N; Grimm, Volker

    2015-11-01

    Population models in ecology are often not good at predictions, even if they are complex and seem to be realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been too uncritically adopted for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding inclusion of nonessential factors (false inclusions) and avoiding exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be or can arguably be assumed to be important under certain conditions. The resulting models should be able to reflect how the internal organization of populations change and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.

  3. Simulation analysis of adaptive cruise prediction control

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Cui, Sheng Min

    2017-09-01

Predictive control is suitable for multi-variable, multi-constraint system control. In order to discuss the effect of predictive control on vehicle longitudinal motion, this paper establishes the expected spacing model by combining variable headway spacing with a safety distance strategy. Model predictive control theory and an optimization method based on quadratic programming are used to obtain and track the best expected acceleration trajectory quickly. Simulation models are established, including predictive control and adaptive fuzzy control. Simulation results show that predictive control can realize the basic functions of the system while ensuring safety. The application of the predictive and fuzzy adaptive algorithms under cruise conditions indicates that the predictive control effect is better.
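The expected spacing model referred to above is, in its common form, a gap that grows with ego speed (a time headway) on top of a standstill safety distance. A sketch of that policy; the parameters are illustrative, not the paper's tuned values.

```python
# Hedged sketch of a constant-time-headway expected-spacing model:
# desired gap = standstill safety distance d0 + headway_s * ego speed.
# d0 and headway_s are illustrative placeholders.

def desired_spacing(v_ego_m_s, d0=5.0, headway_s=1.5):
    """Desired inter-vehicle gap in metres at ego speed v (m/s)."""
    return d0 + headway_s * v_ego_m_s

gap = desired_spacing(20.0)   # ego vehicle at 72 km/h
```

The MPC layer then penalizes the deviation between the actual gap and this desired gap while enforcing acceleration and safety-distance constraints.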

  4. [Comparison of three stand-level biomass estimation methods].

    PubMed

    Dong, Li Hu; Li, Feng Ri

    2016-12-01

At present, methods for estimating forest biomass at regional scale attract much attention from researchers, and developing stand-level biomass models is popular. Based on forestry inventory data for larch (Larix olgensis) plantations in Jilin Province, we used nonlinear seemingly unrelated regression (NSUR) to estimate the parameters of two additive systems of stand-level biomass equations, i.e., stand biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2), listed the constant biomass expansion factor for the larch plantations, and compared the prediction accuracy of the three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra²) of the total and stem equations was more than 0.95, and the root mean squared error (RMSE), mean prediction error (MPE) and mean absolute error (MAE) were small. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with an adjusted coefficient of determination (Ra²) below 0.95. The prediction accuracy of the constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation methods and differs in essence from the stand biomass equations including stand variables, the prediction accuracy obtained by the two methods was similar. The constant biomass expansion factor had the lowest prediction accuracy and is inappropriate. In addition, to make model parameter estimation more effective, established stand-level biomass equations should account for additivity in a system of all tree-component biomass and total biomass equations.
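The volume-derived method mentioned above reduces, at its simplest, to multiplying stand volume by a biomass expansion factor (BEF). A sketch of that constant-BEF approach; the factor and volume below are invented for illustration, not the values listed in the paper.

```python
# Hedged sketch of constant-BEF biomass estimation:
# stand biomass (t/ha) = stand volume (m3/ha) * BEF (t/m3).
# Both numbers are illustrative placeholders.

def stand_biomass_from_volume(volume_m3_ha, bef_t_per_m3=0.7):
    """Volume-derived stand biomass via a constant expansion factor."""
    return volume_m3_ha * bef_t_per_m3

b = stand_biomass_from_volume(250.0)   # a hypothetical larch stand
```

Because a single constant cannot track how the biomass-to-volume ratio changes with stand age and density, this is the method the study found least accurate.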

5. Predictive power of the GRACE score in population with diabetes.

    PubMed

    Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés

    2017-12-01

    Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population and particularly in the subgroup of patients with diabetes, and to test the effects of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry, with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score, calculating the area under the ROC curve. We assessed the calibration of the score and the predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has an appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
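The net reclassification improvement (NRI) used above to judge the added value of the DM variable has a simple form: among patients with the event, upward risk reclassification counts positively; among patients without it, downward reclassification counts positively. A sketch with invented counts, not the registry's data:

```python
# Hedged sketch of the (categorical) net reclassification improvement.
# Counts are illustrative; an NRI near zero means the added variable
# barely improves risk-category assignment.

def nri(events_up, events_down, n_events,
        nonevents_up, nonevents_down, n_nonevents):
    """NRI = P(up|event) - P(down|event) + P(down|nonevent) - P(up|nonevent)."""
    return ((events_up - events_down) / n_events
            + (nonevents_down - nonevents_up) / n_nonevents)

# e.g. adding a variable that moves almost no one across categories:
value = nri(events_up=5, events_down=4, n_events=100,
            nonevents_up=10, nonevents_down=9, n_nonevents=400)
```

An NRI this close to zero is the quantitative form of the study's conclusion that adding DM did not improve the GRACE model.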

  6. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These, together with rain rate statistics (either measured or predicted), can be used to predict annual rain attenuation statistics. In this paper, model predictions are compared to measured data from a database of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
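    Simple rain attenuation models of this family typically express specific attenuation as a power law of rain rate, gamma = k·R^alpha (dB/km), scaled by an effective path length. A sketch of that generic form; the coefficients below are illustrative placeholders, not the fitted values of this particular model:

```python
def rain_attenuation_db(rain_rate_mm_h, eff_path_km, k, alpha):
    """Generic power-law rain attenuation: specific attenuation
    gamma = k * R**alpha (dB/km) times an effective path length (km).
    k and alpha depend on frequency and polarization."""
    return k * rain_rate_mm_h ** alpha * eff_path_km

# Hypothetical coefficients for illustration only (NOT the paper's values)
print(rain_attenuation_db(25.0, 5.0, k=0.05, alpha=1.1))
```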

  7. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE PAGES

    Wang, Yan; Swiler, Laura

    2017-09-07

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  8. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Swiler, Laura

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  9. Arctic Sea Ice Predictability and the Sea Ice Prediction Network

    NASA Astrophysics Data System (ADS)

    Wiggins, H. V.; Stroeve, J. C.

    2014-12-01

    Drastic reductions in Arctic sea ice cover have increased the demand for Arctic sea ice predictions by a range of stakeholders, including local communities, resource managers, industry and the public. The science of sea-ice prediction has been challenged to keep up with these developments. Efforts such as the SEARCH Sea Ice Outlook (SIO; http://www.arcus.org/sipn/sea-ice-outlook) and the Sea Ice for Walrus Outlook have provided a forum for the international sea-ice prediction and observing community to explore and compare different approaches. The SIO, originally organized by the Study of Environmental Arctic Change (SEARCH), is now managed by the new Sea Ice Prediction Network (SIPN), which is building a collaborative network of scientists and stakeholders to improve Arctic sea ice prediction. The SIO synthesizes predictions from a variety of methods, including heuristic approaches as well as statistical and dynamical models. In a recent study, SIO data from 2008 to 2013 were analyzed. The analysis revealed that the predictions were very successful in some years but not in others. Years that were anomalous compared to the long-term trend have proven more difficult to predict, regardless of which method was employed. This year, in response to feedback from users and contributors to the SIO, several enhancements have been made to the SIO reports. One is to encourage contributors to provide spatial probability maps of sea ice cover in September and of the first day each location becomes ice-free; these are an example of subseasonal-to-seasonal, local-scale predictions. Another enhancement is a separate analysis of the modeling contributions. In the June 2014 SIO report, 10 of 28 outlooks were produced by dynamic-thermodynamic models that explicitly simulate sea ice. Half of these included fully coupled (atmosphere, ice, and ocean) models that additionally employ data assimilation. Both of these subsets (models, and coupled models with data assimilation) have a far narrower spread in their predictions, indicating that the results of these more sophisticated methods are converging. Here we summarize and synthesize the 2014 contributions to the SIO, highlight the important questions and challenges that remain to be addressed, and present data on stakeholder uses of the SIO and related SIPN products.

  10. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    PubMed

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records, we conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value <0.25 in the univariate analysis were further evaluated in multivariate models using backward elimination. Discrimination was assessed using the receiver operating characteristic (ROC) curve. Calibration was evaluated using McFadden's R2. The net reclassification index (NRI) associated with incorporation of laboratory results was calculated. Results were internally validated. A model similar to existing CRC prediction models, which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy, had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
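    The net reclassification index used here rewards risk estimates that move in the right direction when laboratory results are added: upward for cases, downward for controls. A sketch of the continuous (category-free) NRI with hypothetical risk pairs:

```python
def net_reclassification_improvement(old_risk, new_risk, outcome):
    """Continuous NRI: (fraction of events reclassified up minus down)
    plus (fraction of non-events reclassified down minus up)."""
    up_e = down_e = up_n = down_n = 0
    n_events = sum(outcome)
    n_nonevents = len(outcome) - n_events
    for old, new, y in zip(old_risk, new_risk, outcome):
        if y == 1:
            up_e += new > old
            down_e += new < old
        else:
            up_n += new > old
            down_n += new < old
    return (up_e - down_e) / n_events + (down_n - up_n) / n_nonevents

# Hypothetical risks before/after adding lab variables; 1 = developed CRC
old = [0.10, 0.20, 0.30, 0.40]
new = [0.25, 0.15, 0.35, 0.30]
y   = [1, 0, 1, 0]
print(net_reclassification_improvement(old, new, y))
```

A perfect reclassification (every case up, every control down) gives the maximum of 2; the paper's 47.6% and 60.7% correspond to 0.476 and 0.607 on this scale.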

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert

    A program is underway at Sandia National Laboratories to predict long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.

  12. Emerging approaches in predictive toxicology.

    PubMed

    Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan

    2014-12-01

    Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, the need for their further refinement, and obstacles to expanding these models to additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.

  13. Emerging Approaches in Predictive Toxicology

    PubMed Central

    Zhang, Luoping; McHale, Cliona M.; Greene, Nigel; Snyder, Ronald D.; Rich, Ivan N.; Aardema, Marilyn J.; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan

    2016-01-01

    Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, the need for their further refinement, and obstacles to expanding these models to additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. PMID:25044351

  14. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
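    The CPO/LPML machinery described above is straightforward to compute from posterior output: the CPO for a site is the harmonic mean, across posterior samples, of that site's likelihood, and LPML is the sum of log CPOs over sites. A sketch with hypothetical likelihood values:

```python
import math

def lpml(likelihoods):
    """likelihoods[m][i]: likelihood of site i under posterior sample m.
    CPO_i is the harmonic mean over samples of the site likelihood;
    LPML = sum_i log(CPO_i), a cross-validation-style fit measure."""
    n_samples = len(likelihoods)
    n_sites = len(likelihoods[0])
    total = 0.0
    for i in range(n_sites):
        inv_sum = sum(1.0 / likelihoods[m][i] for m in range(n_samples))
        total += math.log(n_samples / inv_sum)  # log CPO_i
    return total

# Hypothetical per-site likelihoods from 3 posterior samples, 2 sites
L = [[0.10, 0.20],
     [0.12, 0.18],
     [0.08, 0.22]]
print(lpml(L))
```

Only a posterior sample is needed, which is why the abstract calls CPO computationally efficient: no additional simulation is required beyond the MCMC run itself.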

  15. Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishma

    2010-01-01

    In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.

  16. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.

  17. Anthropometric measures in cardiovascular disease prediction: comparison of laboratory-based versus non-laboratory-based model.

    PubMed

    Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam

    2015-03-01

    Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI to predict cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk to construct a non-laboratory-based model and to compare it with the model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazard regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying discrimination, calibration, correlation and risk agreement. Among men, ABSI was the most informative measure associated with CVD, so ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model was not different from that of the laboratory-based model (c-statistic: 0.680 vs 0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risks by the non-laboratory-based and laboratory-based models, respectively), and the Spearman rank correlation and agreement between the non-laboratory-based and laboratory-based models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures was independently associated with CVD. Among the middle-aged and elderly, where the ability of BMI to predict CVD declines, the non-laboratory-based model based on ABSI could predict CVD risk as accurately as the laboratory-based model among men. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
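    A body shape index normalizes waist circumference by BMI and height, ABSI = WC / (BMI^(2/3) · height^(1/2)), with waist and height in metres and BMI in kg/m². A small sketch with illustrative values (not data from the Rotterdam Study):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def absi(waist_m, weight_kg, height_m):
    """A Body Shape Index: waist circumference divided by
    BMI^(2/3) * height^(1/2). Typical adult values are around 0.08."""
    return waist_m / (bmi(weight_kg, height_m) ** (2 / 3) * height_m ** 0.5)

# Illustrative person: 80 kg, 1.75 m tall, 0.95 m waist circumference
print(round(absi(0.95, 80.0, 1.75), 4))
```

Because ABSI is constructed to be roughly independent of BMI, it can carry risk information in populations where BMI itself loses predictive value, which is the abstract's motivation for using it.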

  18. Prediction model to estimate presence of coronary artery disease: retrospective pooled analysis of existing cohorts

    PubMed Central

    Genders, Tessa S S; Steyerberg, Ewout W; Nieman, Koen; Galema, Tjebbe W; Mollet, Nico R; de Feyter, Pim J; Krestin, Gabriel P; Alkadhi, Hatem; Leschka, Sebastian; Desbiolles, Lotus; Meijs, Matthijs F L; Cramer, Maarten J; Knuuti, Juhani; Kajander, Sami; Bogaert, Jan; Goetschalckx, Kaatje; Cademartiri, Filippo; Maffei, Erica; Martini, Chiara; Seitun, Sara; Aldrovandi, Annachiara; Wildermuth, Simon; Stinn, Björn; Fornaro, Jürgen; Feuchtner, Gudrun; De Zordo, Tobias; Auer, Thomas; Plank, Fabian; Friedrich, Guy; Pugliese, Francesca; Petersen, Steffen E; Davies, L Ceri; Schoepf, U Joseph; Rowe, Garrett W; van Mieghem, Carlos A G; van Driessche, Luc; Sinitsyn, Valentin; Gopalan, Deepa; Nikolaou, Konstantin; Bamberg, Fabian; Cury, Ricardo C; Battle, Juan; Maurovich-Horvat, Pál; Bartykowszki, Andrea; Merkely, Bela; Becker, Dávid; Hadamitzky, Martin; Hausleiter, Jörg; Dewey, Marc; Zimmermann, Elke; Laule, Michael

    2012-01-01

    Objectives To develop prediction models that better estimate the pretest probability of coronary artery disease in low prevalence populations. Design Retrospective pooled analysis of individual patient data. Setting 18 hospitals in Europe and the United States. Participants Patients with stable chest pain without evidence for previous coronary artery disease, if they were referred for computed tomography (CT) based coronary angiography or catheter based coronary angiography (indicated as low and high prevalence settings, respectively). Main outcome measures Obstructive coronary artery disease (≥50% diameter stenosis in at least one vessel found on catheter based coronary angiography). Multiple imputation accounted for missing predictors and outcomes, exploiting strong correlation between the two angiography procedures. Predictive models included a basic model (age, sex, symptoms, and setting), clinical model (basic model factors and diabetes, hypertension, dyslipidaemia, and smoking), and extended model (clinical model factors and use of the CT based coronary calcium score). We assessed discrimination (c statistic), calibration, and continuous net reclassification improvement by cross validation for the four largest low prevalence datasets separately and the smaller remaining low prevalence datasets combined. Results We included 5677 patients (3283 men, 2394 women), of whom 1634 had obstructive coronary artery disease found on catheter based coronary angiography. All potential predictors were significantly associated with the presence of disease in univariable and multivariable analyses. The clinical model improved the prediction, compared with the basic model (cross validated c statistic improvement from 0.77 to 0.79, net reclassification improvement 35%); the coronary calcium score in the extended model was a major predictor (0.79 to 0.88, 102%). Calibration for low prevalence datasets was satisfactory. 
Conclusions Updated prediction models including age, sex, symptoms, and cardiovascular risk factors allow for accurate estimation of the pretest probability of coronary artery disease in low prevalence populations. Addition of coronary calcium scores to the prediction models improves the estimates. PMID:22692650

  19. Implementing Lumberjacks and Black Swans Into Model-Based Tools to Support Human-Automation Interaction.

    PubMed

    Sebok, Angelia; Wickens, Christopher D

    2017-03-01

    The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.

  20. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines

    PubMed Central

    2014-01-01

    Background It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet particularly needed for using, ranking and refining protein models. Results We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e., a basic feature set including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts) to make predictions. We also trained an SVM model with two additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them only improves performance when the real deviation between the native structure and the model is higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implements a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231

  1. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines.

    PubMed

    Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin

    2014-04-28

    It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e., a basic feature set including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts) to make predictions. We also trained an SVM model with two additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them only improves performance when the real deviation between the native structure and the model is higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implements a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
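    The abstract notes that SMOQ converts predicted per-residue deviations into a global score. One common transform in model quality assessment is the S-score, s_i = 1/(1 + (d_i/d0)^2), averaged over residues; the exact transform and threshold d0 used by SMOQ are not stated here, so both are assumptions in this sketch:

```python
def global_quality(local_deviations_angstrom, d0=3.8):
    """Map per-residue predicted distance deviations (in Angstroms) to a
    global score in [0, 1] using the common S-score transform
    s_i = 1 / (1 + (d_i / d0)^2), averaged over residues.
    NOTE: the transform and d0=3.8 are illustrative assumptions, not
    necessarily SMOQ's actual conversion."""
    scores = [1.0 / (1.0 + (d / d0) ** 2) for d in local_deviations_angstrom]
    return sum(scores) / len(scores)

# Hypothetical per-residue deviations for a short model fragment
print(round(global_quality([0.5, 1.2, 3.8, 7.0]), 3))
```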

  2. A systematic review of breast cancer incidence risk prediction models with meta-analysis of their performance.

    PubMed

    Meads, Catherine; Ahmed, Ikhlaaq; Riley, Richard D

    2012-04-01

    A risk prediction model is a statistical tool for estimating the probability that a currently healthy individual with specific risk factors will develop a condition in the future such as breast cancer. Reliably accurate prediction models can inform future disease burdens, health policies and individual decisions. Breast cancer prediction models containing modifiable risk factors, such as alcohol consumption, BMI or weight, condom use, exogenous hormone use and physical activity, are of particular interest to women who might be considering how to reduce their risk of breast cancer and clinicians developing health policies to reduce population incidence rates. We performed a systematic review to identify and evaluate the performance of prediction models for breast cancer that contain modifiable factors. A protocol was developed and a sensitive search in databases including MEDLINE and EMBASE was conducted in June 2010. Extensive use was made of reference lists. Included were any articles proposing or validating a breast cancer prediction model in a general female population, with no language restrictions. Duplicate data extraction and quality assessment were conducted. Results were summarised qualitatively, and where possible meta-analysis of model performance statistics was undertaken. The systematic review found 17 breast cancer models, each containing a different but often overlapping set of modifiable and other risk factors, combined with an estimated baseline risk that was also often different. Quality of reporting was generally poor, with characteristics of included participants and fitted model results often missing. Only four models received independent validation in external data, most notably the 'Gail 2' model with 12 validations. None of the models demonstrated consistently outstanding ability to accurately discriminate between those who did and those who did not develop breast cancer. 
For example, random-effects meta-analyses of the performance of the 'Gail 2' model showed the average C statistic was 0.63 (95% CI 0.59-0.67), and the expected/observed ratio of events varied considerably across studies (95% prediction interval for E/O ratio when the model was applied in practice was 0.75-1.19). There is a need for models with better predictive performance but, given the large amount of work already conducted, further improvement of existing models based on conventional risk factors is perhaps unlikely. Research to identify new risk factors with large additionally predictive ability is therefore needed, alongside clearer reporting and continual validation of new models as they develop.
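    The pooled C statistic quoted for the 'Gail 2' model comes from a random-effects meta-analysis. A sketch of the DerSimonian-Laird estimator, using hypothetical per-study C statistics and variances rather than the review's actual validation data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method:
    estimate between-study variance tau^2 from Cochran's Q, then pool
    with weights 1 / (v_i + tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical validation-study C statistics and their variances
cs = [0.61, 0.66, 0.63, 0.60]
vs = [0.0001, 0.0002, 0.00015, 0.0002]
print(dersimonian_laird(cs, vs))
```

The prediction interval reported for the E/O ratio would additionally use tau^2 to describe where a new validation study is expected to fall, not just the uncertainty in the pooled mean.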

  3. Computational Model of Human and System Dynamics in Free Flight: Studies in Distributed Control Technologies

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1998-01-01

    This paper presents a set of studies in full mission simulation and the development of a predictive computational model of human performance in control of complex airspace operations. NASA and the FAA have initiated programs of research and development to provide flight crew, airline operations and air traffic managers with automation aids to increase capacity in en route and terminal area to support the goals of safe, flexible, predictable and efficient operations. In support of these developments, we present a computational model to aid design that includes representation of multiple cognitive agents (both human operators and intelligent aiding systems). The demands of air traffic management require representation of many intelligent agents sharing world-models, coordinating action/intention, and scheduling goals and actions in a potentially unpredictable world of operations. The operator-model structure includes attention functions, action priority, and situation assessment. The cognitive model has been expanded to include working memory operations including retrieval from long-term store, and interference. The operator's activity structures have been developed to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. System stability and operator actions can be predicted by using the model. The model's predictive accuracy was verified using the full-mission simulation data of commercial flight deck operations with advanced air traffic management techniques.

  4. Can spatial statistical river temperature models be transferred between catchments?

    NASA Astrophysics Data System (ADS)

    Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.

    2017-09-01

    There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. 
These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across multiple catchments and larger spatial scales.

  5. Lung function parameters improve prediction of VO2peak in an elderly population: The Generation 100 study.

    PubMed

    Hassel, Erlend; Stensvold, Dorthe; Halvorsen, Thomas; Wisløff, Ulrik; Langhammer, Arnulf; Steinshamn, Sigurd

    2017-01-01

    Peak oxygen uptake (VO2peak) is an indicator of cardiovascular health and a useful tool for risk stratification. Direct measurement of VO2peak is resource-demanding and may be contraindicated. Several non-exercise models exist to estimate VO2peak that utilize easily obtainable health parameters, but none of them includes lung function measures or hemoglobin concentrations. We aimed to test whether addition of these parameters could improve prediction of VO2peak compared to an established model that includes age, waist circumference, self-reported physical activity and resting heart rate. We included 1431 subjects aged 69-77 years who completed a laboratory test of VO2peak, spirometry, and a gas diffusion test. Prediction models for VO2peak were developed with multiple linear regression, and goodness of fit was evaluated. Forced expiratory volume in one second (FEV1), diffusing capacity of the lung for carbon monoxide and blood hemoglobin concentration significantly improved the ability of the established model to predict VO2peak. The explained variance of the model increased from 31% to 48% for men and from 32% to 38% for women (p<0.001). FEV1, diffusing capacity of the lungs for carbon monoxide and hemoglobin concentration substantially improved the accuracy of VO2peak prediction when added to an established model in an elderly population.
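The stepwise gain in explained variance reported above can be illustrated with a small ordinary-least-squares sketch on synthetic data. The variable names (`age`, `fev1`) and all coefficients below are invented for illustration and are not taken from the study; the point is only that refitting with an added predictor and comparing R^2 mirrors the comparison the abstract describes.

```python
import random

def ols_r2(X, y):
    """Fit ordinary least squares via the normal equations and return in-sample R^2."""
    n, p = len(X), len(X[0])
    # Build X^T X and X^T y.
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Solve the normal equations by Gauss-Jordan elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(p):
            if r != col:
                f = xtx[r][col] / xtx[col][col]
                xtx[r] = [xtx[r][c] - f * xtx[col][c] for c in range(p)]
                xty[r] -= f * xty[col]
    beta = [xty[a] / xtx[a][a] for a in range(p)]
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(0)
n = 200
age = [random.uniform(69, 77) for _ in range(n)]
fev1 = [random.uniform(1.5, 4.0) for _ in range(n)]
noise = [random.gauss(0, 2) for _ in range(n)]
# Synthetic outcome depending on age and FEV1 plus noise (made-up coefficients).
vo2 = [60 - 0.4 * a + 3.0 * f + e for a, f, e in zip(age, fev1, noise)]

base = ols_r2([[1.0, a] for a in age], vo2)                       # established-model stand-in
extended = ols_r2([[1.0, a, f] for a, f in zip(age, fev1)], vo2)  # adds the lung-function term
# In-sample R^2 never decreases when a regressor is added; here FEV1 carries real signal,
# so the extended model explains strictly more variance.
```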

  6. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    PubMed

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (also known as deep neural networks) have revolutionized many fields, including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks, such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction, using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the publicly available Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data are used as input features to the models.

  7. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
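The core move described here, replacing a fixed model parameter with a probability distribution and propagating it through the model, can be sketched with a stdlib-only Monte Carlo run. The toy `level` function and the normal spread on its exponent are illustrative assumptions, not the actual shock-noise models:

```python
import math
import random
import statistics

def level(strength, exponent, base=100.0):
    """Toy spectral-level model: a fixed base plus a term in the shock strength.
    This stands in for the real broadband shock-associated-noise model."""
    return base + 10.0 * exponent * math.log10(strength)

random.seed(42)
nominal = level(2.0, 4.0)  # deterministic prediction with the fixed parameter
# Nondeterministic treatment: replace the fixed exponent with a normal
# distribution (mean 4.0, assumed sigma 0.3) and propagate through the model.
draws = [level(2.0, random.gauss(4.0, 0.3)) for _ in range(5000)]
mc_mean = statistics.mean(draws)  # central prediction
mc_sd = statistics.stdev(draws)   # model-output uncertainty induced by the parameter
```

Because the toy model is linear in the uncertain exponent, the Monte Carlo mean stays close to the nominal prediction while the spread quantifies the output uncertainty, which is the effect the abstract describes.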

  8. Dynamic Simulation of Human Gait Model With Predictive Capability.

    PubMed

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
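The predict-compare-optimize loop of MPC can be sketched on a deliberately simple one-dimensional plant; the gait model itself is a nine-DOF system, but the receding-horizon pattern is the same. Here the internal model is a pure integrator with a three-value input set (all choices are illustrative): the controller searches input sequences over a short horizon and applies only the first input of the best sequence.

```python
from itertools import product

DT = 0.1  # step size

def plant(x, u):
    """Plant and internal model: a pure integrator, x' = u."""
    return x + u * DT

def mpc_step(x, ref, horizon=3, inputs=(-1.0, 0.0, 1.0)):
    """Exhaustively search input sequences over the horizon, score each by
    predicted tracking error plus a small input penalty, and return only
    the first input of the best sequence (receding horizon)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(inputs, repeat=horizon):
        px, cost = x, 0.0
        for u in seq:
            px = plant(px, u)
            cost += (px - ref) ** 2 + 1e-3 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

x, ref = 0.0, 1.0
for _ in range(50):
    x = plant(x, mpc_step(x, ref))
# The state is driven to the reference and then held there with zero input.
```

Real MPC replaces the exhaustive search with a numerical optimizer and adds constraints, but the structure (internal model, predicted error, apply-first-input) is exactly what the abstract describes.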

  9. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique, based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection.
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.

  10. Prediction of biodegradability from chemical structure: Modeling or ready biodegradation test data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loonen, H.; Lindgren, F.; Hansen, B.

    1999-08-01

    Biodegradation data were collected and evaluated for 894 substances with widely varying chemical structures. All data were determined according to the Japanese Ministry of International Trade and Industry (MITI) I test protocol. The MITI I test is a screening test for ready biodegradability and has been described by Organization for Economic Cooperation and Development (OECD) test guideline 301 C and European Union (EU) test guideline C4F. The chemicals were characterized by a set of 127 predefined structural fragments. This data set was used to develop a model for the prediction of the biodegradability of chemicals under standardized OECD and EU ready biodegradation test conditions. Partial least squares (PLS) discriminant analysis was used for the model development. The model was evaluated by means of internal cross-validation and repeated external validation. The importance of various structural fragments and fragment interactions was investigated. The most important fragments include the presence of a long alkyl chain; hydroxy, ester, and acid groups (enhancing biodegradation); and the presence of one or more aromatic rings and halogen substituents (retarding biodegradation). More than 85% of the model predictions were correct using the complete data set. The not readily biodegradable predictions were slightly better than the readily biodegradable predictions (86 vs 84%). The average percentage of correct predictions from four external validation studies was 83%. Model optimization by including fragment interactions improved the predictive capability of the model to 89%. It can be concluded that the PLS model provides predictions of high reliability for a diverse range of chemical structures. The predictions conform to the concept of readily biodegradable (or not readily biodegradable) as defined by OECD and EU test guidelines.

  11. Genomic prediction of the polled and horned phenotypes in Merino sheep.

    PubMed

    Duijvesteijn, Naomi; Bolormaa, Sunduimijid; Daetwyler, Hans D; van der Werf, Julius H J

    2018-05-22

    In horned sheep breeds, breeding for polledness has been of interest for decades. The objective of this study was to improve prediction of the horned and polled phenotypes using horn scores classified as polled, scurs, knobs or horns. Derived phenotypes polled/non-polled (P/NP) and horned/non-horned (H/NH) were used to test four different strategies for prediction in 4001 purebred Merino sheep. These strategies include the use of single 'single nucleotide polymorphism' (SNP) genotypes, multiple-SNP haplotypes, genome-wide and chromosome-wide genomic best linear unbiased prediction and information from imputed sequence variants from the region including the RXFP2 gene. Low-density genotypes of these animals were imputed to the Illumina Ovine high-density (600k) chip and the 1.78-kb insertion polymorphism in RXFP2 was included in the imputation process to whole-genome sequence. We evaluated the mode of inheritance and validated models by a fivefold cross-validation and across- and between-family prediction. The most significant SNPs for prediction of P/NP and H/NH were OAR10_29546872.1 and OAR10_29458450, respectively, located on chromosome 10 close to the 1.78-kb insertion at 29.5 Mb. The mode of inheritance included an additive effect and a sex-dependent effect for dominance for P/NP and a sex-dependent additive and dominance effect for H/NH. Models with the highest prediction accuracies for H/NH used either single SNPs or 3-SNP haplotypes and included a polygenic effect estimated based on traditional pedigree relationships. Prediction accuracies for H/NH were 0.323 for females and 0.725 for males. For predicting P/NP, the best models were the same as for H/NH but included a genomic relationship matrix with accuracies of 0.713 for females and 0.620 for males. Our results show that prediction accuracy is high using a single SNP, but does not reach 1 since the causative mutation is not genotyped. 
Incomplete penetrance or allelic heterogeneity, which can influence expression of the phenotype, may explain why prediction accuracy did not approach 1 with any of the genetic models tested here. Nevertheless, a breeding program to eradicate horns from Merino sheep can be effective by selecting genotypes GG of SNP OAR10_29458450 or TT of SNP OAR10_29546872.1 since all sheep with these genotypes will be non-horned.

  12. A unified thermal and vertical trajectory model for the prediction of high altitude balloon performance

    NASA Technical Reports Server (NTRS)

    Carlson, L. A.; Horn, W. J.

    1981-01-01

    A computer model for the prediction of the trajectory and thermal behavior of zero-pressure high-altitude balloons was developed. In accord with flight data, the model permits radiative emission and absorption by the lifting gas and daytime gas temperatures above that of the balloon film. It also includes ballasting, venting, and valving. Predictions obtained with the model are compared with data from several flights, and newly discovered features are discussed.

  13. Real-time predictive seasonal influenza model in Catalonia, Spain

    PubMed Central

    Basile, Luca; Oviedo de la Fuente, Manuel; Torner, Nuria; Martínez, Ana; Jané, Mireia

    2018-01-01

    Influenza surveillance is critical to monitoring the situation during epidemic seasons, and predictive mathematical models may aid the early detection of epidemic patterns. The objective of this study was to design a real-time spatial predictive model of ILI (Influenza Like Illness) incidence rate in Catalonia using one- and two-week forecasts. The available data sources used to select explanatory variables to include in the model were the statutory reporting disease system and the sentinel surveillance system in Catalonia for influenza incidence rates, the official climate service in Catalonia for meteorological data, laboratory data and Google Flu Trends. Time series for every explanatory variable with data from the last 4 seasons (from 2010–2011 to 2013–2014) were created. A pilot test was conducted during the 2014–2015 season to select the explanatory variables to be included in the model and the type of model to be applied. During the 2015–2016 season, a real-time model was applied weekly, obtaining the intensity level and predicted incidence rates with 95% confidence levels one and two weeks away for each health region. At the end of the season, the confidence interval success rate (CISR) and intensity level success rate (ILSR) were analysed. For the 2015–2016 season, a CISR of 85.3% at one week and 87.1% at two weeks and an ILSR of 82.9% and 82% were observed, respectively. The model described is a useful tool although it is hard to evaluate due to uncertainty. The accuracy of prediction at one and two weeks was above 80% globally, but was lower during the peak epidemic period. In order to improve the predictive power, new explanatory variables should be included. PMID:29513710

  14. Prediction of Emergency Department Hospital Admission Based on Natural Language Processing and Neural Networks.

    PubMed

    Zhang, Xingyu; Kim, Joyce; Patzer, Rachel E; Pitts, Stephen R; Patzer, Aaron; Schrager, Justin D

    2017-10-26

    To describe and compare logistic regression and neural network modeling strategies to predict hospital admission or transfer following initial presentation to Emergency Department (ED) triage with and without the addition of natural language processing elements. Using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), a cross-sectional probability sample of United States EDs from the 2012 and 2013 survey years, we developed several predictive models with the outcome being admission to the hospital or transfer vs. discharge home. We included patient characteristics immediately available after the patient has presented to the ED and undergone a triage process. We used this information to construct logistic regression (LR) and multilayer neural network (MLNN) models, which included natural language processing (NLP) and principal component analysis from the patient's reason for visit. Ten-fold cross-validation was used to test the predictive capacity of each model, and the area under the receiver operating characteristic curve (AUC) was then calculated for each model. Of the 47,200 ED visits from 642 hospitals, 6,335 (13.42%) resulted in hospital admission (or transfer). A total of 48 principal components were extracted by NLP from the reason-for-visit fields, which explained 75% of the overall variance for hospitalization. In the model including only structured variables, the AUC was 0.824 (95% CI 0.818-0.830) for logistic regression and 0.823 (95% CI 0.817-0.829) for MLNN. Models including only free-text information generated an AUC of 0.742 (95% CI 0.731-0.753) for logistic regression and 0.753 (95% CI 0.742-0.764) for MLNN. When both structured variables and free-text variables were included, the AUC reached 0.846 (95% CI 0.839-0.853) for logistic regression and 0.844 (95% CI 0.836-0.852) for MLNN.
The predictive accuracy of hospital admission or transfer for patients who presented to ED triage overall was good, and was improved with the inclusion of free text data from a patient's reason for visit regardless of modeling approach. Natural language processing and neural networks that incorporate patient-reported outcome free text may increase predictive accuracy for hospital admission.
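The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen admitted patient receives a higher predicted score than a randomly chosen discharged one. A minimal stdlib sketch of that pairwise (Mann-Whitney) computation, using made-up labels and scores rather than NHAMCS data:

```python
def auc(labels, scores):
    """AUC as the probability that a random positive case outranks a random
    negative case; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical admission labels (1 = admitted/transferred, 0 = discharged)
# and model scores, purely for illustration.
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])   # 1.0: every positive outranks every negative
partial = auc([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6])   # 0.75: 3 of 4 positive-negative pairs ranked correctly
```

This pairwise form is O(len(pos) * len(neg)); production code typically uses a rank-based computation, but the value is identical.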

  15. Predicting Risk of Type 2 Diabetes Mellitus with Genetic Risk Models on the Basis of Established Genome-wide Association Markers: A Systematic Review

    PubMed Central

    Bao, Wei; Hu, Frank B.; Rong, Shuang; Rong, Ying; Bowers, Katherine; Schisterman, Enrique F.; Liu, Liegang; Zhang, Cuilin

    2013-01-01

    This study aimed to evaluate the predictive performance of genetic risk models based on risk loci identified and/or confirmed in genome-wide association studies for type 2 diabetes mellitus. A systematic literature search was conducted in the PubMed/MEDLINE and EMBASE databases through April 13, 2012, and published data relevant to the prediction of type 2 diabetes based on genome-wide association marker–based risk models (GRMs) were included. Of the 1,234 potentially relevant articles, 21 articles representing 23 studies were eligible for inclusion. The median area under the receiver operating characteristic curve (AUC) among eligible studies was 0.60 (range, 0.55–0.68), which did not differ appreciably by study design, sample size, participants’ race/ethnicity, or the number of genetic markers included in the GRMs. In addition, the AUCs for type 2 diabetes did not improve appreciably with the addition of genetic markers into conventional risk factor–based models (median AUC, 0.79 (range, 0.63–0.91) vs. median AUC, 0.78 (range, 0.63–0.90), respectively). A limited number of included studies used reclassification measures and yielded inconsistent results. In conclusion, GRMs showed a low predictive performance for risk of type 2 diabetes, irrespective of study design, participants’ race/ethnicity, and the number of genetic markers included. Moreover, the addition of genome-wide association markers into conventional risk models produced little improvement in predictive performance. PMID:24008910

  16. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to the design work enhancing the turbine performance including the elongation of the operation life span and the improvement of turbine efficiency. In this paper, high accuracy turbine and cavitation performance prediction method based on entire flow passage for a Kaplan turbine is presented and evaluated. Two-phase flow field is predicted by solving Reynolds-Averaged Navier-Stokes equations expressed by volume of fluid method tracking the free surface and combined with Reynolds Stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparing with the model test results of Ns 400 Kaplan model turbine. As a result that the experimentally measured data including turbine efficiency, cavitation performance, and pressure fluctuation are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and the influence to the hydraulic loss of the flow passage are discussed. Evaluated prediction method for the turbine flow and performance is introduced to facilitate the future design and research works on Kaplan type turbine.

  17. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of photovoltaic (PV) systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
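The decompose-predict-recombine pattern described here can be sketched with stdlib-only stand-ins: a moving-average split takes the place of EMD, and a naive last-value forecast takes the place of the ABC-tuned SVMs. Both are deliberate simplifications; real EMD and SVM fitting need dedicated libraries.

```python
def decompose(series, window=4):
    """Split a series into a smooth trend and a residual component.
    Stand-in for EMD's IMFs + trend; the two components sum exactly
    back to the original series, as EMD components do."""
    trend = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        trend.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    residual = [s - t for s, t in zip(series, trend)]
    return trend, residual

def forecast_component(comp):
    """Naive one-step forecast per component (stand-in for the per-IMF SVM)."""
    return comp[-1]

# Hypothetical PV output samples at 15-minute intervals (illustrative values).
power = [0.0, 0.2, 0.9, 2.1, 3.0, 3.2, 2.8, 1.9, 0.8, 0.1]
trend, residual = decompose(power)
# Reconstruction check: the components must sum back to the original series.
recon = [t + r for t, r in zip(trend, residual)]
# Forecast each component separately, then recombine, as in the paper's pipeline.
prediction = forecast_component(trend) + forecast_component(residual)
```

With these naive stand-ins, the recombined forecast reduces to persistence; the point is the shape of the pipeline (decompose, predict per component, sum), not the strength of the forecaster.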

  18. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    PubMed Central

    2017-01-01

    To predict the output power of photovoltaic (PV) systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803

  19. Development and Validation of an Empiric Tool to Predict Favorable Neurologic Outcomes Among PICU Patients.

    PubMed

    Gupta, Punkaj; Rettiganti, Mallikarjuna; Gossett, Jeffrey M; Daufeldt, Jennifer; Rice, Tom B; Wetzel, Randall C

    2018-01-01

    To create a novel tool to predict favorable neurologic outcomes during ICU stay among children with critical illness. Logistic regression models using adaptive lasso methodology were used to identify independent factors associated with favorable neurologic outcomes. A mixed effects logistic regression model was used to create the final prediction model including all predictors selected from the lasso model. Model validation was performed using a 10-fold internal cross-validation approach. Virtual Pediatric Systems (VPS, LLC, Los Angeles, CA) database. Patients less than 18 years old admitted to one of the participating ICUs in the Virtual Pediatric Systems database were included (2009-2015). None. A total of 160,570 patients from 90 hospitals qualified for inclusion. Of these, 1,675 patients (1.04%) were associated with a decline in Pediatric Cerebral Performance Category scale by at least 2 between ICU admission and ICU discharge (unfavorable neurologic outcome). The independent factors associated with unfavorable neurologic outcome included higher weight at ICU admission, higher Pediatric Index of Mortality-2 score at ICU admission, cardiac arrest, stroke, seizures, head/nonhead trauma, use of conventional mechanical ventilation and high-frequency oscillatory ventilation, prolonged ICU length of stay, and prolonged use of mechanical ventilation. The presence of chromosomal anomaly, cardiac surgery, and utilization of nitric oxide were associated with favorable neurologic outcome. The final online prediction tool can be accessed at https://soipredictiontool.shinyapps.io/GNOScore/. Our model predicted 139,688 patients with favorable neurologic outcomes in an internal validation sample, whereas the observed number of patients with favorable neurologic outcomes was 139,591. The area under the receiver operating characteristic curve for the validation model was 0.90.
This proposed prediction tool encompasses 20 risk factors into one probability to predict favorable neurologic outcome during ICU stay among children with critical illness. Future studies should seek external validation and improved discrimination of this prediction tool.

  20. Perspectives for geographically oriented management of fusarium mycotoxins in the cereal supply chain.

    PubMed

    van der Fels-Klerx, H J; Booij, C J H

    2010-06-01

    This article provides an overview of available systems for management of Fusarium mycotoxins in the cereal grain supply chain, with an emphasis on the use of predictive mathematical modeling. From the state of the art, it proposes future developments in modeling and management and their challenges. Mycotoxin contamination in cereal grain-based feed and food products is currently managed and controlled by good agricultural practices, good manufacturing practices, hazard analysis critical control points, and by checking and more recently by notification systems and predictive mathematical models. Most of the predictive models for Fusarium mycotoxins in cereal grains focus on deoxynivalenol in wheat and aim to help growers make decisions about the application of fungicides during cultivation. Future developments in managing Fusarium mycotoxins should include the linkage between predictive mathematical models and geographical information systems, resulting in region-specific predictions for mycotoxin occurrence. The envisioned geographically oriented decision support system may incorporate various underlying models for specific users' demands and regions and various related databases to feed the particular models with (geographically oriented) input data. Depending on the user requirements, the system selects the best fitting model and available input information. Future research areas include organizing data management in the cereal grain supply chain, developing predictive models for other stakeholders (taking into account the period up to harvest), other Fusarium mycotoxins, and cereal grain types, and understanding the underlying effects of the regional component in the models.

  1. Early prediction of intensive care unit-acquired weakness using easily available parameters: a prospective observational study.

    PubMed

    Wieske, Luuk; Witteveen, Esther; Verhamme, Camiel; Dettling-Ihnenfeldt, Daniela S; van der Schaaf, Marike; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke

    2014-01-01

    An early diagnosis of Intensive Care Unit-acquired weakness (ICU-AW) using muscle strength assessment is not possible in most critically ill patients. We hypothesized that development of ICU-AW can be predicted reliably two days after ICU admission, using patient characteristics, early available clinical parameters, laboratory results and use of medication as parameters. Newly admitted ICU patients mechanically ventilated ≥2 days were included in this prospective observational cohort study. Manual muscle strength was measured according to the Medical Research Council (MRC) scale, when patients were awake and attentive. ICU-AW was defined as an average MRC score <4. A prediction model was developed by selecting predictors from an a priori defined set of candidate predictors, based on known risk factors. Discriminative performance of the prediction model was evaluated, validated internally and compared to the APACHE IV and SOFA scores. Of 212 included patients, 103 developed ICU-AW. Highest lactate level, treatment with any aminoglycoside in the first two days after admission, and age were selected as predictors. The area under the receiver operating characteristic curve of the prediction model was 0.71 after internal validation. The new prediction model improved discrimination compared to the APACHE IV and the SOFA score. The new early prediction model for ICU-AW using a set of 3 easily available parameters has fair discriminative performance. This model needs external validation.

  2. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
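A minimal grey wolf optimizer can be sketched in a few lines. This stdlib-only version is a simplified, elitist variant (the three leaders are held fixed within each iteration, and the objective is a sphere function rather than an SVR tuning objective); it shows the leader-following update with the exploration coefficient a decaying from 2 to 0, which is the core of GWO:

```python
import random

def gwo(f, dim=2, wolves=15, iters=300, lo=-5.0, hi=5.0, seed=7):
    """Simplified grey wolf optimizer: each follower moves toward the three
    best wolves (alpha, beta, delta); the step scale a decays 2 -> 0."""
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    pack.sort(key=f)
    start_best = f(pack[0])          # fitness of the best initial wolf
    for t in range(iters):
        a = 2.0 * (1 - t / iters)    # exploration coefficient, 2 -> 0
        alpha, beta, delta = pack[0], pack[1], pack[2]
        for i in range(3, wolves):   # leaders kept as-is (elitism)
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)   # random step toward/away
                    C = 2 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - pack[i][d])
                new.append(min(hi, max(lo, x / 3)))  # average of the 3 pulls, clamped
            pack[i] = new
        pack.sort(key=f)
    return start_best, pack[0]

sphere = lambda p: sum(v * v for v in p)   # toy objective, minimum 0 at the origin
start_best, best = gwo(sphere)
# Elitism guarantees the best fitness never worsens over the run.
```

In the paper's setting, `f` would instead score SVR hyperparameters by cross-validated forecast error; the wolf-update machinery is unchanged.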

  3. Prediction of Chemical Function: Model Development and Application

    EPA Science Inventory

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (...

  4. Predicting the biological condition of streams: Use of geospatial indicators of natural and anthropogenic characteristics of watersheds

    USGS Publications Warehouse

    Carlisle, D.M.; Falcone, J.; Meador, M.R.

    2009-01-01

    We developed and evaluated empirical models to predict biological condition of wadeable streams in a large portion of the eastern USA, with the ultimate goal of prediction for unsampled basins. Previous work had classified (i.e., altered vs. unaltered) the biological condition of 920 streams based on a biological assessment of macroinvertebrate assemblages. Predictor variables were limited to widely available geospatial data, which included land cover, topography, climate, soils, societal infrastructure, and potential hydrologic modification. We compared the accuracy of predictions of biological condition class based on models with continuous and binary responses. We also evaluated the relative importance of specific groups and individual predictor variables, as well as the relationships between the most important predictors and biological condition. Prediction accuracy and the relative importance of predictor variables were different for two subregions for which models were created. Predictive accuracy in the highlands region improved by including predictors that represented both natural and human activities. Riparian land cover and road-stream intersections were the most important predictors. In contrast, predictive accuracy in the lowlands region was best for models limited to predictors representing natural factors, including basin topography and soil properties. Partial dependence plots revealed complex and nonlinear relationships between specific predictors and the probability of biological alteration. We demonstrate a potential application of the model by predicting biological condition in 552 unsampled basins across an ecoregion in southeastern Wisconsin (USA). Estimates of the likelihood of biological condition of unsampled streams could be a valuable tool for screening large numbers of basins to focus targeted monitoring of potentially unaltered or altered stream segments. © Springer Science+Business Media B.V. 2008.

  5. A deep auto-encoder model for gene expression prediction.

    PubMed

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including the genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we built a deep auto-encoder model to assess how well genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on a MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the use of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely from genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics.
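
    The pretraining idea behind MLP-SAE can be illustrated with a toy single-layer denoising auto-encoder in NumPy. The genotype matrix, layer sizes, and training settings below are all invented; the actual model stacks several such layers and feeds them into a multilayer perceptron with dropout.

    ```python
    # Toy denoising auto-encoder: corrupt the input, encode, decode with tied
    # weights, and descend the clean-reconstruction gradient. Illustration only.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.integers(0, 3, size=(200, 40)).astype(float)  # mock SNP genotypes 0/1/2

    n_in, n_hidden, lr = X.shape[1], 10, 0.05
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)          # encoder bias
    c = np.zeros(n_in)              # decoder bias

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def recon_error(X):
        H = sigmoid(X @ W + b)
        return np.mean((H @ W.T + c - X) ** 2)

    err_before = recon_error(X)
    for _ in range(300):
        X_noisy = X * (rng.random(X.shape) > 0.2)   # zero out ~20% of inputs
        H = sigmoid(X_noisy @ W + b)                # encode corrupted input
        X_hat = H @ W.T + c                         # decode (tied weights, linear)
        grad_out = 2 * (X_hat - X) / X.size         # d(MSE)/d(X_hat) vs clean X
        grad_H = grad_out @ W * H * (1 - H)
        W -= lr * (X_noisy.T @ grad_H + grad_out.T @ H)  # encode + decode paths
        b -= lr * grad_H.sum(axis=0)
        c -= lr * grad_out.sum(axis=0)
    err_after = recon_error(X)
    print(f"reconstruction MSE: {err_before:.3f} -> {err_after:.3f}")
    ```

    The hidden activations `H` are the learned features that, in the paper's architecture, would be passed to the regression layers predicting expression.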

  6. Evaluation of free modeling targets in CASP11 and ROLL.

    PubMed

    Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V

    2016-09-01

    We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested that the use of automated scores similar to previous CASPs would provide a good system of evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including the successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.

  7. Testing the generality of above-ground biomass allometry across plant functional types at the continent scale.

    PubMed

    Paul, Keryn I; Roxburgh, Stephen H; Chave, Jerome; England, Jacqueline R; Zerihun, Ayalsew; Specht, Alison; Lewis, Tom; Bennett, Lauren T; Baker, Thomas G; Adams, Mark A; Huxtable, Dan; Montagu, Kelvin D; Falster, Daniel S; Feller, Mike; Sochacki, Stan; Ritson, Peter; Bastin, Gary; Bartle, John; Wildy, Dan; Hobbs, Trevor; Larmour, John; Waterworth, Rob; Stewart, Hugh T L; Jonson, Justin; Forrester, David I; Applegate, Grahame; Mendham, Daniel; Bradford, Matt; O'Grady, Anthony; Green, Daryl; Sudmeyer, Rob; Rance, Stan J; Turner, John; Barton, Craig; Wenk, Elizabeth H; Grove, Tim; Attiwill, Peter M; Pinkard, Elizabeth; Butler, Don; Brooksbank, Kim; Spencer, Beren; Snowdon, Peter; O'Brien, Nick; Battaglia, Michael; Cameron, David M; Hamilton, Steve; McAuthur, Geoff; Sinclair, Jenny

    2016-06-01

    Accurate ground-based estimation of the carbon stored in terrestrial ecosystems is critical to quantifying the global carbon budget. Allometric models provide cost-effective methods for biomass prediction. But do such models vary with ecoregion or plant functional type? We compiled 15 054 measurements of individual tree or shrub biomass from across Australia to examine the generality of allometric models for above-ground biomass prediction. This provided a robust case study because Australia includes ecoregions ranging from arid shrublands to tropical rainforests, and has a rich history of biomass research, particularly in planted forests. Regardless of ecoregion, for five broad categories of plant functional type (shrubs; multistemmed trees; trees of the genus Eucalyptus and closely related genera; other trees of high wood density; and other trees of low wood density), relationships between biomass and stem diameter were generic. Simple power-law models explained 84-95% of the variation in biomass, with little improvement in model performance when other plant variables (height, bole wood density) or site characteristics (climate, age, management) were included. Predictions of stand-based biomass from allometric models of varying levels of generalization (species-specific, plant functional type) were validated using whole-plot harvest data from 17 contrasting stands (range: 9-356 Mg ha(-1)). Losses in efficiency of prediction were <1% if generalized models were used in place of species-specific models. Furthermore, application of generalized multispecies models did not introduce significant bias in biomass prediction in 92% of the 53 species tested, and overall efficiency of stand-level biomass prediction was 99%, with a mean absolute prediction error of only 13%. Hence, for cost-effective prediction of biomass across a wide range of stands, we recommend use of generic allometric models based on plant functional types. Development of new species-specific models is only warranted when gains in accuracy of stand-based predictions are relatively high (e.g. high-value monocultures). © 2015 John Wiley & Sons Ltd.
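
    The generic power-law allometry B = a * D^b is conventionally fitted on the log-log scale. The sketch below uses synthetic diameters and biomasses with invented coefficients; the study fits one such model per plant functional type.

    ```python
    # Fit ln(B) = ln(a) + b*ln(D) by least squares; synthetic data for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    diameter = rng.uniform(5, 80, 200)                             # stem diameter, cm
    biomass = 0.08 * diameter ** 2.5 * rng.lognormal(0, 0.3, 200)  # kg, with scatter

    b, ln_a = np.polyfit(np.log(diameter), np.log(biomass), 1)
    pred = np.exp(ln_a) * diameter ** b

    # R^2 on the log scale, analogous to the 84-95% variance explained above.
    resid = np.log(biomass) - np.log(pred)
    r2 = 1 - resid.var() / np.log(biomass).var()
    print(f"B ~ {np.exp(ln_a):.3f} * D^{b:.2f}, log-scale R^2 = {r2:.2f}")
    ```

    Back-transforming log-scale predictions to biomass normally requires a bias-correction factor; it is omitted here for brevity.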

  8. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    PubMed

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that including them in FE models improves the accuracy of extra-axial hemorrhage prediction. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models was then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94% sensitivity, 100% specificity), as well as high accuracy in regional hemorrhage prediction (82-100% sensitivity, 100% specificity). We conclude that including biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  9. Enhanced risk prediction model for emergency department use and hospitalizations in patients in a primary care medical home.

    PubMed

    Takahashi, Paul Y; Heien, Herbert C; Sangaralingham, Lindsey R; Shah, Nilay D; Naessens, James M

    2016-07-01

    With the advent of healthcare payment reform, identifying high-risk populations has become more important to providers. Existing risk-prediction models often focus on chronic conditions. This study sought to better understand other factors that improve identification of the highest-risk population. A retrospective cohort study of a paneled primary care population utilized 2010 data to calibrate a risk-prediction model of hospital and emergency department (ED) use in 2011. Data were randomly split into development and validation data sets. We compared the enhanced model containing the additional risk predictors with the Minnesota medical tiering model. The study was conducted in the primary care practice of an integrated delivery system at an academic medical center in Rochester, Minnesota. The study focus was primary care medical home patients in 2010 and 2011 (n = 84,752), with the primary outcome of subsequent hospitalization or ED visit. The enhanced risk-prediction model was derived from 42,384 individuals and validated in the remaining 42,368. Predictors included Adjusted Clinical Groups-based Minnesota medical tiering, patient demographics, insurance status, and prior-year healthcare utilization. Additional variables included specific mental and medical conditions, use of high-risk medications, and body mass index. The area under the curve in the enhanced model was 0.705 (95% CI, 0.698-0.712) compared with 0.662 (95% CI, 0.656-0.669) in the Minnesota medical tiering-only model. New high-risk patients in the enhanced model were more likely to lack health insurance or to have Medicaid coverage, diagnosed depression, and prior ED utilization. An enhanced model including additional healthcare-related factors improved the prediction of risk of hospitalization or ED visit.
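
    Comparing a base risk model with an enhanced one on a held-out split, as above, can be sketched as follows. Features and outcome are simulated; `tier` stands in for Minnesota medical tiering and `extra` for a composite of the added predictors (utilization, insurance, conditions), none of which are the study's actual variables.

    ```python
    # Sketch: AUC of a base logistic model vs. an enhanced one on a held-out set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 4000
    tier = rng.integers(1, 5, n).astype(float)   # stand-in medical tier 1-4
    extra = rng.normal(0, 1, n)                  # composite of added predictors
    p = 1 / (1 + np.exp(-(-3.0 + 0.5 * tier + 0.7 * extra)))
    y = rng.random(n) < p                        # simulated hospitalization/ED visit

    X_base = tier.reshape(-1, 1)
    X_enh = np.column_stack([tier, extra])
    idx_train, idx_test = train_test_split(np.arange(n), test_size=0.5,
                                           random_state=0)

    def auc_of(X):
        m = LogisticRegression().fit(X[idx_train], y[idx_train])
        return roc_auc_score(y[idx_test], m.predict_proba(X[idx_test])[:, 1])

    print(f"base AUC {auc_of(X_base):.3f} vs enhanced AUC {auc_of(X_enh):.3f}")
    ```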

  10. Time-dependent oral absorption models

    NASA Technical Reports Server (NTRS)

    Higaki, K.; Yamashita, S.; Amidon, G. L.

    2001-01-01

    The plasma concentration-time profiles following oral administration of drugs are often irregular and cannot be interpreted easily with conventional models based on first- or zero-order absorption kinetics and lag time. Six new models were developed using a time-dependent absorption rate coefficient, ka(t), wherein the time dependency was varied to account for dynamic processes in the gastrointestinal tract such as changes with time in fluid absorption or secretion, in absorptive surface area, and in motility. In the present study, the plasma concentration profiles of propranolol obtained in human subjects following oral dosing were analyzed using the newly derived models based on mass balance and compared with the conventional models. Nonlinear regression analysis indicated that the conventional compartment model including lag time (CLAG model) could not predict the rapid initial increase in plasma concentration after dosing, and the predicted Cmax values were much lower than those observed. On the other hand, all models with the time-dependent absorption rate coefficient ka(t) were superior to the CLAG model in predicting plasma concentration profiles. Based on Akaike's Information Criterion (AIC), the fluid absorption model without lag time (FA model) exhibited the best overall fit to the data. The two-phase model including lag time (TPLAG model) was also found to be a good model judging from the sums of squares. This model also described the irregular profiles of plasma concentration with time and frequently predicted Cmax values satisfactorily. A comparison of the absorption rate profiles also suggested that the TPLAG model is better at predicting irregular absorption kinetics than the FA model. In conclusion, the incorporation of a time-dependent absorption rate coefficient ka(t) allows the prediction of nonlinear absorption characteristics in a more reliable manner.
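
    A one-compartment oral model with a time-dependent absorption rate coefficient ka(t) can be sketched as a pair of mass-balance ODEs. The exponential form of ka(t) and all parameter values below are illustrative, not those fitted in the study.

    ```python
    # dA_gut/dt = -ka(t)*A_gut ; dC/dt = ka(t)*A_gut/V - ke*C
    # Hypothetical parameters; ka(t) decays to mimic absorption slowing with time.
    import numpy as np
    from scipy.integrate import solve_ivp

    dose, V, ke = 80.0, 200.0, 0.15            # mg, L, 1/h (hypothetical)
    ka = lambda t: 1.5 * np.exp(-0.5 * t)      # time-dependent absorption, 1/h

    def rhs(t, y):
        a_gut, c = y                           # gut amount (mg), plasma conc (mg/L)
        return [-ka(t) * a_gut, ka(t) * a_gut / V - ke * c]

    sol = solve_ivp(rhs, (0, 24), [dose, 0.0], dense_output=True, max_step=0.1)
    t = np.linspace(0, 24, 241)
    conc = sol.sol(t)[1]
    t_max = t[conc.argmax()]
    print(f"Cmax = {conc.max() * 1000:.1f} ng/mL at t = {t_max:.1f} h")
    ```

    With a constant ka this reduces to the conventional first-order model; letting ka vary with time is what allows the irregular early rise in concentration to be captured.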

  11. Extrapolation of a predictive model for growth of a low inoculum size of Salmonella typhimurium DT104 on chicken skin to higher inoculum sizes

    USDA-ARS?s Scientific Manuscript database

    Validation of model predictions for independent variables not included in model development can save time and money by identifying conditions for which new models are not needed. A single strain of Salmonella Typhimurium DT104 was used to develop a general regression neural network model for growth...

  12. Evaluation of methodology for detecting/predicting migration of forest species

    Treesearch

    Dale S. Solomon; William B. Leak

    1996-01-01

    Available methods for analyzing migration of forest species are evaluated, including simulation models, remeasured plots, resurveys, pollen/vegetation analysis, and age/distance trends. Simulation models have provided some of the most drastic estimates of species changes due to predicted changes in global climate. However, these models require additional testing...

  13. Predicting Document Retrieval System Performance: An Expected Precision Measure.

    ERIC Educational Resources Information Center

    Losee, Robert M., Jr.

    1987-01-01

    Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…

  14. First principles nickel-cadmium and nickel hydrogen spacecraft battery models

    NASA Technical Reports Server (NTRS)

    Timmerman, P.; Ratnakumar, B. V.; Distefano, S.

    1996-01-01

    The principles of Nickel-Cadmium and Nickel-Hydrogen spacecraft battery models are discussed. The Ni-Cd battery model includes a two-phase positive electrode, and its predictions are very close to actual data. However, the Ni-H2 battery model predictions (without the two-phase positive electrode) are unacceptable, even though the model is operational. Both models run on UNIX and Macintosh computers.

  15. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. 
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.

  16. A cross-national analysis of how economic inequality predicts biodiversity loss.

    PubMed

    Holland, Tim G; Peterson, Garry D; Gonzalez, Andrew

    2009-10-01

    We used socioeconomic models that included economic inequality to predict biodiversity loss, measured as the proportion of threatened plant and vertebrate species, across 50 countries. Our main goal was to evaluate whether economic inequality, measured as the Gini index of income distribution, improved the explanatory power of our statistical models. We compared four models that included the following: only population density, economic footprint (i.e., the size of the economy relative to the country area), economic footprint and income inequality (Gini index), and an index of environmental governance. We also tested the environmental Kuznets curve hypothesis, but it was not supported by the data. Statistical comparisons of the models revealed that the model including both economic footprint and inequality was the best predictor of threatened species. It significantly outperformed population density alone and the environmental governance model according to the Akaike information criterion. Inequality was a significant predictor of biodiversity loss and significantly improved the fit of our models. These results confirm that socioeconomic inequality is an important factor to consider when predicting rates of anthropogenic biodiversity loss.
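
    The AIC comparison of nested models with and without an inequality predictor can be sketched as follows. The data are simulated (50 "countries", matching the study's sample size), and the Gaussian AIC formula n*ln(RSS/n) + 2k is used; none of the coefficients reflect the study's estimates.

    ```python
    # Compare nested linear models by AIC: footprint-only vs footprint + Gini.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 50                                       # 50 countries, as in the study
    footprint = rng.normal(0, 1, n)              # standardized economic footprint
    gini = rng.normal(0, 1, n)                   # standardized income inequality
    threatened = 0.2 + 0.05 * footprint + 0.04 * gini + rng.normal(0, 0.02, n)

    def aic(X, y):
        """Gaussian AIC for an OLS fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        rss = np.sum((y - X1 @ beta) ** 2)
        k = X1.shape[1] + 1                      # coefficients + error variance
        return len(y) * np.log(rss / len(y)) + 2 * k

    aic_fp = aic(footprint.reshape(-1, 1), threatened)
    aic_fp_gini = aic(np.column_stack([footprint, gini]), threatened)
    print(f"AIC footprint-only: {aic_fp:.1f}, footprint+Gini: {aic_fp_gini:.1f}")
    ```

    A lower AIC for the footprint-plus-Gini model is the pattern the study reports: inequality improves fit enough to justify the extra parameter.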

  17. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
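
    The compensation mechanism can be illustrated with a toy linear "model" y = Jp that has more parameters than observations. All numbers below are invented: fixing a parameter at a wrong prior value forces an adjustable, prediction-sensitive parameter to absorb the misfit, inflating the prediction error relative to the all-adjustable minimum-norm solution.

    ```python
    # Toy illustration of predictive error from fixing parameters.
    import numpy as np

    J = np.array([[1.0, 1.0, 0.0]])            # one observation, three parameters
    p_true = np.array([2.0, 1.0, 0.5])
    obs = J @ p_true                           # perfect "history" data
    s = np.array([0.0, 1.0, 3.0])              # sensitivity of the prediction to p
    prior = np.array([0.0, 0.0, 0.5])          # prior parameter values

    # All parameters adjustable: minimum-norm update of the prior that fits obs;
    # the null-space component (p3) stays at its prior value.
    p_all = prior + np.linalg.pinv(J) @ (obs - J @ prior)
    err_all = abs(s @ p_all - s @ p_true)

    # p1 fixed at its (wrong) prior of 0: p2 alone must absorb the data misfit.
    p_fixed = np.array([0.0, obs[0], 0.5])
    err_fixed = abs(s @ p_fixed - s @ p_true)

    print(f"prediction error, all adjustable: {err_all:.2f}")
    print(f"prediction error, p1 fixed:       {err_fixed:.2f}")
    ```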

  18. Temporal and geographical external validation study and extension of the Mayo Clinic prediction model to predict eGFR in the younger population of Swiss ADPKD patients.

    PubMed

    Girardat-Rotar, Laura; Braun, Julia; Puhan, Milo A; Abraham, Alison G; Serra, Andreas L

    2017-07-17

    Prediction models in autosomal dominant polycystic kidney disease (ADPKD) are useful in clinical settings to identify patients with greater risk of rapid disease progression, in whom a treatment may have more benefits than harms. Mayo Clinic investigators developed a risk prediction tool for ADPKD patients using a single kidney volume value. Our aim was to perform an independent geographical and temporal external validation, as well as to evaluate the potential for improving the predictive performance by including additional information on total kidney volume. We used data from the on-going Swiss ADPKD study from 2006 to 2016. The main analysis included a sample size of 214 patients with typical ADPKD (Class 1). We evaluated the Mayo Clinic model's calibration and discrimination in our external sample and assessed whether predictive performance could be improved through the addition of subsequent kidney volume measurements beyond the baseline assessment. The calibration of both versions of the Mayo Clinic prediction model, using continuous height-adjusted total kidney volume (HtTKV) and using risk subclasses, was good, with R2 values of 78% and 70%, respectively. Accuracy was also good, with 91.5% and 88.7% of predictions within 30% of the observed values, respectively. Additional information regarding kidney volume did not substantially improve the model performance. The Mayo Clinic prediction models are generalizable to other clinical settings and provide an accurate tool based on available predictors to identify patients at high risk for rapid disease progression.

  19. Predicting risk for portal vein thrombosis in acute pancreatitis patients: A comparison of radical basis function artificial neural network and logistic regression models.

    PubMed

    Fei, Yang; Hu, Jian; Gao, Kun; Tu, Jianfeng; Li, Wei-Qin; Wang, Wei

    2017-06-01

    To construct a radical basis function (RBF) artificial neural network (ANN) model to predict the incidence of acute pancreatitis (AP)-induced portal vein thrombosis. The analysis included 353 patients with AP who had been admitted between January 2011 and December 2015. An RBF ANN model and a logistic regression model were each constructed based on eleven factors relevant to AP. Statistical indexes were used to evaluate the predictive value of the two models. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the RBF ANN model for PVT were 73.3%, 91.4%, 68.8%, 93.0% and 87.7%, respectively. There were significant differences between the RBF ANN and logistic regression models in these parameters (P<0.05). In addition, a comparison of the areas under the receiver operating characteristic curves of the two models showed a statistically significant difference (P<0.05). The RBF ANN model is better able than the logistic regression model to predict the occurrence of PVT induced by AP. D-dimer, AMY, Hct and PT were important predictive factors for AP-induced PVT. Copyright © 2017 Elsevier Inc. All rights reserved.
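
    The reported metrics all derive from a 2x2 confusion matrix. The counts below are hypothetical (chosen only so they total 353 patients, like the study cohort); they are not the study's actual classification results.

    ```python
    # Sensitivity, specificity, PPV, NPV and accuracy from a confusion matrix.
    tp, fn, fp, tn = 33, 12, 15, 293   # hypothetical PVT classification counts

    sensitivity = tp / (tp + fn)       # true positives among actual positives
    specificity = tn / (tn + fp)       # true negatives among actual negatives
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, "
          f"PPV {ppv:.1%}, NPV {npv:.1%}, acc {accuracy:.1%}")
    ```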

  20. Prediction of Airfoil Characteristics With Higher Order Turbulence Models

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.

    1996-01-01

    This study focuses on the prediction of airfoil characteristics, including lift and drag, over a range of Reynolds numbers. Two turbulence models, representing two different model classes, are tested. The first is a standard isotropic eddy-viscosity two-equation model, and the second is an explicit algebraic stress model (EASM). The turbulent flow field over a general-aviation airfoil (GA(W)-2) at three Reynolds numbers is studied. At each Reynolds number, predicted lift and drag values at different angles of attack are compared with experimental results, and predicted variations of stall locations with Reynolds number are compared with experimental data. Finally, the size of the separation zone predicted by each model is analyzed and correlated with the behavior of the lift coefficient near stall. In summary, the EASM is able to predict the lift and drag coefficients over a wider range of angles of attack than the two-equation model for the three Reynolds numbers studied. However, both models are unable to predict the correct lift and drag behavior near the stall angle, and for the lowest Reynolds number case, the two-equation model did not predict separation on the airfoil near stall.

  1. One-month validation of the Space Weather Modeling Framework geospace model

    NASA Astrophysics Data System (ADS)

    Haiducek, J. D.; Welling, D. T.; Ganushkina, N. Y.; Morley, S.; Ozturk, D. S.

    2017-12-01

    The Space Weather Modeling Framework (SWMF) geospace model consists of a magnetohydrodynamic (MHD) simulation coupled to an inner magnetosphere model and an ionosphere model. This provides a predictive capability for magnetospheric dynamics, including ground-based and space-based magnetic fields, geomagnetic indices, currents and densities throughout the magnetosphere, cross-polar cap potential, and magnetopause and bow shock locations. The only inputs are solar wind parameters and F10.7 radio flux. We have conducted a rigorous validation effort consisting of a continuous simulation covering the month of January 2005 using three different model configurations. This provides a relatively large dataset for assessment of the model's predictive capabilities. We find that the model does an excellent job of predicting the Sym-H index, and performs well at predicting Kp and CPCP during active times. Dayside magnetopause and bow shock positions are also well predicted. The model tends to over-predict Kp and CPCP during quiet times and under-predicts the magnitude of AL during disturbances. The model under-predicts the magnitude of night-side geosynchronous Bz, and over-predicts the radial distance to the flank magnetopause and bow shock. This suggests that the model over-predicts stretching of the magnetotail and the overall size of the magnetotail. With the exception of the AL index and the nightside geosynchronous magnetic field, we find the results to be insensitive to grid resolution.

  2. Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models.

    PubMed

    Ramírez-Albores, Jorge E; Bustamante, Ramiro O; Badano, Ernesto I

    2016-01-01

    Climatic niche models for invasive plants are usually constructed with occurrence records taken from literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data of naturally established individuals, particularly with occurrence records of juvenile plants, because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages. 
These results support the proposal that climatic niches of invasive plants should be modelled with data of naturally established individuals because this improves the accuracy of predictions about their distribution ranges.
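
    The core idea of restricting predictions to the regeneration niche can be sketched as a minimal "climate envelope": predict presence only where climate lies within the range observed for naturally established juveniles. Real niche models (e.g., MaxEnt) are far richer; all values below are invented.

    ```python
    # Minimal climate-envelope sketch built from juvenile occurrence records.
    import numpy as np

    rng = np.random.default_rng(6)
    # Climate at occurrence points of naturally established juvenile plants.
    juvenile_temp = rng.normal(18, 2, 60)      # mean annual temperature, degC
    juvenile_rain = rng.normal(600, 100, 60)   # annual rainfall, mm

    t_lo, t_hi = juvenile_temp.min(), juvenile_temp.max()
    r_lo, r_hi = juvenile_rain.min(), juvenile_rain.max()

    def predict_presence(temp, rain):
        """Inside the regeneration envelope -> predicted climatically suitable."""
        return bool((t_lo <= temp <= t_hi) and (r_lo <= rain <= r_hi))

    print(predict_presence(18.0, 620.0))   # within the juvenile envelope
    print(predict_presence(28.0, 620.0))   # hotter than any juvenile record
    ```

    A generalized model trained on all records (including planted adults) would draw a wider envelope, which is exactly the overprediction the study describes.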

  3. Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models

    PubMed Central

    Ramírez-Albores, Jorge E.; Bustamante, Ramiro O.

    2016-01-01

    Climatic niche models for invasive plants are usually constructed with occurrence records taken from literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data of naturally established individuals, particularly with occurrence records of juvenile plants, because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located far beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages.
These results support the proposal that climatic niches of invasive plants should be modelled with data of naturally established individuals because this improves the accuracy of predictions about their distribution ranges. PMID:27195983

  4. Current status and future needs of the BehavePlus Fire Modeling System

    Treesearch

    Patricia L. Andrews

    2014-01-01

    The BehavePlus Fire Modeling System is among the most widely used systems for wildland fire prediction. It is designed for use in a range of tasks including wildfire behaviour prediction, prescribed fire planning, fire investigation, fuel hazard assessment, fire model understanding, communication and research. BehavePlus is based on mathematical models for fire...

  5. Supplement to The User's Guide for The Stand Prognosis Model-version 5.0

    Treesearch

    William R. Wykoff

    1986-01-01

    Differences between Prognosis Model versions 4.0 and 5.0 are described. Additions to version 5.0 include an event monitor that schedules activities contingent on stand characteristics, a regeneration establishment model that predicts the structure of the regeneration stand following treatment, and a COVER model that predicts shrub development and total canopy cover....

  6. Bellows flow-induced vibrations

    NASA Technical Reports Server (NTRS)

    Tygielski, P. J.; Smyly, H. M.; Gerlach, C. R.

    1983-01-01

    The bellows flow excitation mechanism and the results of a comprehensive test program are summarized. The analytical model for predicting bellows flow-induced stress is refined. The model includes the effects of an upstream elbow, arbitrary geometry, and multiple plies. A refined computer code for predicting flow-induced stress is described which allows life prediction if a material S-N diagram is available.
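The life-prediction step mentioned above needs a material S-N relation. One common choice (not necessarily the one used in this code) is Basquin's law, sigma_a = sigma_f * (2N)**b; inverting it gives an allowable cycle count N for a predicted flow-induced stress amplitude. A minimal sketch, with illustrative material constants that do not correspond to any particular bellows alloy:

```python
# Hedged sketch: invert Basquin's S-N relation sigma_a = sigma_f * (2N)**b
# to estimate fatigue life N from a predicted stress amplitude. The
# constants sigma_f and b are placeholders, not real material data.

def cycles_to_failure(stress_amp_mpa, sigma_f=900.0, b=-0.1):
    """Fatigue life N (cycles) from Basquin's relation, solved for N."""
    # (2N)**b = sigma_a / sigma_f  =>  N = 0.5 * (sigma_a / sigma_f)**(1/b)
    return 0.5 * (stress_amp_mpa / sigma_f) ** (1.0 / b)
```

Because b is negative, a lower predicted stress amplitude yields a longer predicted life, as expected for an S-N curve.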

  7. Can We Predict Individual Combined Benefit and Harm of Therapy? Warfarin Therapy for Atrial Fibrillation as a Test Case

    PubMed Central

    Li, Guowei; Thabane, Lehana; Delate, Thomas; Witt, Daniel M.; Levine, Mitchell A. H.; Cheng, Ji; Holbrook, Anne

    2016-01-01

    Objectives To construct and validate a prediction model for individual combined benefit and harm outcomes (stroke with no major bleeding, major bleeding with no stroke, neither event, or both) in patients with atrial fibrillation (AF) with and without warfarin therapy. Methods Using the Kaiser Permanente Colorado databases, we included patients newly diagnosed with AF between January 1, 2005 and December 31, 2012 for model construction and validation. The primary outcome was a prediction model of the composite of stroke or major bleeding using polytomous logistic regression (PLR) modelling. The secondary outcome was a prediction model of all-cause mortality using Cox regression modelling. Results We included 9074 patients, comprising 4537 warfarin users and 4537 non-users. In the derivation cohort (n = 4632), there were 136 strokes (2.94%), 280 major bleeding events (6.04%) and 1194 deaths (25.78%). In the prediction models, warfarin use was not significantly associated with risk of stroke, but increased the risk of major bleeding and decreased the risk of death. Both the PLR and Cox models were robust, internally and externally validated, and with acceptable performance. Conclusions In this study, we introduce a new methodology for predicting individual combined benefit and harm outcomes associated with warfarin therapy for patients with AF. Should this approach be validated in other patient populations, it has potential advantages over existing risk stratification approaches as a patient-physician aid for shared decision-making. PMID:27513986
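The polytomous (multinomial) logistic regression described in this record can be sketched as follows. The coefficient values, predictors, and category names below are invented for illustration only; they are not the study's fitted estimates.

```python
import math

# Hedged sketch of a polytomous logistic model over four combined
# benefit-harm outcomes, with "neither event" as the reference category.
# Predictors here are just age (in decades) and warfarin use (0/1);
# every number is a made-up placeholder.
COEFS = {
    "stroke_only": {"intercept": -4.0, "age": 0.30, "warfarin": 0.0},
    "bleed_only":  {"intercept": -3.5, "age": 0.25, "warfarin": 0.8},
    "both_events": {"intercept": -6.0, "age": 0.40, "warfarin": 0.5},
}

def predict_probs(age_decades, warfarin):
    """Probability of each combined benefit-harm category for one patient."""
    scores = {"neither": 0.0}  # reference category has linear predictor 0
    for cat, b in COEFS.items():
        scores[cat] = (b["intercept"] + b["age"] * age_decades
                       + b["warfarin"] * warfarin)
    denom = sum(math.exp(s) for s in scores.values())
    return {cat: math.exp(s) / denom for cat, s in scores.items()}

probs = predict_probs(age_decades=7.5, warfarin=1)
```

With these illustrative coefficients, warfarin use raises the predicted probability of major bleeding, mirroring the direction of effect reported in the abstract.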

  8. Including non-additive genetic effects in Bayesian methods for the prediction of genetic values based on genome-wide markers

    PubMed Central

    2011-01-01

    Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. 
In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519

  9. Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V., Dr.; Burley, Casey L.

    2011-01-01

    The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.

  10. Explicit Pore Pressure Material Model in Carbon-Cloth Phenolic

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; Ehle, Curt

    2003-01-01

    An explicit material model that uses predicted pressure in the pores of a carbon-cloth phenolic (CCP) composite has been developed. This model is intended to be used within a finite-element model to predict phenomena specific to CCP components of solid-fuel-rocket nozzles subjected to high operating temperatures and to mechanical stresses that can be great enough to cause structural failures. Phenomena that can be predicted with the help of this model include failures of specimens in restrained-thermal-growth (RTG) tests, pocketing erosion, and ply lifting.

  11. Predicting Grizzly Bear Density in Western North America

    PubMed Central

    Mowat, Garth; Heard, Douglas C.; Schwarz, Carl J.

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend. PMID:24367552

  12. Predicting grizzly bear density in western North America.

    PubMed

    Mowat, Garth; Heard, Douglas C; Schwarz, Carl J

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend.
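The core idea in the two records above, regressing measured density on landscape covariates and extrapolating to unsurveyed areas, can be sketched with a single illustrative covariate. The data, covariate name, and fitted values below are invented; the published models use several predictors per ecosystem.

```python
# Hedged sketch: fit density ~ a + b * productivity by ordinary least
# squares, then predict density for an unsurveyed area. All numbers are
# made up for illustration.

def fit_simple_ols(x, y):
    """Closed-form least-squares intercept and slope for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical training data: productivity index vs. bears per 1000 km^2.
prod = [0.2, 0.4, 0.5, 0.7, 0.9]
dens = [5.0, 11.0, 13.0, 19.0, 23.0]
a, b = fit_simple_ols(prod, dens)
predicted = a + b * 0.6  # density predicted for an unsurveyed area
```

As the abstract notes, such predictions are static: the fitted line says nothing about whether a population is increasing or declining.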

  13. A statistical model including age to predict passenger postures in the rear seats of automobiles.

    PubMed

    Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J

    2016-06-01

    Few statistical models of rear seat passenger posture have been published, and none has taken into account the effects of occupant age. This study developed new statistical models for predicting passenger postures in the rear seats of automobiles. Postures of 89 adults with a wide range of age and body size were measured in a laboratory mock-up in seven seat configurations. Posture-prediction models for female and male passengers were separately developed by stepwise regression using age, body dimensions, seat configurations and two-way interactions as potential predictors. Passenger posture was significantly associated with age, and the effects of other two-way interaction variables depended on age. A set of posture-prediction models is presented for women and men, and the prediction results are compared with previously published models. This study is the first study of passenger posture to include a large cohort of older passengers and the first to report a significant effect of age for adults. The presented models can be used to position computational and physical human models for vehicle design and assessment. Practitioner Summary: The significant effects of age, body dimensions and seat configuration on rear seat passenger posture were identified. The models can be used to accurately position computational human models or crash test dummies for older passengers in known rear seat configurations.
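The stepwise selection used in this record can be approximated by a greedy forward pass: rank candidate predictors by how much each reduces the residual sum of squares on its own, keeping only those with a positive gain. This is a simplification of true stepwise regression (which refits jointly and uses significance thresholds), and every variable name and data value below is invented.

```python
# Hedged sketch: score each candidate predictor of a posture coordinate by
# the reduction in residual sum of squares from a one-variable fit.
# Data and predictor names are hypothetical.

def sse_after_fit(x, y):
    """Residual sum of squares of a simple least-squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(x, y))
         / sum((u - mx) ** 2 for u in x))
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def forward_select(candidates, y):
    """Order predictors by single-variable explanatory gain, best first."""
    base = sum((yi - sum(y) / len(y)) ** 2 for yi in y)  # mean-only SSE
    scored = [(name, base - sse_after_fit(x, y))
              for name, x in candidates.items()]
    return [name for name, gain in sorted(scored, key=lambda t: -t[1])
            if gain > 0]

# Hypothetical measurements: age tracks the posture coordinate closely.
candidates = {"age": [20, 40, 60, 80], "stature": [170, 165, 180, 160]}
hip_x = [100.0, 110.0, 120.0, 130.0]
order = forward_select(candidates, hip_x)
```

In this toy data, age explains the posture coordinate almost perfectly, so it is selected first, echoing the abstract's finding that age is a significant predictor.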

  14. Torsion-inversion tunneling patterns in the CH-stretch vibrationally excited states of the G12 family of molecules including methylamine.

    PubMed

    Dawadi, Mahesh B; Bhatta, Ram S; Perry, David S

    2013-12-19

    Two torsion-inversion tunneling models (models I and II) are reported for the CH-stretch vibrationally excited states in the G12 family of molecules. The torsion and inversion tunneling parameters, h(2v) and h(3v), respectively, are combined with low-order coupling terms involving the CH-stretch vibrations. Model I is a group theoretical treatment starting from the symmetric rotor methyl CH-stretch vibrations; model II is an internal coordinate model including the local-local CH-stretch coupling. Each model yields predicted torsion-inversion tunneling patterns of the four symmetry species, A, B, E1, and E2, in the CH-stretch excited states. Although the predicted tunneling patterns for the symmetric CH-stretch excited state are the same as for the ground state, inverted tunneling patterns are predicted for the asymmetric CH-stretches. The qualitative tunneling patterns predicted are independent of the model type and of the particular coupling terms considered. In model I, the magnitudes of the tunneling splittings in the two asymmetric CH-stretch excited states are equal to half of that in the ground state, but in model II, they differ when the tunneling rate is fast. The model predictions are compared across the series of molecules methanol, methylamine, 2-methylmalonaldehyde, and 5-methyltropolone and to the available experimental data.

  15. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    PubMed

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. © 2015 American Heart Association, Inc.

  16. Modelling complex phenomena in optical fibres

    NASA Astrophysics Data System (ADS)

    Allington-Smith, Jeremy; Murray, Graham; Lemke, Ulrike

    2012-09-01

    We present a new model for predicting the performance of fibre systems in the multimode limit. This is based on ray-tracing but includes a semi-empirical description of Focal Ratio Degradation (FRD). We show how FRD is simulated by the model. With this ability, it can be used to investigate a wide variety of phenomena including scrambling and the loss of light close to the limiting numerical aperture. It can also be used to predict the performance of non-round and asymmetric fibres.

  17. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercepts of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with the GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
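The Gaussian kernel (GK) contrasted with the linear GB kernel in this record is commonly computed as K_ij = exp(-h * d_ij^2 / q), where d_ij^2 is the squared Euclidean distance between the marker vectors of lines i and j and q is a scaling constant (often the median of the nonzero pairwise distances). A minimal sketch with invented marker data and bandwidth h:

```python
import math

# Hedged sketch of a Gaussian kernel over marker genotypes. Marker codes
# and the bandwidth h are illustrative, not data from the study.

def gaussian_kernel(markers, h=1.0):
    """K_ij = exp(-h * d_ij^2 / q), q = median nonzero pairwise distance."""
    d2 = [[sum((a - b) ** 2 for a, b in zip(mi, mj)) for mj in markers]
          for mi in markers]
    flat = sorted(v for row in d2 for v in row if v > 0)
    q = flat[len(flat) // 2]  # median of the nonzero squared distances
    return [[math.exp(-h * v / q) for v in row] for row in d2]

# Three lines genotyped at four biallelic markers (coded 0/1/2).
M = [[0, 1, 2, 0], [0, 1, 2, 1], [2, 2, 0, 0]]
K = gaussian_kernel(M)
```

Each line has unit similarity with itself, and genetically closer lines get larger kernel entries; the resulting K plays the role of a covariance matrix in the mixed-model prediction.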

  18. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercepts of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with the GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. PMID:29476023

  19. Empirical models to predict the volumes of debris flows generated by recently burned basins in the western U.S.

    USGS Publications Warehouse

    Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.

    2008-01-01

    Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined by either measuring the volume of material eroded from the channels, or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature, to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. 
This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
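The shape of the empirical model described in this record, log volume as a linear function of steep basin area, burned area, and storm rainfall, can be sketched as follows. The coefficients and functional forms below are placeholders for illustration; they are not the fitted values published for the Western U.S. model.

```python
import math

# Hedged sketch of an empirical debris-flow volume model: ln(V) is a
# linear combination of basin and storm predictors, then exponentiated.
# All coefficients are invented.

def predict_volume_m3(steep_area_km2, burned_area_km2, storm_rain_mm,
                      coefs=(6.0, 0.6, 0.7, 0.02)):
    """Predicted debris-flow volume (m^3) from basin and storm predictors."""
    b0, b1, b2, b3 = coefs
    ln_v = (b0 + b1 * math.log(steep_area_km2)
            + b2 * math.sqrt(burned_area_km2)
            + b3 * storm_rain_mm)
    return math.exp(ln_v)
```

Because the model is fit in log space, prediction intervals derived from the residual standard error translate into multiplicative bounds on volume, which is how the abstract's "within two residual standard errors" validation check is applied.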

  20. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world

    PubMed Central

    McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey

    2012-01-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030

  1. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  2. Genomic models with genotype × environment interaction for predicting hybrid performance: an application in maize hybrids.

    PubMed

    Acosta-Pech, Rocío; Crossa, José; de Los Campos, Gustavo; Teyssèdre, Simon; Claustres, Bruno; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino

    2017-07-01

    A new genomic model that incorporates genotype × environment interaction gave increased prediction accuracy of untested hybrid response for traits such as percent starch content, percent dry matter content and silage yield of maize hybrids. The prediction of hybrid performance (HP) is very important in agricultural breeding programs. In plant breeding, multi-environment trials play an important role in the selection of important traits, such as stability across environments, grain yield and pest resistance. Environmental conditions modulate gene expression causing genotype × environment interaction (G × E), such that the estimated genetic correlations of the performance of individual lines across environments summarize the joint action of genes and environmental conditions. This article proposes a genomic statistical model that incorporates G × E for general and specific combining ability for predicting the performance of hybrids in environments. The proposed model can also be applied to any other hybrid species with distinct parental pools. In this study, we evaluated the predictive ability of two HP prediction models using a cross-validation approach applied in extensive maize hybrid data, comprising 2724 hybrids derived from 507 dent lines and 24 flint lines, which were evaluated for three traits in 58 environments over 12 years; analyses were performed for each year. On average, genomic models that include the interaction of general and specific combining ability with environments have greater predictive ability than genomic models without interaction with environments (ranging from 12 to 22%, depending on the trait). We concluded that including G × E in the prediction of untested maize hybrids increases the accuracy of genomic models.

  3. A Mechanism-Based Model for the Prediction of the Metabolic Sites of Steroids Mediated by Cytochrome P450 3A4.

    PubMed

    Dai, Zi-Ru; Ai, Chun-Zhi; Ge, Guang-Bo; He, Yu-Qi; Wu, Jing-Jing; Wang, Jia-Yue; Man, Hui-Zi; Jia, Yan; Yang, Ling

    2015-06-30

    Early prediction of xenobiotic metabolism is essential for drug discovery and development. As the most important human drug-metabolizing enzyme, cytochrome P450 3A4 has a large active cavity and metabolizes a broad spectrum of substrates. The poor substrate specificity of CYP3A4 makes it a huge challenge to predict the metabolic site(s) on its substrates. This study aimed to develop a mechanism-based prediction model based on two key parameters, the binding conformation and the reaction activity of ligands, which could reveal the process of the real metabolic reaction(s) and the site(s) of modification. The newly established model was applied to predict the metabolic site(s) of steroids, a class of CYP3A4-preferred substrates. 38 steroids and 12 non-steroids were randomly divided into training and test sets. Two major metabolic reactions, aliphatic hydroxylation and N-dealkylation, were involved in this study. At least one of the top three predicted metabolic sites was validated by the experimental data. The overall accuracies for the training and test sets were 82.14% and 86.36%, respectively. In summary, a mechanism-based prediction model was established for the first time, which could be used to predict the CYP3A4-mediated metabolic site(s) on steroids with high predictive accuracy.

  4. A computer program for predicting nonlinear uniaxial material responses using viscoplastic models

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Thompson, R. L.

    1984-01-01

    A computer program was developed for predicting nonlinear uniaxial material responses using viscoplastic constitutive models. Four specific models, i.e., those due to Miller, Walker, Krieg-Swearengen-Rhode, and Robinson, are included. Any other unified model is easily implemented into the program in the form of subroutines. Analysis features include stress-strain cycling, creep response, stress relaxation, thermomechanical fatigue loop, or any combination of these responses. An outline is given on the theoretical background of uniaxial constitutive models, analysis procedure, and numerical integration methods for solving the nonlinear constitutive equations. In addition, a discussion on the computer program implementation is also given. Finally, seven numerical examples are included to demonstrate the versatility of the computer program developed.

  5. Recalibration of the Shear Stress Transport Model to Improve Calculation of Shock Separated Flows

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.

    2013-01-01

    The Menter Shear Stress Transport (SST) k-ω turbulence model is one of the most widely used two-equation Reynolds-averaged Navier-Stokes turbulence models for aerodynamic analyses. The model extends Menter's baseline (BSL) model to include a limiter that prevents the calculated turbulent shear stress from exceeding a prescribed fraction of the turbulent kinetic energy via a proportionality constant, a1, set to 0.31. Compared to other turbulence models, the SST model yields superior predictions of mild adverse pressure gradient flows, including those with small separations. In shock-boundary layer interaction regions, the SST model produces separations that are too large, while the BSL model is on the other extreme, predicting separations that are too small. In this paper, changing a1 to a value near 0.355 is shown to significantly improve predictions of shock separated flows. Several cases are examined computationally, and experimental data are also considered to justify raising the value of a1 used for shock separated flows.
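    In Menter's standard formulation, the a1 limiter described above enters through the eddy viscosity; a minimal sketch (S is the strain-rate magnitude and F2 the blending function):

```python
def sst_eddy_viscosity(k, omega, strain_rate, F2, a1=0.31):
    """Menter SST eddy viscosity with the shear-stress limiter.

    k: turbulent kinetic energy, omega: specific dissipation rate,
    strain_rate: strain-rate magnitude S, F2: blending function (0..1).
    When a1*omega dominates, this reduces to the usual nu_t = k/omega;
    otherwise the limiter caps the turbulent shear stress near a1*k.
    """
    return a1 * k / max(a1 * omega, strain_rate * F2)
```

    Raising a1 toward 0.355, as the paper proposes, weakens the limiter, permitting more turbulent shear stress in the interaction region and hence smaller predicted separations.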

  6. Accounting for dominance to improve genomic evaluations of dairy cows for fertility and milk production traits.

    PubMed

    Aliloo, Hassan; Pryce, Jennie E; González-Recio, Oscar; Cocks, Benjamin G; Hayes, Ben J

    2016-02-01

    Dominance effects may contribute to genetic variation of complex traits in dairy cattle, especially for traits closely related to fitness such as fertility. However, traditional genetic evaluations generally ignore dominance effects and consider additive genetic effects only. Availability of dense single nucleotide polymorphisms (SNPs) panels provides the opportunity to investigate the role of dominance in quantitative variation of complex traits at both the SNP and animal levels. Including dominance effects in the genomic evaluation of animals could also help to increase the accuracy of prediction of future phenotypes. In this study, we estimated additive and dominance variance components for fertility and milk production traits of genotyped Holstein and Jersey cows in Australia. The predictive abilities of a model that accounts for additive effects only (additive), and a model that accounts for both additive and dominance effects (additive + dominance) were compared in a fivefold cross-validation. Estimates of the proportion of dominance variation relative to phenotypic variation that is captured by SNPs, for production traits, were up to 3.8 and 7.1 % in Holstein and Jersey cows, respectively, whereas, for fertility, they were equal to 1.2 % in Holstein and very close to zero in Jersey cows. We found that including dominance in the model was not consistently advantageous. Based on maximum likelihood ratio tests, the additive + dominance model fitted the data better than the additive model, for milk, fat and protein yields in both breeds. However, regarding the prediction of phenotypes assessed with fivefold cross-validation, including dominance effects in the model improved accuracy only for fat yield in Holstein cows. Regression coefficients of phenotypes on genetic values and mean squared errors of predictions showed that the predictive ability of the additive + dominance model was superior to that of the additive model for some of the traits. 
In both breeds, dominance effects were significant (P < 0.01) for all milk production traits but not for fertility. Accuracy of prediction of phenotypes was slightly increased by including dominance effects in the genomic evaluation model. Thus, it can help to better identify highly performing individuals and be useful for culling decisions.

  7. Particle-Surface Interaction Model and Method of Determining Particle-Surface Interactions

    NASA Technical Reports Server (NTRS)

    Hughes, David W. (Inventor)

    2012-01-01

    A method and model for predicting the interactions of particles with a surface, such as the surface of a spacecraft. The method includes the steps of: determining a trajectory path of a plurality of moving particles; predicting whether any of the moving particles will intersect a surface; predicting whether any of the particles will be captured by the surface; and/or predicting a reflected trajectory and velocity of particles reflected from the surface.

  8. Predicting concrete corrosion of sewers using artificial neural network.

    PubMed

    Jiang, Guangming; Keller, Jurg; Bond, Philip L; Yuan, Zhiguo

    2016-04-01

    Corrosion is often a major failure mechanism for concrete sewers, and under such circumstances the sewer service life is largely determined by the progression of microbially induced concrete corrosion. The modelling of sewer processes has become possible due to the improved understanding of in-sewer transformations. Recent systematic studies of the correlation between corrosion processes and sewer environment factors should be utilized to improve the capability of sewer models to predict service life. This paper presents an artificial neural network (ANN)-based approach for modelling the concrete corrosion processes in sewers. The approach included predicting the time for the corrosion to initiate and then predicting the corrosion rate after the initiation period. The ANN model was trained and validated with long-term (4.5 years) corrosion data obtained in laboratory corrosion chambers, and further verified with field measurements in real sewers across Australia. The trained model estimated the corrosion initiation time and corrosion rates to be very close to those measured in Australian sewers. The ANN model performed better than a multiple regression model also developed on the same dataset. Additionally, the ANN model can serve as a prediction framework for sewer service life, which can be progressively improved and expanded by including corrosion rates measured in different sewer conditions. Furthermore, the proposed methodology holds promise to facilitate the construction of analytical models associated with corrosion processes of concrete sewers. Copyright © 2016 Elsevier Ltd. All rights reserved.
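    For illustration, the forward pass of a one-hidden-layer network of the kind used for such corrosion predictions can be sketched as follows (layer sizes, inputs and weights are hypothetical):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward network with a regression output.

    x: (n_features,) input vector, e.g. gas-phase H2S level, relative
    humidity and temperature for a sewer location.
    Returns a scalar prediction such as a corrosion rate.
    """
    h = np.tanh(x @ W1 + b1)   # hidden layer with tanh activation
    return float(h @ W2 + b2)  # linear output unit
```

    Training would fit W1, b1, W2, b2 to the laboratory corrosion data by minimising prediction error; the sketch shows only the prediction step.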

  9. In-hospital risk prediction for post-stroke depression: development and validation of the Post-stroke Depression Prediction Scale.

    PubMed

    de Man-van Ginkel, Janneke M; Hafsteinsdóttir, Thóra B; Lindeman, Eline; Ettema, Roelof G A; Grobbee, Diederick E; Schuurmans, Marieke J

    2013-09-01

    The timely detection of post-stroke depression is complicated by a decreasing length of hospital stay. Therefore, the Post-stroke Depression Prediction Scale was developed and validated. The Post-stroke Depression Prediction Scale is a clinical prediction model for the early identification of stroke patients at increased risk for post-stroke depression. The study included 410 consecutive stroke patients who were able to communicate adequately. Predictors were collected within the first week after stroke. Between 6 and 8 weeks after stroke, major depressive disorder was diagnosed using the Composite International Diagnostic Interview. Multivariable logistic regression models were fitted. A bootstrap-backward selection process resulted in a reduced model. Performance of the model was expressed by discrimination, calibration, and accuracy. The model included a medical history of depression or other psychiatric disorders, hypertension, angina pectoris, and the Barthel Index item dressing. The model had acceptable discrimination, based on an area under the receiver operating characteristic curve of 0.78 (0.72-0.85), and calibration (P value of the U-statistic, 0.96). Transforming the model to an easy-to-use risk-assessment table, the lowest risk category (sum score, <-10) showed a 2% risk of depression, which increased to 82% in the highest category (sum score, >21). The clinical prediction model enables clinicians to estimate the degree of depression risk for an individual patient within the first week after stroke.
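    The discrimination statistic reported above (an area under the ROC curve of 0.78) can be computed directly from predicted risks and observed outcomes; a self-contained sketch using the Mann-Whitney formulation:

```python
def auroc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Counts, over all (positive, negative) outcome pairs, how often the
    positive case received the higher predicted risk (ties count 1/2).
    """
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    A value of 1.0 means perfect separation of depressed from non-depressed patients, while 0.5 is chance-level discrimination.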

  10. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straight-forward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Estimating the fates of organic contaminants in an aquifer using QSAR.

    PubMed

    Lim, Seung Joo; Fox, Peter

    2013-01-01

    The quantitative structure activity relationship (QSAR) model, BIOWIN, was modified to more accurately estimate the fates of organic contaminants in an aquifer. The predictions from BIOWIN were modified to include oxidation and sorption effects, so that the predictive model included the effects of sorption, biodegradation, and oxidation. A total of 35 organic compounds were used to validate the predictive model. The majority of the ratios of predicted half-life to measured half-life were within a factor of 2, and no ratio values were greater than a factor of 5. In addition, estimates of the persistence of organic compounds in the sub-surface were more accurate when predictions were modified by the relative fraction adsorbed to the solid phase, 1/Rf, than when modified by the remaining fraction of a given compound adsorbed to a solid, 1 - fs.

  12. Toward an Integrative Model of Creativity and Personality: Theoretical Suggestions and Preliminary Empirical Testing

    ERIC Educational Resources Information Center

    Fürst, Guillaume; Ghisletta, Paolo; Lubart, Todd

    2016-01-01

    The present work proposes an integrative model of creativity that includes personality traits and cognitive processes. This model hypothesizes that three high-order personality factors predict two main process factors, which in turn predict intensity and achievement of creative activities. The personality factors are: "Plasticity" (high…

  13. A NONSTEADY-STATE ANALYTICAL MODEL TO PREDICT GASEOUS EMISSIONS OF VOLATILE ORGANIC COMPOUNDS FROM LANDFILLS. (R825689C072)

    EPA Science Inventory

    Abstract

    A general mathematical model is developed to predict emissions of volatile organic compounds (VOCs) from hazardous or sanitary landfills. The model is analytical in nature and includes important mechanisms occurring in unsaturated subsurface landfill environme...

  14. Fuel consumption models for pine flatwoods fuel types in the southeastern United States

    Treesearch

    Clinton S. Wright

    2013-01-01

    Modeling fire effects, including terrestrial and atmospheric carbon fluxes and pollutant emissions during wildland fires, requires accurate predictions of fuel consumption. Empirical models were developed for predicting fuel consumption from fuel and environmental measurements on a series of operational prescribed fires in pine flatwoods ecosystems in the southeastern...

  15. Inhibition by ultraviolet and photosynthetically available radiation lowers model estimates of depth-integrated picophytoplankton photosynthesis: global predictions for Prochlorococcus and Synechococcus.

    PubMed

    Neale, Patrick J; Thomas, Brian C

    2017-01-01

    Phytoplankton photosynthesis is often inhibited by ultraviolet (UV) and intense photosynthetically available radiation (PAR), but the effects on ocean productivity have received little consideration aside from polar areas subject to periodic enhanced UV-B due to depletion of stratospheric ozone. A more comprehensive assessment is important for understanding the contribution of phytoplankton production to the global carbon budget, present and future. Here, we consider responses in the temperate and tropical mid-ocean regions typically dominated by picophytoplankton, including the prokaryotic lineages Prochlorococcus and Synechococcus. Spectral models of photosynthetic response for each lineage were constructed using model strains cultured at different growth irradiances and temperatures. In the model, inhibition becomes more severe once exposure exceeds a threshold (Emax) related to repair capacity. Model parameters are presented for Prochlorococcus, adding to those previously presented for Synechococcus. The models were applied to estimate midday water-column photosynthesis based on an atmospheric model of spectral radiation, satellite-derived spectral water transparency and temperature. Based on a global survey of inhibitory exposure severity, a full-latitude section of the mid-Pacific and near-equatorial region of the east Pacific were identified as representative regions for prediction of responses over the entire water column. Comparing predictions integrated over the water column including versus excluding inhibition, production was 7-28% lower due to inhibition, depending on strain and site conditions. Inhibition was consistently greater for Prochlorococcus compared to two strains of Synechococcus. Considering only the surface mixed layer, production was inhibited 7-73%. On average, including inhibition lowered estimates of midday productivity around 20% for the modeled region of the Pacific, with UV accounting for two-thirds of the reduction.
In contrast, most other productivity models either ignore inhibition or only include PAR inhibition. Incorporation of Emax model responses into an existing spectral model of depth-integrated, daily production will enable efficient global predictions of picophytoplankton productivity including inhibition. © 2016 John Wiley & Sons Ltd.

  16. Jet Noise Modeling for Supersonic Business Jet Application

    NASA Technical Reports Server (NTRS)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.

    2004-01-01

    This document describes the development of an improved predictive model for coannular jet noise, including noise suppression modifications applicable to small supersonic-cruise aircraft such as the Supersonic Business Jet (SBJ), for NASA Langley Research Center (LaRC). For such aircraft a wide range of propulsion and integration options are under consideration. Thus there is a need for very versatile design tools, including a noise prediction model. The approach used is similar to that used with great success by the Modern Technologies Corporation (MTC) in developing a noise prediction model for two-dimensional mixer ejector (2DME) nozzles under the High Speed Research Program and in developing a more recent model for coannular nozzles over a wide range of conditions. If highly suppressed configurations are ultimately required, the 2DME model is expected to provide reasonable prediction for these smaller scales, although this has not been demonstrated. It is considered likely that more modest suppression approaches, such as dual stream nozzles featuring chevron or chute suppressors, perhaps in conjunction with inverted velocity profiles (IVP), will be sufficient for the SBJ.

  17. Comparison of in silico models for prediction of mutagenicity.

    PubMed

    Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas

    2013-01-01

    Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite; ADMET (Absorption, Distribution, Metabolism, Elimination, and Toxicity) Predictor; Derek; Toxicity Estimation Software Tool (T.E.S.T.); TOxicity Prediction by Komputer Assisted Technology (TOPKAT); Toxtree; CAESAR; and SARpy (SAR in python). In general, the results showed a high level of performance. To obtain a realistic estimate of predictive ability, the results for chemicals inside and outside the training set of each model were considered. The effect of applicability domain tools (when available) on prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.

  18. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
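    The core of BMA is a set of member weights estimated from training data; a minimal EM sketch, assuming Gaussian member densities with a fixed spread (the full method also estimates the spread):

```python
import numpy as np

def bma_weights(preds, obs, sigma=1.0, iters=200):
    """EM estimate of BMA member weights with fixed Gaussian spread.

    preds: (K, N) forecasts from K ensemble members at N training times.
    obs:   (N,) observed water stages at the same times.
    """
    K, _ = preds.shape
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        like = np.exp(-0.5 * ((obs - preds) / sigma) ** 2)  # (K, N)
        resp = w[:, None] * like
        resp /= resp.sum(axis=0, keepdims=True)  # E-step: responsibilities
        w = resp.mean(axis=1)                    # M-step: update weights
    return w

def bma_mean(preds, w):
    """Deterministic BMA forecast: weighted ensemble mean."""
    return w @ preds
```

    Members that track the observations closely receive larger weights, so the weighted mean down-weights poorly configured ensemble members rather than averaging them equally.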

  19. A prediction model for periodontal disease: modelling and validation from a National Survey of 4061 Taiwanese adults.

    PubMed

    Lai, Hongmin; Su, Chiu-Wen; Yen, Amy Ming-Fang; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Wu, Wendy Yi-Ying; Chuang, Shu-Lin; Liu, Hsing-Chih; Chen, Hsiu-Hsi; Chen, Li-Sheng

    2015-05-01

    The aim of this study was to predict periodontal disease (PD) with demographical features, oral health behaviour, and clinical correlates based on a national survey of periodontal disease in Taiwan. A total of 4061 subjects who were enrolled in a cross-sectional nationwide survey on periodontal conditions of residents aged 18 years or older in Taiwan between 2007 and 2008 were included. The community periodontal index (CPI) was used to measure the periodontal status at the subject and sextant levels. Information on demographical features and other relevant predictive factors for PD was collected using a questionnaire. In our study population, 56.2% of subjects had CPI grades ≥3. Periodontitis, as defined by CPI ≥3, was best predicted by a model including age, gender, education, brushing frequency, mobile teeth, gingival bleeding, smoking, and BMI. The area under the curve (AUC) for the final prediction model was 0.712 (0.690-0.734). The AUC was 0.702 (0.665-0.740) according to cross-validation. A prediction model for PD using information obtained from questionnaires was developed. The feasibility of its application to risk stratification of PD should be considered with regard to community-based screening for asymptomatic PD. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Mathematical model to predict drivers' reaction speeds.

    PubMed

    Long, Benjamin L; Gillespie, A Isabella; Tanaka, Martin L

    2012-02-01

    Mental distractions and physical impairments can increase the risk of accidents by affecting a driver's ability to control the vehicle. In this article, we developed a linear mathematical model that can be used to quantitatively predict drivers' performance over a variety of possible driving conditions. Predictions were not limited to the conditions tested, but also included linear combinations of these test conditions. Two groups of 12 participants were evaluated using a custom drivers' reaction speed testing device to evaluate the effect of cell phone talking, texting, and a fixed knee brace on the components of drivers' reaction speed. Cognitive reaction time was found to increase by 24% for cell phone talking and 74% for texting. The fixed knee brace increased musculoskeletal reaction time by 24%. These experimental data were used to develop a mathematical model to predict reaction speed for an untested condition: talking on a cell phone with a fixed knee brace. The model was verified by comparing the predicted reaction speed to measured experimental values from an independent test. The model predicted full braking time within 3% of the measured value. Although only a few influential conditions were evaluated, we present a general approach that can be expanded to include other types of distractions, impairments, and environmental conditions.
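    The linear structure described above can be sketched with the reported effect sizes; the component baselines here are hypothetical placeholders:

```python
def reaction_time(base_cognitive, base_musculoskeletal,
                  cognitive_factor=1.0, musculoskeletal_factor=1.0):
    """Linear (additive) reaction-speed model.

    Total reaction time is the sum of a cognitive component and a
    musculoskeletal component, each scaled by its condition factor
    (e.g. 1.24 for cell-phone talking, 1.74 for texting, and 1.24 for
    a fixed knee brace, per the effects reported above).
    """
    return (base_cognitive * cognitive_factor
            + base_musculoskeletal * musculoskeletal_factor)
```

    The untested talking-plus-brace condition is then predicted by applying both factors at once, e.g. `reaction_time(c, m, 1.24, 1.24)` for baselines c and m.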

  1. Toward Process-resolving Synthesis and Prediction of Arctic Climate Change Using the Regional Arctic System Model

    NASA Astrophysics Data System (ADS)

    Maslowski, W.

    2017-12-01

    The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at process scale and to improve prediction of its change at a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, and equally important, objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research is in direct support of the US environmental assessment and prediction needs, including those of the U.S. Navy, Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models together with available observations in the region to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations, high model resolution, and the sensitivity of simulated sea ice states to scale-dependent model parameterizations controlling ice dynamics, thermodynamics and coupling with the atmosphere and ocean.

  2. CFD validation experiments at McDonnell Aircraft Company

    NASA Technical Reports Server (NTRS)

    Verhoff, August

    1987-01-01

    Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at McDonnell Aircraft Company. Topics covered include a high speed research model, a supersonic persistence fighter model, a generic fighter wing model, surface grids, force and moment predictions, surface pressure predictions, forebody models with 65 degree clipped delta wings, and the low aspect ratio wing/body experiment.

  3. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The survey is tutorial in nature, describing the features of the various models in a systematic manner.

  4. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
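    The model-free reward prediction error referred to above has a compact temporal-difference form; a minimal sketch:

```python
def td_error(V, s, r, s_next, gamma=0.9):
    """Model-free reward prediction error (TD error).

    V: dict of state values; s -> s_next with reward r.
    Positive delta means the outcome was better than expected.
    """
    return r + gamma * V[s_next] - V[s]

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Move the value of s a fraction alpha toward the TD target."""
    delta = td_error(V, s, r, s_next, gamma)
    V[s] += alpha * delta
    return delta
```

    A model-based system would instead compute expected values from a learned transition structure, but its prediction errors can be expressed against the same expectancy-violation template.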

  5. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  6. Use of a machine learning framework to predict substance use disorder treatment success

    PubMed Central

    Acion, Laura; Kelmansky, Diana; van der Laan, Mark; Sahker, Ethan; Jones, DeShauna; Arndt, Stephan

    2017-01-01

    There are several methods for building prediction models. The wealth of currently available modeling techniques usually forces the researcher to judge, a priori, what will likely be the best method. Super learning (SL) is a methodology that facilitates this decision by combining all identified prediction algorithms pertinent for a particular prediction problem. SL generates a final model that is at least as good as any of the other models considered for predicting the outcome. The overarching aim of this work is to introduce SL to analysts and practitioners. This work compares the performance of logistic regression, penalized regression, random forests, deep learning neural networks, and SL to predict successful substance use disorders (SUD) treatment. A nationwide database including 99,013 SUD treatment patients was used. All algorithms were evaluated using the area under the receiver operating characteristic curve (AUC) in a test sample that was not included in the training sample used to fit the prediction models. AUC for the models ranged between 0.793 and 0.820. SL was superior to all but one of the algorithms compared. An explanation of SL steps is provided. SL is the first step in targeted learning, an analytic framework that yields double robust effect estimation and inference with fewer assumptions than the usual parametric methods. Different aspects of SL depending on the context, its function within the targeted learning framework, and the benefits of this methodology in the addiction field are discussed. PMID:28394905

  7. Use of a machine learning framework to predict substance use disorder treatment success.

    PubMed

    Acion, Laura; Kelmansky, Diana; van der Laan, Mark; Sahker, Ethan; Jones, DeShauna; Arndt, Stephan

    2017-01-01

There are several methods for building prediction models. The wealth of currently available modeling techniques usually forces the researcher to judge, a priori, which will likely be the best method. Super learning (SL) is a methodology that facilitates this decision by combining all identified prediction algorithms pertinent for a particular prediction problem. SL generates a final model that is at least as good as any of the other models considered for predicting the outcome. The overarching aim of this work is to introduce SL to analysts and practitioners. This work compares the performance of logistic regression, penalized regression, random forests, deep learning neural networks, and SL to predict successful substance use disorder (SUD) treatment. A nationwide database including 99,013 SUD treatment patients was used. All algorithms were evaluated using the area under the receiver operating characteristic curve (AUC) in a test sample that was not included in the training sample used to fit the prediction models. AUC for the models ranged between 0.793 and 0.820. SL was superior to all but one of the algorithms compared. An explanation of SL steps is provided. SL is the first step in targeted learning, an analytic framework that yields doubly robust effect estimation and inference with fewer assumptions than the usual parametric methods. Different aspects of SL depending on the context, its function within the targeted learning framework, and the benefits of this methodology in the addiction field are discussed.

  8. Development of a multi-ensemble Prediction Model for China

    NASA Astrophysics Data System (ADS)

    Brasseur, G. P.; Bouarar, I.; Petersen, A. K.

    2016-12-01

    As part of the EU-sponsored Panda and MarcoPolo Projects, a multi-model prediction system including 7 models has been developed. Most regional models use global air quality predictions provided by the Copernicus Atmospheric Monitoring Service and downscale the forecast at relatively high spatial resolution in eastern China. The paper will describe the forecast system and show examples of forecasts produced for several Chinese urban areas and displayed on a web site developed by the Dutch Meteorological service. A discussion on the accuracy of the predictions based on a detailed validation process using surface measurements from the Chinese monitoring network will be presented.

  9. Analysis of significant factors for dengue fever incidence prediction.

    PubMed

    Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak

    2016-04-16

Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.

  10. Conceptual modelling to predict unobserved system states - the case of groundwater flooding in the UK Chalk

    NASA Astrophysics Data System (ADS)

    Hartmann, A. J.; Ireson, A. M.

    2017-12-01

Chalk aquifers represent an important source of drinking water in the UK. Due to their fractured-porous structure, Chalk aquifers are characterized by highly dynamic groundwater fluctuations that enhance the risk of groundwater flooding. The risk of groundwater flooding can be assessed by physically-based groundwater models, but for reliable results, a priori information about the distribution of hydraulic conductivities and porosities is necessary, which is often not available. For that reason, conceptual simulation models are often used to predict groundwater behaviour. They commonly require calibration against historic groundwater observations. Consequently, their prediction performance may degrade significantly for system states that did not occur within the calibration time series. In this study, we calibrate a conceptual model to the observed groundwater levels at several locations within a Chalk system in Southern England. During the calibration period, no groundwater flooding occurred. We then apply our model to predict the groundwater dynamics of the system over a period that includes a groundwater flooding event. We show that the calibrated model provides reasonable predictions before and after the flooding event but over-estimates groundwater levels during the event. After modifying the model structure to include topographic information, the model is capable of predicting the groundwater flooding event even though groundwater flooding never occurred in the calibration period. Although straightforward, our approach shows how conceptual process-based models can be applied to predict system states and dynamics that did not occur in the calibration period. We believe such an approach can be transferred to similar cases, especially to regions where rainfall intensities are expected to trigger processes and system states that have not yet been observed.

  11. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are shorter than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
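The dependence of parameter uncertainty on record length can be illustrated with a toy model-data fusion, here ordinary least-squares fitting via SciPy's curve_fit rather than LoTEC itself; the seasonal flux form and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_model(t, amp, trend):
    # Minimal stand-in for a carbon-flux model: seasonal cycle plus trend.
    return amp * np.sin(2 * np.pi * t) + trend * t

rng = np.random.default_rng(1)
true = (5.0, 0.3)
uncertainties = {}
for years in (2, 5, 10):
    t = np.linspace(0.0, years, years * 365)       # daily data, `years` long
    obs = flux_model(t, *true) + rng.normal(0.0, 2.0, t.size)
    popt, pcov = curve_fit(flux_model, t, obs, p0=(1.0, 0.0))
    uncertainties[years] = np.sqrt(np.diag(pcov))  # 1-sigma parameter errors

for years, sig in uncertainties.items():
    print(f"{years:2d} yr record: sigma(amp)={sig[0]:.3f}, sigma(trend)={sig[1]:.4f}")
```

Lengthening the synthetic record shrinks the estimated parameter standard errors, the same qualitative behaviour the abstract reports.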

  12. A micromechanics-based strength prediction methodology for notched metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1992-01-01

An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  13. A micromechanics-based strength prediction methodology for notched metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1993-01-01

An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  14. A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks

    NASA Astrophysics Data System (ADS)

    Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan

    2018-01-01

Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters observed before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown perform best, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model's parameter input, the random forests (RF) model is adopted to identify the key variables. Modeling with the 7 selected parameters, the refined BN model can still predict breakdown with adequate accuracy.
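The two-stage idea, ranking the 23 candidate parameters with random forests and refitting a compact model on the top 7, can be sketched as follows. Synthetic data stand in for the Shanghai observations, and a logistic regression replaces the Bayesian network for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 23 candidate pre-breakdown parameters, only some of them informative.
X, y = make_classification(n_samples=1500, n_features=23, n_informative=7,
                           random_state=0)

# Stage 1: random forest ranks the 23 candidate parameters by importance.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top7 = np.argsort(rf.feature_importances_)[::-1][:7]

# Stage 2: refit a compact classifier on the 7 selected parameters only.
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      X[:, top7], y, cv=5).mean()
print(f"cross-validated accuracy with 7 selected features: {acc:.2f}")
```

The point of the refit is the reduced input burden: adequate accuracy from 7 variables instead of 23.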

  15. Mortality and One-Year Functional Outcome in Elderly and Very Old Patients with Severe Traumatic Brain Injuries: Observed and Predicted

    PubMed Central

    Røe, Cecilie; Skandsen, Toril; Manskow, Unn; Ader, Tiina; Anke, Audny

    2015-01-01

The aim of the present study was to evaluate mortality and functional outcome in old and very old patients with severe traumatic brain injury (TBI) and compare them with the outcome predicted by the internet-based CRASH (Corticosteroid Randomization After Significant Head injury) prediction model from the Medical Research Council (MRC). Methods. Prospective, national multicenter study including patients with severe TBI aged ≥65 years. Predicted mortality and outcome were calculated based on clinical information (CRASH basic) (age, GCS score, and pupil reactivity to light), as well as with additional CT findings (CRASH CT). Observed 14-day mortality and favorable/unfavorable outcome according to the Glasgow Outcome Scale at one year were compared with the predicted outcome according to the CRASH models. Results. 97 patients, mean age 75 (SD 7) years, 64% men, were included. Two patients were lost to follow-up; 48 died within 14 days. The predicted versus the observed odds ratio (OR) for mortality was 2.65. Unfavorable outcome (GOSE < 5) was observed at one-year follow-up in 72% of patients. The CRASH models predicted unfavorable outcome in all patients. Conclusion. The CRASH model overestimated mortality and unfavorable outcome in old and very old Norwegian patients with severe TBI. PMID:26688614

  16. External validation of structure-biodegradation relationship (SBR) models for predicting the biodegradability of xenobiotics.

    PubMed

    Devillers, J; Pandard, P; Richard, B

    2013-01-01

Biodegradation is an important mechanism for eliminating xenobiotics by biotransforming them into simple organic and inorganic products. Faced with the ever-growing number of chemicals available on the market, structure-biodegradation relationship (SBR) and quantitative structure-biodegradation relationship (QSBR) models are increasingly used as surrogates for biodegradation tests. Such models have great potential for a quick and cheap estimation of the biodegradation potential of chemicals. The Estimation Programs Interface (EPI) Suite™ includes different models for predicting the potential aerobic biodegradability of organic substances. They are based on different endpoints, methodologies and/or statistical approaches. Among them, Biowin 5 and 6 appeared the most robust, being derived from the largest biodegradation database with results obtained only from the Ministry of International Trade and Industry (MITI) test. The aim of this study was to assess the predictive performance of these two models on a set of 356 chemicals extracted from notification dossiers including compatible biodegradation data. Another set of molecules with no more than four carbon atoms and substituted by various heteroatoms and/or functional groups was also included in the validation exercise. Comparisons were made with the predictions obtained with START (Structural Alerts for Reactivity in Toxtree). Biowin 5 and Biowin 6 gave satisfactory prediction results except for readily degradable chemicals. A consensus model built with Biowin 1 reduced this tendency.

  17. National Centers for Environmental Prediction

    Science.gov Websites

The Mesoscale Modeling Branch conducts a program of research and development in support of the prediction. This research and development includes mesoscale four-dimensional data assimilation of domestic

  18. Ensemble learning of QTL models improves prediction of complex traits

    USDA-ARS?s Scientific Manuscript database

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability, but are less useful for genetic prediction due to difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage ...

  19. Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.

    PubMed

    Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong

    2007-09-01

Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model as well as the problem of "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN, and negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. In addition, the data-fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities and can effectively alleviate the over-fitting problem without significantly compromising its nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.

  20. High speed turboprop aeroacoustic study (counterrotation). Volume 1: Model development

    NASA Technical Reports Server (NTRS)

    Whitfield, C. E.; Mani, R.; Gliebe, P. R.

    1990-01-01

The isolated counterrotating high speed turboprop noise prediction program was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in NASA-Lewis' 8x6 and 9x15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counterrotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program, results of which were compared with data taken during the flight test of the B727/UDF engine demonstrator aircraft. Satisfactory comparisons between prediction and measured data for the demonstrator airplane, together with the identification of a nontraditional radiation mechanism for propellers at angle of attack, were achieved.

  1. High speed turboprop aeroacoustic study (counterrotation). Volume 1: Model development

    NASA Astrophysics Data System (ADS)

    Whitfield, C. E.; Mani, R.; Gliebe, P. R.

    1990-07-01

The isolated counterrotating high speed turboprop noise prediction program was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in NASA-Lewis' 8x6 and 9x15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counterrotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program, results of which were compared with data taken during the flight test of the B727/UDF engine demonstrator aircraft. Satisfactory comparisons between prediction and measured data for the demonstrator airplane, together with the identification of a nontraditional radiation mechanism for propellers at angle of attack, were achieved.

  2. Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft

    NASA Technical Reports Server (NTRS)

    Ross, J. C.

    1984-01-01

The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll up had only a small effect on the force and moment predictions. The effect of the displacement thickness modeling was to reduce the lift curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated-wake modeling and by the addition of a leading-edge separation model.

  3. Physiologically-Based Pharmacokinetic Modeling of Macitentan: Prediction of Drug-Drug Interactions.

    PubMed

    de Kanter, Ruben; Sidharta, Patricia N; Delahaye, Stéphane; Gnerre, Carmela; Segrestaa, Jerome; Buchmann, Stephan; Kohl, Christopher; Treiber, Alexander

    2016-03-01

Macitentan is a novel dual endothelin receptor antagonist for the treatment of pulmonary arterial hypertension (PAH). It is metabolized by cytochrome P450 (CYP) enzymes, mainly CYP3A4, to its active metabolite ACT-132577. A physiologically-based pharmacokinetic (PBPK) model was developed by combining observations from clinical studies and physicochemical parameters as well as absorption, distribution, metabolism and excretion parameters determined in vitro. The model predicted the observed pharmacokinetics of macitentan and its active metabolite ACT-132577 after single and multiple dosing. It performed well in recovering the observed effect of the CYP3A4 inhibitors ketoconazole and cyclosporine, and the CYP3A4 inducer rifampicin, as well as in predicting interactions with S-warfarin and sildenafil. The model was robust enough to allow prospective predictions of macitentan-drug combinations not studied, including an alternative dosing regimen of ketoconazole and nine other CYP3A4-interacting drugs. Among these were the HIV drugs ritonavir and saquinavir, which were included because HIV infection is a known risk factor for the development of PAH. This example of the application of PBPK modeling to predict drug-drug interactions was used to support the labeling of macitentan (Opsumit).

  4. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R^2) > 0.90 were screened for further evaluation. The best R^2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R^2 > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R^2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R^2 = 0.9323) and 0.75, 2, and 8 hours (R^2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC(0-t)) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
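The limited-sampling idea can be sketched numerically: regress a trapezoidal AUC(0-t) on three concentration-time points and check R^2. The simulated one-compartment profiles below are illustrative, not saroglitazar data, and the rate constants are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)
times = np.array([0.25, 0.5, 0.75, 1, 2, 4, 8, 12, 24, 48, 72])  # hours
n = 50
ka, ke = 1.5, 0.15   # assumed absorption / elimination rate constants

profiles = []
for _ in range(n):
    dose = rng.uniform(0.8, 1.2)                  # between-subject variability
    c = dose * (np.exp(-ke * times) - np.exp(-ka * times))
    c *= 1 + rng.normal(0, 0.05, times.size)      # assay noise
    profiles.append(c)
profiles = np.array(profiles)

# Reference AUC(0-72h) by the trapezoidal rule over the full profile.
auc = np.sum((profiles[:, 1:] + profiles[:, :-1]) / 2 * np.diff(times), axis=1)

# Limited sampling model: concentrations at 0.5, 2, and 8 h only.
idx = [int(np.where(times == t)[0][0]) for t in (0.5, 2, 8)]
X = np.column_stack([np.ones(n), profiles[:, idx]])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((auc - pred) ** 2) / np.sum((auc - auc.mean()) ** 2)
print(f"R^2 = {r2:.4f}")
```

Validation on subjects held out of the regression, as in the study, would then quantify prediction error and bias.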

  5. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  6. Classification and disease prediction via mathematical programming

    NASA Astrophysics Data System (ADS)

    Lee, Eva K.; Wu, Tsung-Lin

    2007-11-01

In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview of various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
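A minimal instance of classification via mathematical programming, assumed for illustration rather than taken from the chapter: a linear program that picks a separating hyperplane by minimizing total misclassification slack, solved with SciPy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, d = 80, 2
X0 = rng.normal(-1.5, 1.0, (n // 2, d))   # group 0 samples
X1 = rng.normal(+1.5, 1.0, (n // 2, d))   # group 1 samples
X = np.vstack([X0, X1])
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]

# Decision variables: w (d), b (1), slacks s (n). Minimize sum of slacks
# subject to y_i * (w . x_i + b) >= 1 - s_i and s_i >= 0, i.e. an LP.
c = np.r_[np.zeros(d + 1), np.ones(n)]
A_ub = np.hstack([-y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(None, None)] * (d + 1) + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

w, b = res.x[:d], res.x[d]
acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```

A reserved-judgment variant would additionally withhold a decision whenever |w . x + b| falls below a chosen threshold; multigroup and mixed-integer extensions go beyond this sketch.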

  7. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, correlation structures including the compound symmetry structure (CS), the first-order autoregressive structure [AR(1)], and the first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure improved the fitting precision of the branch diameter and length mixed-effect models significantly, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity when building the mixed-effect models, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function improved the fitting of the branch angle mixed model significantly, whereas the CF2 function improved the fitting of the branch diameter and length mixed models significantly. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared to the traditional regression model, for the branch size prediction of Pinus koraiensis plantation.
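The AR(1) correlation structure that improved the diameter and length models can be written down directly. The sketch below is generic, not the SAS MIXED implementation: for branches ordered along a stem, the residual correlation between positions i and j is rho**|i-j|, so correlation decays with separation.

```python
import numpy as np

def ar1_corr(n, rho):
    """AR(1) correlation matrix: corr(e_i, e_j) = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Four branches ordered along the stem, rho = 0.5 (illustrative value)
R = ar1_corr(4, 0.5)
print(R[0])  # [1.    0.5   0.25  0.125]
```

In a mixed-effect fit, this matrix (scaled by the residual variance) replaces the independent-errors assumption for within-tree residuals.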

  8. Dermal uptake directly from air under transient conditions: advances in modeling and comparisons with experimental results for human subjects.

    PubMed

    Morrison, G C; Weschler, C J; Bekö, G

    2016-12-01

    To better understand the dermal exposure pathway, we enhance an existing mechanistic model of transdermal uptake by including skin surface lipids (SSL) and consider the impact of clothing. Addition of SSL increases the overall resistance to uptake of SVOCs from air but also allows for rapid transfer of SVOCs to sinks like clothing or clean air. We test the model by simulating di-ethyl phthalate (DEP) and di-n-butyl phthalate (DnBP) exposures of six bare-skinned (Weschler et al. 2015, Environ. Health Perspect., 123, 928) and one clothed participant (Morrison et al. 2016, J. Expo. Sci. Environ. Epidemiol., 26, 113). The model predicts total uptake values that are consistent with the measured values. For bare-skinned participants, the model predicts a normalized mass uptake of DEP of 3.1 (μg/m²)/(μg/m³), whereas the experimental results range from 1.0 to 4.3 (μg/m²)/(μg/m³); uptake of DnBP is somewhat overpredicted: 4.6 (μg/m²)/(μg/m³) vs. the experimental range of 0.5-3.2 (μg/m²)/(μg/m³). For the clothed participant, the model predicts higher than observed uptake for both species. Uncertainty in model inputs, including convective mass transfer coefficients, partition coefficients, and diffusion coefficients, could account for overpredictions. Simulations that include transfer of skin oil to clothing improve model predictions. A dynamic model that includes SSL is more sensitive to changes that impact external mass transfer such as putting on and removing clothes and bathing. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
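The role of the SSL layer as an added resistance can be illustrated with a generic resistance-in-series sketch. All parameter values below are made up for illustration and are not the calibrated values of the paper's model.

```python
# Generic resistance-in-series sketch of transdermal uptake (hypothetical
# parameter values; not the paper's calibrated model). Adding a skin-surface
# lipid (SSL) layer inserts one more resistance between air and skin.

def uptake_flux(c_air, resistances):
    """Steady-state flux through layers in series: J = C_air / sum(R)."""
    return c_air / sum(resistances)

R_air, R_ssl, R_skin = 0.5, 0.2, 2.0   # illustrative resistances only
flux_without_ssl = uptake_flux(1.0, [R_air, R_skin])
flux_with_ssl = uptake_flux(1.0, [R_air, R_ssl, R_skin])
print(flux_with_ssl < flux_without_ssl)  # SSL raises total resistance: True
```

The second effect noted in the abstract, rapid transfer from SSL to sinks such as clothing, would appear in a dynamic version of this sketch as an additional parallel pathway out of the SSL compartment.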

  9. Evaluation of anthropogenic influence in probabilistic forecasting of coastal change

    NASA Astrophysics Data System (ADS)

    Hapke, C. J.; Wilson, K.; Adams, P. N.

    2014-12-01

    Prediction of large scale coastal behavior is especially challenging in areas of pervasive human activity. Many coastal zones on the Gulf and Atlantic coasts are moderately to highly modified through the use of soft sediment and hard stabilization techniques. These practices have the potential to alter sediment transport and availability, as well as reshape the beach profile, ultimately transforming the natural evolution of the coastal system. We present the results of a series of probabilistic models, designed to predict the observed geomorphic response to high wave events at Fire Island, New York. The island comprises a variety of land use types, including inhabited communities with modified beaches, where beach nourishment and artificial dune construction (scraping) occur, unmodified zones, and protected national seashore. This variation in land use presents an opportunity for comparison of model accuracy across highly modified and rarely modified stretches of coastline. Eight models with basic and expanded structures were developed, resulting in sixteen models, informed with observational data from Fire Island. The basic model type does not include anthropogenic modification. The expanded model includes records of nourishment and scraping, designed to quantify the improved accuracy when anthropogenic activity is represented. Modification was included as frequency of occurrence divided by the time since the most recent event, to distinguish between recent and historic events. All but one model reported improved predictive accuracy from the basic to expanded form. The addition of nourishment and scraping parameters resulted in a maximum reduction in predictive error of 36%. The seven improved models reported an average 23% reduction in error. These results indicate that it is advantageous to incorporate the human forcing into a coastal hazards probability model framework.
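The modification predictor described above, event frequency divided by time since the most recent event, can be sketched as follows. The exact frequency definition and record handling here are assumptions, not taken from the study.

```python
def modification_index(event_years, current_year):
    """Anthropogenic-modification predictor as described above: event
    frequency divided by time since the most recent event, so recent,
    repeated interventions score highest. (Generic sketch; the study's
    exact formulation may differ.)"""
    if not event_years:
        return 0.0
    span = current_year - min(event_years) + 1
    frequency = len(event_years) / span           # events per year of record
    years_since_last = current_year - max(event_years)
    return frequency / max(years_since_last, 1)   # guard against division by zero

recent = modification_index([2005, 2009, 2012], 2013)    # recent nourishment
historic = modification_index([1990, 1994, 1997], 2013)  # historic only
print(recent > historic)  # recent activity weighs more: True
```

This is the sense in which the predictor "distinguishes between recent and historic events": equal event counts score very differently depending on recency.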

  10. Development of a model for predicting NASA/MSFC program success

    NASA Technical Reports Server (NTRS)

    Riggs, Jeffrey; Miller, Tracy; Finley, Rosemary

    1990-01-01

    Research conducted during the execution of a previous contract (NAS8-36955/0039) firmly established the feasibility of developing a tool to aid decision makers in predicting the potential success of proposed projects. The final report from that investigation contains an outline of the method to be applied in developing this Project Success Predictor Model. As a follow-on to the previous study, this report describes in detail the development of this model and includes full explanation of the data-gathering techniques used to poll expert opinion. The report includes the presentation of the model code itself.

  11. Food-web models predict species abundances in response to habitat change.

    PubMed

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-10-01

    Plant and animal population sizes inevitably change following habitat loss, but the mechanisms underlying these changes are poorly understood. We experimentally altered habitat volume and eliminated top trophic levels of the food web of invertebrates that inhabit rain-filled leaves of the carnivorous pitcher plant Sarracenia purpurea. Path models that incorporated food-web structure better predicted population sizes of food-web constituents than did simple keystone species models, models that included only autecological responses to habitat volume, or models including both food-web structure and habitat volume. These results provide the first experimental confirmation that trophic structure can determine species abundances in the face of habitat loss.

  12. Evaluation and prediction of long-term environmental effects of nonmetallic materials

    NASA Technical Reports Server (NTRS)

    Papazian, H.

    1985-01-01

    The properties of a number of nonmetallic materials were evaluated experimentally in simulated space environments in order to develop models for accelerated test methods useful for predicting such behavioral changes. Graphite-epoxy composites were exposed to thermal cycling. Adhesive foam tapes were subjected to a vacuum environment. Metal-matrix composites were tested for baseline data. Predictive modeling designed to include strength and aging effects on composites, polymeric films, and metals under such space conditions (including the atomic oxygen environment) is discussed. The Korel 8031-00 high strength adhesive foam tape was shown to be superior to the other two tested.

  13. Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5

    NASA Astrophysics Data System (ADS)

    Ausati, Shadi; Amanollahi, Jamil

    2016-10-01

    Since Sanandaj is considered one of the more polluted cities of Iran, predicting pollution, especially suspended PM2.5 particles, which cause many diseases, could contribute to public health through timely announcements before PM2.5 levels rise. In order to predict the PM2.5 concentration in the Sanandaj air, hybrid and conventional models were used: an ensemble empirical mode decomposition and general regression neural network (EEMD-GRNN), an Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, multiple linear regression (MLR). In these models, the suspended PM2.5 particle data were the dependent variable, and the air quality data, including PM2.5, PM10, SO2, NO2, CO, and O3, and the meteorological data, including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity (RH), and daily wind speed (WS), for the year 2014 in Sanandaj were the independent variables. Among the models used, the EEMD-GRNN model, with values of R2 = 0.90, root mean square error (RMSE) = 4.9218, and mean absolute error (MAE) = 3.4644 in the training phase, and with values of R2 = 0.79, RMSE = 5.0324, and MAE = 3.2565 in the testing phase, performed best in predicting this phenomenon. It can be concluded that hybrid models predict PM2.5 concentration more accurately than the linear model.
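The three goodness-of-fit statistics reported above (R2, RMSE, MAE) can be computed in a few lines; the toy data below are made up for illustration.

```python
import numpy as np

def r2_rmse_mae(y_true, y_pred):
    """The three goodness-of-fit metrics used in the comparison above."""
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    return r2, rmse, mae

# Hypothetical observed and predicted PM2.5 concentrations
y = np.array([10.0, 20.0, 30.0, 40.0])
yhat = np.array([12.0, 18.0, 33.0, 38.0])
r2, rmse, mae = r2_rmse_mae(y, yhat)
print(round(r2, 3), round(rmse, 3), round(mae, 3))  # 0.958 2.291 2.25
```

Note that RMSE penalizes large residuals more heavily than MAE, which is why the two can rank models differently.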

  14. Communication Efficacy and Couples’ Cancer Management: Applying a Dyadic Appraisal Model

    PubMed Central

    Magsamen-Conrad, Kate; Checton, Maria G.; Venetis, Maria K.; Greene, Kathryn

    2014-01-01

    The purpose of the present study was to apply Berg and Upchurch’s (2007) developmental-conceptual model to understand better how couples cope with cancer. Specifically, we hypothesized a dyadic appraisal model in which proximal factors (relational quality), dyadic appraisal (prognosis uncertainty), and dyadic coping (communication efficacy) predicted adjustment (cancer management). The study was cross-sectional and included 83 dyads in which one partner had been diagnosed with and/or treated for cancer. For both patients and partners, multilevel analyses using the actor-partner interdependence model (APIM) indicated that proximal contextual factors predicted dyadic appraisal and dyadic coping. Dyadic appraisal predicted dyadic coping, which then predicted dyadic adjustment. Patients’ confidence in their ability to talk about the cancer predicted their own cancer management. Partners’ confidence predicted their own and the patient’s ability to cope with cancer, which then predicted patients’ perceptions of their general health. Implications and future research are discussed. PMID:25983382

  15. Communication Efficacy and Couples' Cancer Management: Applying a Dyadic Appraisal Model.

    PubMed

    Magsamen-Conrad, Kate; Checton, Maria G; Venetis, Maria K; Greene, Kathryn

    2015-06-01

    The purpose of the present study was to apply Berg and Upchurch's (2007) developmental-conceptual model to understand better how couples cope with cancer. Specifically, we hypothesized a dyadic appraisal model in which proximal factors (relational quality), dyadic appraisal (prognosis uncertainty), and dyadic coping (communication efficacy) predicted adjustment (cancer management). The study was cross-sectional and included 83 dyads in which one partner had been diagnosed with and/or treated for cancer. For both patients and partners, multilevel analyses using the actor-partner interdependence model (APIM) indicated that proximal contextual factors predicted dyadic appraisal and dyadic coping. Dyadic appraisal predicted dyadic coping, which then predicted dyadic adjustment. Patients' confidence in their ability to talk about the cancer predicted their own cancer management. Partners' confidence predicted their own and the patient's ability to cope with cancer, which then predicted patients' perceptions of their general health. Implications and future research are discussed.

  16. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE PAGES

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...

    2017-01-24

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  17. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.
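The consistency test at the heart of B2BDC can be caricatured in one dimension. The sketch below is far simpler than the actual quadratic-surrogate machinery: each experimental target constrains the feasible parameter set to an interval, and the dataset is consistent only if all intervals intersect. Interval values are hypothetical.

```python
def consistent(intervals):
    """A dataset is consistent iff all per-target feasible intervals overlap.
    (One-dimensional caricature of B2BDC consistency analysis.)"""
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    return lo <= hi

targets = [(0.1, 0.8), (0.3, 0.9), (0.25, 0.6)]
print(consistent(targets))                  # True: [0.3, 0.6] is feasible
print(consistent(targets + [(0.7, 1.0)]))   # this extra target breaks consistency
```

Removing the offending target restores consistency, which is the one-dimensional analogue of the 45 experimental targets removed in the analysis above.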

  18. Innovative Liner Concepts: Experiments and Impedance Modeling of Liners Including the Effect of Bias Flow

    NASA Technical Reports Server (NTRS)

    Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris

    2000-01-01

    The normal impedance of perforated-plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database, using an optimization routine that found the optimal set of multiplicative coefficients on the non-dimensional groups that minimized the least-squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open areas (10% and 15%).
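The coefficient-fitting step described above reduces to a least-squares problem. The sketch below uses synthetic data and an ordinary least-squares objective rather than the paper's slope-error objective, so it illustrates the idea only under those assumptions.

```python
import numpy as np

# Sketch of the coefficient-fitting step (synthetic data; the study minimized
# a least-squares *slope* error, while this sketch minimizes residual error):
# find multiplicative coefficients c on the model's non-dimensional groups
# that best match the measured resistance.
rng = np.random.default_rng(1)
groups = rng.uniform(0.5, 2.0, size=(40, 3))        # non-dimensional group values
c_true = np.array([1.2, 0.8, 1.5])                  # hypothetical true coefficients
measured = groups @ c_true + rng.normal(0, 0.01, 40)  # "experimental" resistance

c_fit, *_ = np.linalg.lstsq(groups, measured, rcond=None)
print(np.allclose(c_fit, c_true, atol=0.05))  # recovered within noise: True
```

With real data the recovered coefficients would differ from unity most for the terms needing correction, matching the finding above that the non-bias-flow terms required the larger adjustment.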

  19. External validation of a prediction model for surgical site infection after thoracolumbar spine surgery in a Western European cohort.

    PubMed

    Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C

    2018-05-16

    A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model was developed to compute an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how the prediction model performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R² statistic, and discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC). We computed the calibration slope of the calibration plot to judge prediction accuracy. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R² was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery for the present population, a better-fitting prediction model should be developed.
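The discrimination statistic used above (AUC) can be computed directly from the rank-sum formulation; the labels and predicted probabilities below are made up for illustration.

```python
import numpy as np

def auc(y_true, p_pred):
    """AUROC via the Mann-Whitney rank-sum formulation: the probability
    that a random positive case receives a higher predicted risk than a
    random negative case (ties count one half)."""
    y_true = np.asarray(y_true)
    p_pred = np.asarray(p_pred)
    pos = p_pred[y_true == 1]
    neg = p_pred[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical outcomes (1 = SSI) and predicted probabilities
y = [0, 0, 1, 0, 1, 1]
p = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
print(round(auc(y, p), 3))  # 0.667
```

An AUC of 0.5 is chance-level discrimination, which puts the 0.61 reported above close to the uninformative end of the scale.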

  20. Effect of Adding McKenzie Syndrome, Centralization, Directional Preference, and Psychosocial Classification Variables to a Risk-Adjusted Model Predicting Functional Status Outcomes for Patients With Lumbar Impairments.

    PubMed

    Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L

    2016-09-01

    Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndromes and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R(2)) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R(2) for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R(2). McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R(2) increased to 0.44. 
Although none of these increases in R(2) were significant, some classification variables were stronger predictors compared with some other variables included in the baseline model. Conclusion The small added prognostic capabilities identified when combining McKenzie or pain-pattern classifications with the SCL BPPM classification did not significantly improve prediction of FS outcomes in this study. Additional research is warranted to investigate the importance of classification variables compared with those used in the baseline model to maximize predictive power. Level of Evidence Prognosis, level 4. J Orthop Sports Phys Ther 2016;46(9):726-741. Epub 31 Jul 2016. doi:10.2519/jospt.2016.6266.

  1. Biogeochemical metabolic modeling of methanogenesis by Methanosarcina barkeri

    NASA Astrophysics Data System (ADS)

    Jensvold, Z. D.; Jin, Q.

    2015-12-01

    Methanogenesis, the biological process of methane production, is the final step of natural organic matter degradation. In studying natural methanogenesis, important questions include how fast methanogenesis proceeds and how methanogens adapt to the environment. To address these questions, we propose a new approach - biogeochemical reaction modeling - by simulating the metabolic networks of methanogens. Biogeochemical reaction modeling combines geochemical reaction modeling and genome-scale metabolic modeling. Geochemical reaction modeling focuses on the speciation of electron donors and acceptors in the environment, and therefore the energy available to methanogens. Genome-scale metabolic modeling predicts microbial rates and metabolic strategies. Specifically, this approach describes methanogenesis using an enzyme network model and computes enzyme rates by accounting for both kinetics and thermodynamics. The network model is simulated numerically to predict enzyme abundances and rates of methanogen metabolism. We applied this new approach to Methanosarcina barkeri strain Fusaro, a model methanogen that makes methane by reducing carbon dioxide and oxidizing dihydrogen. The simulation results match well with the results of previous laboratory experiments, including the magnitude of the proton motive force and the kinetic parameters of Methanosarcina barkeri. The results also predict that in natural environments, the configuration of the methanogenesis network, including the concentrations of enzymes and metabolites, differs significantly from that under laboratory settings.

  2. Computation of turbulent high speed mixing layers using a two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Sekar, B.

    1991-01-01

    A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction, based on modelling the dilatational terms in the Reynolds stress equations, was included in the model. The model is used in conjunction with the SPARK code for the computation of high speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for the reacting mixing layers are included.

  3. Multivariate statistical assessment of predictors of firefighters' muscular and aerobic work capacity.

    PubMed

    Lindberg, Ann-Sofie; Oksa, Juha; Antti, Henrik; Malm, Christer

    2015-01-01

    Physical capacity has previously been deemed important for firefighters' physical work capacity, and aerobic fitness, muscular strength, and muscular endurance are the most frequently investigated parameters of importance. Traditionally, bivariate and multivariate linear regression statistics have been used to study relationships between physical capacities and work capacities among firefighters. An alternative way to handle datasets consisting of numerous correlated variables is to use multivariate projection analyses, such as Orthogonal Projection to Latent Structures. The first aim of the present study was to evaluate the prediction and predictive power of field and laboratory tests, respectively, on firefighters' physical work capacity on selected work tasks, and to study whether valid predictions could be achieved without anthropometric data. The second aim was to externally validate selected models. The third aim was to validate selected models on firefighters and on civilians. A total of 38 (26 men and 12 women) + 90 (38 men and 52 women) subjects were included in the models and the external validation, respectively. The best prediction (R2) and predictive power (Q2) of Stairs, Pulling, Demolition, Terrain, and Rescue work capacities included field tests (R2 = 0.73 to 0.84, Q2 = 0.68 to 0.82). The best external validation was for Stairs work capacity (R2 = 0.80) and the worst for Demolition work capacity (R2 = 0.40). In conclusion, field and laboratory tests could equally well predict physical work capacities for firefighting work tasks, and models excluding anthropometric data were valid. The predictive power was satisfactory for all included work tasks except Demolition.

  4. Urban RoGeR: Merging process-based high-resolution flash flood model for urban areas with long-term water balance predictions

    NASA Astrophysics Data System (ADS)

    Weiler, M.

    2016-12-01

    Heavy-rain-induced flash floods are still a serious hazard and generate high damages in urban areas. In particular, in spatially complex urban areas, the temporal and spatial patterns of runoff generation processes over a wide spatial range during extreme rainfall events need to be predicted, including the specific effects of green infrastructure and urban forests. In addition, the initial conditions (soil moisture pattern, water storage of green infrastructure) and the effect of lateral redistribution of water (run-on effects and re-infiltration) have to be included in order to realistically predict flash flood generation. We further developed the distributed, process-based model RoGeR (Runoff Generation Research) to include the relevant features and processes in urban areas in order to test the effects of different settings, initial conditions, and the lateral redistribution of water on the predicted flood response. The uncalibrated model RoGeR runs at a spatial resolution of 1 × 1 m² (LiDAR, degree of sealing, land use), with soil properties and geology at 1:50,000. In addition, different green infrastructures are included in the model, as well as the effect of trees on interception and transpiration. A hydraulic model was included in RoGeR to predict surface runoff, water redistribution, and re-infiltration. During rainfall events, RoGeR predicts at 5 min temporal resolution, but the model also simulates evapotranspiration and groundwater recharge during rain-free periods at a longer time step. The model framework was applied to several case studies in Germany, where intense rainfall events produced flash floods causing high damage in urban areas, and to a long-term research catchment in an urban setting (Vauban, Freiburg), where a variety of green infrastructures dominates the hydrology. 
Urban-RoGeR allowed us to study the effects of different green infrastructures on reducing the flood peak, but also their effect on the water balance (evapotranspiration and groundwater recharge). We could also show that infiltration of surface runoff from areas with low infiltration capacity (lateral redistribution) reduces the flood peaks by over 90% in certain areas and situations. Finally, we also evaluated the model against long-term runoff observations (surface runoff, ET, roof runoff) and against flood marks in the selected case studies.

  5. Assessing the relative importance of parameter and forcing uncertainty and their interactions in conceptual hydrological model simulations

    NASA Astrophysics Data System (ADS)

    Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.

    2016-11-01

    Predictions of river flow dynamics provide vital information for many aspects of water management, including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make, including model and criteria selection, can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters, and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
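The analysis-of-variance decomposition described above can be sketched for a full factorial grid of simulations (forcing ensemble member × parameter set). The fixed-effects two-way decomposition below is a simplification of the study's ANOVA model, and the synthetic grid is constructed so that forcing dominates.

```python
import numpy as np

def variance_shares(Q):
    """Decompose a (forcing member x parameter set) grid of streamflow
    simulations into forcing, parameter, and interaction variance shares
    (two-way fixed-effects ANOVA identity; the shares sum to 1)."""
    n_f, n_p = Q.shape
    grand = Q.mean()
    f_eff = Q.mean(axis=1) - grand                 # forcing main effects
    p_eff = Q.mean(axis=0) - grand                 # parameter main effects
    inter = Q - grand - f_eff[:, None] - p_eff[None, :]
    ss_f = n_p * (f_eff ** 2).sum()
    ss_p = n_f * (p_eff ** 2).sum()
    ss_i = (inter ** 2).sum()
    total = ss_f + ss_p + ss_i
    return ss_f / total, ss_p / total, ss_i / total

# Synthetic grid: 20 forcing members x 30 parameter sets, with a strong
# per-member forcing signal added so the forcing share should dominate.
rng = np.random.default_rng(0)
Q = rng.normal(size=(20, 30)) + 3.0 * rng.normal(size=(20, 1))
shares = variance_shares(Q)
print(shares[0] > shares[1])  # forcing share exceeds parameter share: True
```

The interaction share (the third term) is what the study reports as the contribution of forcing-parameter interactions.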

  6. Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.

    2012-12-01

The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Coupled Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m-2). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R2=0.91) while others did not (R2=0.23). All of the ESM terrestrial decomposition models are structurally similar with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R2 of 0.73-0.93). 
However, MPI-ESM-LR outputs showed only a moderate fit to this model (R2=0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R2=0.74). We also found a broad range in soil carbon responses to climate change predicted by the ESMs, with changes of -480 to 230 Pg-C from 2005-2100. All models that reported NPP and heterotrophic respiration showed increases in both of these processes over the simulated period. In two of the models, soils switched from a global sink for carbon to a net source. Of the remaining models, half predicted that soils were a sink for carbon throughout the time period and the other half predicted that soils were a carbon source. Heterotrophic respiration in most of the models from 2005-2100 was well explained by a reduced complexity model dependent on soil carbon, soil temperature, and soil moisture (R2 values >0.74). However, MPI-ESM (R2=0.45) showed only a moderate fit to this model. Our analysis shows that soil carbon predictions from ESMs are highly variable, with much of this variability due to model parameterization and variations in driving variables. Furthermore, our reduced complexity models show that most variation in ESM outputs can be explained by a simple one-pool model with a small number of drivers and parameters. Therefore, agreement between soil carbon predictions across models could improve substantially by reconciling differences in driving variables and the parameters that link soil carbon with environmental drivers. However, it is unclear if this model agreement would reflect what is truly happening in the Earth system.
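The reduced-complexity idea above — a single soil carbon pool whose equilibrium stock is NPP multiplied by a temperature-dependent turnover time — can be sketched as follows. The parameter values (Q10, reference turnover time) are hypothetical placeholders, not the values fitted to the ESMs.

```python
def turnover_time(t_soil, q10=1.5, tau_ref=30.0, t_ref=15.0):
    """Turnover time in years with a Q10 temperature response:
    warmer soils decompose faster, so turnover shortens."""
    return tau_ref * q10 ** (-(t_soil - t_ref) / 10.0)

def steady_state_soil_c(npp, t_soil):
    """One-pool model at equilibrium: soil C stock = NPP x turnover time
    (e.g. kg C m-2 if NPP is kg C m-2 yr-1 and turnover is in years)."""
    return npp * turnover_time(t_soil)

# Warming from 15 C to 20 C shortens turnover and lowers the stock
c_cold = steady_state_soil_c(npp=0.5, t_soil=15.0)
c_warm = steady_state_soil_c(npp=0.5, t_soil=20.0)
```

This makes the paper's accounting concrete: spread in predicted stocks factors cleanly into spread in NPP inputs and spread in turnover times.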

  7. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
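The embedding idea can be illustrated with a toy decay model (this is a sketch of the concept, not the UQ Toolkit implementation): the structural-error term is placed on a model parameter and propagated by Monte Carlo, so every perturbed prediction still respects the model's physical bounds, which an additive output correction cannot guarantee.

```python
import numpy as np

def model(k, t):
    """Toy physical model: exponential decay, outputs always in (0, 1]."""
    return np.exp(-k * t)

# Embedded representation: the rate constant itself carries the
# structural-error term, k = k0 + delta with delta ~ N(0, sigma^2),
# clamped to stay physically admissible (k >= 0).
rng = np.random.default_rng(1)
k0, sigma = 0.3, 0.05
t = np.linspace(0.0, 5.0, 11)
samples = np.array([model(max(k0 + rng.normal(0.0, sigma), 0.0), t)
                    for _ in range(500)])
mean_pred = samples.mean(axis=0)   # central prediction
band = samples.std(axis=0)         # model-error contribution to the budget
```

Every sample obeys the decay law exactly, so the propagated uncertainty band never crosses the physical bounds of the model output.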

  8. Modeling of exposure to carbon monoxide in fires

    NASA Technical Reports Server (NTRS)

    Cagliostro, D. E.

    1980-01-01

    A mathematical model is developed to predict carboxyhemoglobin concentrations in regions of the body for short exposures to carbon monoxide levels expected during escape from aircraft fires. The model includes the respiratory and circulatory dynamics of absorption and distribution of carbon monoxide and carboxyhemoglobin. Predictions of carboxyhemoglobin concentrations are compared to experimental values obtained for human exposures to constant high carbon monoxide levels. Predictions are within 20% of experimental values. For short exposure times, transient concentration effects are predicted. The effect of stress is studied and found to increase carboxyhemoglobin levels substantially compared to a rest state.
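The transient behaviour described above can be illustrated with a one-compartment uptake-elimination sketch. The rate constants below are hypothetical and the structure is deliberately simplified; the paper's model resolves the full respiratory and circulatory dynamics.

```python
def cohb_fraction(t_min, co_ppm, k_up=8e-6, k_el=0.005, hb0=0.0):
    """Fraction of hemoglobin bound as COHb after t_min minutes at a
    constant CO level (ppm), by forward-Euler integration of
      d(HbCO)/dt = k_up * CO * (1 - HbCO) - k_el * HbCO.
    Rate constants are illustrative placeholders, not fitted values."""
    dt = 0.1  # minutes
    hb = hb0
    for _ in range(int(t_min / dt)):
        hb += dt * (k_up * co_ppm * (1.0 - hb) - k_el * hb)
    return hb

# Transient rise during a short exposure at a constant 1000 ppm
h10 = cohb_fraction(10.0, 1000.0)
h30 = cohb_fraction(30.0, 1000.0)
```

Even this crude version reproduces the qualitative findings: COHb rises monotonically during short exposures and scales with the ambient CO level.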

  9. The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales

    NASA Technical Reports Server (NTRS)

    Koster, R. D.

    1999-01-01

    The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.

  10. A novel model for estimating organic chemical bioconcentration in agricultural plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hung, H.; Mackay, D.; Di Guardo, A.

    1995-12-31

There is increasing recognition that much human and wildlife exposure to organic contaminants can be traced through the food chain to bioconcentration in vegetation. For risk assessment, there is a need for an accurate model to predict organic chemical concentrations in plants. Existing models range from relatively simple correlations of concentrations using octanol-water or octanol-air partition coefficients, to complex models involving extensive physiological data. To satisfy the need for a relatively accurate model of intermediate complexity, a novel approach has been devised to predict organic chemical concentrations in agricultural plants as a function of soil and air concentrations, without the need for extensive plant physiological data. The plant is treated as three compartments, namely, leaves, roots and stems (including fruit and seeds). Data readily available from the literature, including chemical properties, volume, density and composition of each compartment; metabolic and growth rate of plant; and readily obtainable environmental conditions at the site are required as input. Results calculated from the model are compared with observed and experimentally-determined concentrations. It is suggested that the model, which includes a physiological database for agricultural plants, gives acceptably accurate predictions of chemical partitioning between plants, air and soil.
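A much-simplified equilibrium-partitioning sketch of the three-compartment idea is shown below. All partition coefficients and volumes are hypothetical inputs; the actual model additionally accounts for growth dilution and metabolism.

```python
def plant_concentration(c_air, c_soil, k_leaf_air, k_root_soil, k_stem_soil,
                        v_leaf, v_root, v_stem):
    """Volume-weighted whole-plant concentration from equilibrium
    partitioning of three compartments: leaves equilibrate with air,
    roots and stems with soil. Coefficients are illustrative only."""
    c_leaf = k_leaf_air * c_air
    c_root = k_root_soil * c_soil
    c_stem = k_stem_soil * c_soil
    v_total = v_leaf + v_root + v_stem
    return (c_leaf * v_leaf + c_root * v_root + c_stem * v_stem) / v_total
```

With compartment-specific coefficients, the same soil and air concentrations yield different leaf, root and stem burdens, which is the behaviour the three-compartment structure is meant to capture.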

  11. Predictive Analytics for Identification of Patients at Risk for QT Interval Prolongation - A Systematic Review.

    PubMed

    Tomaselli Muensterman, Elena; Tisdale, James E

    2018-06-08

    Prolongation of the heart rate-corrected QT (QTc) interval increases the risk for torsades de pointes (TdP), a potentially fatal arrhythmia. The likelihood of TdP is higher in patients with risk factors, which include female sex, older age, heart failure with reduced ejection fraction, hypokalemia, hypomagnesemia, concomitant administration of ≥ 2 QTc interval-prolonging medications, among others. Assessment and quantification of risk factors may facilitate prediction of patients at highest risk for developing QTc interval prolongation and TdP. Investigators have utilized the field of predictive analytics, which generates predictions using techniques including data mining, modeling, machine learning, and others, to develop methods of risk quantification and prediction of QTc interval prolongation. Predictive analytics have also been incorporated into clinical decision support (CDS) tools to alert clinicians regarding patients at increased risk of developing QTc interval prolongation. The objectives of this paper are to assess the effectiveness of predictive analytics for identification of patients at risk of drug-induced QTc interval prolongation, and to discuss the efficacy of incorporation of predictive analytics into CDS tools in clinical practice. A systematic review of English language articles (human subjects only) was performed, yielding 57 articles, with an additional 4 articles identified from other sources; a total of 10 articles were included in this review. Risk scores for QTc interval prolongation have been developed in various patient populations including those in cardiac intensive care units (ICUs) and in broader populations of hospitalized or health system patients. One group developed a risk score that includes information regarding genetic polymorphisms; this score significantly predicted TdP. 
Development of QTc interval prolongation risk prediction models and incorporation of these models into CDS tools reduces the risk of QTc interval prolongation in cardiac ICUs and identifies health-system patients at increased risk for mortality. The impact of these QTc interval prolongation predictive analytics on overall patient safety outcomes, such as TdP and sudden cardiac death relative to the cost of development and implementation, requires further study. This article is protected by copyright. All rights reserved.
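An additive risk score of the kind reviewed here can be sketched as follows. The point values and cut-points are hypothetical, chosen only to show the mechanics; validated instruments (such as the Tisdale score mentioned in this literature) derive their weights from cohort data.

```python
# Hypothetical point values for illustration only
RISK_POINTS = {
    "female_sex": 1,
    "age_over_68": 1,
    "hypokalemia": 2,
    "heart_failure_reduced_ef": 3,
    "two_or_more_qtc_drugs": 3,
}

def qtc_risk_score(factors):
    """Sum the points for each risk factor present in the patient."""
    return sum(RISK_POINTS[f] for f in factors)

def risk_category(score, moderate=3, high=7):
    """Map a score to a category using hypothetical cut-points; a CDS
    tool would alert clinicians for moderate- and high-risk patients."""
    if score >= high:
        return "high"
    return "moderate" if score >= moderate else "low"
```

A CDS integration would evaluate this score at order entry and surface an alert when a QTc-prolonging drug is prescribed to a moderate- or high-risk patient.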

  12. In-situ biogas upgrading process: Modeling and simulations aspects.

    PubMed

    Lovato, Giovanna; Alvarado-Morales, Merlin; Kovalovszki, Adam; Peprah, Maria; Kougias, Panagiotis G; Rodrigues, José Alberto Domingues; Angelidaki, Irini

    2017-12-01

Biogas upgrading processes by in-situ hydrogen (H2) injection are still challenging and could benefit from a mathematical model to predict system performance. Therefore, a previous model on anaerobic digestion was updated and expanded to include the effect of H2 injection into the liquid phase of a fermenter with the aim of modeling and simulating these processes. This was done by including hydrogenotrophic methanogen kinetics for H2 consumption and inhibition effect on the acetogenic steps. Special attention was paid to gas to liquid transfer of H2. The final model was successfully validated considering a set of Case Studies. Biogas composition and H2 utilization were correctly predicted, with overall deviation below 10% compared to experimental measurements. Parameter sensitivity analysis revealed that the model is highly sensitive to the H2 injection rate and mass transfer coefficient. The model developed is an effective tool for predicting process performance in scenarios with biogas upgrading. Copyright © 2017 Elsevier Ltd. All rights reserved.
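The central balance — gas-to-liquid transfer of H2 competing with Monod-type hydrogenotrophic uptake — can be sketched as below. All parameter values are hypothetical (per-day units); the validated model resolves the full anaerobic digestion network rather than this single liquid-phase balance.

```python
def simulate_h2(days=5.0, dt=1e-3, kla=50.0, h2_sat=1.0e-3,
                mu_max=1.0, ks=1.0e-4, x_bio=0.005, y_xs=0.05):
    """Dissolved H2 concentration (g/L) by forward-Euler integration of
      dS/dt = kLa*(S_sat - S) - mu_max*S/(Ks + S) * X/Y,
    i.e. gas-liquid transfer minus Monod-type hydrogenotrophic uptake.
    Parameter values are illustrative placeholders."""
    s = 0.0
    for _ in range(int(days / dt)):
        transfer = kla * (h2_sat - s)
        uptake = mu_max * s / (ks + s) * x_bio / y_xs
        s = max(s + dt * (transfer - uptake), 0.0)
    return s

s_base = simulate_h2()
s_fast = simulate_h2(kla=200.0)   # better mixing, more dissolved H2
```

Even this toy balance shows why the sensitivity analysis flags kLa: when microbial uptake is fast, the dissolved H2 level is pinned far below saturation by the mass transfer rate.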

  13. Animal Models and Bone Histomorphometry: Translational Research for the Human Research Program

    NASA Technical Reports Server (NTRS)

    Sibonga, Jean D.

    2010-01-01

This slide presentation reviews the use of animal models to research and inform bone morphology, in particular relating to human research in bone loss as a result of low gravity environments. Reasons for the use of animal models as tools for human research programs include that they are time-efficient and cost-effective, permit invasive measures, and offer predictability, as some models are predictive of drug effects.

  14. NOAA's National Air Quality Predictions and Development of Aerosol and Atmospheric Composition Prediction Components for the Next Generation Global Prediction System

    NASA Astrophysics Data System (ADS)

    Stajner, I.; Hou, Y. T.; McQueen, J.; Lee, P.; Stein, A. F.; Tong, D.; Pan, L.; Huang, J.; Huang, H. C.; Upadhayay, S.

    2016-12-01

NOAA provides operational air quality predictions using the National Air Quality Forecast Capability (NAQFC): ozone and wildfire smoke for the United States and airborne dust for the contiguous 48 states at http://airquality.weather.gov. NOAA's predictions of fine particulate matter (PM2.5) became publicly available in February 2016. Ozone and PM2.5 predictions are produced using a system that operationally links the Community Multiscale Air Quality (CMAQ) model with meteorological inputs from the North American Mesoscale (NAM) forecast model. Smoke and dust predictions are provided using the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model. Current NAQFC focus is on updating CMAQ to version 5.0.2, improving PM2.5 predictions, and updating emissions estimates, especially for NOx using recently observed trends. Wildfire smoke emissions from a newer version of the USFS BlueSky system are being included in a new configuration of the NAQFC NAM-CMAQ system, which is re-run for the previous 24 hours when wildfires were observed from satellites, to better represent wildfire emissions prior to initiating predictions for the next 48 hours. In addition, NOAA is developing the Next Generation Global Prediction System (NGGPS) to represent the earth system for extended weather prediction. NGGPS will include a representation of atmospheric dynamics, physics, aerosols and atmospheric composition as well as coupling with ocean, wave, ice and land components. NGGPS is being developed with broad community involvement, including community-developed components and academic research to develop and test improvements for potential inclusion in NGGPS. Several investigators at NOAA's research laboratories and in academia are working to improve the aerosol and gaseous chemistry representation for NGGPS, to develop and evaluate the representation of atmospheric composition, and to establish and improve the coupling with radiation and microphysics. 
Additional efforts may include the improved use of predicted atmospheric composition in assimilation of observations and the linkage of full global atmospheric composition predictions with national air quality predictions.

  15. Several examples where turbulence models fail in inlet flow field analysis

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.

    1993-01-01

Computational uncertainties in turbulence modeling for three-dimensional inlet flow fields include flows approaching separation, the strength of the secondary flow field, three-dimensional predictions of vortex liftoff, and the influence of vortex-boundary layer interactions; computational uncertainties in vortex generator modeling include the representation of the generator vorticity field and the relationship between the generator and its vorticity field. The objectives of the inlet flow field studies presented in this document are to advance the understanding, prediction, and control of intake distortion and to study the basic interactions that influence this design problem.

  16. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    USGS Publications Warehouse

    Moisen, Gretchen G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.

    2006-01-01

Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in RuleQuest's See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. 
SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success, while all three modelling tools produced comparably good predictions (correlation of 0.68 and relative mean squared error of 0.56) for one species. © 2006 Elsevier B.V. All rights reserved.
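Stochastic gradient boosting itself is easy to sketch: each round fits a small tree (here a depth-one stump) to the residuals of a random subsample of the training data. This toy numpy version uses synthetic data and squared loss, not the authors' inventory data or software, but it shows the mechanics behind SGB's stability.

```python
import numpy as np

def fit_stump(x, r):
    """Least-squares decision stump fit to residuals r."""
    best, best_err = None, np.inf
    for j in range(x.shape[1]):
        for thr in np.quantile(x[:, j], np.linspace(0.1, 0.9, 9)):
            left = x[:, j] <= thr
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = np.sum((r - np.where(left, lv, rv)) ** 2)
            if err < best_err:
                best, best_err = (j, thr, lv, rv), err
    return best

def sgb_fit(x, y, n_rounds=150, lr=0.1, subsample=0.5, seed=0):
    """Each round: draw a random subsample, fit a stump to its current
    residuals, and add a shrunken copy of that stump to the ensemble."""
    rng = np.random.default_rng(seed)
    f0 = y.mean()
    pred = np.full(len(y), f0)
    stumps = []
    for _ in range(n_rounds):
        idx = rng.choice(len(y), int(subsample * len(y)), replace=False)
        j, thr, lv, rv = fit_stump(x[idx], (y - pred)[idx])
        pred += lr * np.where(x[:, j] <= thr, lv, rv)
        stumps.append((j, thr, lv, rv))
    return f0, lr, stumps

def sgb_predict(model, x):
    f0, lr, stumps = model
    pred = np.full(len(x), f0)
    for j, thr, lv, rv in stumps:
        pred += lr * np.where(x[:, j] <= thr, lv, rv)
    return pred

# Toy continuous target (a stand-in for basal area) from two predictors
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.sin(2 * x[:, 0]) + 0.5 * x[:, 1] + rng.normal(0.0, 0.1, 400)
model = sgb_fit(x[:300], y[:300])
resid = y[300:] - sgb_predict(model, x[300:])
r2 = 1.0 - resid.var() / y[300:].var()
```

The subsampling at each round is what makes the method "stochastic"; it acts as a regularizer and is one reason SGB tends to give stable predictions when the signal is weak.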

  17. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. PMID:25045325
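The model-averaging step can be sketched with BIC-approximated posterior weights, a standard approximation to Bayesian model averaging. This is illustrative only: the ISMB algorithm scores Markov blanket structures and uses an instance-specific search, which is not shown here.

```python
import math

def bic_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model weights from BIC scores:
    BIC = -2*logL + k*ln(n), and weight is proportional to exp(-BIC/2)."""
    bics = [-2.0 * ll + k * math.log(n_obs)
            for ll, k in zip(log_likelihoods, n_params)]
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]  # shift for stability
    total = sum(raw)
    return [r / total for r in raw]

def averaged_prediction(probs, weights):
    """Model-averaged predictive probability for the instance at hand."""
    return sum(p * w for p, w in zip(probs, weights))
```

With two candidate models of equal size, the one with the much higher likelihood absorbs essentially all of the posterior weight, so the averaged prediction collapses toward that model's output.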

  18. Development and Validation of a Predictive Model to Identify Individuals Likely to Have Undiagnosed Chronic Obstructive Pulmonary Disease Using an Administrative Claims Database.

    PubMed

    Moretz, Chad; Zhou, Yunping; Dhamane, Amol D; Burslem, Kate; Saverno, Kim; Jain, Gagan; Devercelli, Giovanna; Kaila, Shuchita; Ellis, Jeffrey J; Hernandez, Gemzel; Renda, Andrew

    2015-12-01

Despite the importance of early detection, delayed diagnosis of chronic obstructive pulmonary disease (COPD) is relatively common. Approximately 12 million people in the United States have undiagnosed COPD. Diagnosis of COPD is essential for the timely implementation of interventions, such as smoking cessation programs, drug therapies, and pulmonary rehabilitation, which are aimed at improving outcomes and slowing disease progression. To develop and validate a predictive model to identify patients likely to have undiagnosed COPD using administrative claims data. A predictive model was developed and validated utilizing a retrospective cohort of patients with and without a COPD diagnosis (cases and controls), aged 40-89, with a minimum of 24 months of continuous health plan enrollment (Medicare Advantage Prescription Drug [MAPD] and commercial plans), and identified between January 1, 2009, and December 31, 2012, using Humana's claims database. Stratified random sampling based on plan type (commercial or MAPD) and index year was performed to ensure that cases and controls had a similar distribution of these variables. Cases and controls were compared to identify demographic, clinical, and health care resource utilization (HCRU) characteristics associated with a COPD diagnosis. Stepwise logistic regression (SLR), neural networking, and decision trees were used to develop a series of models. The models were trained, validated, and tested on randomly partitioned subsets of the sample (Training, Validation, and Test data subsets). Measures used to evaluate and compare the models included the area under the receiver operating characteristic (ROC) curve (AUC); sensitivity; specificity; positive predictive value (PPV); and negative predictive value (NPV). The optimal model was selected based on the AUC on the Test data subset. A total of 50,880 cases and 50,880 controls were included, with MAPD patients comprising 92% of the study population. 
Compared with controls, cases had a statistically significantly higher comorbidity burden and HCRU (including hospitalizations, emergency room visits, and medical procedures). The optimal predictive model was generated using SLR, which included 34 variables that were statistically significantly associated with a COPD diagnosis. After adjusting for covariates, anticholinergic bronchodilators (OR = 3.336) and tobacco cessation counseling (OR = 2.871) were found to have a large influence on the model. The final predictive model had an AUC of 0.754, sensitivity of 60%, specificity of 78%, PPV of 73%, and an NPV of 66%. This claims-based predictive model provides an acceptable level of accuracy in identifying patients likely to have undiagnosed COPD in a large national health plan. Identification of patients with undiagnosed COPD may enable timely management and lead to improved health outcomes and reduced COPD-related health care expenditures.
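The evaluation measures reported above all follow from confusion-matrix counts. The sketch below uses illustrative counts chosen so that sensitivity and specificity match the reported 60% and 78%; the study's actual cell counts are not given in the abstract.

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix
    counts (tp = true positives, fp = false positives, etc.)."""
    return {
        "sensitivity": tp / (tp + fn),   # of true cases, fraction flagged
        "specificity": tn / (tn + fp),   # of controls, fraction cleared
        "ppv": tp / (tp + fp),           # of flagged, fraction truly cases
        "npv": tn / (tn + fn),           # of cleared, fraction truly controls
    }

# Illustrative counts: 100 cases, 100 controls
metrics = classification_metrics(tp=60, fp=22, tn=78, fn=40)
```

Note that PPV and NPV depend on the case:control mix, so the values reported from a matched 1:1 sample would shift in a screening population with lower COPD prevalence.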

  19. Proactive Supply Chain Performance Management with Predictive Analytics

    PubMed Central

    Stefanovic, Nenad

    2014-01-01

Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605

  20. Proactive supply chain performance management with predictive analytics.

    PubMed

    Stefanovic, Nenad

    2014-01-01

Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.

  1. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space

    NASA Astrophysics Data System (ADS)

    Hong, S.-M.; Jung, B.-H.; Ruan, D.

    2011-03-01

Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-hungry than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. 
The local dynamic model-based approach provides the best performance for short to medium prediction lengths under relatively low observation rates. Sensitivity analysis indicates its robustness toward the choice of parameters. Its simplicity, robustness and low computation cost make the proposed local dynamic model an attractive tool for real-time prediction with system latencies below 0.4 s.
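The geometric core of the method — rotating the state forward in the augmented (signal vs. delayed-signal) plane — can be sketched as follows. The extended Kalman filter that estimates the center, radius and angular velocity online is omitted; here the state is simply given, and a noiseless sinusoid is used to check the geometry.

```python
import math

def predict_augmented(center, radius, theta, omega, horizon):
    """One prediction of the local circular model: advance the angular
    state by omega * horizon in the augmented plane and return the
    predicted point. In the full method these state variables are
    re-estimated at each sample by an extended Kalman filter."""
    theta_h = theta + omega * horizon
    return (center[0] + radius * math.cos(theta_h),
            center[1] + radius * math.sin(theta_h))

# A pure sinusoid s(t) = sin(t) with delay tau = pi/2 traces the unit
# circle (sin t, -cos t) = (cos(t - pi/2), sin(t - pi/2)), so the state
# at time t0 is theta = t0 - pi/2, omega = 1, radius = 1, center = 0.
t0, horizon = 1.0, 0.5
pred_x, pred_y = predict_augmented((0.0, 0.0), 1.0, t0 - math.pi / 2,
                                   1.0, horizon)
```

For this idealized trace the first coordinate of the prediction equals the true future signal value s(t0 + horizon) exactly; real traces require the filter to track drifting circle parameters.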

  2. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    PubMed

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-hungry than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. 
The local dynamic model-based approach provides the best performance for short to medium prediction lengths under relatively low observation rates. Sensitivity analysis indicates its robustness toward the choice of parameters. Its simplicity, robustness and low computation cost make the proposed local dynamic model an attractive tool for real-time prediction with system latencies below 0.4 s.

  3. On the estimation of risk associated with an attenuation prediction

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1992-01-01

    Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and an accounting for the inadequacy of a model or models.
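
    The record's definition of risk (the probability of one or more link failures during the lifetime) reduces to a one-line formula if failures in successive accounting intervals are assumed independent; the sketch below is illustrative and not from the presentation itself.

```python
def link_risk(p_fail_per_interval, n_intervals):
    """Probability of one or more link failures over the lifetime,
    assuming independent failure probabilities per accounting interval."""
    return 1.0 - (1.0 - p_fail_per_interval) ** n_intervals

# A 1% annual failure probability over a 15-year link lifetime:
risk = link_risk(0.01, 15)   # roughly 0.14, far above the per-year figure
```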

  4. Charge-coupled-device X-ray detector performance model

    NASA Technical Reports Server (NTRS)

    Bautz, M. W.; Berman, G. E.; Doty, J. P.; Ricker, G. R.

    1987-01-01

    A model that predicts the performance characteristics of CCD detectors being developed for use in X-ray imaging is presented. The model accounts for the interactions of both X-rays and charged particles with the CCD and simulates the transport and loss of charge in the detector. Predicted performance parameters include detective and net quantum efficiencies, split-event probability, and a parameter characterizing the effective thickness presented by the detector to cosmic-ray protons. The predicted performance of two CCDs of different epitaxial layer thicknesses is compared. The model predicts that in each device incomplete recovery of the charge liberated by a photon of energy between 0.1 and 10 keV is very likely to be accompanied by charge splitting between adjacent pixels. The implications of the model predictions for CCD data processing algorithms are briefly discussed.

  5. Ecological niche modeling for a cultivated plant species: a case study on taro (Colocasia esculenta) in Hawaii.

    PubMed

    Kodis, Mali'o; Galante, Peter; Sterling, Eleanor J; Blair, Mary E

    2018-04-26

    Under the threat of ongoing and projected climate change, communities in the Pacific Islands face challenges of adapting culture and lifestyle to accommodate a changing landscape. Few models can effectively predict how biocultural livelihoods might be impacted. Here, we examine how environmental and anthropogenic factors influence an ecological niche model (ENM) for the realized niche of cultivated taro (Colocasia esculenta) in Hawaii. We created and tuned two sets of ENMs: one using only environmental variables, and one using both environmental and cultural characteristics of Hawaii. These models were projected under two different Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathways (RCPs) for 2070. Models were selected and evaluated using average omission rate and area under the receiver operating characteristic curve (AUC). We compared optimal model predictions using the percentage of taro plots predicted present, and measured ENM overlap using Schoener's D statistic. The model including only environmental variables consisted of 19 Worldclim bioclimatic variables, in addition to slope, altitude, distance to perennial streams, soil evaporation, and soil moisture. The optimal model with environmental variables plus anthropogenic features also included a road density variable (which we assumed as a proxy for urbanization) and a variable indicating agricultural lands of importance to the state of Hawaii. The model including anthropogenic features performed better than the environment-only model based on omission rate, AUC, and review of spatial projections. The two models also differed in spatial projections for taro under anticipated future climate change. Our results demonstrate how ENMs including anthropogenic features can predict which areas might be best suited to planting cultivated species in the future, and how these areas could change under various climate projections. 
These predictions might inform biocultural conservation priorities and initiatives. In addition, we discuss the incongruences that arise when traditional ENM theory is applied to species whose distribution has been significantly impacted by human intervention, particularly at a fine scale relevant to biocultural conservation initiatives. © 2018 by the Ecological Society of America.
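
    Schoener's D, used in this record to quantify ENM overlap, treats the two suitability surfaces as probability distributions over the study area; a minimal sketch (array values are made up):

```python
import numpy as np

def schoeners_d(s1, s2):
    """Schoener's D niche-overlap statistic between two suitability
    surfaces, each normalized to sum to 1 over the study area.
    D ranges from 0 (no overlap) to 1 (identical distributions)."""
    p1 = s1 / s1.sum()
    p2 = s2 / s2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

a = np.array([0.2, 0.3, 0.5])
identical = schoeners_d(a, a.copy())                               # 1.0
disjoint = schoeners_d(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # 0.0
```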

  6. Prediction of new onset of end stage renal disease in Chinese patients with type 2 diabetes mellitus - a population-based retrospective cohort study.

    PubMed

    Wan, Eric Yuk Fai; Fong, Daniel Yee Tak; Fung, Colman Siu Cheung; Yu, Esther Yee Tak; Chin, Weng Yee; Chan, Anca Ka Chun; Lam, Cindy Lo Kuen

    2017-08-01

    Since diabetes mellitus (DM) is the leading cause of end stage renal disease (ESRD), this study aimed to develop a 5-year ESRD risk prediction model among Chinese patients with Type 2 DM (T2DM) in primary care. A retrospective cohort study was conducted on 149,333 Chinese adult T2DM primary care patients without ESRD in 2010. Using the derivation cohort over a median of 5 years follow-up, the gender-specific models including the interaction effect between predictors and age were derived using Cox regression with a forward stepwise approach. Harrell's C-statistic and calibration plot were applied to the validation cohort to assess discrimination and calibration of the models. Prediction models showed better discrimination, with Harrell's C-statistics of 0.866 (males) and 0.862 (females), and better calibration from the plots than other established models. The predictors included age, use of anti-hypertensive drugs, use of anti-glucose drugs, hemoglobin A1c, blood pressure, urine albumin/creatinine ratio (ACR) and estimated glomerular filtration rate (eGFR). Specific predictors for males were smoking and presence of sight-threatening diabetic retinopathy, while additional predictors for females included longer duration of diabetes and a quadratic effect of body mass index. Interaction factors with age showed a greater weighting of insulin and urine ACR in younger males, and of eGFR in younger females. Our newly developed gender-specific models provide more accurate 5-year ESRD risk predictions for Chinese diabetic primary care patients than other existing models. The models include several modifiable risk factors that clinicians can use to counsel patients, and to target in the delivery of care to patients.
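
    Harrell's C-statistic, used in this record to assess discrimination, can be computed directly from its pairwise definition: among usable pairs (one subject has the event first), count how often the earlier event carries the higher predicted risk. A minimal O(n²) sketch on toy data, not the study's implementation:

```python
def harrells_c(times, events, risks):
    """Harrell's C: fraction of usable pairs in which the subject with
    the earlier observed event has the higher predicted risk.
    events[i] is 1 if subject i had the event, 0 if censored."""
    conc = usable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is usable if i's event precedes j's follow-up time.
            if events[i] and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    conc += 1
                elif risks[i] == risks[j]:
                    conc += 0.5
    return conc / usable

# Perfectly concordant toy data: higher predicted risk, earlier event.
c = harrells_c([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1])   # 1.0
```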

  7. Toward Sub-seasonal to Seasonal Arctic Sea Ice Forecasting Using the Regional Arctic System Model (RASM)

    NASA Astrophysics Data System (ADS)

    Kamal, S.; Maslowski, W.; Roberts, A.; Osinski, R.; Cassano, J. J.; Seefeldt, M. W.

    2017-12-01

    The Regional Arctic System Model (RASM) has been developed and used to advance the current state of Arctic modeling and increase the skill of sea ice forecasts. RASM is a fully coupled, limited-area model that includes the atmosphere, ocean, sea ice, land hydrology and runoff routing components and the flux coupler to exchange information among them. Boundary conditions are derived from NCEP Climate Forecasting System Reanalyses (CFSR) or ERA-Interim (ERA-I) for hindcast simulations, or from the NCEP Coupled Forecast System Model version 2 (CFSv2) for seasonal forecasts. We have used RASM to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook (SIO) of the Sea Ice Prediction Network (SIPN). Each year, we produced three SIOs for the September minimum, initialized on June 1, July 1 and August 1. In 2016, predictions used a simple linear regression model to correct for systematic biases and included the mean September sea ice extent, the daily minimum and the week of the minimum. In 2017, we produced a 12-member ensemble on June 1 and July 1, and a 28-member ensemble on August 1. The predictions for September 2017 included the pan-Arctic and regional Alaskan sea ice extent, daily and monthly mean pan-Arctic maps of sea ice probability, concentration and thickness. No bias correction was applied to the 2017 forecasts. Finally, we will also discuss future plans for RASM forecasts, which include increased resolution for model components, ecosystem predictions with marine biogeochemistry extensions (mBGC) to the ocean and sea ice components, and the feasibility of optional boundary conditions using the Navy Global Environmental Model (NAVGEM).
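
    Ensemble sea ice probability maps of the kind mentioned in this record are commonly computed per grid cell as the fraction of ensemble members whose concentration exceeds the conventional 15% threshold. The sketch below is a generic illustration with hypothetical members, not RASM output:

```python
import numpy as np

def sea_ice_probability(ensemble_conc, threshold=0.15):
    """Per-grid-cell sea ice probability: fraction of ensemble members
    whose concentration exceeds the (conventional) 15% threshold.
    ensemble_conc has shape (members, cells)."""
    return (np.asarray(ensemble_conc) > threshold).mean(axis=0)

# Four hypothetical members on a three-cell grid.
members = np.array([
    [0.90, 0.20, 0.00],
    [0.80, 0.10, 0.00],
    [0.95, 0.30, 0.05],
    [0.70, 0.10, 0.00],
])
sip = sea_ice_probability(members)   # [1.0, 0.5, 0.0]
```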

  8. Modelling impacts of performance on the probability of reproducing, and thereby on productive lifespan, allow prediction of lifetime efficiency in dairy cows.

    PubMed

    Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C

    2016-01-01

    Reproductive success is a key component of lifetime efficiency, which is the ratio of energy in milk (MJ) to energy intake (MJ) over the lifespan of a cow. At the animal level, breeding and feeding management can substantially impact milk yield, body condition and energy balance of cows, which are known as major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes due to changing breeding and feeding strategies have on the probability of reproducing and thereby on the productive lifespan, and thus allow the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein that were fed high- v. low-concentrate diets were used. Generally, the model consistently simulated productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was comparably simulated, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects. Further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.

  9. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.

  10. TOPEX/POSEIDON orbit maintenance maneuver design

    NASA Technical Reports Server (NTRS)

    Bhat, R. S.; Frauenholz, R. B.; Cannell, Patrick E.

    1990-01-01

    The Ocean Topography Experiment (TOPEX/POSEIDON) mission orbit requirements are outlined, as well as its control and maneuver spacing requirements including longitude and time targeting. A ground-track prediction model dealing with geopotential, luni-solar gravity, and atmospheric-drag perturbations is considered. Targeting with all modeled perturbations is discussed, and such ground-track prediction errors as initial semimajor axis, orbit-determination, maneuver-execution, and atmospheric-density modeling errors are assessed. A longitude targeting strategy for two extreme situations is investigated employing all modeled perturbations and prediction errors. It is concluded that atmospheric-drag modeling errors are the prevailing ground-track prediction error source early in the mission during high solar flux, and that low solar-flux levels expected late in the experiment stipulate smaller maneuver magnitudes.

  11. Mathematical modeling to predict residential solid waste generation.

    PubMed

    Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de

    2008-01-01

    One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, as well as trying to charge rates compatible with the principle applied worldwide, and design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research work was to establish mathematical models that correlate the generation of RSW per capita to the following variables: education, income per household, and number of residents. This work was based on data from a study on generation, quantification and composition of residential waste in a Mexican city in three stages. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation to predict residential solid waste generation. Models were then established to explore combinations of the included variables and to select those which showed a higher R(2). The tests applied were normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total.
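
    The Durbin-Watson test applied in this record has a simple closed form on the regression residuals, with values near 2 indicating little first-order autocorrelation; a minimal sketch (the residuals are illustrative):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive differences of
    the residuals divided by their sum of squares. Near 2 means little
    first-order autocorrelation; near 0 positive, near 4 negative."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Alternating residuals (strong negative autocorrelation) push DW toward 4.
dw = durbin_watson([1, -1, 1, -1, 1, -1])
```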

  12. Air pollution exposure prediction approaches used in air pollution epidemiology studies.

    PubMed

    Özkaynak, Halûk; Baxter, Lisa K; Dionisio, Kathie L; Burke, Janet

    2013-01-01

    Epidemiological studies of the health effects of outdoor air pollution have traditionally relied upon surrogates of personal exposures, most commonly ambient concentration measurements from central-site monitors. However, this approach may introduce exposure prediction errors and misclassification of exposures for pollutants that are spatially heterogeneous, such as those associated with traffic emissions (e.g., carbon monoxide, elemental carbon, nitrogen oxides, and particulate matter). We review alternative air quality and human exposure metrics applied in recent air pollution health effect studies discussed during the International Society of Exposure Science 2011 conference in Baltimore, MD. Symposium presenters considered various alternative exposure metrics, including: central site or interpolated monitoring data, regional pollution levels predicted using the national scale Community Multiscale Air Quality model or from measurements combined with local-scale (AERMOD) air quality models, hybrid models that include satellite data, statistically blended modeling and measurement data, concentrations adjusted by home infiltration rates, and population-based human exposure model (Stochastic Human Exposure and Dose Simulation, and Air Pollutants Exposure models) predictions. These alternative exposure metrics were applied in epidemiological applications to health outcomes, including daily mortality and respiratory hospital admissions, daily hospital emergency department visits, daily myocardial infarctions, and daily adverse birth outcomes. This paper summarizes the research projects presented during the symposium, with full details of the work presented in individual papers in this journal issue.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branstator, Grant

    The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing – for example, changes in greenhouse gas concentration. Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include:
    • Development of novel methods for estimating the predictability of climate forecast models.
    • Quantification of the initial value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models. These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content.
    • Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections.
    • Estimation of the predictability of dominant patterns of ocean variability including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years.
    • Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.

  14. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    PubMed

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and regularly updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
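
    The validation summary in this record (median correlation between predicted and observed prevalences across outcomes) is straightforward to compute; the sketch below uses toy numbers, not the study's data, and assumes rows are outcomes and columns are regions:

```python
import numpy as np

def median_validation_correlation(observed, predicted):
    """Median Pearson correlation across outcomes between observed and
    predicted regional prevalences (rows = outcomes, cols = regions)."""
    corrs = [np.corrcoef(o, p)[0, 1] for o, p in zip(observed, predicted)]
    return float(np.median(corrs))

# Two toy outcomes over four regions; predictions track observations closely.
obs = np.array([[10.0, 12.0, 14.0, 16.0], [5.0, 6.0, 7.0, 9.0]])
pred = np.array([[10.5, 11.8, 14.2, 15.9], [5.2, 6.1, 6.8, 8.8]])
r = median_validation_correlation(obs, pred)
```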

  15. Clinical prediction of functional outcome after ischemic stroke: the surprising importance of periventricular white matter disease and race.

    PubMed

    Kissela, Brett; Lindsell, Christopher J; Kleindorfer, Dawn; Alwell, Kathleen; Moomaw, Charles J; Woo, Daniel; Flaherty, Matthew L; Air, Ellen; Broderick, Joseph; Tsevat, Joel

    2009-02-01

    We sought to build models that address questions of interest to patients and families by predicting short- and long-term mortality and functional outcome after ischemic stroke, while allowing for risk restratification as comorbid events accumulate. A cohort of 451 ischemic stroke subjects in 1999 were interviewed during hospitalization, at 3 months, and at approximately 4 years. Medical records from the acute hospitalization were abstracted. All hospitalizations for 3 months poststroke were reviewed to ascertain medical and psychiatric comorbidities, which were categorized for analysis. Multivariable models were derived to predict mortality and functional outcome (modified Rankin Scale) at 3 months and 4 years. Comorbidities were included as modifiers of the 3-month models, and included in 4-year predictions. Poststroke medical and psychiatric comorbidities significantly increased short-term poststroke mortality and morbidity. Severe periventricular white matter disease (PVWMD) was significantly associated with poor functional outcome at 3 months, independent of other factors, such as diabetes and age; inclusion of this imaging variable eliminated other traditional risk factors often found in stroke outcomes models. Outcome at 3 months was a significant predictor of long-term mortality and functional outcome. Black race was a predictor of 4-year mortality. We propose that predictive models for stroke outcome, as well as analysis of clinical trials, should include adjustment for comorbid conditions. The effects of PVWMD on short-term functional outcomes and black race on long-term mortality are findings that require confirmation.

  16. An Update on Experimental Climate Prediction and Analysis Products Being Developed at NASA's Global Modeling and Assimilation Office

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried

    2011-01-01

    The Global Modeling and Assimilation Office at NASA's Goddard Space Flight Center is developing a number of experimental prediction and analysis products suitable for research and applications. The prediction products include a large suite of subseasonal and seasonal hindcasts and forecasts (as a contribution to the US National MME), a suite of decadal (10-year) hindcasts (as a contribution to the IPCC decadal prediction project), and a series of large ensemble and high resolution simulations of selected extreme events, including the 2010 Russian and 2011 US heat waves. The analysis products include an experimental atlas of climate (in particular drought) and weather extremes. This talk will provide an update on those activities, and discuss recent efforts by WCRP to leverage off these and similar efforts at other institutions throughout the world to develop an experimental global drought early warning system.

  17. Modelling milk production from feed intake in dairy cattle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarke, D.L.

    1985-05-01

    Predictive models were developed for both Holstein and Jersey cows. Since Holsteins comprised eighty-five percent of the data, the predictive models developed for Holsteins were used for the development of a user-friendly computer model. Predictive models included: milk production (squared multiple correlation .73), natural log (ln) of milk production (.73), four percent fat-corrected milk (.67), ln four percent fat-corrected milk (.68), fat-free milk (.73), ln fat-free milk (.73), dry matter intake (.61), ln dry matter intake (.60), milk fat (.52), and ln milk fat (.56). The predictive models for ln milk production, ln fat-free milk and ln dry matter intake were incorporated into a computer model. The model was written in standard Fortran for use on mainframe or micro-computers. Daily milk production, fat-free milk production, and dry matter intake were predicted on a daily basis with the previous day's dry matter intake serving as an independent variable in the prediction of the daily milk and fat-free milk production. 21 refs.

  18. Development of a Melanoma Risk Prediction Model Incorporating MC1R Genotype and Indoor Tanning Exposure: Impact of Mole Phenotype on Model Performance

    PubMed Central

    Penn, Lauren A.; Qian, Meng; Zhang, Enhan; Ng, Elise; Shao, Yongzhao; Berwick, Marianne; Lazovich, DeAnn; Polsky, David

    2014-01-01

    Background Identifying individuals at increased risk for melanoma could potentially improve public health through targeted surveillance and early detection. Studies have separately demonstrated significant associations between melanoma risk, melanocortin-1 receptor (MC1R) polymorphisms, and indoor ultraviolet light (UV) exposure. Existing melanoma risk prediction models do not include these factors; therefore, we investigated their potential to improve the performance of a risk model. Methods Using 875 melanoma cases and 765 controls from the population-based Minnesota Skin Health Study, we compared the predictive ability of a clinical melanoma risk model (Model A) to an enhanced model (Model F) using receiver operating characteristic (ROC) curves. Model A used self-reported conventional risk factors including mole phenotype categorized as “none”, “few”, “some” or “many” moles. Model F added MC1R genotype and measures of indoor and outdoor UV exposure to Model A. We also assessed the predictive ability of these models in subgroups stratified by mole phenotype (e.g. nevus-resistant (“none” and “few” moles) and nevus-prone (“some” and “many” moles)). Results Model A (the reference model) yielded an area under the ROC curve (AUC) of 0.72 (95% CI = 0.69, 0.74). Model F was improved, with an AUC of 0.74 (95% CI = 0.71–0.76, p<0.01). We also observed substantial variations in the AUCs of Models A and F when examined in the nevus-prone and nevus-resistant subgroups. Conclusions These results demonstrate that adding genotypic information and environmental exposure data can increase the predictive ability of a clinical melanoma risk model, especially among nevus-prone individuals. PMID:25003831
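
    The AUC comparison in this record rests on the equivalence between the area under the ROC curve and the probability that a randomly chosen case outscores a randomly chosen control; a minimal sketch on made-up scores, not the study's data:

```python
def auc(case_scores, control_scores):
    """Empirical AUC: probability a random case scores higher than a
    random control (ties count 0.5); equivalent to the area under the
    ROC curve for these scores."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

a = auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])   # 8 of 9 pairs concordant
```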

  19. (Q)SARs to predict environmental toxicities: current status and future needs.

    PubMed

    Cronin, Mark T D

    2017-03-22

    The current state of the art of (Quantitative) Structure-Activity Relationships ((Q)SARs) to predict environmental toxicity is assessed along with recommendations to develop these models further. The acute toxicity of compounds acting by the non-polar narcotic mechanism of action can be well predicted, however other approaches, including read-across, may be required for compounds acting by specific mechanisms of action. The chronic toxicity of compounds to environmental species is more difficult to predict from (Q)SARs, with robust data sets and more mechanistic information required. In addition, the toxicity of mixtures is little addressed by (Q)SAR approaches. Developments in environmental toxicology including Adverse Outcome Pathways (AOPs) and omics responses should be utilised to develop better, more mechanistically relevant, (Q)SAR models.

  20. ADOT state-specific crash prediction models : an Arizona needs study.

    DOT National Transportation Integrated Search

    2016-12-01

    The predictive method in the Highway Safety Manual (HSM) includes a safety performance function (SPF), : crash modification factors (CMFs), and a local calibration factor (C), if available. Two alternatives exist for : applying the HSM prediction met...

  1. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    PubMed

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and the intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  2. Flexible parametric survival models built on age-specific antimüllerian hormone percentiles are better predictors of menopause.

    PubMed

    Ramezani Tehrani, Fahimeh; Mansournia, Mohammad Ali; Solaymani-Dodaran, Masoud; Steyerberg, Ewout; Azizi, Fereidoun

    2016-06-01

    This study aimed to improve existing prediction models for age at menopause. We identified all reproductive aged women with regular menstrual cycles who met our eligibility criteria (n = 1,015) in the Tehran Lipid and Glucose Study-an ongoing population-based cohort study initiated in 1998. Participants were examined every 3 years and their reproductive histories were recorded. Blood levels of antimüllerian hormone (AMH) were measured at the time of recruitment. Age at menopause was estimated based on serum concentrations of AMH using flexible parametric survival models. The optimum model was selected according to Akaike Information Criteria and the plausibility of the range of predicted median menopause age. We followed study participants for a median of 9.8 years during which 277 women reached menopause and found that a spline-based proportional odds model including age-specific AMH percentiles as the covariate performed well in terms of statistical criteria and provided the most clinically relevant and realistic predictions. The range of predicted median age at menopause for this model was 47.1 to 55.9 years. For those who reached menopause, the median of the absolute mean difference between actual and predicted age at menopause was 1.9 years (interquartile range 2.9). The model including the age-specific AMH percentiles as the covariate, with proportional odds as its scale, meets all the statistical criteria for the best model and provides the most clinically relevant and realistic predictions for age at menopause for reproductive-aged women.
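
    The error summary reported in this record (median absolute difference between actual and predicted age at menopause, with its interquartile range) is straightforward to compute; a sketch on invented values, not the cohort's data:

```python
import numpy as np

def median_abs_error(actual, predicted):
    """Median and interquartile range of the absolute differences
    between actual and predicted values (e.g. age at menopause)."""
    diff = np.abs(np.asarray(actual, float) - np.asarray(predicted, float))
    q1, med, q3 = np.percentile(diff, [25, 50, 75])
    return med, q3 - q1

med, iqr = median_abs_error([50, 52, 49, 55], [51, 50, 50, 52])
```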

  3. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
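
    The 'internal-external cross-validation' described in this record's conclusions (develop the model on all but one study, validate on the omitted study, rotate) can be sketched with an ordinary least-squares model standing in for the risk model; the study data below are simulated and all names are assumptions:

```python
import numpy as np

def internal_external_cv(studies):
    """Internal-external cross-validation over a list of (X, y) studies:
    fit on all studies but one, evaluate on the held-out study, rotate.
    Returns the per-study validation mean squared error."""
    errors = []
    for holdout in range(len(studies)):
        X_dev = np.vstack([X for i, (X, _) in enumerate(studies) if i != holdout])
        y_dev = np.hstack([y for i, (_, y) in enumerate(studies) if i != holdout])
        beta, *_ = np.linalg.lstsq(X_dev, y_dev, rcond=None)
        X_val, y_val = studies[holdout]
        errors.append(float(np.mean((X_val @ beta - y_val) ** 2)))
    return errors

# Three simulated studies sharing the same underlying linear relation.
rng = np.random.default_rng(1)
studies = []
for _ in range(3):
    X = np.column_stack([np.ones(40), rng.normal(size=40)])  # intercept + one predictor
    y = 2.0 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=40)
    studies.append((X, y))
mses = internal_external_cv(studies)   # one validation MSE per held-out study
```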

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
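The abstract's central tendency, parameter uncertainty shrinking as the constraining data record lengthens, follows the familiar 1/sqrt(n) behaviour of a standard error. A toy illustration with hypothetical annual flux values (not LoTEC output, and far simpler than a real model-data fusion):

```python
import statistics

def standard_error(sample):
    """Standard error of the mean: shrinks as the constraining record grows."""
    return statistics.stdev(sample) / len(sample) ** 0.5

# Hypothetical annual net carbon flux "observations" (gC m^-2 yr^-1)
flux_record = [-310, -275, -342, -298, -330, -289, -315, -305, -322, -296]

for n_years in (2, 5, 10):
    print(n_years, "years: SE =", round(standard_error(flux_record[:n_years]), 1))
```

A real fusion constrains many correlated parameters at once, so the decline is not exactly 1/sqrt(n), but the qualitative flattening beyond a few years matches the abstract's finding.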

  5. Quantitative imaging features of pretreatment CT predict volumetric response to chemotherapy in patients with colorectal liver metastases.

    PubMed

    Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L

    2018-06-19

    This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic, or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of the data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies.
    • Colorectal liver metastases (CRLM) are downsized with chemotherapy, but predicting which patients will respond to chemotherapy is currently not possible.
    • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging.
    • A prediction model was constructed that predicts volumetric response with roughly 20% error, suggesting that quantitative imaging holds promise for better selecting patients for specific treatments.
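The evaluation pipeline above can be sketched under strong simplifying assumptions: a one-feature least-squares fit on a 70/30 split, scored with mean absolute prediction error (MAPE) in percentage points of volume change. The feature and response values are fabricated for illustration; the study used many features and a multivariate model.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for a single predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def mape(y_true, y_pred):
    """Mean absolute prediction error, in the same units as the response
    (here: percentage-point change in tumour volume)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical data: one texture feature vs. % volume change; 70/30 split
feature  = [0.2, 0.5, 0.9, 1.4, 1.8, 2.3, 2.9, 3.1, 3.6, 4.0]
response = [-5, -12, -20, -31, -38, -50, -61, -66, -74, -83]
train_x, test_x = feature[:7], feature[7:]
train_y, test_y = response[:7], response[7:]

slope, intercept = fit_linear(train_x, train_y)
preds = [slope * x + intercept for x in test_x]
print("test MAPE:", round(mape(test_y, preds), 1))
```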

  6. Validation of asphalt mixture pavement skid prediction model and development of skid prediction model for surface treatments.

    DOT National Transportation Integrated Search

    2017-04-01

    Pavement skid resistance is primarily a function of the surface texture, which includes both microtexture and macrotexture. Earlier, under the Texas Department of Transportation (TxDOT) Research Project 0-5627, the researchers developed a method to p...

  7. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    NASA Astrophysics Data System (ADS)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as the main equipment. The experimental results show significant differences in the behavior of the plant components, mainly in terms of energy use, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give valid criteria for selecting an appropriate stochastic predictive controller.

  8. Long-term prediction of fish growth under varying ambient temperature using a multiscale dynamic model

    PubMed Central

    2009-01-01

    Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354

  9. A Predictive Model for Readmissions Among Medicare Patients in a California Hospital.

    PubMed

    Duncan, Ian; Huynh, Nhan

    2017-11-17

    Predictive models for hospital readmission rates are in high demand because of the Centers for Medicare & Medicaid Services (CMS) Hospital Readmission Reduction Program (HRRP). The LACE index is one of the most popular predictive tools among hospitals in the United States. The LACE index is a simple tool with 4 parameters: Length of stay, Acuity of admission, Comorbidity, and Emergency visits in the previous 6 months. The authors applied logistic regression to develop a predictive model for a medium-sized not-for-profit community hospital in California using patient-level data with more specific patient information (including 13 explanatory variables). Specifically, the logistic regression is applied to 2 populations: a general population including all patients and the specific group of patients targeted by the CMS penalty (characterized as ages 65 or older with select conditions). The 2 resulting logistic regression models have a higher sensitivity rate compared to the sensitivity of the LACE index. The C statistic values of the model applied to both populations demonstrate moderate levels of predictive power. The authors also build an economic model to demonstrate the potential financial impact of the use of the model for targeting high-risk patients in a sample hospital and demonstrate that, on balance, whether the hospital gains or loses from reducing readmissions depends on its margin and the extent of its readmission penalties.
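For reference, the LACE index described above can be computed from its four components. The point assignments below follow the commonly cited van Walraven scheme; they are an assumption here and should be verified against a local implementation before any operational use.

```python
def lace_score(los_days, acute_admission, charlson, ed_visits_6mo):
    """LACE readmission-risk index (commonly cited point assignments;
    verify against the local implementation before clinical use)."""
    # L: length of stay
    if los_days < 1:      l = 0
    elif los_days == 1:   l = 1
    elif los_days == 2:   l = 2
    elif los_days == 3:   l = 3
    elif los_days <= 6:   l = 4
    elif los_days <= 13:  l = 5
    else:                 l = 7
    a = 3 if acute_admission else 0        # A: acuity of admission
    c = charlson if charlson <= 3 else 5   # C: Charlson comorbidity points
    e = min(ed_visits_6mo, 4)              # E: ED visits in previous 6 months
    return l + a + c + e

print(lace_score(5, True, 2, 1))  # 4 + 3 + 2 + 1 = 10
```

The simplicity of this four-term sum is exactly why the authors' 13-variable logistic model can outperform it on sensitivity: LACE trades information for ease of calculation.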

  10. Impacts of Considering Climate Variability on Investment Decisions in Ethiopia

    NASA Astrophysics Data System (ADS)

    Strzepek, K.; Block, P.; Rosegrant, M.; Diao, X.

    2005-12-01

    In Ethiopia, climate extremes, including droughts and floods, are not unusual. Monitoring the effects of these extremes, and of climate variability in general, is critical for economic prediction and assessment of the country's future welfare. The focus of this study is adding climate variability to a deterministic, mean-climate-driven agro-economic model, in an attempt to understand its effects and degree of influence on general economic prediction indicators for Ethiopia. Four simulations are examined, including a baseline simulation and three investment strategies: simulations of irrigation investment, roads investment, and a combined investment in both irrigation and roads. The deterministic model is transformed into a stochastic model by dynamically adding year-to-year climate variability through climate-yield factors. Nine sets of actual, historic, variable climate data are individually assembled and implemented into the 12-year stochastic model simulation, producing an ensemble of economic prediction indicators. This ensemble allows for a probabilistic approach to planning and policy making, allowing decision makers to consider risk. The economic indicators from the deterministic and stochastic approaches, including rates of return to investments, are significantly different. The predictions of the deterministic model appreciably overestimate the future welfare of Ethiopia; the predictions of the stochastic model, utilizing actual climate data, tend to give a better semblance of what may be expected. Inclusion of climate variability is vital for proper analysis of the predictor values from this agro-economic model.
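The transformation described, from a deterministic mean-climate run to a stochastic ensemble driven by historical climate-yield factors, can be sketched as follows. The growth rate and factor sequences are hypothetical stand-ins, not the study's data or model.

```python
# Sketch: turn a deterministic projection into a stochastic ensemble by
# applying year-to-year climate-yield factors from historical sequences.

BASE_GROWTH = 1.04  # hypothetical deterministic mean-climate growth per year
YEARS = 12

# Each row: one hypothetical 12-year historical sequence of climate-yield factors
historical_factors = [
    [1.00, 0.92, 1.05, 1.01, 0.85, 1.08, 1.00, 0.95, 1.03, 0.90, 1.06, 1.00],
    [0.97, 1.02, 0.88, 1.04, 1.00, 0.93, 1.07, 1.00, 0.91, 1.05, 0.99, 1.02],
    [1.03, 0.95, 1.00, 0.89, 1.06, 1.01, 0.94, 1.08, 1.00, 0.96, 0.90, 1.04],
]

def simulate(factors):
    welfare = 1.0
    for f in factors:
        welfare *= BASE_GROWTH * f  # climate factor scales each year's growth
    return welfare

deterministic = BASE_GROWTH ** YEARS
ensemble = [simulate(seq) for seq in historical_factors]
print("deterministic:", round(deterministic, 3))
print("ensemble mean:", round(sum(ensemble) / len(ensemble), 3))
```

Because bad years compound multiplicatively, the ensemble mean falls below the mean-climate run, which is the abstract's point about the deterministic model overestimating future welfare.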

  11. Risk prediction for chronic kidney disease progression using heterogeneous electronic health record data and time series analysis.

    PubMed

    Perotte, Adler; Ranganath, Rajesh; Hirsch, Jamie S; Blei, David; Elhadad, Noémie

    2015-07-01

    As adoption of electronic health records continues to increase, there is an opportunity to incorporate clinical documentation as well as laboratory values and demographics into risk prediction modeling. The authors develop a risk prediction model for chronic kidney disease (CKD) progression from stage III to stage IV that includes longitudinal data and features drawn from clinical documentation. The study cohort consisted of 2908 primary-care clinic patients who had at least three visits prior to January 1, 2013 and developed CKD stage III during their documented history. Development and validation cohorts were randomly selected from this cohort, and the study datasets included longitudinal inpatient and outpatient data from these populations. Time series analysis (Kalman filter) and survival analysis (Cox proportional hazards) were combined to produce a range of risk models. These models were evaluated using concordance, a discriminatory statistic. A risk model incorporating longitudinal data on clinical documentation and laboratory test results (concordance 0.849) predicts progression from stage III CKD to stage IV CKD more accurately than a similar model without laboratory test results (concordance 0.733, P < .001), a model that only considers the most recent laboratory test results (concordance 0.819, P < .031), and a model based on estimated glomerular filtration rate (concordance 0.779, P < .001). A risk prediction model that takes longitudinal laboratory test results and clinical documentation into consideration can predict CKD progression from stage III to stage IV more accurately than three models that do not take all of these variables into consideration. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
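The concordance statistic used to compare these models can be computed directly. Below is a plain-Python sketch of Harrell's C for censored time-to-event data, applied to a toy cohort (not study data).

```python
def concordance(times, events, risks):
    """Harrell's C: fraction of usable pairs where the higher predicted
    risk belongs to the patient who progresses sooner. Ties in risk count 0.5."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue
            first, second = (i, j) if times[i] < times[j] else (j, i)
            if not events[first]:
                continue  # earlier subject censored: pair not usable
            usable += 1
            if risks[first] > risks[second]:
                concordant += 1
            elif risks[first] == risks[second]:
                concordant += 0.5
    return concordant / usable

# Toy cohort: months to stage IV (or censoring), event flag, model risk score
times  = [14, 30, 22, 40, 9]
events = [1,  0,  1,  0,  1]
risks  = [0.8, 0.2, 0.5, 0.1, 0.9]
print(round(concordance(times, events, risks), 3))
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect ordering, which is why the gap between 0.849 and 0.733 in the abstract is a meaningful improvement.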

  12. A Bayesian prediction model between a biomarker and the clinical endpoint for dichotomous variables.

    PubMed

    Jiang, Zhiwei; Song, Yang; Shou, Qiong; Xia, Jielai; Wang, William

    2014-12-20

    Early biomarkers are helpful for predicting clinical endpoints and for evaluating efficacy in clinical trials, even if the biomarker cannot replace the clinical outcome as a surrogate. The building and the evaluation of an association model between biomarkers and clinical outcomes are two equally important concerns in predicting clinical outcome. This paper addresses both issues in a Bayesian framework. A Bayesian meta-analytic approach is proposed to build a prediction model between the biomarker and the clinical endpoint for dichotomous variables. Compared with other Bayesian methods, the proposed model only requires trial-level summary data of historical trials for model building. Using extensive simulations, we evaluate the link function and the application conditions of the proposed Bayesian model under scenarios of (i) equal positive predictive value (PPV) and negative predictive value (NPV) and (ii) higher NPV and lower PPV. In the simulations, patient-level data are generated to evaluate the meta-analytic model, and PPV and NPV are employed to describe the patient-level relationship between the biomarker and the clinical outcome. The minimum number of historical trials to be included in building the model is also considered. The simulations show that the logit link function performs better than the odds and cloglog functions under both scenarios. PPV/NPV ≥ 0.5 for equal PPV and NPV, and PPV + NPV ≥ 1 for higher NPV and lower PPV, are proposed in order to predict the clinical outcome accurately and precisely when the proposed model is used. Twenty historical trials are required for model building when PPV and NPV are equal; for unequal PPV and NPV, the minimum number of historical trials is proposed to be five. A hypothetical example shows an application of the proposed model in global drug development. The proposed Bayesian model predicts the clinical endpoint well from observed biomarker data for dichotomous variables as long as these conditions are satisfied, and it could be applied in drug development; practical problems in application, however, require further research.
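One way to read the trial-level link in this abstract: by the law of total probability, an observed biomarker response rate maps to a predicted clinical response rate through PPV and NPV. The arithmetic sketch below uses illustrative values and is not the paper's full Bayesian meta-analytic model; the PPV + NPV ≥ 1 guard mirrors the condition proposed in the abstract.

```python
# Law of total probability:
#   P(clinical+) = PPV * P(bio+) + (1 - NPV) * (1 - P(bio+))

def predicted_clinical_rate(biomarker_rate, ppv, npv):
    if ppv + npv < 1:
        raise ValueError("PPV + NPV >= 1 is required for a useful prediction")
    return ppv * biomarker_rate + (1 - npv) * (1 - biomarker_rate)

print(predicted_clinical_rate(0.4, ppv=0.7, npv=0.8))
```

When PPV + NPV = 1 the predicted rate no longer depends on the biomarker at all, which is why the condition marks the boundary of a useful biomarker.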

  13. Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Rubin, Y.

    2014-12-01

    There are several stages in any hydrological modeling campaign, including formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include how to allocate limited remediation resources between contaminated groundwater sites, or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models, as well as on the levels of uncertainty attached to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign, and this significance gives a rational basis for determining the optimality of the campaign. This hypothesis-testing framework is demonstrated and discussed using various synthetic case studies involving contaminated aquifers, where a decision must be made based on a prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis-testing framework: the null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance resulting from a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
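A minimal sketch of the decision step: given an ensemble of travel-time realizations conditioned on a proposed field campaign, the attained significance against the null hypothesis (arrival before the critical time) can be estimated as the fraction of realizations that contradict it. The travel times below are hypothetical, and this Monte Carlo shortcut stands in for the likelihood-based inversion and conditional particle tracking named in the abstract.

```python
T_CRITICAL = 25.0  # years (hypothetical)

def significance(travel_times, t_crit):
    """Fraction of conditional realizations arriving after t_crit."""
    late = sum(1 for t in travel_times if t > t_crit)
    return late / len(travel_times)

# Hypothetical conditional travel-time ensembles for two candidate campaigns
campaign_a = [18, 21, 23, 24, 26, 19, 22, 27, 20, 23]  # sparse sampling
campaign_b = [21, 22, 23, 24, 22, 23, 21, 24, 22, 23]  # denser sampling

print("campaign A:", significance(campaign_a, T_CRITICAL))
print("campaign B:", significance(campaign_b, T_CRITICAL))
```

The denser campaign narrows the conditional ensemble, so fewer realizations straddle the critical time and the test outcome becomes less ambiguous; this is the sense in which significance ranks candidate campaigns.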

  14. Studying Individual Differences in Predictability with Gamma Regression and Nonlinear Multilevel Models

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2010-01-01

    Statistical prediction remains an important tool for decisions in a variety of disciplines. An equally important issue is identifying factors that contribute to more or less accurate predictions. The time series literature includes well developed methods for studying predictability and volatility over time. This article develops…

  15. Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program

    NASA Technical Reports Server (NTRS)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.

    2017-01-01

    Accurate predictions of the thermodynamic state of the cryogenic propellants, pressurization rate, and performance of pressure control techniques in cryogenic tanks are required for development of cryogenic fluid long-duration storage technology and planning for future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant for long-term storage for future space missions. Utilizing TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with test data and examples of multiphase mission calculations.
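TankSIM's multiphase treatment (boiloff, venting, mixing, wall condensation) is far richer than anything shown here, but the self-pressurization trend can be illustrated with the constant-volume energy balance for an ideal-gas ullage, dP/dt = (gamma - 1) * Qdot / V. All numbers below are hypothetical, and the single-node ideal-gas assumption is a gross simplification.

```python
GAMMA = 1.4    # ratio of specific heats (diatomic ideal gas, assumed)
Q_DOT = 10.0   # steady heat leak into the ullage, W (hypothetical)
VOLUME = 1.0   # ullage volume, m^3 (hypothetical)

def pressurize(p0, seconds, dt=1.0):
    """Forward-Euler integration of dP/dt = (gamma - 1) * Qdot / V
    for a closed, constant-volume ideal-gas ullage."""
    p, t = p0, 0.0
    while t < seconds:
        p += (GAMMA - 1.0) * Q_DOT / VOLUME * dt
        t += dt
    return p

p_final = pressurize(100_000.0, 1000.0)  # 100 kPa initial, 1000 s of heating
print(round(p_final), "Pa")
```

With a constant heat leak the pressure rise is linear in time; real tanks deviate because the heat splits between liquid, vapor, and wall, which is exactly what TankSIM's multi-node treatment captures.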

  16. [Risk factor analysis of the patients with solitary pulmonary nodules and establishment of a prediction model for the probability of malignancy].

    PubMed

    Wang, X; Xu, Y H; Du, Z Y; Qian, Y J; Xu, Z H; Chen, R; Shi, M H

    2018-02-23

    Objective: This study aims to analyze the relationship among the clinical features, radiologic characteristics and pathological diagnosis in patients with solitary pulmonary nodules (SPN), and to establish a prediction model for the probability of malignancy. Methods: Clinical data of 372 patients with solitary pulmonary nodules who underwent surgical resection with definite postoperative pathological diagnosis were retrospectively analyzed. In these cases, we collected clinical and radiologic features including gender, age, smoking history, history of tumor, family history of cancer, location of the lesion, ground-glass opacity, maximum diameter, calcification, vessel convergence sign, vacuole sign, pleural indentation, spiculation and lobulation. The cases were divided into a modeling group (268 cases) and a validation group (104 cases). A new prediction model was established by logistic regression analysis of the data from the modeling group. The data of the validation group were then used to validate the efficiency of the new model, which was compared with three classical models (Mayo model, VA model and LiYun model). With the probability values calculated from each model for the validation group, SPSS 22.0 was used to draw receiver operating characteristic curves to assess the predictive value of the new model. Results: 112 benign SPNs and 156 malignant SPNs were included in the modeling group. Multivariable logistic regression analysis showed that gender, age, history of tumor, ground-glass opacity, maximum diameter, and spiculation were independent predictors of malignancy in patients with SPN (P < 0.05). The fitted prediction model for the probability of malignancy is p = e^x/(1 + e^x), where x = -4.8029 - 0.743×gender + 0.057×age + 1.306×history of tumor + 1.305×ground-glass opacity + 0.051×maximum diameter + 1.043×spiculation.
    When the data of the validation group were applied to the four mathematical prediction models, the area under the curve for our model was 0.742, greater than that of the other models (Mayo 0.696, VA 0.634, LiYun 0.681), although the differences between any two of the four models were not significant (P > 0.05). Conclusions: Age, gender, history of tumor, ground-glass opacity, maximum diameter and spiculation are independent predictors of malignancy in patients with solitary pulmonary nodules. This logistic regression prediction model is not inferior to the classical models in estimating the probability of malignancy of SPNs.
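The fitted logistic model reported in this abstract can be implemented directly. Two caveats: the abstract's "speculation" is read here as spiculation, the radiologic sign, and the coding of the binary covariates (e.g. which gender is coded 1) is an assumption; check the original paper before any use.

```python
import math

# Coefficients as reported in the abstract; covariate codings are assumptions.
COEF = {
    "intercept": -4.8029,
    "gender": -0.743,           # binary; coding assumed, not stated here
    "age": 0.057,               # years
    "history_of_tumor": 1.306,  # 1 = yes
    "ground_glass": 1.305,      # 1 = ground-glass opacity present
    "max_diameter": 0.051,      # mm
    "spiculation": 1.043,       # 1 = present
}

def malignancy_probability(**features):
    """p = e^x / (1 + e^x), the model's logistic link."""
    x = COEF["intercept"] + sum(COEF[k] * v for k, v in features.items())
    return math.exp(x) / (1 + math.exp(x))

p = malignancy_probability(gender=0, age=65, history_of_tumor=1,
                           ground_glass=1, max_diameter=18, spiculation=1)
print(round(p, 3))
```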

  17. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139
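A sketch of the probabilistic idea described above: sample the uncertain inputs over uniform ranges (a simple design of experiments), run the model for each draw, and report the prediction as a min-max range rather than a single deterministic value. The one-line surrogate below stands in for the finite element model, and all parameter ranges and the 0.9/0.5 constants are hypothetical.

```python
import itertools

# Uniformly spaced levels for two uncertain inputs (hypothetical ranges)
poisson_levels = [0.30, 0.40, 0.45]   # soft-tissue Poisson ratio
stiffness_levels = [0.5, 1.0, 2.0]    # relative tissue stiffness

def soft_tissue_shift(advancement_mm, poisson, stiffness):
    """Toy surrogate: predicted lip shift as a fraction of bony advancement."""
    return advancement_mm * (0.9 - 0.5 * poisson) / stiffness

# Full-factorial design of experiments over the two uncertain inputs
runs = [soft_tissue_shift(6.0, nu, k)
        for nu, k in itertools.product(poisson_levels, stiffness_levels)]
print(f"predicted shift range: {min(runs):.2f} to {max(runs):.2f} mm")
```

Reporting the spread of the runs, instead of a single number, is what gives the patient-facing "minimum and maximum" prediction the abstract describes.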

  18. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  19. Coupling centennial-scale shoreline change to sea-level rise and coastal morphology in the Gulf of Mexico using a Bayesian network

    USGS Publications Warehouse

    Plant, Nathaniel G.

    2016-01-01

    Predictions of coastal evolution driven by episodic and persistent processes associated with storms and relative sea-level rise (SLR) are required to test our understanding, evaluate our predictive capability, and provide guidance for coastal management decisions. Previous work demonstrated that the spatial variability of long-term shoreline change can be predicted using observed SLR rates, tide range, wave height, coastal slope, and a characterization of the geomorphic setting. The shoreline alone is not sufficient to indicate which processes are important in causing shoreline change, such as overwash, which depends on coastal dune elevations. Predicting dune height is intrinsically important to assess future storm vulnerability. Here, we enhance shoreline-change predictions by including dune height as a variable in a statistical modeling approach. Dune height can also be used as an input variable, but it does not improve the shoreline-change prediction skill. Dune-height input does help to reduce prediction uncertainty. That is, by including dune height, the prediction is more precise but not more accurate. Comparing hindcast evaluations, better predictive skill was found when predicting dune height (0.8) compared with shoreline change (0.6). The skill depends on the level of detail of the model, and we identify an optimized model that has high skill and minimal overfitting. The predictive model can be implemented with a range of forecast scenarios, and we illustrate the impacts of a higher future sea level. This scenario shows that the shoreline change becomes increasingly erosional and more uncertain. Predicted dune heights are lower and the dune height uncertainty decreases.

  20. Thermal Pollution Mathematical Model. Volume 4: Verification of Three-Dimensional Rigid-Lid Model at Lake Keowee. [environmental impact of thermal discharges from power plants]

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1980-01-01

    The rigid-lid model was developed to predict three-dimensional temperature and velocity distributions in lakes. This model was verified at various sites (Lake Belews, Biscayne Bay, etc.), and the verification at Lake Keowee was the last in this series of verification runs. The verification at Lake Keowee included the following: (1) selecting the domain of interest and grid systems, and comparing the preliminary results with archival data; (2) obtaining actual ground truth and infrared scanner data for both summer and winter; and (3) using the model to predict the measured data for the above periods and comparing the predicted results with the actual data. The model results compared well with measured data. Thus, the model can be used as an effective predictive tool for future sites.

  1. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    NASA Astrophysics Data System (ADS)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently, separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high-quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in the production of hillslope sediment size distributions in landscape evolution models. Overall, this work highlights the need for additional field data sets as well as improved theoretical models, but also demonstrates progress in predicting the size distribution of sediments produced on hillslopes and supplied to channels.

  2. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A support vector machine (SVM) is used as the predictive model, while the nondominated sorting genetic algorithm II (NSGA-II) is used to solve the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed.
The experimental results show that the best performance can be obtained by combining all features.
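A full NSGA-II run is beyond an abstract-sized example, but its core idea, retaining only the (sensitivity, specificity) trade-offs that are non-dominated, can be sketched. The helper below is a generic illustration of Pareto filtering, not the authors' implementation.

```python
def pareto_front(candidates):
    """Return (sensitivity, specificity) pairs not dominated by any other.

    A pair is dominated if another candidate is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for i, (se_i, sp_i) in enumerate(candidates):
        dominated = any(
            se_j >= se_i and sp_j >= sp_i and (se_j > se_i or sp_j > sp_i)
            for j, (se_j, sp_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((se_i, sp_i))
    return front
```

A multi-objective search keeps this whole front rather than collapsing the two objectives into a single accuracy number.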

  3. A model to predict the risk of lethal nasopharyngeal necrosis after re-irradiation with intensity-modulated radiotherapy in nasopharyngeal carcinoma patients.

    PubMed

    Yu, Ya-Hui; Xia, Wei-Xiong; Shi, Jun-Li; Ma, Wen-Juan; Li, Yong; Ye, Yan-Fang; Liang, Hu; Ke, Liang-Ru; Lv, Xing; Yang, Jing; Xiang, Yan-Qun; Guo, Xiang

    2016-06-29

    For patients with nasopharyngeal carcinoma (NPC) who undergo re-irradiation with intensity-modulated radiotherapy (IMRT), lethal nasopharyngeal necrosis (LNN) is a severe late adverse event. The purpose of this study was to identify risk factors for LNN and develop a model to predict LNN after radical re-irradiation with IMRT in patients with recurrent NPC. Patients who underwent radical re-irradiation with IMRT for locally recurrent NPC between March 2001 and December 2011 and who had no evidence of distant metastasis were included in this study. Clinical characteristics, including recurrent carcinoma conditions and dosimetric features, were evaluated as candidate risk factors for LNN. Logistic regression analysis was used to identify independent risk factors and construct the predictive scoring model. Among 228 patients enrolled in this study, 204 were at risk of developing LNN based on risk analysis. Of the 204 patients treated, 31 (15.2%) developed LNN. Logistic regression analysis showed that female sex (P = 0.008), necrosis before re-irradiation (P = 0.008), accumulated total prescription dose to the gross tumor volume (GTV) ≥145.5 Gy (P = 0.043), and recurrent tumor volume ≥25.38 cm³ (P = 0.009) were independent risk factors for LNN. A model to predict LNN was then constructed that included these four independent risk factors. A model that includes sex, necrosis before re-irradiation, accumulated total prescription dose to GTV, and recurrent tumor volume can effectively predict the risk of developing LNN in NPC patients who undergo radical re-irradiation with IMRT.
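A regression-derived scoring model of this kind reduces, at the bedside, to summing points over the risk factors a patient has. The abstract reports the four independent factors but not the published point values, so the unit weights below are placeholders, not the study's actual scores.

```python
# Hypothetical unit weights: the published model's point values are not
# given in the abstract, so each independent risk factor scores 1 here.
RISK_FACTOR_POINTS = {
    "female_sex": 1,
    "necrosis_before_reirradiation": 1,
    "gtv_dose_ge_145_5_gy": 1,
    "tumor_volume_ge_25_38_cm3": 1,
}

def lnn_risk_score(patient):
    """Sum the points for every risk factor present in the patient record."""
    return sum(
        points
        for factor, points in RISK_FACTOR_POINTS.items()
        if patient.get(factor, False)
    )
```

In the real model each factor would carry a weight proportional to its logistic regression coefficient.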

  4. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes

    PubMed Central

    Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-01-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated perfusion apparatus to systematically and efficiently generate predictive models using system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step-change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, allowing the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647–1661, 2017 PMID:28786215
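The system identification step, fitting a dynamic model to step-change response data, can be pictured with a toy first-order discrete model y[k+1] = a·y[k] + b·u[k], fit by closed-form least squares. This is a generic sketch of the technique, not the authors' model of %galactosylation.

```python
def fit_first_order(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k] via 2x2 normal equations."""
    Syy = sum(yk * yk for yk in y[:-1])
    Suu = sum(uk * uk for uk in u[:-1])
    Syu = sum(yk * uk for yk, uk in zip(y[:-1], u[:-1]))
    Sy1y = sum(y1 * yk for y1, yk in zip(y[1:], y[:-1]))
    Sy1u = sum(y1 * uk for y1, uk in zip(y[1:], u[:-1]))
    det = Syy * Suu - Syu * Syu
    a = (Sy1y * Suu - Sy1u * Syu) / det
    b = (Sy1u * Syy - Sy1y * Syu) / det
    return a, b

def simulate(a, b, u, y0=0.0):
    """Roll the fitted model forward from y0 under input sequence u."""
    y = [y0]
    for uk in u[:-1]:
        y.append(a * y[-1] + b * uk)
    return y
```

With noiseless step-response data the fit recovers the generating parameters exactly; the identified model can then serve as the internal predictor of a model predictive controller.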

  5. Predicting the magnetic vectors within coronal mass ejections arriving at Earth: 2. Geomagnetic response

    NASA Astrophysics Data System (ADS)

    Savani, N. P.; Vourlidas, A.; Richardson, I. G.; Szabo, A.; Thompson, B. J.; Pulkkinen, A.; Mays, M. L.; Nieves-Chinchilla, T.; Bothmer, V.

    2017-02-01

    This is a companion to Savani et al. (2015), which discussed how a first-order prediction of the internal magnetic field of a coronal mass ejection (CME) may be made from observations of its initial state at the Sun for space weather forecasting purposes (Bothmer-Schwenn scheme (BSS) model). For eight CME events, we investigate how uncertainties in their predicted magnetic structure influence predictions of the geomagnetic activity. We use an empirical relationship between the solar wind plasma drivers and the Kp index, together with the inferred magnetic vectors, to make a prediction of the time variation of Kp (Kp(BSS)). We find that a 2σ uncertainty range on the magnetic field magnitude (|B|) provides a practical and convenient solution for predicting the uncertainty in geomagnetic storm strength. We also find the estimated CME velocity is a major source of error in the predicted maximum Kp. The time variation of Kp(BSS) is important for predicting periods of enhanced and maximum geomagnetic activity, driven by southerly directed magnetic fields, and periods of lower activity, driven by northerly directed magnetic fields. We compare the skill score of our model to a number of other forecasting models, including the NOAA/Space Weather Prediction Center (SWPC) and Community Coordinated Modeling Center (CCMC)/SWRC estimates. The BSS model was the most unbiased prediction model, while the other models predominantly tended to significantly overforecast. The true skill score of the BSS prediction model (TSS = 0.43 ± 0.06) exceeds the results of two baseline models and the NOAA/SWPC forecast. The BSS model prediction performed on par with CCMC/SWRC predictions while demonstrating a lower uncertainty.

  6. Human and Server Docking Prediction for CAPRI Round 30–35 Using LZerD with Combined Scoring Functions

    PubMed Central

    Peterson, Lenna X.; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke

    2016-01-01

    We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriate soft representation of protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which adds an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues’ spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, i.e. whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, findings on scoring functions are expected to be universally applicable to other docking methods. PMID:27654025

  7. Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion

    NASA Astrophysics Data System (ADS)

    Barišić, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha

    Prediction of agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for most commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. Correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.), alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking into account the component fuels. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, but also offer possibilities for further improvement.

  8. A seasonal hydrologic ensemble prediction system for water resource management

    NASA Astrophysics Data System (ADS)

    Luo, L.; Wood, E. F.

    2006-12-01

    A seasonal hydrologic ensemble prediction system, developed for the Ohio River basin, has been improved and expanded to several other regions including the Eastern U.S., Africa and East Asia. The prediction system adopts the traditional Extended Streamflow Prediction (ESP) approach, utilizing the VIC (Variable Infiltration Capacity) hydrological model as the central tool for producing ensemble predictions of soil moisture, snow and streamflow with lead times of up to 6 months. VIC is forced by observed meteorology to estimate the hydrological initial condition prior to the forecast, but during the forecast period the atmospheric forcing comes from statistically downscaled seasonal forecasts from dynamical climate models. The seasonal hydrologic ensemble prediction system is currently producing real-time seasonal hydrologic forecasts for these regions on a monthly basis. Using hindcasts from a 19-year period (1981-1999), during which seasonal hindcasts from the NCEP Climate Forecast System (CFS) and the European Union DEMETER project are available, we evaluate the performance of the forecast system over our forecast regions. The evaluation shows that the prediction system using the current forecast approach is able to produce reliable and accurate precipitation, soil moisture and streamflow predictions. The overall skill is much higher than that of the traditional ESP. In particular, forecasts based on multiple climate models are more skillful than single-model-based forecasts. This emphasizes the significant need for producing seasonal climate forecasts with multiple climate models for hydrologic applications. Forecasts from this system are expected to provide very valuable information about future hydrologic states and associated risks for end users, including water resource management and financial sectors.
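The ESP idea, one estimate of the current hydrologic state driven by an ensemble of plausible forcing sequences, can be sketched with a toy linear-reservoir model. The bucket model below is only a stand-in for VIC, which is far more complex; names and the recession constant are illustrative.

```python
def linear_reservoir(storage0, precip, k=0.1):
    """Toy bucket model: each step, outflow = k * current storage (units arbitrary)."""
    s, flows = storage0, []
    for p in precip:
        s += p          # add this step's precipitation
        q = k * s       # drain a fixed fraction of storage
        s -= q
        flows.append(q)
    return flows

def esp_ensemble(storage0, forcing_traces, k=0.1):
    """ESP sketch: one initial condition, many candidate forcing sequences."""
    return [linear_reservoir(storage0, trace, k) for trace in forcing_traces]
```

Running the same initial storage under each historical (or downscaled model) forcing trace yields a streamflow ensemble whose spread reflects climate uncertainty given today's hydrologic state.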

  9. Predictive models in cancer management: A guide for clinicians.

    PubMed

    Kazem, Mohammed Ali

    2017-04-01

    Predictive tools in cancer management are used to predict different outcomes, including survival probability or risk of recurrence. The uptake of these tools by clinicians involved in cancer management has not been as common as that of other clinical tools, which may be due to the complexity of some of these tools or a lack of understanding of how they can aid decision-making in particular clinical situations. The aim of this article is to improve clinicians' knowledge and understanding of predictive tools used in cancer management, including how they are built, how they can be applied to medical practice, and what their limitations may be. A literature review was conducted to investigate the role of predictive tools in cancer management. All predictive models share similar characteristics, but a tool's ability to predict an outcome will differ depending on its type. Each type has its own pros and cons, and its generalisability will depend on the cohort used to build the tool. These factors will affect the clinician's decision on whether to apply the model to their cohort or not. Before a model is used in clinical practice, it is important to appreciate how the model is constructed, what its use may add over and above traditional decision-making tools, and what problems or limitations may be associated with it. Understanding all the above is an important step for any clinician who wants to decide whether or not to use predictive tools in their practice. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  10. Predicting Acute Exacerbations in Chronic Obstructive Pulmonary Disease.

    PubMed

    Samp, Jennifer C; Joo, Min J; Schumock, Glen T; Calip, Gregory S; Pickard, A Simon; Lee, Todd A

    2018-03-01

    With increasing health care costs that have outpaced those of other industries, payers of health care are moving from a fee-for-service payment model to one in which reimbursement is tied to outcomes. Chronic obstructive pulmonary disease (COPD) is a disease where this payment model has been implemented by some payers, and COPD exacerbations are a quality metric that is used. Under an outcomes-based payment model, it is important for health systems to be able to identify patients at risk for poor outcomes so that they can target interventions to improve outcomes. To develop and evaluate predictive models that could be used to identify patients at high risk for COPD exacerbations. This study was retrospective and observational and included COPD patients treated with a bronchodilator-based combination therapy. We used health insurance claims data to obtain demographics, enrollment information, comorbidities, medication use, and health care resource utilization for each patient over a 6-month baseline period. Exacerbations were examined over a 6-month outcome period and included inpatient (primary discharge diagnosis for COPD), outpatient, and emergency department (outpatient/emergency department visits with a COPD diagnosis plus an acute prescription for an antibiotic or corticosteroid within 5 days) exacerbations. The cohort was split into training (75%) and validation (25%) sets. Within the training cohort, stepwise logistic regression models were created to evaluate risk of exacerbations based on factors measured during the baseline period. Models were evaluated using sensitivity, specificity, and positive and negative predictive values. The base model included all confounding or effect modifier covariates. Several other models were explored using different sets of observations and variables to determine the best predictive model. There were 478,772 patients included in the analytic sample, of whom 40.5% had exacerbations during the outcome period.
Patients with exacerbations had slightly more comorbidities, medication use, and health care resource utilization compared with patients without exacerbations. In the base model, sensitivity was 41.6% and specificity was 85.5%. Positive and negative predictive values were 66.2% and 68.2%, respectively. Other models that were evaluated resulted in test characteristics similar to those of the base model. In this study, we were not able to predict COPD exacerbations with a high level of accuracy using health insurance claims data from COPD patients treated with bronchodilator-based combination therapy. Future studies should explore predictive models for exacerbations. No outside funding supported this study. Samp is now employed by, and owns stock in, AbbVie. The other authors have nothing to disclose. Study concept and design were contributed by Joo and Pickard, along with the other authors. Samp and Lee performed the data analysis, with assistance from the other authors. Samp wrote the manuscript, which was revised by Schumock and Calip, along with the other authors.
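The four reported test characteristics all derive from a single confusion matrix. A minimal helper (not the study's code) makes the standard definitions explicit:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # fraction of predicted positives that are true
        "npv": tn / (tn + fn),          # fraction of predicted negatives that are true
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of exacerbations in the cohort.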

  11. Gap model development, validation, and application to succession of secondary subtropical dry forests of Puerto Rico

    Treesearch

    Jennifer A. Holm; H.H. Shugart; Skip J. Van Bloem; G.R. Larocque

    2012-01-01

    Because of human pressures, the need to understand and predict the long-term dynamics and development of subtropical dry forests is urgent. Through modifications to the ZELIG simulation model, including the development of species- and site-specific parameters and internal modifications, the capability to model and predict forest change within the 4500-ha Guanica State...

  12. Mathematical models of human paralyzed muscle after long-term training.

    PubMed

    Law, L A Frey; Shields, R K

    2007-01-01

    Spinal cord injury (SCI) results in major musculoskeletal adaptations, including muscle atrophy, faster contractile properties, increased fatigability, and bone loss. The use of functional electrical stimulation (FES) provides a method to prevent paralyzed muscle adaptations in order to sustain force-generating capacity. Mathematical muscle models may be able to predict optimal activation strategies during FES; however, muscle properties further adapt with long-term training. The purpose of this study was to compare the accuracy of three muscle models, one linear and two nonlinear, for predicting paralyzed soleus muscle force after exposure to long-term FES training. Further, we contrasted the findings between the trained and untrained limbs. The three models' parameters were best fit to a single force train in the trained soleus muscle (N=4). Nine additional force trains (test trains) were predicted for each subject using the developed models. Model errors between predicted and experimental force trains were determined, including specific muscle force properties. The mean overall error was greatest for the linear model (15.8%) and least for the nonlinear Hill-Huxley-type model (7.8%). No significant error differences were observed between the trained versus untrained limbs, although model parameter values were significantly altered with training. This study confirmed that nonlinear models most accurately predict both trained and untrained paralyzed muscle force properties. Moreover, the optimized model parameter values were responsive to the relative physiological state of the paralyzed muscle (trained versus untrained). These findings are relevant for the design and control of neuro-prosthetic devices for those with SCI.

  13. Using spatio-temporal modeling to predict long-term exposure to black smoke at fine spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Dadvand, Payam; Rushton, Stephen; Diggle, Peter J.; Goffe, Louis; Rankin, Judith; Pless-Mulloli, Tanja

    2011-01-01

    Whilst exposure to air pollution is linked to a wide range of adverse health outcomes, assessing levels of this exposure has remained a challenge. This study reports a modeling approach for the estimation of weekly levels of ambient black smoke (BS) at residential postcodes across Northeast England (2055 km²) over a 12-year period (1985-1996). A two-stage modeling strategy was developed using monitoring data on BS together with a range of covariates, including data on traffic, population density, industrial activity, land cover (remote sensing), and meteorology. The first stage separates the temporal trend in BS for the region as a whole from within-region spatial variation and the second stage is a linear model which predicts BS levels at all locations in the region using spatially referenced covariate data as predictors and the regional predicted temporal trend as an offset. Traffic and land cover predictors were included in the final model, which predicted 70% of the spatio-temporal variation in BS across the study region over the study period. This modeling approach appears to provide a robust way of estimating exposure to BS at an inter-urban scale.

  14. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
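The three evaluation criteria named at the end can be computed directly from a forecast series. The abstract does not spell out which variant of mean absolute deviation it uses; the sketch below takes it as the mean absolute deviation of the errors about their mean, one common convention.

```python
import math

def forecast_errors(actual, predicted):
    """Return MAE, MAD, and RMSE for paired actual/predicted series.

    MAD here is the mean absolute deviation of the errors about their mean
    (an assumption; the abstract does not define its MAD variant).
    """
    errors = [a - p for a, p in zip(actual, predicted)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    mean_error = sum(errors) / n
    mad = sum(abs(e - mean_error) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return mae, mad, rmse
```

MAE and MAD penalize all errors linearly, while RMSE weights large misses more heavily, which is why reporting all three gives a fuller picture of forecast quality.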

  15. Food-Web Models Predict Species Abundances in Response to Habitat Change

    PubMed Central

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-01-01

    Plant and animal population sizes inevitably change following habitat loss, but the mechanisms underlying these changes are poorly understood. We experimentally altered habitat volume and eliminated top trophic levels of the food web of invertebrates that inhabit rain-filled leaves of the carnivorous pitcher plant Sarracenia purpurea. Path models that incorporated food-web structure better predicted population sizes of food-web constituents than did simple keystone species models, models that included only autecological responses to habitat volume, or models including both food-web structure and habitat volume. These results provide the first experimental confirmation that trophic structure can determine species abundances in the face of habitat loss. PMID:17002518

  16. Predicting Behaviors to Reduce Toxic Chemical Exposures among New and Expectant Mothers: The Role of Distal Variables within the Integrative Model of Behavioral Prediction

    ERIC Educational Resources Information Center

    Mello, Susan; Hovick, Shelly R.

    2016-01-01

    There is a growing body of evidence linking childhood exposure to environmental toxins and a range of adverse health outcomes, including preterm birth, cognitive deficits, and cancer. Little is known, however, about what drives mothers to engage in health behaviors to reduce such risks. Guided by the integrative model of behavioral prediction,…

  17. A scoring system to predict breast cancer mortality at 5 and 10 years.

    PubMed

    Paredes-Aracil, Esther; Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Ots-Gutiérrez, José Ramón; Compañ-Rosique, Antonio Fernando; Gil-Guillén, Vicente Francisco

    2017-03-24

    Although predictive models exist for mortality in breast cancer (BC) (generally all-cause mortality), they are not applicable to all patients and their statistical methodology is not the most powerful for developing a predictive model. Consequently, we developed a predictive model specific for BC mortality at 5 and 10 years resolving the above issues. This cohort study included 287 patients diagnosed with BC in a Spanish region in 2003-2016. The primary variable was time to BC death. Secondary variables: age, personal history of breast surgery, personal history of any cancer/BC, premenopause, postmenopause, grade, estrogen receptor, progesterone receptor, c-erbB2, TNM stage, multicentricity/multifocality, diagnosis and treatment. A points system was constructed to predict BC mortality at 5 and 10 years. The model was internally validated by bootstrapping. The points system was integrated into a mobile application for Android. Mean follow-up was 8.6 ± 3.5 years and 55 patients died of BC. The points system included age, personal history of BC, grade, TNM stage and multicentricity. Validation was satisfactory, in both discrimination and calibration. In conclusion, we constructed and internally validated a scoring system for predicting BC mortality at 5 and 10 years. External validation studies are needed for its use in other geographical areas.

  18. Prediction of rain effects on earth-space communication links operating in the 10 to 35 GHz frequency range

    NASA Technical Reports Server (NTRS)

    Stutzman, Warren L.

    1989-01-01

    This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech. Comments on other models are included as well as literature references to key works. Also included is the system-level modeling of dual-polarized communication systems, with techniques for calculating antenna and propagation medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented. Calculations of worst-month statistics are also presented.
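The standard engineering form behind rain attenuation models in this band is a power law in rain rate, as standardized in ITU-R P.838: specific attenuation γ = k·R^α in dB/km, scaled by an effective path length. The coefficients below are placeholders; in practice k and α depend on frequency and polarization and are taken from tables.

```python
def specific_attenuation(rain_rate_mm_per_h, k, alpha):
    """Power-law specific attenuation: gamma (dB/km) = k * R**alpha."""
    return k * rain_rate_mm_per_h ** alpha

def path_attenuation(rain_rate_mm_per_h, k, alpha, effective_path_km):
    """Total attenuation (dB) over an effective slant-path length."""
    return specific_attenuation(rain_rate_mm_per_h, k, alpha) * effective_path_km
```

The effective path length is itself a modeled quantity, shorter than the geometric slant path because intense rain cells are spatially limited.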

  19. The predictive value of selected serum microRNAs for acute GVHD by TaqMan MicroRNA arrays.

    PubMed

    Zhang, Chunyan; Bai, Nan; Huang, Wenrong; Zhang, Pengjun; Luo, Yuan; Men, Shasha; Wen, Ting; Tong, Hongli; Wang, Shuhong; Tian, Ya-Ping

    2016-10-01

    Currently, the diagnosis of acute graft-versus-host disease (aGVHD) is mainly based on clinical symptoms and biopsy results. This study was designed to further explore new noninvasive biomarkers for aGVHD prediction/diagnosis. We profiled miRNAs in serum pools from patients with aGVHD (grades II-IV) (n = 9) and non-aGVHD controls (n = 9) by real-time qPCR-based TaqMan MicroRNA arrays. Then, predictive models were established using related miRNAs (n = 38) and verified by a double-blind trial (n = 54). We found that miR-411 was significantly downregulated when aGVHD developed and recovered when aGVHD was controlled, which demonstrated that miR-411 has potential as an indicator for aGVHD monitoring. We developed and validated a predictive model and a diagnostic model for aGVHD. The predictive model included two miRNAs (miR-26b and miR-374a), which could predict an increased risk for aGVHD 1 or 2 weeks in advance, with an AUC, positive predictive value (PPV), and negative predictive value (NPV) of 0.722, 76.19 %, and 69.70 %, respectively. The diagnostic model included three miRNAs (miR-28-5p, miR-489, and miR-671-3p), with an AUC, PPV, and NPV of 0.841, 85.71 % and 83.33 %, respectively. Our results show that circulating miRNAs (miR-26b, miR-374a, miR-28-5p, miR-489 and miR-671-3p) may serve as biomarkers for the prediction and diagnosis of grades II-IV aGVHD.

  20. Prediction of fruit and vegetable intake from biomarkers using individual participant data of diet-controlled intervention studies.

    PubMed

    Souverein, Olga W; de Vries, Jeanne H M; Freese, Riitta; Watzl, Bernhard; Bub, Achim; Miller, Edgar R; Castenmiller, Jacqueline J M; Pasman, Wilrike J; van Het Hof, Karin; Chopra, Mridula; Karlsen, Anette; Dragsted, Lars O; Winkels, Renate; Itsiopoulos, Catherine; Brazionis, Laima; O'Dea, Kerin; van Loo-Bouwman, Carolien A; Naber, Ton H J; van der Voet, Hilko; Boshuizen, Hendriek C

    2015-05-14

    Fruit and vegetable consumption produces changes in several biomarkers in blood. The present study aimed to examine the dose-response curve between fruit and vegetable consumption and carotenoid (α-carotene, β-carotene, β-cryptoxanthin, lycopene, lutein and zeaxanthin), folate and vitamin C concentrations. Furthermore, a prediction model of fruit and vegetable intake based on these biomarkers and subject characteristics (i.e. age, sex, BMI and smoking status) was established. Data from twelve diet-controlled intervention studies were obtained to develop a prediction model for fruit and vegetable intake (including and excluding fruit and vegetable juices). The study population in the present individual participant data meta-analysis consisted of 526 men and women. Carotenoid, folate and vitamin C concentrations showed a positive relationship with fruit and vegetable intake. Measures of performance for the prediction model were calculated using cross-validation. For the prediction model of fruit, vegetable and juice intake, the root mean squared error (RMSE) was 258.0 g, the correlation between observed and predicted intake was 0.78 and the mean difference between observed and predicted intake was -1.7 g (limits of agreement: -466.3, 462.8 g). For the prediction of fruit and vegetable intake (excluding juices), the RMSE was 201.1 g, the correlation was 0.65 and the mean bias was 2.4 g (limits of agreement: -368.2, 373.0 g). The prediction models which include the biomarkers and subject characteristics may be used to estimate average intake at the group level and to investigate the ranking of individuals with regard to their intake of fruit and vegetables when validating questionnaires that measure intake.
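The reported bias and limits of agreement follow the Bland-Altman convention: mean difference between observed and predicted values, bracketed by ±1.96 standard deviations of the differences. A generic sketch (not the study's code):

```python
import math

def limits_of_agreement(observed, predicted):
    """Mean bias and 95% limits of agreement (bias ± 1.96 * SD of differences)."""
    diffs = [o - p for o, p in zip(observed, predicted)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A small bias with wide limits, as in the abstract, means the model is nearly unbiased on average at the group level while individual predictions can still miss by hundreds of grams.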

  1. Improved prediction of biochemical recurrence after radical prostatectomy by genetic polymorphisms.

    PubMed

    Morote, Juan; Del Amo, Jokin; Borque, Angel; Ars, Elisabet; Hernández, Carlos; Herranz, Felipe; Arruza, Antonio; Llarena, Roberto; Planas, Jacques; Viso, María J; Palou, Joan; Raventós, Carles X; Tejedor, Diego; Artieda, Marta; Simón, Laureano; Martínez, Antonio; Rioja, Luis A

    2010-08-01

    Single nucleotide polymorphisms are inherited genetic variations that can predispose or protect individuals against clinical events. We hypothesized that single nucleotide polymorphism profiling may improve the prediction of biochemical recurrence after radical prostatectomy. We performed a retrospective, multi-institutional study of 703 patients treated with radical prostatectomy for clinically localized prostate cancer who had at least 5 years of followup after surgery. All patients were genotyped for 83 prostate cancer related single nucleotide polymorphisms using a low density oligonucleotide microarray. Baseline clinicopathological variables and single nucleotide polymorphisms were analyzed to predict biochemical recurrence within 5 years using stepwise logistic regression. Discrimination was measured by ROC curve AUC, specificity, sensitivity, predictive values, net reclassification improvement and integrated discrimination index. The overall biochemical recurrence rate was 35%. The model with the best fit combined 8 covariates, including the 5 clinicopathological variables prostate specific antigen, Gleason score, pathological stage, lymph node involvement and margin status, and 3 single nucleotide polymorphisms at the KLK2, SULT1A1 and TLR4 genes. Model predictive power was defined by 80% positive predictive value, 74% negative predictive value and an AUC of 0.78. The model based on clinicopathological variables plus single nucleotide polymorphisms showed significant improvement over the model without single nucleotide polymorphisms, as indicated by 23.3% net reclassification improvement (p = 0.003), integrated discrimination index (p <0.001) and likelihood ratio test (p <0.001). Internal validation proved model robustness (bootstrap corrected AUC 0.78, range 0.74 to 0.82). The calibration plot showed close agreement between biochemical recurrence observed and predicted probabilities. 
Predicting biochemical recurrence after radical prostatectomy based on clinicopathological data can be significantly improved by including patient genetic information. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
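
The AUC reported above has a direct probabilistic reading: the chance that a randomly chosen recurrence case receives a higher model score than a randomly chosen non-recurrence case. A minimal rank-based sketch (the scores below are hypothetical, not study outputs):

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as the Mann-Whitney statistic: the probability that a positive
    case outscores a negative case, counting ties as one half."""
    wins = sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, `auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])` returns 8/9, roughly 0.89.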

  2. Predicting Progression from Mild Cognitive Impairment to Alzheimer's Dementia Using Clinical, MRI, and Plasma Biomarkers via Probabilistic Pattern Classification

    PubMed Central

    Korolev, Igor O.; Symonds, Laura L.; Bozoki, Andrea C.

    2016-01-01

    Background Individuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level. Methods Using baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework. Results Cognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions. 
Conclusions We developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment. PMID:26901338
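
The reported performance figures follow directly from a 2x2 confusion matrix. The counts below are back-calculated from the reported rates and cohort sizes (139 progressors, 120 non-progressors), so they are approximate reconstructions rather than published data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Approximate counts implied by ~83% sensitivity over 139 progressors
# and ~76% specificity over 120 non-progressors:
sens, spec, acc = classification_metrics(tp=115, fp=29, tn=91, fn=24)
```

With these reconstructed counts, accuracy works out to about 0.80, matching the reported value.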

  3. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    NASA Astrophysics Data System (ADS)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better predictive capability. In this paper, an evolutionary framework is proposed using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated on the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rate prediction compared to the other models included in the study.
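
The abstract does not spell out the CEFLANN architecture, but functional link networks generally replace hidden layers with a fixed nonlinear expansion of the inputs followed by a single trainable linear layer. The trigonometric basis and windowing below are assumptions for illustration, not details from the paper:

```python
import math

def flann_expand(x):
    """Trigonometric functional-link expansion of one (scaled) input value."""
    return [x,
            math.sin(math.pi * x), math.cos(math.pi * x),
            math.sin(2 * math.pi * x), math.cos(2 * math.pi * x)]

def flann_predict(window, weights, bias=0.0):
    """Linear readout over the expanded features of a window of lagged,
    scaled exchange rates; weights needs 5 entries per window element."""
    feats = [f for x in window for f in flann_expand(x)]
    return bias + sum(w * f for w, f in zip(weights, feats))
```

In the paper's framework the weights would be optimized by the ISFL algorithm rather than by gradient descent; that search loop is omitted here.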

  4. Verification of a three-dimensional resin transfer molding process simulation model

    NASA Technical Reports Server (NTRS)

    Fingerson, John C.; Loos, Alfred C.; Dexter, H. Benson

    1995-01-01

    Experimental evidence was obtained to complete the verification of the parameters needed for input to a three-dimensional finite element model simulating the resin flow and cure through an orthotropic fabric preform. The material characterizations completed include resin kinetics and viscosity models, as well as preform permeability and compaction models. The steady-state and advancing front permeability measurement methods are compared. The results indicate that both methods yield similar permeabilities for a plain weave, bi-axial fiberglass fabric. Also, a method to determine principal directions and permeabilities is discussed and results are shown for a multi-axial warp knit preform. The flow of resin through a blade-stiffened preform was modeled and experiments were completed to verify the results. The predicted inlet pressure was approximately 65% of the measured value. A parametric study was performed to explain differences in measured and predicted flow front advancement and inlet pressures. Furthermore, PR-500 epoxy resin/IM7 8HS carbon fabric flat panels were fabricated by the Resin Transfer Molding process. Tests were completed utilizing both perimeter injection and center-port injection as resin inlet boundary conditions. The mold was instrumented with FDEMS sensors, pressure transducers, and thermocouples to monitor the process conditions. Results include a comparison of predicted and measured inlet pressures and flow front position. For the perimeter injection case, the measured inlet pressure and flow front results compared well to the predicted results. The results of the center-port injection case showed that the predicted inlet pressure was approximately 50% of the measured inlet pressure. Also, measured flow front position data did not agree well with the predicted results. Possible reasons for error include fiber deformation at the resin inlet and a lag in FDEMS sensor wet-out due to low mold pressures.

  5. Radiation model predictions and validation using LDEF satellite data

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1993-01-01

    Predictions and comparisons with the radiation dose measurements on Long Duration Exposure Facility (LDEF) by thermoluminescent dosimeters were made to evaluate the accuracy of models currently used in defining the ionizing radiation environment for low Earth orbit missions. The calculations include a detailed simulation of the radiation exposure (altitude and solar cycle variations, directional dependence) and shielding effects (three-dimensional LDEF geometry model) so that differences in the predicted and observed doses can be attributed to environment model uncertainties. The LDEF dose data are utilized to assess the accuracy of models describing the trapped proton flux, the trapped proton directionality, and the trapped electron flux.

  6. Predicting and evaluating the severity of acute pancreatitis using a new model built on body mass index and intra-abdominal pressure.

    PubMed

    Fei, Yang; Gao, Kun; Tu, Jianfeng; Wang, Wei; Zong, Guang-Quan; Li, Wei-Qin

    2017-06-03

    Acute pancreatitis (AP) remains a difficult problem in medical diagnosis and treatment. Early severity assessment and risk stratification in patients with AP are very important. Scoring systems such as the Acute Physiology and Chronic Health Evaluation II (APACHE-II), the computed tomography severity index (CTSI), Ranson's score and the bedside index of severity of AP (BISAP) have been used; nevertheless, these methods have several shortcomings. The aim of this study was to construct a new model including intra-abdominal pressure (IAP) and body mass index (BMI) to evaluate the severity of AP. The study comprised two independent cohorts of patients with AP: a development cohort of 1073 patients from Jinling Hospital treated between January 2013 and October 2016, and a validation cohort of 326 patients from the 81st Hospital treated between January 2012 and December 2016. The association between risk factors and severity of AP was assessed by univariable analysis; a multivariable model was explored through stepwise selection regression. The changes in IAP and BMI were combined to generate a regression equation as the new model. Statistical indexes were used to evaluate the predictive value of the new model. Univariable analysis confirmed that changes in IAP and BMI were significantly associated with severity of AP. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the new model for predicting severity of AP were 77.6%, 82.6%, 71.9%, 87.5% and 74.9%, respectively, in the development dataset. There were significant differences between the new model and other scoring systems in these parameters (P < 0.05). In addition, a comparison of their areas under the receiver operating characteristic curves showed a statistically significant difference (P < 0.05). The same results were found in the validation dataset. 
A new model based on IAP and BMI better predicts the severity of AP. Copyright © 2017. Published by Elsevier Inc.
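
A regression equation combining the change in IAP with BMI would most naturally take a logistic form. The coefficients below are placeholders chosen only to make the sketch runnable; they are not the study's fitted values:

```python
import math

def ap_severity_prob(delta_iap_mmhg, bmi, b0=-8.0, b1=0.35, b2=0.15):
    """Hypothetical logistic model: probability of severe acute pancreatitis
    from the change in intra-abdominal pressure (mmHg) and body mass index.
    Coefficients are illustrative placeholders, not fitted estimates."""
    z = b0 + b1 * delta_iap_mmhg + b2 * bmi
    return 1.0 / (1.0 + math.exp(-z))
```

Risk rises monotonically with both inputs; with these placeholder coefficients, `ap_severity_prob(10, 30)` is exactly 0.5.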

  7. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
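
The (2%, 2 mm) global gamma criterion used for validation can be sketched in one dimension. This is a simplified version that searches all comparison points exhaustively and skips the interpolation that real implementations include:

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.02, dta_mm=2.0, threshold=0.10):
    """Fraction of reference points passing a 1D global gamma test.
    dd: dose criterion as a fraction of the global maximum dose;
    dta_mm: distance-to-agreement criterion; threshold: low-dose cutoff."""
    dmax = max(ref)
    passed = total = 0
    for i, dr in enumerate(ref):
        if dr < threshold * dmax:       # ignore the low-dose region
            continue
        total += 1
        gamma = min(
            math.sqrt(((i - j) * spacing_mm / dta_mm) ** 2
                      + ((dm - dr) / (dd * dmax)) ** 2)
            for j, dm in enumerate(meas))
        passed += gamma <= 1.0
    return passed / total if total else 1.0
```

Identical profiles pass at 100%, while a grossly offset measurement fails everywhere.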

  8. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; compared with the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. 
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
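
The arrival-time skill scores quoted above (average absolute error and RMSE) reduce to a few lines given ensemble-mean predictions and observed arrivals; the hour values in the usage note are invented:

```python
import math

def arrival_errors(predicted_means, observed):
    """Mean absolute error and RMSE (hours) of ensemble-mean arrival times."""
    errs = [p - o for p, o in zip(predicted_means, observed)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse
```

For example, `arrival_errors([10.0, 14.0], [12.0, 8.0])` gives an MAE of 4.0 hours and an RMSE of about 4.5 hours.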

  9. Finite-Rate Ablation Boundary Conditions for Carbon-Phenolic Heat-Shield

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, Frank S.

    2003-01-01

    A formulation of finite-rate ablation surface boundary conditions, including oxidation, nitridation, and sublimation of carbonaceous material with pyrolysis gas injection, has been developed based on surface species mass conservation. These surface boundary conditions are discretized and integrated with a Navier-Stokes solver. This numerical procedure can predict aerothermal heating, chemical species concentration, and carbonaceous material ablation rate over the heatshield surface of re-entry space vehicles. In this study, the gas-gas and gas-surface interactions are established for air flow over a carbon-phenolic heatshield. Two finite-rate gas-surface interaction models are considered in the present study. The first model is based on the work of Park, and the second model includes the kinetics suggested by Zhluktov and Abe. Nineteen gas phase chemical reactions and four gas-surface interactions are considered in the present model. There is a total of fourteen gas phase chemical species, including five species for air and nine species for ablation products. Three test cases are studied in this paper. The first case is a graphite test model in the arc-jet stream; the second is a light weight Phenolic Impregnated Carbon Ablator at the Stardust re-entry peak heating conditions, and the third is a fully dense carbon-phenolic heatshield at the peak heating point of a proposed Mars Sample Return Earth Entry Vehicle. Predictions based on both finite-rate gas-surface interaction models are compared with those obtained using B' tables, which were created based on the chemical equilibrium assumption. Stagnation point convective heat fluxes predicted using Park's finite-rate model are far below those obtained from chemical equilibrium B' tables and Zhluktov's model. Recession predictions from Zhluktov's model are generally lower than those obtained from Park's model and chemical equilibrium B' tables. 
The effect of species mass diffusion on predicted ablation rate is also examined.

  10. Rapid biochemical methane potential prediction of urban organic waste with near-infrared reflectance spectroscopy.

    PubMed

    Fitamo, T; Triolo, J M; Boldrin, A; Scheutz, C

    2017-08-01

    The anaerobic digestibility of various biomass feedstocks in biogas plants is determined with biochemical methane potential (BMP) assays. However, experimental BMP analysis is time-consuming and costly, which makes it challenging to optimise stock management and feeding to achieve improved biogas production. The aim of the present study is to develop a fast and reliable model based on near-infrared reflectance spectroscopy (NIRS) for the BMP prediction of urban organic waste (UOW). The model comprised 87 UOW samples. Additionally, 88 plant biomass samples were included to develop a combined model predicting BMP. The coefficient of determination (R²) and root mean square error in prediction (RMSEP) of the UOW model were 0.88 and 44 mL CH4/g VS, while those of the combined model were 0.89 and 50 mL CH4/g VS. Improved model performance was obtained for the two individual models compared to the combined version. The BMP prediction with NIRS was satisfactory and moderately successful. Copyright © 2017 Elsevier Ltd. All rights reserved.
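
The two performance measures used here, R² and RMSEP, can be computed from observed versus NIRS-predicted BMP values; the numbers in the usage note are toy values, not study measurements:

```python
import math

def r2_rmsep(observed, predicted):
    """Coefficient of determination and root mean square error of prediction."""
    n = len(observed)
    mean = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)
```

For example, `r2_rmsep([300, 400, 500], [310, 390, 480])` gives R² = 0.97 and an RMSEP of about 14.1.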

  11. Prediction of 222Rn in Danish dwellings using geology and house construction information from central databases.

    PubMed

    Andersen, Claus E; Raaschou-Nielsen, Ole; Andersen, Helle Primdal; Lind, Morten; Gravesen, Peter; Thomsen, Birthe L; Ulbak, Kaare

    2007-01-01

    A linear regression model has been developed for the prediction of indoor ²²²Rn in Danish houses. The model provides proxy radon concentrations for about 21,000 houses in a Danish case-control study on the possible association between residential radon and childhood cancer (primarily leukaemia). The model was calibrated against radon measurements in 3116 houses. An independent dataset with 788 house measurements was used for model performance assessment. The model includes nine explanatory variables, of which the most important ones are house type and geology. All explanatory variables are available from central databases. The model was fitted to log-transformed radon concentrations and it has an R² of 40%. The uncertainty associated with individual predictions of (untransformed) radon concentrations is about a factor of 2.0 (one standard deviation). The comparison with the independent test data shows that the model makes sound predictions and that errors of radon predictions are only weakly correlated with the estimates themselves (R² = 10%).
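
The quoted "factor of 2.0" uncertainty is what a model fitted to log-transformed concentrations naturally produces: the standard deviation of the log-scale residuals back-transforms into a multiplicative factor. A sketch with illustrative residuals:

```python
import math

def uncertainty_factor(log_residuals):
    """One-sigma multiplicative prediction uncertainty for a model fitted in
    log space: exp(standard deviation of the log-scale residuals)."""
    n = len(log_residuals)
    m = sum(log_residuals) / n
    sd = math.sqrt(sum((r - m) ** 2 for r in log_residuals) / (n - 1))
    return math.exp(sd)
```

A factor of 2.0 corresponds to a log-scale residual standard deviation of ln 2 ≈ 0.69, i.e. a one-sigma band running from half to twice the predicted concentration.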

  12. Towards more accurate and reliable predictions for nuclear applications

    NASA Astrophysics Data System (ADS)

    Goriely, Stephane; Hilaire, Stephane; Dubray, Noel; Lemaître, Jean-François

    2017-09-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction are still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. Nowadays, mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and can therefore replace the phenomenological inputs in the evaluation of nuclear data. The latest achievements in determining nuclear masses within the non-relativistic HFB approach, including the related uncertainties in the model predictions, are discussed. Similarly, recent efforts to determine fission observables within the mean-field approach are described and compared with more traditional existing models.

  13. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affect the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.
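
The paper's key mechanism, that overall transmission suitability is a product of unimodal trait responses and therefore peaks below the hottest single-trait optimum, can be illustrated numerically. Brière and quadratic curves are standard forms for such traits, but every parameter value below is invented for illustration and is not taken from the paper:

```python
def briere(t, c, t0, tm):
    """Briere thermal response: zero outside (t0, tm), asymmetric hump inside."""
    if t <= t0 or t >= tm:
        return 0.0
    return c * t * (t - t0) * (tm - t) ** 0.5

def quadratic(t, q, t0, tm):
    """Concave-down quadratic response, truncated at zero."""
    return max(0.0, q * (t - t0) * (tm - t))

def suitability(t):
    """Relative transmission suitability as a product of trait responses.
    All parameter values are illustrative, not the paper's fitted estimates."""
    biting = briere(t, 2.0e-4, 10.0, 38.0)       # biting rate
    survival = quadratic(t, 5.0e-3, 4.0, 30.0)   # adult survival
    development = briere(t, 1.0e-4, 12.0, 36.0)  # parasite development rate
    return biting * survival * development

# The product peaks several degrees below the hottest single-trait optimum:
best = max(range(10, 40), key=suitability)
```

With these toy parameters the product peaks at 25 °C even though the individual responses extend well above 30 °C, mirroring the paper's qualitative argument.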

  15. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  16. A Plasticity Model to Predict the Effects of Confinement on Concrete

    NASA Astrophysics Data System (ADS)

    Wolf, Julie

    A plasticity model to predict the behavior of confined concrete is developed. The model is designed to implicitly account for the increase in strength and ductility due to confining a concrete member. The concrete model is implemented into a finite element (FE) model. By implicitly including the change in the strength and ductility in the material model, the confining material can be explicitly included in the FE model. Any confining material can be considered, and the effects on the concrete of failure in the confinement material can be modeled. Test data from a wide variety of different concretes utilizing different confinement methods are used to estimate the model parameters. This allows the FE model to capture the generalized behavior of concrete under multiaxial loading. The FE model is used to predict the results of tests on reinforced concrete members confined by steel hoops and fiber reinforced polymer (FRP) jackets. Loading includes pure axial load and axial load-moment combinations. Variability in the test data makes the model predictions difficult to compare but, overall, the FE model is able to capture the effects of confinement on concrete. Finally, the FE model is used to compare the performance of steel hoop to FRP confined sections, and of square to circular cross sections. As expected, circular sections are better able to engage the confining material, leading to higher strengths. However, higher strains are seen in the confining material for the circular sections. This leads to failure at lower axial strain levels in the case of the FRP confined sections. Significant differences are seen in the behavior of FRP confined members and steel hoop confined members. Failure in the FRP members is always determined by rupture in the composite jacket. As a result, the FRP members continue to take load up to failure. In contrast, the steel hoop confined sections exhibit extensive strain softening before failure. 
This comparison illustrates the usefulness of the concrete model as a tool for designers. Overall, the concrete model provides a flexible and powerful method to predict the performance of confined concrete.

  17. Creep fatigue life prediction for engine hot section materials (isotropic)

    NASA Technical Reports Server (NTRS)

    Moreno, Vito; Nissley, David; Lin, Li-Sen Jim

    1985-01-01

    The first two years of a two-phase program aimed at improving the high temperature crack initiation life prediction technology for gas turbine hot section components are discussed. In the Phase 1 (baseline) effort, low cycle fatigue (LCF) models, using a data base generated for a cast nickel base gas turbine hot section alloy (B1900+Hf), were evaluated for their ability to predict the crack initiation life for relevant creep-fatigue loading conditions and to define the data required for determination of model constants. The variables included strain range and rate, mean strain, strain hold times and temperature. None of the models predicted all of the life trends within reasonable data requirements. A Cycle Damage Accumulation (CDA) model was therefore developed, which follows an exhaustion-of-material-ductility approach. Material ductility is estimated based on observed similarities of deformation structure between fatigue, tensile and creep tests. The cycle damage function is based on total strain range, maximum stress and stress amplitude and includes both time-independent and time-dependent components. The CDA model accurately predicts all of the trends in creep-fatigue life with loading conditions. In addition, all of the CDA model constants are determinable from rapid cycle, fully reversed fatigue tests and monotonic tensile and/or creep data.

  18. Mapping the Transmission Risk of Zika Virus using Machine Learning Models.

    PubMed

    Jiang, Dong; Hao, Mengmeng; Ding, Fangyu; Fu, Jingying; Li, Meng

    2018-06-19

    Zika virus, which has been linked to severe congenital abnormalities, is exacerbating global public health problems with its rapid transnational expansion fueled by increased global travel and trade. Suitability mapping of the transmission risk of Zika virus is essential for drafting public health plans and disease control strategies, which are especially important in areas where medical resources are relatively scarce. Predicting the risk of Zika virus outbreak has been studied in recent years, but the published literature rarely includes multiple model comparisons or predictive uncertainty analysis. Here, three relatively popular machine learning models, the backward propagation neural network (BPNN), the gradient boosting machine (GBM) and the random forest (RF), were adopted to map the probability of Zika epidemic outbreak at the global level, pairing high-dimensional multidisciplinary covariate layers with comprehensive location data on recorded Zika virus infection in humans. The results show that the predicted high-risk areas for Zika transmission are concentrated in four regions: Southeastern North America, Eastern South America, Central Africa and Eastern Asia. To evaluate the performance of the machine learning models, 50 modeling runs were conducted based on a training dataset. The BPNN model obtained the highest predictive accuracy with a 10-fold cross-validation area under the curve (AUC) of 0.966 [95% confidence interval (CI) 0.965-0.967], followed by the GBM model (10-fold cross-validation AUC = 0.964 [0.963-0.965]) and the RF model (10-fold cross-validation AUC = 0.963 [0.962-0.964]). Based on the training samples, significant differences in prediction accuracy relative to the BPNN-based model are observed for the GBM and RF models (p = 0.0258* and p = 0.0001***, respectively). 
Importantly, the prediction uncertainty introduced by the selection of absence data was quantified and could provide more accurate fundamental and scientific information for further study on disease transmission prediction and risk assessment. Copyright © 2018. Published by Elsevier B.V.
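The model comparison described in this record can be sketched with scikit-learn. This is an illustrative reconstruction on synthetic presence/absence data, not the authors' code; the estimators and the 10-fold cross-validated AUC scoring mirror the setup named in the abstract.

```python
# Sketch: comparing tree-ensemble classifiers by 10-fold cross-validated AUC.
# Synthetic data stands in for the covariate layers and occurrence records.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=12, random_state=0)

models = {
    "GBM": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
cv_auc = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    cv_auc[name] = (scores.mean(), scores.std())

for name, (mean_auc, sd) in cv_auc.items():
    print(f"{name}: 10-fold CV AUC = {mean_auc:.3f} (+/- {sd:.3f})")
```

A paired test over the per-fold scores (as the authors' p-values suggest) would then decide whether the AUC differences are significant.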

  19. Risk Prediction Models for Acute Kidney Injury in Critically Ill Patients: Opus in Progressu.

    PubMed

    Neyra, Javier A; Leaf, David E

    2018-05-31

    Acute kidney injury (AKI) is a complex systemic syndrome associated with high morbidity and mortality. Among critically ill patients admitted to intensive care units (ICUs), the incidence of AKI is as high as 50% and is associated with dismal outcomes. Thus, the development and validation of clinical risk prediction tools that accurately identify patients at high risk for AKI in the ICU is of paramount importance. We provide a comprehensive review of 3 clinical risk prediction tools that have been developed for incident AKI occurring in the first few hours or days following admission to the ICU. We found substantial heterogeneity among the clinical variables that were examined and included as significant predictors of AKI in the final models. The area under the receiver operating characteristic curves was ∼0.8 for all 3 models, indicating satisfactory model performance, though positive predictive values ranged from only 23 to 38%. Hence, further research is needed to develop more accurate and reproducible clinical risk prediction tools. Strategies for improved assessment of AKI susceptibility in the ICU include the incorporation of dynamic (time-varying) clinical parameters, as well as biomarker, functional, imaging, and genomic data. © 2018 S. Karger AG, Basel.
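The gap this record notes between satisfactory AUROC (~0.8) and modest positive predictive values (23-38%) follows from how PPV depends on prevalence. A minimal sketch, with an illustrative operating point not taken from the reviewed models:

```python
# Sketch: PPV from sensitivity, specificity, and prevalence (Bayes' rule).
def ppv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                  # true-positive mass
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false-positive mass
    return tp / (tp + fp)

# A plausible operating point for a model with AUROC around 0.8 (assumed):
sens, spec = 0.75, 0.75
for prev in (0.10, 0.25, 0.50):
    print(f"prevalence {prev:.2f}: PPV = {ppv(sens, spec, prev):.2f}")
```

At low prevalence the false positives dominate, which is why PPV can stay low even when discrimination is good.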

  20. Ultrasound-based clinical prediction rule model for detecting papillary thyroid cancer in cervical lymph nodes: A pilot study.

    PubMed

    Patel, Nayana U; McKinney, Kristin; Kreidler, Sarah M; Bieker, Teresa M; Russ, Paul; Roberts, Katherine; Glueck, Deborah H; Albuja-Cruz, Maria; Klopper, Joshua; Haugen, Bryan R

    2016-01-01

    To identify sonographic features of cervical lymph nodes (LNs) that are associated with papillary thyroid cancer (PTC) and to develop a prediction model for classifying nodes as metastatic or benign. This retrospective study included the records of postthyroidectomy patients with PTC who had undergone cervical ultrasound and LN biopsy. LN location, size, shape, hilum, echopattern, Doppler flow, and microcalcifications were assessed. Model selection was used to identify features associated with malignant LNs and to build a predictive, binary-outcome, generalized linear mixed model. A cross-validated receiver operating characteristic analysis was conducted to assess the accuracy of the model for classifying metastatic nodes. We analyzed records from 71 LNs (23 metastatic) in 44 patients (16 with PTC). The predictive model included a nonhomogeneous echopattern (odds ratio [OR], 5.73; 95% confidence interval [CI], 1.07-30.74; p = 0.04), microcalcifications (OR, 4.91; 95% CI, 0.91-26.54; p = 0.06), and volume (OR, 2.57; 95% CI, 0.66-9.99; p = 0.16) as predictors. The model had an area under the curve of 0.74 (95% CI, 0.60-0.85), sensitivity of 65% (95% CI, 50% to 78%), and specificity of 85% (95% CI, 73% to 94%) at the Youden optimal cut point of 0.38. Nonhomogeneous echopattern, microcalcifications, and node volume were predictive of malignant LNs in patients with PTC. A larger sample is needed to validate this model. © 2015 Wiley Periodicals, Inc.
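The Youden optimal cut point reported above (0.38) is the threshold maximizing sensitivity + specificity - 1 along the ROC curve. A sketch on synthetic model scores (the class sizes echo the 23-of-71 split; the score distributions are assumptions):

```python
# Sketch: choosing a threshold by Youden's J = TPR - FPR on the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = np.r_[np.zeros(48), np.ones(23)].astype(int)     # 23 of 71 "metastatic"
scores = np.r_[rng.normal(0.3, 0.15, 48), rng.normal(0.6, 0.15, 23)]

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                        # Youden's J at every candidate threshold
best = j.argmax()
cutpoint = thresholds[best]
print(f"Youden cut point: {cutpoint:.2f} "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```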

  1. Prediction and validation of residual feed intake and dry matter intake in Danish lactating dairy cows using mid-infrared spectroscopy of milk.

    PubMed

    Shetty, N; Løvendahl, P; Lund, M S; Buitenhuis, A J

    2017-01-01

    The present study explored the effectiveness of Fourier transform mid-infrared (FT-IR) spectral profiles as a predictor for dry matter intake (DMI) and residual feed intake (RFI). The partial least squares regression method was used to develop the prediction models. The models were validated using different external test sets: one randomly leaving out 20% of the records (validation A), a second randomly leaving out 20% of cows (validation B), and a third (for DMI prediction models) randomly leaving out one cow (validation C). The data included 1,044 records from 140 cows; 97 were Danish Holstein and 43 Danish Jersey. Results showed better accuracies for validation A compared with the other validation methods. Milk yield (MY) contributed largely to DMI prediction; MY explained 59% of the variation, and the root mean square error of prediction (RMSEP) of the validated model was 2.24 kg. The model was improved by adding live weight (LW) as an additional predictor trait, where the accuracy R² increased from 0.59 to 0.72 and the RMSEP decreased from 2.24 to 1.83 kg. When only the milk FT-IR spectral profile was used in DMI prediction, a lower prediction ability was obtained, with R² = 0.30 and RMSEP = 2.91 kg. However, once the spectral information was added along with MY and LW as predictors, model accuracy improved: R² increased to 0.81 and RMSEP decreased to 1.49 kg. Prediction accuracies of RFI changed throughout lactation. The RFI prediction model for the early-lactation stage was better than those for across-lactation or the mid- and late-lactation stages, with R² = 0.46 and RMSEP = 1.70. The most important spectral wavenumbers that contributed to the DMI and RFI prediction models included fat, protein, and lactose peaks. Comparable prediction results were obtained when using infrared-predicted fat, protein, and lactose instead of full spectra, indicating that FT-IR spectral data do not add significant new information to improve DMI and RFI prediction models.
Therefore, in practice, if full FT-IR spectral data are not stored, it is possible to achieve similar DMI or RFI prediction results based on standard milk control data. For DMI, the milk fat region was responsible for the major variation in milk spectra; for RFI, the major variation in milk spectra was within the milk protein region. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  2. CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data

    PubMed Central

    Weiss, Scott T.

    2014-01-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com. PMID:24922310

  3. CGBayesNets: conditional Gaussian Bayesian network learning and inference with mixed discrete and continuous data.

    PubMed

    McGeachie, Michael J; Chang, Hsun-Hsien; Weiss, Scott T

    2014-06-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.
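The conditional Gaussian idea behind CGBNs, a continuous node modeled as a Gaussian whose parameters depend on the state of a discrete parent, can be illustrated in miniature. This is a hedged sketch on synthetic data, not CGBayesNets code (which is MATLAB), and the two-class setup is an assumption:

```python
# Sketch: a conditional Gaussian node and Bayes-rule inference over its
# discrete parent, fit by maximum likelihood on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
parent = rng.integers(0, 2, size=500)              # discrete parent (e.g. phenotype)
child = np.where(parent == 1,
                 rng.normal(2.0, 1.0, 500),        # continuous child per class
                 rng.normal(0.0, 1.0, 500))

# ML estimates of the class-conditional Gaussians.
params = {s: (child[parent == s].mean(), child[parent == s].std())
          for s in (0, 1)}

def posterior_p1(x, prior1=0.5):
    """P(parent = 1 | child = x) via Bayes' rule over the two Gaussians."""
    def pdf(v, mu, sd):
        return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    l1 = prior1 * pdf(x, *params[1])
    l0 = (1 - prior1) * pdf(x, *params[0])
    return l1 / (l0 + l1)

print(f"P(class 1 | x = 2.0) = {posterior_p1(2.0):.2f}")
```

A full CGBN chains many such nodes and learns the graph structure as well; this shows only the local likelihood and inference step.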

  4. Users guide for the hydroacoustic coverage assessment model (HydroCAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, T., LLNL

    1997-12-01

    A model for predicting the detection and localization performance of hydroacoustic monitoring networks has been developed. The model accounts for major factors affecting global-scale acoustic propagation in the ocean, including horizontal refraction, travel time variability due to spatial and temporal fluctuations in the ocean, and detailed characteristics of the source. Graphical user interfaces are provided to set up the models and visualize the results. The model produces maps of network detection coverage and localization area of uncertainty, as well as intermediate results such as predicted path amplitudes, travel time, and travel time variance. This Users Guide for the model is organized into three sections. First, a summary of the functionality available in the model is presented, including example output products. The second section provides detailed descriptions of each of the models contained in the system. The last section describes how to run the model, including a summary of each data input form in the user interface.

  5. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, M.; Bowman, B.; Branson, J.

    The dominant error source in the force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  6. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  7. High-fidelity modeling and impact footprint prediction for vehicle breakup analysis

    NASA Astrophysics Data System (ADS)

    Ling, Lisa

    For decades, vehicle breakup analysis had been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative to assess possible environmental impacts, obtain launch approval, and for launch contingency planning. In order to accurately perform a vehicle breakup analysis, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify the additional dynamics modeling and capabilities for the analysis tool with the objectives to (1) have the capability to predict impact point and footprint, (2) increase the fidelity in the prediction of vehicle breakup, and (3) reduce the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of impact footprint. The functions to increase the fidelity in the prediction of vehicle breakup included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrary-shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. 
The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.

  8. Predicting Survival From Large Echocardiography and Electronic Health Record Datasets: Optimization With Machine Learning.

    PubMed

    Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K

    2018-06-09

    The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed with a multivariate imputation by chained equations (MICE) algorithm. We compared models against each other and against baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography.
Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
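The pipeline this record describes (chained-equations imputation, then random forest vs. logistic regression compared by cross-validated AUC) can be sketched with scikit-learn, whose `IterativeImputer` is a MICE-style imputer. Synthetic data and missingness rates are assumptions, not the study's cohort:

```python
# Sketch: MICE-style imputation followed by an RF vs. LR AUC comparison.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=15, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.1] = np.nan        # knock out ~10% of entries

X_imp = IterativeImputer(random_state=0).fit_transform(X)

auc_rf = cross_val_score(RandomForestClassifier(random_state=0),
                         X_imp, y, cv=10, scoring="roc_auc").mean()
auc_lr = cross_val_score(LogisticRegression(max_iter=1000),
                         X_imp, y, cv=10, scoring="roc_auc").mean()
print(f"RF AUC {auc_rf:.3f} vs LR AUC {auc_lr:.3f}")
```

In a stricter setup the imputer would be fit inside each cross-validation fold (e.g. via a `Pipeline`) to avoid leaking test-fold information.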

  9. Models to predict length of stay in the Intensive Care Unit after coronary artery bypass grafting: a systematic review.

    PubMed

    Atashi, Alireza; Verburg, Ilona W; Karim, Hesam; Miri, Mirmohammad; Abu-Hanna, Ameen; de Jonge, Evert; de Keizer, Nicolette F; Eslami, Saeid

    2018-06-01

    Intensive care unit (ICU) length of stay (LoS) prediction models are used to compare different institutions and surgeons on their performance, and are useful as an efficiency indicator for quality control. There is little consensus about which prediction methods are most suitable to predict ICU length of stay. The aim of this study is to systematically review models for predicting ICU LoS after coronary artery bypass grafting (CABG) and to assess the reporting and methodological quality of these models in order to apply them for benchmarking. A general search was conducted in Medline and Embase up to 31-12-2016. Three authors classified the papers for inclusion by reading their title, abstract, and full text. All original papers describing the development and/or validation of a prediction model for LoS in the ICU after CABG surgery were included. We used a checklist developed for critical appraisal and data extraction for systematic reviews of prediction modeling and extended it to the handling of specific patient subgroups. We also defined other items and scores to assess the methodological and reporting quality of the models. Of 5181 uniquely identified articles, fifteen studies were included, of which twelve concerned development of new models and three concerned validation of existing models. All studies used linear or logistic regression as the method for model development and reported various performance measures based on the difference between predicted and observed ICU LoS. Most used a prospective (46.6%) or retrospective (40%) study design. We found heterogeneity in patient inclusion/exclusion criteria, sample size, reported accuracy rates, and methods of candidate predictor selection. Most (60%) of the studies did not mention the handling of missing values, and none compared the model outcome measure of survivors with that of non-survivors. For model development and validation studies, respectively, the maximum reporting (methodological) scores were 66/78 and 62/62 (14/22 and 12/22).
There are relatively few models for predicting ICU length of stay after CABG. Several aspects of methodological and reporting quality of studies in this field should be improved. There is a need for standardizing outcome and risk factor definitions in order to develop/validate a multi-institutional and international risk scoring system.

  10. Creep fatigue life prediction for engine hot section materials (ISOTROPIC)

    NASA Technical Reports Server (NTRS)

    Nelson, R. S.; Schoendorf, J. F.; Lin, L. S.

    1986-01-01

    The specific activities summarized include: verification experiments (base program); thermomechanical cycling model; multiaxial stress state model; cumulative loading model; screening of potential environmental and protective coating models; and environmental attack model.

  11. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  12. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and the Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
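The skill-weighted combination at the heart of Reliability-Ensemble-Averaging-style methods can be sketched in a few lines: each model's weight is inversely proportional to its past error against observations. The wake-decay "models" and observations below are toy numbers, not data from the cited experiments:

```python
# Sketch: error-weighted multi-model ensemble averaging.
import numpy as np

obs = np.array([1.00, 0.82, 0.66, 0.52, 0.41])      # observed decay (toy values)
preds = {                                           # per-model predictions
    "model_A": np.array([0.98, 0.85, 0.70, 0.55, 0.45]),
    "model_B": np.array([1.05, 0.78, 0.60, 0.50, 0.38]),
    "model_C": np.array([0.90, 0.70, 0.55, 0.40, 0.30]),
}

rmse = {k: float(np.sqrt(np.mean((v - obs) ** 2))) for k, v in preds.items()}
weights = {k: 1.0 / e for k, e in rmse.items()}     # more skill -> more weight
total = sum(weights.values())
weights = {k: w / total for k, w in weights.items()}

ensemble = sum(w * preds[k] for k, w in weights.items())
ens_rmse = float(np.sqrt(np.mean((ensemble - obs) ** 2)))
print("weights:", {k: round(w, 3) for k, w in weights.items()})
print("ensemble RMSE:", round(ens_rmse, 4))
```

Bayesian Model Averaging replaces these heuristic weights with posterior model probabilities, and Monte Carlo approaches sample the input uncertainties instead.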

  13. Testing the transtheoretical model in predicting smoking relapse among Malaysian adult smokers receiving assistance in quitting.

    PubMed

    Yasin, Siti Munira; Retneswari, Masilamani; Moy, Foong Ming; Taib, Khairul Mizan; Isahak, Marzuki; Koh, David

    2013-01-01

    Evidence for the role of the Transtheoretical Model (TTM) in predicting relapse is limited. We aimed to assess whether this model can be utilised to predict relapse during the action stage. The participants included 120 smokers who had abstained from smoking for at least 24 hours following a smoking cessation programme at two Malaysian universities. The smokers who relapsed perceived significantly greater advantages related to smoking and increasing doubt in their ability to quit. In contrast, former smokers with greater self-liberation and determination to abstain were less likely to relapse. The findings suggest that the TTM can be used to predict relapse among quitting smokers.

  14. Multistep-Ahead Air Passengers Traffic Prediction with Hybrid ARIMA-SVMs Models

    PubMed Central

    Ming, Wei; Xiong, Tao

    2014-01-01

    Hybrid ARIMA-SVMs prediction models, which take advantage of the unique strengths of ARIMA and SVMs models in linear and nonlinear modeling respectively, have been established recently. Building upon such hybrid ARIMA-SVMs models, this study goes further to extend them to the case of multistep-ahead prediction for air passengers traffic with the two most commonly used multistep-ahead prediction strategies, that is, the iterated strategy and the direct strategy. Additionally, the effectiveness of data preprocessing approaches, such as deseasonalization and detrending, is investigated and demonstrated along with the two strategies. Real data sets including four selected airlines' monthly series were collected to justify the effectiveness of the proposed approach. Empirical results demonstrate that the direct strategy performs better than the iterated one for long-term prediction, while the iterated strategy performs better for short-term prediction. Furthermore, both deseasonalization and detrending can significantly improve the prediction accuracy for both strategies, indicating the necessity of data preprocessing. As such, this study serves as a full reference for planners from the air transportation industry on how to tackle multistep-ahead prediction tasks in the implementation of either prediction strategy. PMID:24723814
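The two multistep strategies named in this record differ only in how models map to horizons: iterated feeds a single one-step model's forecasts back into itself, while direct trains a separate model per horizon. A minimal sketch, with a plain autoregression standing in for the ARIMA-SVMs hybrid and a synthetic seasonal series standing in for the airline data:

```python
# Sketch: iterated vs. direct multistep-ahead forecasting with a lagged
# linear autoregression (a stand-in for the hybrid model).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p, H = 300, 12, 3                     # series length, lag order, horizon
t = np.arange(n)
y = 10 + 0.02 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, n)

def make_xy(series, lags, horizon):
    """Predict series[i+lags+horizon-1] from the window series[i:i+lags]."""
    X, target = [], []
    for i in range(len(series) - lags - horizon + 1):
        X.append(series[i:i + lags])
        target.append(series[i + lags + horizon - 1])
    return np.array(X), np.array(target)

history, future = y[:-H], y[-H:]
window = history[-p:]

# Iterated strategy: one one-step model, its forecasts fed back as inputs.
X1, t1 = make_xy(history, p, 1)
one_step = LinearRegression().fit(X1, t1)
buf, iterated = list(window), []
for _ in range(H):
    nxt = one_step.predict(np.array(buf[-p:])[None, :])[0]
    iterated.append(nxt)
    buf.append(nxt)

# Direct strategy: a separate model trained for each horizon h.
direct = []
for h in range(1, H + 1):
    Xh, th = make_xy(history, p, h)
    direct.append(LinearRegression().fit(Xh, th).predict(window[None, :])[0])

err = lambda f: float(np.sqrt(np.mean((np.array(f) - future) ** 2)))
rmse_it, rmse_dir = err(iterated), err(direct)
print(f"iterated RMSE {rmse_it:.3f}, direct RMSE {rmse_dir:.3f}")
```

Deseasonalization and detrending, as the study recommends, would be applied to `y` before fitting and inverted on the forecasts.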

  15. Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)

    NASA Astrophysics Data System (ADS)

    Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

    2017-08-01

    The issue of modeling association football prediction models has become increasingly popular in the last few years, and many different approaches have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw, or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches, and Bayesian approaches. Lately, many studies regarding football prediction models have been produced using Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of home win (H), away win (A), and draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012, and 2012-2013 has been selected and reviewed. K-fold cross validation has been used for testing the accuracy of the prediction model. The required information about the football data is sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results could be used as a benchmark output for future research in predicting football match results.
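The evaluation scheme here (a three-class H/D/A predictor scored by k-fold cross-validated accuracy) can be sketched with scikit-learn. A categorical naive Bayes model stands in for the paper's Bayesian network, and the discretised match features are synthetic:

```python
# Sketch: 10-fold cross-validated accuracy for a three-class match predictor.
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 380 * 3                                  # roughly three EPL seasons of matches
X = rng.integers(0, 4, size=(n, 5))          # discretised team-form features (toy)
logits = np.stack([X[:, 0] + X[:, 1],        # crude "home strength"
                   np.full(n, 2.0),          # draw baseline
                   X[:, 2] + X[:, 3]], 1)    # crude "away strength"
y = logits.argmax(axis=1)                    # 0 = H, 1 = D, 2 = A

acc = cross_val_score(CategoricalNB(), X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.3f}")
```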

  16. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    PubMed

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
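The profile-based prediction idea above, inferring a compound's activity against one target from its activity profile across the remaining targets, can be sketched with a naive Bayes classifier on a binary compound-by-target matrix. The matrix and its latent-factor correlation structure are synthetic assumptions:

```python
# Sketch: predicting activity against one target from the activity profile
# over the other targets (structure-independent, as in the record above).
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_compounds, n_targets = 400, 50
latent = rng.random((n_compounds, 1))               # shared correlation factor
profile = (rng.random((n_compounds, n_targets)) < 0.2 + 0.5 * latent).astype(int)

target_idx = 0                                      # target held out for prediction
X = np.delete(profile, target_idx, axis=1)          # profile over remaining targets
y = profile[:, target_idx]

auc = cross_val_score(BernoulliNB(), X, y, cv=5, scoring="roc_auc").mean()
print(f"profile-based NB, 5-fold AUC: {auc:.3f}")
```

The cross-target correlation injected by `latent` is what makes the profile informative, mirroring the correlation effects the authors report.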

  17. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    NASA Technical Reports Server (NTRS)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

    A program for predicting the sound levels inside propeller driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included) and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  18. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.

    PubMed

    McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey

    2012-04-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
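The contrast drawn above between cached (model-free) and model-based predictions can be made concrete with a toy reward-devaluation example: a cached value changes only through further direct experience, whereas a model-based value is recomputed from the stored model and adapts immediately. This is an illustrative caricature, not a simulation from the review:

```python
# Sketch: model-free cached value vs. model-based value under devaluation.
alpha, n_visits = 0.1, 200
reward = 1.0

# Model-free: a cached value updated only by a TD-style delta rule.
v_cached = 0.0
for _ in range(n_visits):
    v_cached += alpha * (reward - v_cached)   # prediction-error update

# Model-based: store the reward model itself; value is recomputed on demand.
reward_model = reward

# Devaluation: the outcome's current value drops to zero.
reward_model = 0.0
v_model_based = reward_model                  # reflects the change instantly
print(f"after devaluation: cached {v_cached:.2f}, model-based {v_model_based:.2f}")
```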

  19. Are prediction models for Lynch syndrome valid for probands with endometrial cancer?

    PubMed

    Backes, Floor J; Hampel, Heather; Backes, Katherine A; Vaccarello, Luis; Lewandowski, George; Bell, Jeffrey A; Reid, Gary C; Copeland, Larry J; Fowler, Jeffrey M; Cohn, David E

    2009-01-01

    Currently, three prediction models are used to predict a patient's risk of having Lynch syndrome (LS). These models have been validated in probands with colorectal cancer (CRC), but not in probands presenting with endometrial cancer (EMC). Thus, the aim was to determine the performance of these prediction models in women with LS presenting with EMC. Probands with EMC and LS were identified. Personal and family history was entered into three prediction models, PREMM(1,2), MMRpro, and MMRpredict. Probabilities of mutations in the mismatch repair genes were recorded. Accurate prediction was defined as a model predicting at least a 5% chance of a proband carrying a mutation. From 562 patients prospectively enrolled in a clinical trial of patients with EMC, 13 (2.2%) were shown to have LS. Nine patients had a mutation in MSH6, three in MSH2, and one in MLH1. MMRpro predicted that 3 of 9 patients with an MSH6, 3 of 3 with an MSH2, and 1 of 1 patient with an MLH1 mutation could have LS. For MMRpredict, EMC coded as "proximal CRC" predicted 5 of 5, and as "distal CRC" 3 of 5. PREMM(1,2) predicted that 4 of 4 patients with an MLH1 or MSH2 mutation could have LS. Prediction of LS in probands presenting with EMC using current models for probands with CRC works reasonably well. Further studies are needed to develop models that include questions specific to patients with EMC with a greater age range, as well as placing increased emphasis on prediction of LS in probands with MSH6 mutations.

  20. Development of one-equation transition/turbulence models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, J.R.; Roy, C.J.; Blottner, F.G.

    2000-01-14

    This paper reports on the development of a unified one-equation model for the prediction of transitional and turbulent flows. An eddy-viscosity transport equation for nonturbulent fluctuation growth based on that proposed by Warren and Hassan is combined with the Spalart-Allmaras one-equation model for turbulent fluctuation growth. Blending of the two equations is accomplished through a multidimensional intermittency function based on the work of Dhawan and Narasimha. The model predicts both the onset and extent of transition. Low-speed test cases include transitional flow over a flat plate, a single element airfoil, and a multi-element airfoil in landing configuration. High-speed test cases include transitional Mach 3.5 flow over a 5° cone and Mach 6 flow over a flared-cone configuration. Results are compared with experimental data, and the grid-dependence of selected predictions is analyzed.

  1. Prediction Models for 30-Day Mortality and Complications After Total Knee and Hip Arthroplasties for Veteran Health Administration Patients With Osteoarthritis.

    PubMed

    Harris, Alex Hs; Kuo, Alfred C; Bowe, Thomas; Gupta, Shalini; Nordin, David; Giori, Nicholas J

    2018-05-01

    Statistical models to preoperatively predict patients' risk of death and major complications after total joint arthroplasty (TJA) could improve the quality of preoperative management and informed consent. Although risk models for TJA exist, they have limitations, including poor transparency and/or unknown or poor performance. Thus, it is currently impossible to know how well currently available models predict short-term complications after TJA, or whether newly developed models are more accurate. We sought to develop and conduct cross-validation of predictive risk models, and to report details and performance metrics as benchmarks. Over 90 preoperative variables were used as candidate predictors of death and major complications within 30 days for Veterans Health Administration patients with osteoarthritis who underwent TJA. Data were split into 3 samples: one for selection of model tuning parameters, one for model development, and one for cross-validation. C-indexes (discrimination) and calibration plots were produced. A total of 70,569 patients diagnosed with osteoarthritis who received primary TJA were included. C-statistics and bootstrapped confidence intervals for the cross-validation of the boosted regression models were highest for cardiac complications (0.75; 0.71-0.79) and 30-day mortality (0.73; 0.66-0.79) and lowest for deep vein thrombosis (0.59; 0.55-0.64) and return to the operating room (0.60; 0.57-0.63). Moderately accurate predictive models of 30-day mortality and cardiac complications after TJA in Veterans Health Administration patients were developed and internally cross-validated. By reporting model coefficients and performance metrics, other model developers can test these models on new samples and have a procedure- and indication-specific benchmark to surpass. Published by Elsevier Inc.
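    The C-statistic reported in this record is the probability that the model assigns a higher risk score to a patient who experiences the outcome than to one who does not. A minimal sketch of that computation (not the study's code; the risk scores and outcomes below are synthetic, and the effect size is an invented assumption):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical preoperative risk scores and 30-day outcomes for a validation sample.
n = 300
risk = rng.random(n)
died = rng.random(n) < 0.3 * risk   # higher predicted risk -> more observed deaths

def c_index(score, event):
    """Fraction of (event, non-event) pairs the score orders correctly;
    tied scores count one half (equivalent to the AUC for a binary outcome)."""
    pos, neg = score[event], score[~event]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(f"C-index: {c_index(risk, died):.3f}")
```

    A C-index of 0.5 corresponds to chance ordering; the record's values of 0.59-0.75 sit between chance and the 0.8+ usually described as strong discrimination.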

  2. Improvement of Prediction Ability for Genomic Selection of Dairy Cattle by Including Dominance Effects

    PubMed Central

    Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.

    2014-01-01

    Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. 
The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
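    A toy sketch of the cross-validation comparison described above, with synthetic genotypes and ridge regression standing in for the study's variance-component models (the sample sizes, SNP counts, and effect scales are illustrative assumptions, not values from the paper):

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)

# Synthetic genotypes: 500 cows x 200 SNPs, coded as 0/1/2 copies of one allele.
n, p = 500, 200
G = rng.integers(0, 3, size=(n, p)).astype(float)
A = G - 1.0                   # additive coding: -1, 0, 1
D = (G == 1.0).astype(float)  # dominance coding: 1 for heterozygotes

# Invented additive and dominance SNP effects plus residual noise.
beta_a = rng.normal(scale=0.3, size=p)
beta_d = rng.normal(scale=0.2, size=p)
y = A @ beta_a + D @ beta_d + rng.normal(scale=2.0, size=n)

def ridge_cv_corr(X, lam=50.0, k=10):
    """Ten-fold cross-validated correlation of ridge predictions with phenotypes."""
    idx = rng.permutation(n)
    cors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                  X[train].T @ y[train])
        cors.append(np.corrcoef(X[fold] @ w, y[fold])[0, 1])
    return float(np.mean(cors))

acc_add = ridge_cv_corr(A)                   # additive effects only
acc_both = ridge_cv_corr(np.hstack([A, D]))  # additive + dominance effects
print(f"additive only: {acc_add:.3f}; additive + dominance: {acc_both:.3f}")
```

    As in the study, the comparison metric is the correlation between predicted genetic effects and phenotypes; whether adding dominance helps depends on how much dominance variance the trait actually carries.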

  3. New Methods for Estimating Seasonal Potential Climate Predictability

    NASA Astrophysics Data System (ADS)

    Feng, Xia

    This study develops two new statistical approaches to assess the seasonal potential predictability of observed climate variables. One is the univariate analysis of covariance (ANOCOVA) model, a combination of an autoregressive (AR) model and analysis of variance (ANOVA). It has the advantage of taking into account the uncertainty of the estimated parameters due to sampling errors in the statistical test, which is often neglected in AR-based methods, and of accounting for daily autocorrelation, which is not considered in traditional ANOVA. In the ANOCOVA model, the seasonal signals arising from external forcing are tested for whether they are identical, to assess whether any interannual variability that may exist is potentially predictable. The bootstrap is an attractive alternative method that requires no hypothesized model and is applicable no matter how mathematically complicated the parameter estimator is. This method builds up the empirical distribution of the interannual variance from resamplings drawn with replacement from the given sample, in which the only predictability in seasonal means arises from weather noise. These two methods are applied to temperature and water-cycle components, including precipitation and evaporation, to measure the extent to which the interannual variance of seasonal means exceeds the unpredictable weather noise, and are compared with previous methods, including Leith-Shukla-Gutzler (LSG), Madden, and Katz. The potential predictability of temperature from the ANOCOVA model, bootstrap, LSG, and Madden exhibits a pronounced tropical-extratropical contrast, with much larger predictability in the tropics, dominated by El Niño/Southern Oscillation (ENSO), than in higher latitudes, where strong internal variability lowers predictability. The bootstrap tends to display the highest predictability of the four methods, ANOCOVA lies in the middle, while LSG and Madden appear to generate lower predictability.
    Seasonal precipitation from ANOCOVA, bootstrap, and Katz, resembling that for temperature, is more predictable over the tropical regions and less predictable in the extratropics. Bootstrap and ANOCOVA are in good agreement with each other, both methods generating larger predictability than Katz. The seasonal predictability of evaporation over land bears considerable similarity to that of temperature using ANOCOVA, bootstrap, LSG, and Madden. The remote SST forcing and soil moisture reveal substantial seasonality in their relations with the potentially predictable seasonal signals. For selected regions, either SST or soil moisture or both show significant relationships with predictable signals, hence providing indirect insight into the slowly varying boundary processes involved and enabling useful seasonal climate prediction. A multivariate analysis of covariance (MANOCOVA) model is established to identify distinctive predictable patterns, which are uncorrelated with each other. Generally speaking, the seasonal predictability from the multivariate model is consistent with that from ANOCOVA. Besides unveiling the spatial variability of predictability, the MANOCOVA model also reveals the temporal variability of each predictable pattern, which could be linked to periodic oscillations.
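    The bootstrap test described in this record can be illustrated with a toy version (not the authors' implementation; the data, sample sizes, and signal are synthetic): seasonal means are recomputed from days resampled with replacement, building an empirical null distribution of interannual variance in which only weather noise is predictable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record: 30 years x 90 days of a daily climate variable.
n_years, n_days = 30, 90
daily = rng.normal(size=(n_years, n_days))  # unpredictable weather noise
daily[:5] += 1.0                            # an invented predictable signal in 5 years

def interannual_variance(x):
    """Variance across years of the seasonal (90-day) means."""
    return x.mean(axis=1).var(ddof=1)

observed = interannual_variance(daily)

# Bootstrap null: resample days with replacement from the pooled record, so any
# predictable seasonal signal is destroyed and only weather noise remains.
pooled = daily.ravel()
null = np.array([
    interannual_variance(rng.choice(pooled, size=(n_years, n_days)))
    for _ in range(2000)
])

# Seasonal means are "potentially predictable" when the observed interannual
# variance exceeds what the noise-only null distribution can produce.
p_value = float((null >= observed).mean())
print(f"observed variance of seasonal means: {observed:.4f}, bootstrap p = {p_value:.3f}")
```

    Because no null model has to be specified analytically, the same recipe works for precipitation or evaporation, whose estimators are less tractable than those for temperature.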

  4. Chemically Aware Model Builder (camb): an R package for property and bioactivity modelling of small molecules.

    PubMed

    Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C

    2015-01-01

    In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as the creation of model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models, a complete model-building workflow in one package.

  5. Predicting solvation free energies and thermodynamics in polar solvents and mixtures using a solvation-layer interface condition

    PubMed Central

    Goossens, Spencer; Mehdizadeh Rahimi, Ali

    2017-01-01

    We demonstrate that with two small modifications, the popular dielectric continuum model is capable of predicting, with high accuracy, ion solvation thermodynamics (Gibbs free energies, entropies, and heat capacities) in numerous polar solvents. We are also able to predict ion solvation free energies in water–co-solvent mixtures over available concentration series. The first modification to the classical dielectric Poisson model is a perturbation of the macroscopic dielectric-flux interface condition at the solute–solvent interface: we add a nonlinear function of the local electric field, giving what we have called a solvation-layer interface condition (SLIC). The second modification is including the microscopic interface potential (static potential) in our model. We show that the resulting model exhibits high accuracy without the need for fitting solute atom radii in a state-dependent fashion. Compared to experimental results in nine water–co-solvent mixtures, SLIC predicts transfer free energies to within 2.5 kJ/mol. The co-solvents include both protic and aprotic species, as well as biologically relevant denaturants such as urea and dimethylformamide. Furthermore, our results indicate that the interface potential is essential to reproduce entropies and heat capacities. These and previous tests of the SLIC model indicate that it is a promising dielectric continuum model for accurate predictions in a wide range of conditions.

  6. Predicting solvation free energies and thermodynamics in polar solvents and mixtures using a solvation-layer interface condition

    NASA Astrophysics Data System (ADS)

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Knepley, Matthew; Bardhan, Jaydeep P.

    2017-03-01

    We demonstrate that with two small modifications, the popular dielectric continuum model is capable of predicting, with high accuracy, ion solvation thermodynamics (Gibbs free energies, entropies, and heat capacities) in numerous polar solvents. We are also able to predict ion solvation free energies in water-co-solvent mixtures over available concentration series. The first modification to the classical dielectric Poisson model is a perturbation of the macroscopic dielectric-flux interface condition at the solute-solvent interface: we add a nonlinear function of the local electric field, giving what we have called a solvation-layer interface condition (SLIC). The second modification is including the microscopic interface potential (static potential) in our model. We show that the resulting model exhibits high accuracy without the need for fitting solute atom radii in a state-dependent fashion. Compared to experimental results in nine water-co-solvent mixtures, SLIC predicts transfer free energies to within 2.5 kJ/mol. The co-solvents include both protic and aprotic species, as well as biologically relevant denaturants such as urea and dimethylformamide. Furthermore, our results indicate that the interface potential is essential to reproduce entropies and heat capacities. These and previous tests of the SLIC model indicate that it is a promising dielectric continuum model for accurate predictions in a wide range of conditions.

  7. The effect of the hot oxygen corona on the interaction of the solar wind with Venus

    NASA Technical Reports Server (NTRS)

    Belotserkovskii, O. M.; Mitnitskii, V. IA.; Breus, T. K.; Krymskii, A. M.; Nagy, A. F.

    1987-01-01

    A numerical gasdynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gasdynamic model that includes the effects of mass loading can be used to predict these parameters.

  8. The effect of the hot oxygen corona on the interaction of the solar wind with Venus

    NASA Astrophysics Data System (ADS)

    Belotserkovskii, O. M.; Breus, T. K.; Krymskii, A. M.; Mitnitskii, V. Ya.; Nagey, A. F.; Gombosi, T. I.

    1987-05-01

    A numerical gas dynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gas dynamic model that includes the effects of mass loading can be used to predict these parameters.

  9. Developing and implementing the use of predictive models for estimating water quality at Great Lakes beaches

    USGS Publications Warehouse

    Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.

    2013-01-01

    Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. 
    Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches that had at least 2 years of data (2010-11 and sometimes earlier) and for 1 beach that had 1 year of data. For most models, software designed for model development by the U.S. Environmental Protection Agency (Virtual Beach) was used. The selected model for each beach was based on a combination of explanatory variables including, most commonly, turbidity, day of the year, change in lake level over 24 hours, wave height, wind direction and speed, and antecedent rainfall for various time periods. Forty-two predictive models were validated against data collected during an independent year (2012) and compared to the current method for assessing recreational water quality: using the previous day’s E. coli concentration (persistence model). Goals for good predictive-model performance were overall correct responses at least 5 percent greater than those of the persistence model, overall correct responses greater than or equal to 80 percent, sensitivities (percentage of exceedances of the bathing-water standard that were correctly predicted by the model) greater than or equal to 50 percent, and specificities (percentage of nonexceedances correctly predicted by the model) greater than or equal to 85 percent. Out of 42 predictive models, 24 models yielded overall correct responses that were at least 5 percent greater than those of the persistence model. Predictive-model responses met the performance goals more often than the persistence-model responses in terms of overall correctness (28 versus 17 models, respectively), sensitivity (17 versus 4 models), and specificity (34 versus 25 models). Gaining knowledge of each beach and the factors that affect E. coli concentrations is important for developing good predictive models. Collection of additional years of data with a wide range of environmental conditions may also help to improve future model performance. 
The USGS will continue to work with local agencies in 2013 and beyond to develop and validate predictive models at beaches and improve existing nowcasts, restructuring monitoring activities to accommodate future uncertainties in funding and resources.
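    The performance goals above can be made concrete with a small sketch. The data and decision threshold here are synthetic assumptions; the point is only how overall correctness, sensitivity, and specificity are computed for a nowcast model and for the persistence model it must beat.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic season: daily truth of whether the bathing-water standard was exceeded,
# plus a hypothetical model score thresholded into an exceedance prediction.
n_days = 120
exceeded = rng.random(n_days) < 0.25
model_score = 0.3 * exceeded + rng.normal(0.35, 0.2, n_days)
model_pred = model_score >= 0.5

# Persistence model: today's prediction is simply yesterday's observation.
persist_pred = np.roll(exceeded, 1)
persist_pred[0] = False

def performance(pred, truth):
    tp = np.sum(pred & truth)      # exceedances correctly predicted
    tn = np.sum(~pred & ~truth)    # non-exceedances correctly predicted
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "overall": (tp + tn) / len(truth),  # report's goal: >= 80 percent
        "sensitivity": tp / (tp + fn),      # goal: >= 50 percent
        "specificity": tn / (tn + fp),      # goal: >= 85 percent
    }

for name, pred in [("predictive", model_pred), ("persistence", persist_pred)]:
    print(name, performance(pred, exceeded))
```

    The persistence model is a natural baseline because exceedance days cluster after storms, so yesterday's result is often, but not always, informative about today.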

  10. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass transport is directionally modified in the framework of kinetic Monte Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite-element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature, and time.

  11. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  12. NOAA Climate Program Office Contributions to National ESPC

    NASA Astrophysics Data System (ADS)

    Higgins, W.; Huang, J.; Mariotti, A.; Archambault, H. M.; Barrie, D.; Lucas, S. E.; Mathis, J. T.; Legler, D. M.; Pulwarty, R. S.; Nierenberg, C.; Jones, H.; Cortinas, J. V., Jr.; Carman, J.

    2016-12-01

    NOAA is one of five federal agencies (DOD, DOE, NASA, NOAA, and NSF) that signed an updated charter in 2016 to partner on the National Earth System Prediction Capability (ESPC). Situated within NOAA's Office of Oceanic and Atmospheric Research (OAR), NOAA Climate Program Office (CPO) programs contribute significantly to the National ESPC goals and activities. This presentation will provide an overview of CPO contributions to National ESPC. First, we will discuss selected CPO research and transition activities that directly benefit the ESPC coupled model prediction capability, including: the North American Multi-Model Ensemble (NMME) seasonal prediction system; the Subseasonal Experiment (SubX) project to test real-time subseasonal ensemble prediction systems; and improvements to the NOAA operational Climate Forecast System (CFS), including software infrastructure and data assimilation. Next, we will show how CPO's foundational research activities are advancing future ESPC capabilities. Highlights will include: the Tropical Pacific Observing System (TPOS), to provide the basis for predicting climate on subseasonal to decadal timescales; Subseasonal-to-Seasonal (S2S) processes and predictability studies, to improve understanding, modeling, and prediction of the MJO; an Arctic Research Program, to address urgent needs for advancing monitoring and prediction capabilities in this major area of concern; and advances towards building an experimental multi-decadal prediction system through studies of the Atlantic Meridional Overturning Circulation (AMOC). Finally, CPO has embraced Integrated Information Systems (IISs) that build on the innovation of programs such as the National Integrated Drought Information System (NIDIS) to develop and deliver end-to-end environmental information for key societal challenges (e.g., extreme heat, coastal flooding). These contributions will help the National ESPC better understand and address societal needs and decision-support requirements.

  13. Ability of preoperative 3.0-Tesla magnetic resonance imaging to predict the absence of side-specific extracapsular extension of prostate cancer.

    PubMed

    Hara, Tomohiko; Nakanishi, Hiroyuki; Nakagawa, Tohru; Komiyama, Motokiyo; Kawahara, Takashi; Manabe, Tomoko; Miyake, Mototaka; Arai, Eri; Kanai, Yae; Fujimoto, Hiroyuki

    2013-10-01

    Recent studies have shown an improvement in prostate cancer diagnosis with the use of 3.0-Tesla magnetic resonance imaging. We retrospectively assessed the ability of this imaging technique to predict side-specific extracapsular extension of prostate cancer. From October 2007 to August 2011, prostatectomy was carried out in 396 patients after preoperative 3.0-Tesla magnetic resonance imaging. Among these, 132 (primary sample) and 134 patients (validation sample) underwent 12-core prostate biopsy at the National Cancer Center Hospital of Tokyo, Japan, and at other institutions, respectively. In the primary dataset, univariate and multivariate analyses were carried out to predict side-specific extracapsular extension using variables determined preoperatively, including 3.0-Tesla magnetic resonance imaging findings (T2-weighted and diffusion-weighted imaging). A prediction model was then constructed and applied to the validation study sample. Multivariate analysis identified four significant independent predictors (P < 0.05), including a biopsy Gleason score of ≥8, positive 3.0-Tesla diffusion-weighted magnetic resonance imaging findings, ≥2 positive biopsy cores on each side and a maximum percentage of positive cores ≥31% on each side. The negative predictive value was 93.9% in the combination model with these four predictors, whereas the positive predictive value was 33.8%. Good reproducibility of these four significant predictors and the combination model was observed in the validation study sample. The side-specific extracapsular extension prediction by the biopsy Gleason score and factors associated with tumor location, including a positive 3.0-Tesla diffusion-weighted magnetic resonance imaging finding, has a high negative predictive value but a low positive predictive value. © 2013 The Japanese Urological Association.
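    The negative and positive predictive values reported above follow directly from a 2x2 table of predictions against pathology findings. The counts below are hypothetical, chosen only so that the derived PPV (33.8%) and NPV (93.9%) reproduce the values quoted in this record; the implied sensitivity and specificity are not reported values.

```python
# Hypothetical side-specific counts (tp, fp, fn, tn) consistent with the
# record's PPV and NPV, not taken from the paper.
tp, fp, fn, tn = 25, 49, 9, 139

ppv = tp / (tp + fp)  # of sides called ECE-positive, fraction truly positive
npv = tn / (tn + fn)  # of sides called ECE-negative, fraction truly negative
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"PPV={ppv:.1%}  NPV={npv:.1%}")  # PPV=33.8%  NPV=93.9%
print(f"sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
```

    A high NPV with a low PPV, as here, means the model is most useful for ruling out extracapsular extension rather than confirming it.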

  14. Large-scale prediction of adverse drug reactions using chemical, biological, and phenotypic properties of drugs.

    PubMed

    Liu, Mei; Wu, Yonghui; Chen, Yukun; Sun, Jingchun; Zhao, Zhongming; Chen, Xue-wen; Matheny, Michael Edwin; Xu, Hua

    2012-06-01

    Adverse drug reaction (ADR) is one of the major causes of failure in drug development. Severe ADRs that go undetected until the post-marketing phase of a drug often lead to patient morbidity. Accurate prediction of potential ADRs is required in the entire life cycle of a drug, including early stages of drug design, different phases of clinical trials, and post-marketing surveillance. Many studies have utilized either chemical structures or molecular pathways of the drugs to predict ADRs. Here, the authors propose a machine-learning-based approach for ADR prediction by integrating the phenotypic characteristics of a drug, including indications and other known ADRs, with the drug's chemical structures and biological properties, including protein targets and pathway information. A large-scale study was conducted to predict 1385 known ADRs of 832 approved drugs, and five machine-learning algorithms for this task were compared. This evaluation, based on a fivefold cross-validation, showed that the support vector machine algorithm outperformed the others. Of the three types of information, phenotypic data were the most informative for ADR prediction. When biological and phenotypic features were added to the baseline chemical information, the ADR prediction model achieved significant improvements in area under the curve (from 0.9054 to 0.9524), precision (from 43.37% to 66.17%), and recall (from 49.25% to 63.06%). Most importantly, the proposed model successfully predicted the ADRs associated with withdrawal of rofecoxib and cerivastatin. The results suggest that phenotypic information on drugs is valuable for ADR prediction. Moreover, they demonstrate that different models that combine chemical, biological, or phenotypic information can be built from approved drugs, and they have the potential to detect clinically important ADRs in both preclinical and post-marketing phases.
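    A hedged illustration of the feature-set comparison described above: the data are synthetic, a least-squares linear score stands in for the paper's support vector machine, and the AUC is computed in-sample (rather than by fivefold cross-validation) for brevity, via the rank-sum statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic drugs: "chemical" and "phenotypic" feature blocks (names illustrative).
n = 400
chem = rng.normal(size=(n, 20))
pheno = rng.normal(size=(n, 20))

# An invented ADR label driven mostly by the phenotypic block, mirroring the
# paper's finding that phenotypic data were the most informative.
logit = 0.3 * chem[:, :5].sum(axis=1) + 1.0 * pheno[:, :5].sum(axis=1)
y = (logit + rng.normal(scale=1.0, size=n)) > 0

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    m = len(scores)
    ranks = np.empty(m)
    ranks[np.argsort(scores)] = np.arange(1, m + 1)
    npos = labels.sum()
    return (ranks[labels].sum() - npos * (npos + 1) / 2) / (npos * (m - npos))

def fit_score(X):
    """In-sample least-squares linear score (stand-in for the paper's SVM)."""
    w, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
    return X @ w

auc_chem = auc(fit_score(chem), y)
auc_all = auc(fit_score(np.hstack([chem, pheno])), y)
print(f"chemical only: AUC={auc_chem:.3f}; chemical+phenotypic: AUC={auc_all:.3f}")
```

    The gap between the two AUCs plays the same role as the paper's improvement from 0.9054 to 0.9524 when biological and phenotypic features were added to the chemical baseline.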

  15. Predicting the hand, foot, and mouth disease incidence using search engine query data and climate variables: an ecological study in Guangdong, China.

    PubMed

    Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao

    2017-10-06

    Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Ecological study. Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011-2014. Analyses were conducted at aggregate level and no confidential information was involved. A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. A high correlation between HFMD incidence and BDI (r=0.794, p<0.001) or temperature (r=0.657, p<0.001) was observed using both time series plot and correlation matrix. A linear effect of BDI (without lag) and a non-linear effect of temperature (1-week lag) on HFMD incidence were found in a distributed lag non-linear model. Compared with the model based on surveillance data only, the ARIMAX model including BDI reached the best goodness-of-fit with an Akaike information criterion (AIC) value of -345.332, whereas the model including both BDI and temperature had the most accurate prediction in terms of the mean absolute percentage error (MAPE) of 101.745%. An ARIMAX model incorporating search engine query data significantly improved the prediction of HFMD. Further studies are warranted to examine whether including search engine query data also improves the prediction of other infectious diseases in other settings. 
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Predicting the hand, foot, and mouth disease incidence using search engine query data and climate variables: an ecological study in Guangdong, China

    PubMed Central

    Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao

    2017-01-01

Objectives Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Design Ecological study. Setting and participants Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011–2014. Analyses were conducted at aggregate level and no confidential information was involved. Outcome measures A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. Results A high correlation between HFMD incidence and BDI (r=0.794, p<0.001) or temperature (r=0.657, p<0.001) was observed using both time series plot and correlation matrix. A linear effect of BDI (without lag) and non-linear effect of temperature (1 week lag) on HFMD incidence were found in a distributed lag non-linear model. Compared with the model based on surveillance data only, the ARIMAX model including BDI reached the best goodness-of-fit with an Akaike information criterion (AIC) value of −345.332, whereas the model including both BDI and temperature had the most accurate prediction in terms of the mean absolute percentage error (MAPE) of 101.745%. Conclusions An ARIMAX model incorporating search engine query data significantly improved the prediction of HFMD.
Further studies are warranted to examine whether including search engine query data also improves the prediction of other infectious diseases in other settings. PMID:28988169
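The ARIMAX approach described above augments an autoregressive model of incidence with exogenous regressors such as the Baidu Index. As a rough illustration of the idea, the sketch below fits a toy ARX(1) model, y[t] = a·y[t-1] + b·x[t] + c, by ordinary least squares on synthetic data; the series, coefficients, and variable names are invented for illustration and are not the study's actual data or software.

```python
import random

# Toy ARX(1) sketch: autoregression on the previous value plus one
# exogenous regressor (an analogue of the Baidu Index in the study).
# All data are synthetic; true parameters are a=0.6, b=2.0, c=0.5.

def fit_arx1(y, x):
    """Least-squares fit of y[t] ~ a*y[t-1] + b*x[t] + c via normal equations."""
    rows = [[y[t - 1], x[t], 1.0] for t in range(1, len(y))]
    rhs = [y[t] for t in range(1, len(y))]
    # Build the 3x3 normal equations (A^T A) beta = A^T b.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(3)]
    # Solve the small system by Gauss-Jordan elimination.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for k in range(3):
            if k != i:
                m[k] = [mv - m[k][i] * iv for mv, iv in zip(m[k], m[i])]
    return [m[i][3] for i in range(3)]  # estimates of a, b, c

random.seed(0)
x = [random.random() for _ in range(300)]            # exogenous signal
y = [0.0]
for t in range(1, 300):
    y.append(0.6 * y[t - 1] + 2.0 * x[t] + 0.5 + random.gauss(0, 0.05))

a, b, c = fit_arx1(y, x)
print(round(a, 2), round(b, 2), round(c, 2))         # close to 0.6 2.0 0.5
```

A full ARIMAX fit would additionally difference the series and model seasonal and moving-average terms, which this sketch omits.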

  17. [Estimation of the effect derived from wind erosion of soil and dust emission in Tianjin suburbs on the central district based on WEPS model].

    PubMed

    Chen, Li; Han, Ting-Ting; Li, Tao; Ji, Ya-Qin; Bai, Zhi-Peng; Wang, Bin

    2012-07-01

Because China currently lacks a prediction model for wind erosion and the development of such models has been slow, this study aims to predict the wind erosion of soil and the dust emission and to develop a prediction model for wind erosion in Tianjin by investigating the structure, parameter systems and the relationships among the parameter systems of the prediction models for wind erosion in typical areas, using the U.S. wind erosion prediction system (WEPS) as reference. Based on the remote sensing technique and the test data, a parameter system was established for the prediction model of wind erosion and dust emission, and a model was developed that was suitable for the prediction of wind erosion and dust emission in Tianjin. Tianjin was divided into 11 080 blocks with a resolution of 1 km × 1 km, among which 7 778 dust emitting blocks were selected. The parameters of the blocks were localized, including longitude, latitude, elevation and direction. The database files of blocks were localized, including wind file, climate file, soil file and management file. The weps.run file was edited. Based on Microsoft Visual Studio 2008, secondary development was done using the C++ language, and the dust fluxes of the 7 778 blocks were estimated, including creep and saltation fluxes, suspension fluxes and PM10 fluxes. Based on the parameters of wind tunnel experiments in Inner Mongolia, the soil measurement data and climate data in suburbs of Tianjin, the wind erosion modulus, wind erosion fluxes, dust emission release modulus and dust release fluxes were calculated for the four seasons and the whole year in suburbs of Tianjin. In 2009, the total creep and saltation fluxes, suspension fluxes and PM10 fluxes in the suburbs of Tianjin were 2.54 × 10^6 t, 1.25 × 10^7 t and 9.04 × 10^5 t, respectively, among which the parts pointing to the central district were 5.61 × 10^5 t, 2.89 × 10^6 t and 2.03 × 10^5 t, respectively.

  18. Developing a predictive tropospheric ozone model for Tabriz

    NASA Astrophysics Data System (ADS)

    Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi

    2013-04-01

Predictive ozone models are becoming indispensable tools by providing pollution alerts for people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, by using the following five modeling strategies: three regression-type methods: Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models: Nonlinear Local Prediction (NLP) to implement chaos theory and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type modeling strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, whereas the auto-regression-type models regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
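Of the strategies above, Nonlinear Local Prediction is the least standard: it forecasts the next value as the observed successor of the most similar past delay vector in reconstructed phase space. The sketch below shows the nearest-neighbour idea on a synthetic periodic series; the embedding dimension and data are illustrative only, not the Tabriz ozone record.

```python
import math

# Sketch of Nonlinear Local Prediction (NLP): embed the series in
# m-dimensional delay vectors, find the past vector closest to the most
# recent one, and predict its observed successor.

def nlp_forecast(series, m=3):
    """Predict the next value from the nearest past m-dimensional delay vector."""
    target = series[-m:]
    best_d, best_next = float("inf"), None
    for i in range(len(series) - m):            # candidate window start
        window = series[i:i + m]
        d = math.dist(window, target)
        if d < best_d:
            best_d, best_next = d, series[i + m]  # the window's successor
    return best_next

series = [math.sin(0.3 * t) for t in range(200)]  # synthetic stand-in signal
pred = nlp_forecast(series[:-1])                  # hold out the last point
actual = series[-1]
print(abs(pred - actual) < 0.1)                   # near-recurrence gives a close forecast
```

Real applications choose the embedding dimension and delay from the data (e.g. via false nearest neighbours) and average over several neighbours rather than using only the single closest one.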

  19. Root architecture simulation improves the inference from seedling root phenotyping towards mature root systems

    PubMed Central

    Zhao, Jiangsan; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A.; Nakhforoosh, Alireza

    2017-01-01

Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is the limitation to seedling plants and failure to make inferences on mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap, lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference across phenotyping platforms. Seedling plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length with a Spearman rank correlation of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. PMID:28168270
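The genotype-ranking comparison above uses Spearman rank correlation, which correlates the ranks of two score lists rather than the scores themselves. A minimal pure-Python sketch follows; the two lists are synthetic stand-ins for genotype scores, not the study's measurements, and tie handling is omitted for brevity.

```python
# Minimal Spearman rank-correlation sketch, the statistic used to compare
# model-predicted and measured genotype rankings. Synthetic data only.

def ranks(values):
    """Rank values 1..n in ascending order (no tie averaging, for brevity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman's r_s via the classic formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

measured  = [3.1, 2.4, 5.9, 4.2, 6.8, 1.7, 5.1, 4.9]   # hypothetical scores
predicted = [2.9, 2.8, 6.1, 3.9, 6.5, 1.9, 4.6, 5.2]
print(round(spearman(measured, predicted), 3))          # -> 0.976
```

Here only two of the eight items swap rank between the lists, so the coefficient is close to 1; identical rankings give exactly 1.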

  20. A systematic review of models to predict recruitment to multicentre clinical trials.

    PubMed

    Barnard, Katharine D; Dent, Louise; Cook, Andrew

    2010-07-06

Less than one third of publicly funded trials managed to recruit according to their original plan, often resulting in requests for additional funding and/or time extensions. The aim was to identify models which might be useful to a major public funder of randomised controlled trials when estimating likely time requirements for recruiting trial participants. The requirements of a useful model were identified as usability, being based on experience, ability to reflect time trends, accounting for centre recruitment, and contribution to a commissioning decision. A systematic review of English language articles was conducted using MEDLINE and EMBASE. Search terms included: randomised controlled trial, patient, accrual, predict, enroll, models, statistical; Bayes Theorem; Decision Theory; Monte Carlo Method and Poisson. Only studies discussing prediction of recruitment to trials using a modelling approach were included. Information was extracted from articles by one author, and checked by a second, using a pre-defined form. Out of 326 identified abstracts, only 8 met all the inclusion criteria. Across these 8 studies, five major classes of model were discussed: the unconditional model, the conditional model, the Poisson model, Bayesian models and Monte Carlo simulation of Markov models. None of these meet all the pre-identified needs of the funder. To meet the needs of a number of research programmes, a new model is required as a matter of importance. Any model chosen should be validated against both retrospective and prospective data, to ensure the predictions it gives are superior to those currently used.
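Among the model classes listed above, the Poisson model is the simplest: each centre recruits as a Poisson process with some weekly rate, so the expected time to a target of N patients from C centres is N / (C·λ). The toy simulation below checks that intuition; the centre count, rate, and target are invented for illustration, not taken from any real trial.

```python
import random

# Toy sketch of the Poisson recruitment model: each of C centres recruits
# as a Poisson process with rate lam patients/week, so total recruitment
# has rate C*lam and the expected time to target N is N / (C*lam) weeks.
# All numbers are hypothetical.

def simulate_weeks_to_target(n_centres, lam, target, rng):
    """Simulate weekly Poisson recruitment until the target is reached."""
    recruited, weeks = 0, 0
    while recruited < target:
        weeks += 1
        for _ in range(n_centres):
            # Poisson draw via Knuth's multiplication method.
            limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= limit:
                    break
                k += 1
            recruited += k
    return weeks

rng = random.Random(42)
runs = [simulate_weeks_to_target(n_centres=20, lam=0.5, target=300, rng=rng)
        for _ in range(50)]
expected = 300 / (20 * 0.5)                 # 30 weeks under the Poisson model
mean_weeks = sum(runs) / len(runs)
print(expected, round(mean_weeks, 1))       # simulated mean is close to 30
```

The review's point is that this simplicity is also the weakness: a constant rate cannot represent staggered centre openings or time trends, which the conditional, Bayesian, and Markov-simulation classes attempt to address.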

  1. Molecular structure and gas chromatographic retention behavior of the components of Ylang-Ylang oil.

    PubMed

    Olivero, J; Gracia, T; Payares, P; Vivas, R; Díaz, D; Daza, E; Geerlings, P

    1997-05-01

Using quantitative structure-retention relationship (QSRR) methodologies, the Kovats gas chromatographic retention indices for both apolar (DB-1) and polar (DB-Wax) columns for 48 compounds from Ylang-Ylang essential oil were empirically predicted from calculated and experimental data on molecular structure. Topological, geometric, and electronic descriptors were obtained for model generation. Relationships between descriptors and the retention data reported were established by linear multiple regression, giving equations that can be used to predict the Kovats indices for compounds present in essential oils, both in DB-1 and DB-Wax columns. Factor analysis was performed to interpret the meaning of the descriptors included in the models. The prediction model for the DB-1 column includes descriptors such as Randic's first-order connectivity index (1X), the molecular surface (MSA), the sum of the atomic charge on all the hydrogens (QH), Randic's third-order connectivity index (3X) and the molecular electronegativity (chi). The prediction model for the DB-Wax column includes the first three descriptors mentioned for the DB-1 column (1X, MSA and QH) and the most negative charge (MNC), the global softness (S), and the difference between Randic's and Kier and Hall's third-order connectivity indices (3X-3XV).

  2. Physics and chemistry of antimicrobial behavior of ion-exchanged silver in glass.

    PubMed

    Borrelli, N F; Senaratne, W; Wei, Y; Petzold, O

    2015-02-04

    The results of a comprehensive study involving the antimicrobial activity in a silver ion-exchanged glass are presented. The study includes the glass composition, the method of incorporating silver into the glass, the effective concentration of the silver available at the glass surface, and the effect of the ambient environment. A quantitative kinetic model that includes the above factors in predicting the antimicrobial activity is proposed. Finally, experimental data demonstrating antibacterial activity against Staphylococcus aureus with correlation to the predicted model is shown.

  3. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-01-20

    Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  4. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Stichting European Society for Clinical Investigation Journal Foundation.

  5. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-06

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  6. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD)

    PubMed Central

    Reitsma, Johannes B.; Altman, Douglas G.; Moons, Karel G.M.

    2015-01-01

    Background— Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. Methods— The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. Results— The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. Conclusions— To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25561516

  7. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement

    PubMed Central

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-01-01

    Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25562432

  8. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Royal College of Obstetricians and Gynaecologists.

  9. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. The TRIPOD Group.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-13

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 The Authors.

  10. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-06

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  11. Transparent reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-02-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). Copyright © 2015 Elsevier Inc. All rights reserved.

  12. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes.

    PubMed

    Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-11-01

As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated perfusion apparatus to systematically and efficiently generate predictive models using application of system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers. Biotechnol. Prog., 33:1647-1661, 2017.
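The control loop described above pairs an identified process model with a controller that, at each step, chooses the input minimising predicted deviation from a set point. The sketch below is a minimal analogue, not the study's controller: a hypothetical first-order linear model of %galactosylation responding to a galactose feed, with a grid-search MPC over a short horizon. All coefficients, the set point, and the grid are invented for illustration.

```python
# Minimal model predictive control sketch: a pre-identified linear model
# y[t+1] = A*y[t] + B*u[t] (y = %galactosylation, u = galactose feed);
# at each step the controller picks the feed from a candidate grid that
# minimises predicted squared set-point error over a short horizon.
# All numbers are hypothetical.

A, B = 0.9, 0.4                                # assumed identified model
SETPOINT = 70.0                                # target %galactosylation
CANDIDATES = [u * 0.5 for u in range(0, 41)]   # feed grid 0.0 .. 20.0

def predict(y, u, horizon=3):
    """Predicted trajectory holding input u constant over the horizon."""
    traj = []
    for _ in range(horizon):
        y = A * y + B * u
        traj.append(y)
    return traj

def mpc_step(y):
    """Choose the candidate input minimising squared set-point error."""
    return min(CANDIDATES,
               key=lambda u: sum((SETPOINT - yp) ** 2 for yp in predict(y, u)))

y = 40.0                                       # start far below the set point
for _ in range(30):                            # closed loop; plant == model here
    u = mpc_step(y)
    y = A * y + B * u
print(round(y, 1))                             # settles near the 70.0 set point
```

A practical MPC would optimise a full input sequence subject to constraints and would face plant-model mismatch, which is exactly why the study invests in automated system identification before closing the loop.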

  13. The Met Office Unified Model Global Atmosphere 6.0/6.1 and JULES Global Land 6.0/6.1 configurations

    NASA Astrophysics Data System (ADS)

    Walters, David; Boutle, Ian; Brooks, Malcolm; Melvin, Thomas; Stratton, Rachel; Vosper, Simon; Wells, Helen; Williams, Keith; Wood, Nigel; Allen, Thomas; Bushell, Andrew; Copsey, Dan; Earnshaw, Paul; Edwards, John; Gross, Markus; Hardiman, Steven; Harris, Chris; Heming, Julian; Klingaman, Nicholas; Levine, Richard; Manners, James; Martin, Gill; Milton, Sean; Mittermaier, Marion; Morcrette, Cyril; Riddick, Thomas; Roberts, Malcolm; Sanchez, Claudio; Selwood, Paul; Stirling, Alison; Smith, Chris; Suri, Dan; Tennant, Warren; Vidale, Pier Luigi; Wilkinson, Jonathan; Willett, Martin; Woolnough, Steve; Xavier, Prince

    2017-04-01

    We describe Global Atmosphere 6.0 and Global Land 6.0 (GA6.0/GL6.0): the latest science configurations of the Met Office Unified Model and JULES (Joint UK Land Environment Simulator) land surface model developed for use across all timescales. Global Atmosphere 6.0 includes the ENDGame (Even Newer Dynamics for General atmospheric modelling of the environment) dynamical core, which significantly increases mid-latitude variability, improving a known model bias. Alongside developments of the model's physical parametrisations, ENDGame also increases variability in the tropics, which leads to an improved representation of tropical cyclones and other tropical phenomena. Further developments of the atmospheric and land surface parametrisations improve other aspects of model performance, including the forecasting of surface weather phenomena. We also describe GA6.1/GL6.1, which includes a small number of long-standing differences from our main trunk configurations that we continue to require for operational global weather prediction. Since July 2014, GA6.1/GL6.1 has been used by the Met Office for operational global numerical weather prediction, whilst GA6.0/GL6.0 was implemented in its remaining global prediction systems over the following year.

  14. Relationships between host body condition and immunocompetence, not host sex, best predict parasite burden in a bat-helminth system.

    PubMed

    Warburton, Elizabeth M; Pearl, Christopher A; Vonhof, Maarten J

    2016-06-01

    Sex-biased parasitism highlights potentially divergent approaches to parasite resistance resulting in differing energetic trade-offs for males and females; however, trade-offs between immunity and self-maintenance could also depend on host body condition. We investigated these relationships in the big brown bat, Eptesicus fuscus, to determine if host sex or body condition better predicted parasite resistance, if testosterone levels predicted male parasite burdens, and if immune parameters could predict male testosterone levels. We found that male and female hosts had similar parasite burdens and female bats scored higher than males in only one immunological measure. Top models of helminth burden revealed interactions between body condition index and agglutination score as well as between agglutination score and host sex. Additionally, the strength of the relationships between sex, agglutination, and helminth burden is affected by body condition. Models of male parasite burden provided no support for testosterone predicting helminthiasis. Models that best predicted testosterone levels did not include parasite burden but instead consistently included month of capture and agglutination score. Thus, in our system, body condition was a more important predictor of immunity and worm burden than host sex.

  15. Intra-Articular Knee Contact Force Estimation During Walking Using Force-Reaction Elements and Subject-Specific Joint Model.

    PubMed

    Jung, Yihwan; Phan, Cong-Bo; Koo, Seungbum

    2016-02-01

    Joint contact forces measured with instrumented knee implants have not only revealed general patterns of joint loading but also showed individual variations that could be due to differences in anatomy and joint kinematics. Musculoskeletal human models for dynamic simulation have been utilized to understand body kinetics including joint moments, muscle tension, and knee contact forces. The objectives of this study were to develop a knee contact model which can predict knee contact forces using an inverse dynamics-based optimization solver and to investigate the effect of joint constraints on knee contact force prediction. A knee contact model was developed to include 32 reaction force elements on the surface of a tibial insert of a total knee replacement (TKR), which was embedded in a full-body musculoskeletal model. Various external measurements, including motion data and external force data during walking trials of a subject with an instrumented knee implant, were provided by the Sixth Grand Challenge Competition to Predict in vivo Knee Loads. Knee contact forces in the medial and lateral portions of the instrumented knee implant were also provided for the same walking trials. A knee contact model with a hinge joint and normal alignment could predict knee contact forces with root mean square errors (RMSEs) of 165 N and 288 N for the medial and lateral portions of the knee, respectively, and coefficients of determination (R2) of 0.70 and -0.63. When the degrees-of-freedom (DOF) of the knee and locations of leg markers were adjusted to account for the valgus lower-limb alignment of the subject, RMSE values improved to 144 N and 179 N, and R2 values improved to 0.77 and 0.37, respectively. The proposed knee contact model with a subject-specific joint model could predict in vivo knee contact forces with reasonable accuracy. This model may contribute to the development and improvement of knee arthroplasty.
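    The two agreement metrics reported in this record, RMSE and the coefficient of determination, are computed below on small made-up force arrays (not the Grand Challenge data):

```python
# RMSE and R^2 between predicted and measured contact forces.
# The arrays are illustrative placeholders, in newtons.
import numpy as np

measured = np.array([950.0, 1210.0, 1100.0, 880.0, 1020.0])   # N
predicted = np.array([900.0, 1150.0, 1180.0, 910.0, 1000.0])  # N

rmse = float(np.sqrt(np.mean((predicted - measured) ** 2)))
ss_res = float(np.sum((measured - predicted) ** 2))
ss_tot = float(np.sum((measured - measured.mean()) ** 2))
# R^2 goes negative when predictions are worse than simply predicting the
# mean -- which is how a value like the -0.63 quoted above can arise.
r2 = 1.0 - ss_res / ss_tot
```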

  16. Developing and validating a predictive model for stroke progression.

    PubMed

    Craig, L E; Wu, O; Gilmour, H; Barber, M; Langhorne, P

    2011-01-01

    Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Two patient cohorts were used for this study - the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p < 0.1) on univariate analysis were included in the multivariate model. Logistic regression was the technique employed using backward stepwise regression to drop the least significant variables (p > 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72-0.73)] and to fit well with the observed data, which is indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50-0.92)]. 
The predictive model developed in this study contains variables that can be easily collected in practice, thereby increasing its usability in clinical practice. Using this analysis approach, the discrimination and calibration of the predictive model appear sufficiently high to provide accurate predictions. This study also offers some discussion around the validation of predictive models for wider use in clinical practice.
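    The train-on-one-cohort, test-on-another workflow in this record can be sketched with synthetic data: fit a logistic model by gradient descent on a training cohort, then score discrimination on a held-out cohort via the rank-statistic form of the AUC. Predictor names and effect sizes below are invented for illustration.

```python
# Hedged sketch of cohort-split logistic modelling with AUC on the test set.
import numpy as np

rng = np.random.default_rng(1)
n = 800
X = rng.normal(size=(n, 3))               # stand-ins for coded clinical predictors
true_logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
Xtr, ytr, Xte, yte = X[:600], y[:600], X[600:], y[600:]   # training / test cohorts

# Plain gradient descent on the logistic log-loss (intercept included)
A = np.column_stack([np.ones(len(Xtr)), Xtr])
w = np.zeros(A.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-A @ w))
    w -= 0.1 * A.T @ (p - ytr) / len(ytr)

# Discrimination on the test cohort: AUC as P(score_case > score_control)
score = np.column_stack([np.ones(len(Xte)), Xte]) @ w
pos, neg = score[yte == 1], score[yte == 0]
auc = float(np.mean(pos[:, None] > neg[None, :]))
```

    The rank-statistic form makes explicit what an AUC of, say, 0.72 means: the probability that a randomly chosen progressing patient scores higher than a randomly chosen non-progressing one.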

  17. THE ORIGIN OF THE HOT GAS IN THE GALACTIC HALO: TESTING GALACTIC FOUNTAIN MODELS' X-RAY EMISSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henley, David B.; Shelton, Robin L.; Kwak, Kyujin

    2015-02-20

    We test the X-ray emission predictions of galactic fountain models against XMM-Newton measurements of the emission from the Milky Way's hot halo. These measurements are from 110 sight lines, spanning the full range of Galactic longitudes. We find that a magnetohydrodynamical simulation of a supernova-driven interstellar medium, which features a flow of hot gas from the disk to the halo, reproduces the temperature but significantly underpredicts the 0.5-2.0 keV surface brightness of the halo (by two orders of magnitude, if we compare the median predicted and observed values). This is true for versions of the model with and without an interstellar magnetic field. We consider different reasons for the discrepancy between the model predictions and the observations. We find that taking into account overionization in cooled halo plasma, which could in principle boost the predicted X-ray emission, is unlikely in practice to bring the predictions in line with the observations. We also find that including thermal conduction, which would tend to increase the surface brightnesses of interfaces between hot and cold gas, would not overcome the surface brightness shortfall. However, charge exchange emission from such interfaces, not included in the current model, may be significant. The faintness of the model may also be due to the lack of cosmic ray driving, meaning that the model may underestimate the amount of material transported from the disk to the halo. In addition, an extended hot halo of accreted material may be important, by supplying hot electrons that could boost the emission of the material driven out from the disk. Additional model predictions are needed to test the relative importance of these processes in explaining the observed halo emission.

  18. The Origin of the Hot Gas in the Galactic Halo: Testing Galactic Fountain Models' X-Ray Emission

    NASA Astrophysics Data System (ADS)

    Henley, David B.; Shelton, Robin L.; Kwak, Kyujin; Hill, Alex S.; Mac Low, Mordecai-Mark

    2015-02-01

    We test the X-ray emission predictions of galactic fountain models against XMM-Newton measurements of the emission from the Milky Way's hot halo. These measurements are from 110 sight lines, spanning the full range of Galactic longitudes. We find that a magnetohydrodynamical simulation of a supernova-driven interstellar medium, which features a flow of hot gas from the disk to the halo, reproduces the temperature but significantly underpredicts the 0.5-2.0 keV surface brightness of the halo (by two orders of magnitude, if we compare the median predicted and observed values). This is true for versions of the model with and without an interstellar magnetic field. We consider different reasons for the discrepancy between the model predictions and the observations. We find that taking into account overionization in cooled halo plasma, which could in principle boost the predicted X-ray emission, is unlikely in practice to bring the predictions in line with the observations. We also find that including thermal conduction, which would tend to increase the surface brightnesses of interfaces between hot and cold gas, would not overcome the surface brightness shortfall. However, charge exchange emission from such interfaces, not included in the current model, may be significant. The faintness of the model may also be due to the lack of cosmic ray driving, meaning that the model may underestimate the amount of material transported from the disk to the halo. In addition, an extended hot halo of accreted material may be important, by supplying hot electrons that could boost the emission of the material driven out from the disk. Additional model predictions are needed to test the relative importance of these processes in explaining the observed halo emission.

  19. Prediction of acute kidney injury within 30 days of cardiac surgery.

    PubMed

    Ng, Shu Yi; Sanagou, Masoumeh; Wolfe, Rory; Cochrane, Andrew; Smith, Julian A; Reid, Christopher Michael

    2014-06-01

    To predict acute kidney injury after cardiac surgery. The study included 28,422 cardiac surgery patients who had had no preoperative renal dialysis from June 2001 to June 2009 in 18 hospitals. Logistic regression analyses were undertaken to identify the best combination of risk factors for predicting acute kidney injury. Two models were developed, one including the preoperative risk factors and another including the pre-, peri-, and early postoperative risk factors. The area under the receiver operating characteristic curve was calculated, using split-sample internal validation, to assess model discrimination. The incidence of acute kidney injury was 5.8% (1642 patients). The mortality for patients who experienced acute kidney injury was 17.4% versus 1.6% for patients who did not. On validation, the area under the curve for the preoperative model was 0.77, and the Hosmer-Lemeshow goodness-of-fit P value was .06. For the postoperative model, the area under the curve was 0.81 and the Hosmer-Lemeshow P value was .6. Both models had good discrimination and acceptable calibration. Acute kidney injury after cardiac surgery can be predicted using preoperative risk factors alone or, with greater accuracy, using pre-, peri-, and early postoperative risk factors. The ability to identify high-risk individuals can be useful in preoperative patient management and for recruitment of appropriate patients to clinical trials. Prediction in the early stages of postoperative care can guide subsequent intensive care of patients and could also be the basis of a retrospective performance audit tool. Copyright © 2014 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
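    The Hosmer-Lemeshow calibration test quoted in this record groups subjects into deciles of predicted risk and compares observed with expected event counts. A minimal sketch, with outcomes deliberately simulated from the predicted risks so the model is well calibrated by construction:

```python
# Hosmer-Lemeshow statistic over deciles of predicted risk (synthetic data).
import math
import numpy as np

rng = np.random.default_rng(2)
p_hat = rng.uniform(0.01, 0.30, size=2000)      # hypothetical predicted risks
y = (rng.random(2000) < p_hat).astype(float)    # outcomes drawn from those risks

edges = np.quantile(p_hat, np.linspace(0.0, 1.0, 11))
group = np.digitize(p_hat, edges[1:-1])         # decile index 0..9
hl = 0.0
for g in range(10):
    m = group == g
    obs, exp, n_g = y[m].sum(), p_hat[m].sum(), m.sum()
    hl += (obs - exp) ** 2 / (exp * (1.0 - exp / n_g))

# Chi-square survival function in closed form for even df (df = 10 - 2 = 8);
# a high p-value indicates no evidence of miscalibration.
half = hl / 2.0
p_value = math.exp(-half) * sum(half**k / math.factorial(k) for k in range(4))
```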

  20. Clinical Prediction of Functional Outcome after Ischemic Stroke: The Surprising Importance of Periventricular White Matter Disease and Race

    PubMed Central

    Kissela, Brett; Lindsell, Christopher J.; Kleindorfer, Dawn; Alwell, Kathleen; Moomaw, Charles J.; Woo, Daniel; Flaherty, Matthew L.; Air, Ellen; Broderick, Joseph; Tsevat, Joel

    2009-01-01

    Background We sought to build models that address questions of interest to patients and families by predicting short- and long-term mortality and functional outcome after ischemic stroke, while allowing for risk re-stratification as comorbid events accumulate. Methods A cohort of 451 ischemic stroke subjects in 1999 were interviewed during hospitalization, at 3 months, and at approximately 4 years. Medical records from the acute hospitalization were abstracted. All hospitalizations for 3 months post-stroke were reviewed to ascertain medical and psychiatric comorbidities, which were categorized for analysis. Multivariable models were derived to predict mortality and functional outcome (modified Rankin Scale) at 3 months and 4 years. Comorbidities were included as modifiers of the 3 month models, and included in 4-year predictions. Results Post-stroke medical and psychiatric comorbidities significantly increased short term post-stroke mortality and morbidity. Severe periventricular white matter disease (PVWMD) was significantly associated with poor functional outcome at 3 months, independent of other factors, such as diabetes and age; inclusion of this imaging variable eliminated other traditional risk factors often found in stroke outcomes models. Outcome at 3 months was a significant predictor of long-term mortality and functional outcome. Black race was a predictor of 4-year mortality. Conclusions We propose that predictive models for stroke outcome, as well as analysis of clinical trials, should include adjustment for comorbid conditions. The effects of PVWMD on short-term functional outcomes and black race on long-term mortality are findings that require confirmation. PMID:19109548

  1. Bayesian inference of galaxy formation from the K-band luminosity function of galaxies: tensions between theory and observation

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Mo, H. J.; Katz, Neal; Weinberg, Martin D.

    2012-04-01

    We conduct Bayesian model inferences from the observed K-band luminosity function of galaxies in the local Universe, using the semi-analytic model (SAM) of galaxy formation introduced in Lu et al. The prior distributions for the 14 free parameters include a large range of possible models. We find that some of the free parameters, e.g. the characteristic scales for quenching star formation in both high-mass and low-mass haloes, are already tightly constrained by the single data set. The posterior distribution includes the model parameters adopted in other SAMs. By marginalizing over the posterior distribution, we make predictions that include the full inferential uncertainties for the colour-magnitude relation, the Tully-Fisher relation, the conditional stellar mass function of galaxies in haloes of different masses, the H I mass function, the redshift evolution of the stellar mass function of galaxies and the global star formation history. Using posterior predictive checking with the available observational results, we find that the model family (i) predicts a Tully-Fisher relation that is curved; (ii) significantly overpredicts the satellite fraction; (iii) vastly overpredicts the H I mass function; (iv) predicts high-z stellar mass functions that have too many low-mass galaxies and too few high-mass ones and (v) predicts a redshift evolution of the stellar mass density and the star formation history that are in moderate disagreement with observations. These results suggest that some important processes are still missing in the current model family, and we discuss a number of possible solutions to solve the discrepancies, such as interactions between galaxies and dark matter haloes, tidal stripping, the bimodal accretion of gas, preheating and a redshift-dependent initial mass function.

  2. Collegiate Student-Athletes' Academic Success: Academic Communication Apprehension's Impact on Prediction Models

    ERIC Educational Resources Information Center

    James, Kai'Iah A.

    2010-01-01

    This dissertation study examines the impact of traditional and non-cognitive variables on the academic prediction model for a sample of collegiate student-athletes. Three hundred and fifty-nine NCAA Division IA male and female student-athletes, representing 13 sports, including football and Men's and Women's Basketball provided demographic…

  3. Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran

    NASA Astrophysics Data System (ADS)

    Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa

    2017-02-01

    The main purpose of this study is to introduce the geological controlling factors in improving an intelligence-based model to estimate shear wave velocity from seismic attributes. The proposed method includes three main steps in the framework of geological events in a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine previous predictions into an enhanced solution. In order to show the geological effect on improving the prediction, the main classes of predominant lithofacies in the reservoir of interest, including shale, sand, and carbonate, were selected and then the proposed algorithm was performed with and without the lithofacies constraint. The results showed a good agreement between real and predicted shear wave velocity in the lithofacies-based model compared to the model without lithofacies, especially in sand and carbonate.

  4. High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs

    NASA Technical Reports Server (NTRS)

    Whitfield, C. E.; Mani, R.; Gliebe, P. R.

    1990-01-01

    The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.

  5. High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs

    NASA Astrophysics Data System (ADS)

    Whitfield, C. E.; Mani, R.; Gliebe, P. R.

    1990-07-01

    The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.

  6. Development and validation of an extensive growth and growth boundary model for psychrotolerant Lactobacillus spp. in seafood and meat products.

    PubMed

    Mejlholm, Ole; Dalgaard, Paw

    2013-10-15

    A new and extensive growth and growth boundary model for psychrotolerant Lactobacillus spp. was developed and validated for processed and unprocessed products of seafood and meat. The new model was developed by refitting and expanding an existing cardinal parameter model for growth and the growth boundary of lactic acid bacteria (LAB) in processed seafood (O. Mejlholm and P. Dalgaard, J. Food Prot. 70:2485-2497, 2007). Initially, to estimate values for the maximum specific growth rate at the reference temperature of 25 °C (μref) and the theoretical minimum temperature that prevents growth of psychrotolerant LAB (T(min)), the existing LAB model was refitted to data from experiments with seafood and meat products reported not to include nitrite or any of the four organic acids evaluated in the present study. Next, dimensionless terms modelling the antimicrobial effect of nitrite, and acetic, benzoic, citric and sorbic acids on growth of Lactobacillus sakei were added to the refitted model, together with minimum inhibitory concentrations determined for the five environmental parameters. The new model, including the effect of 12 environmental parameters as well as their interactive effects, was successfully validated using 229 growth rates (μ(max) values) for psychrotolerant Lactobacillus spp. in seafood and meat products. Average bias and accuracy factor values of 1.08 and 1.27, respectively, were obtained when observed and predicted μ(max) values of psychrotolerant Lactobacillus spp. were compared. Thus, on average, μ(max) values were only overestimated by 8%. The performance of the new model was equally good for seafood and meat products, and the importance of including the effect of acetic, benzoic, citric and sorbic acids, and to a lesser extent nitrite, in order to accurately predict growth of psychrotolerant Lactobacillus spp. was clearly demonstrated. The new model can be used to predict growth of psychrotolerant Lactobacillus spp. in seafood and meat products; e.g., prediction of the time to a critical cell concentration of bacteria is useful for establishing the shelf life. In addition, the high number of environmental parameters included in the new model makes it flexible and suitable for product development, as the effect of substituting one combination of preservatives with another can be predicted. In general, the performance of the new model was unacceptable for other types of LAB, including Carnobacterium spp., Leuconostoc spp. and Weissella spp. © 2013.
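    The bias and accuracy factors used to validate the model have standard definitions (Bf = 10^mean(log10(pred/obs)), Af = 10^mean(|log10(pred/obs)|)). The sketch below applies them to a handful of made-up growth rates, not the paper's 229 observations:

```python
# Bias factor and accuracy factor for observed vs. predicted growth rates.
# mu values are illustrative (1/h), chosen only to exercise the formulas.
import numpy as np

mu_obs = np.array([0.10, 0.25, 0.18, 0.32, 0.08])    # observed mu_max
mu_pred = np.array([0.12, 0.26, 0.17, 0.36, 0.09])   # model-predicted mu_max

log_ratio = np.log10(mu_pred / mu_obs)
bias_factor = float(10 ** log_ratio.mean())          # >1 means over-prediction on average
accuracy_factor = float(10 ** np.abs(log_ratio).mean())  # average fold-discrepancy
```

    A bias factor of 1.08 corresponds to the abstract's statement that growth rates were overestimated by 8% on average; the accuracy factor is always at least as large as the bias factor because it cannot let over- and under-predictions cancel.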

  7. Prediction of lake depth across a 17-state region in the United States

    USGS Publications Warehouse

    Oliver, Samantha K.; Soranno, Patricia A.; Fergus, C. Emi; Wagner, Tyler; Winslow, Luke A.; Scott, Caren E.; Webster, Katherine E.; Downing, John A.; Stanley, Emily H.

    2016-01-01

    Lake depth is an important characteristic for understanding many lake processes, yet it is unknown for the vast majority of lakes globally. Our objective was to develop a model that predicts lake depth using map-derived metrics of lake and terrestrial geomorphic features. Building on previous models that use local topography to predict lake depth, we hypothesized that regional differences in topography, lake shape, or sedimentation processes could lead to region-specific relationships between lake depth and the mapped features. We therefore used a mixed modeling approach that included region-specific model parameters. We built models using lake and map data from LAGOS, which includes 8164 lakes with maximum depth (Zmax) observations. The model was used to predict depth for all lakes ≥4 ha (n = 42 443) in the study extent. Lake surface area and maximum slope in a 100 m buffer were the best predictors of Zmax. Interactions between surface area and topography occurred at both the local and regional scale; surface area had a larger effect in steep terrain, so large lakes embedded in steep terrain were much deeper than those in flat terrain. Despite a large sample size and inclusion of regional variability, model performance (R2 = 0.29, RMSE = 7.1 m) was similar to other published models. The relative error varied by region, however, highlighting the importance of taking a regional approach to lake depth modeling. Additionally, we provide the largest known collection of observed and predicted lake depth values in the United States.
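    The mapped-predictor modeling described above can be caricatured in a few lines: regress log depth on surface area, buffer slope, their interaction, and a region term (a fixed-effect stand-in for the paper's mixed model). All data and coefficients below are synthetic.

```python
# Hedged sketch: OLS on log-transformed maximum depth with an
# area-by-slope interaction and region dummies (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n = 500
log_area = rng.uniform(1.0, 4.0, n)      # log10 surface area (ha), synthetic
slope = rng.uniform(0.0, 30.0, n)        # max slope in a 100 m buffer (%), synthetic
region = rng.integers(0, 3, n)           # three hypothetical regions
region_fx = np.array([0.0, 0.3, -0.2])[region]
log_zmax = (0.2 + 0.25 * log_area + 0.02 * slope
            + 0.01 * log_area * slope + region_fx + rng.normal(0.0, 0.15, n))

# Design matrix: intercept, predictors, interaction, region dummies
X = np.column_stack([np.ones(n), log_area, slope, log_area * slope,
                     region == 1, region == 2]).astype(float)
beta = np.linalg.lstsq(X, log_zmax, rcond=None)[0]
pred = X @ beta
r2 = 1.0 - np.sum((log_zmax - pred) ** 2) / np.sum((log_zmax - log_zmax.mean()) ** 2)
```

    The interaction term is what lets surface area matter more in steep terrain, mirroring the paper's finding; region dummies are the simplest stand-in for region-specific parameters.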

  8. Methods and Techniques for Risk Prediction of Space Shuttle Upgrades

    NASA Technical Reports Server (NTRS)

    Hoffman, Chad R.; Pugh, Rich; Safie, Fayssal

    1998-01-01

    Since the Space Shuttle Accident in 1986, NASA has been trying to incorporate probabilistic risk assessment (PRA) in decisions concerning the Space Shuttle and other NASA projects. One major study NASA is currently conducting is in the PRA area in establishing an overall risk model for the Space Shuttle System. The model is intended to provide a tool to predict the Shuttle risk and to perform sensitivity analyses and trade studies including evaluation of upgrades. Marshall Space Flight Center (MSFC) and its prime contractors including Pratt and Whitney (P&W) are part of the NASA team conducting the PRA study. MSFC responsibility involves modeling the External Tank (ET), the Solid Rocket Booster (SRB), the Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME). A major challenge that faced the PRA team was modeling the shuttle upgrades, mainly the P&W High Pressure Fuel Turbopump (HPFTP) and the High Pressure Oxidizer Turbopump (HPOTP). The purpose of this paper is to discuss the various methods and techniques used for predicting the risk of the P&W redesigned HPFTP and HPOTP.

  9. Further testing and development of simulation models for UT inspections of armor

    NASA Astrophysics Data System (ADS)

    Margetan, Frank J.; Richter, Nathaniel; Thompson, R. Bruce

    2012-05-01

    In previous work we introduced an approach for simulating ultrasonic pulse/echo immersion inspections of multi-layer armor panels. Model inputs include the thickness, density, velocity and attenuation of each armor layer, the focal properties of the transducer, and a measured calibration signal. The basic model output is a response-versus-time waveform (ultrasonic A-scan) which includes echoes from all interfaces including those arising from reverberations within layers. Such A-scans can be predicted both for unflawed panels and panels containing a large disbond at any given interface. In this paper we continue our testing of the simulation software, applying it now to an armor panel consisting of SiC ceramic tiles fully embedded in a titanium-alloy matrix. An interesting specimen of such armor became available in which some tile/metal interfaces appear to be well bonded, while others have disbonded areas of various sizes. We compare measured and predicted A-scans for UT inspections, and also demonstrate an extension of the model to predict ultrasonic C-scans over regions containing a small, isolated disbond.

  10. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

    PubMed Central

    Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible-markup-language (XML) -based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed to various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125

  11. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    PubMed

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

This paper describes Gaussian process regression (GPR) models presented in Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms, which are widely used for constructing predictive models that take the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for predictive estimates. Both features are needed to establish a foundation for uncertainty quantification analysis. Among the various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because it can represent complex input-output relationships without predefining a set of basis functions, and it predicts a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
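The confidence bounds that PMML 4.3 adds for GPR can be illustrated with a minimal numeric sketch of what a GPR predictor computes: a posterior mean plus a pointwise standard deviation. This is a generic one-dimensional example with an RBF kernel and fixed hyperparameters, not the representation or data used in the papers above:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, sigma_f=1.0):
    # Squared-exponential (RBF) kernel on 1-D inputs
    d = a[:, None] - b[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length_scale) ** 2)

def gpr_predict(x_train, y_train, x_test, noise_var=1e-2):
    # Standard GP posterior: mean and pointwise standard deviation
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

x = np.linspace(0.0, 2.0 * np.pi, 8)
y = np.sin(x)
mean, std = gpr_predict(x, y, np.array([1.0]))
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% confidence bounds
```

The pair (mean, std) is exactly the information PMML 4.3's confidence-bound feature is designed to carry alongside the point prediction.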

  12. Real-time assessments of water quality: expanding nowcasting throughout the Great Lakes

    USGS Publications Warehouse

    ,

    2013-01-01

    Nowcasts are systems that inform the public of current bacterial water-quality conditions at beaches on the basis of predictive models. During 2010–12, the U.S. Geological Survey (USGS) worked with 23 local and State agencies to improve existing operational beach nowcast systems at 4 beaches and expand the use of predictive models in nowcasts at an additional 45 beaches throughout the Great Lakes. The predictive models were specific to each beach, and the best model for each beach was based on a unique combination of environmental and water-quality explanatory variables. The variables used most often in models to predict Escherichia coli (E. coli) concentrations or the probability of exceeding a State recreational water-quality standard included turbidity, day of the year, wave height, wind direction and speed, antecedent rainfall for various time periods, and change in lake level over 24 hours. During validation of 42 beach models during 2012, the models performed better than the current method to assess recreational water quality (previous day's E. coli concentration). The USGS will continue to work with local agencies to improve nowcast predictions, enable technology transfer of predictive model development procedures, and implement more operational systems during 2013 and beyond.
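A beach nowcast of the kind described typically maps a few environmental readings to a probability of exceeding the recreational standard. The sketch below uses a logistic-regression form with entirely hypothetical coefficients (real nowcast models are fit per beach from monitoring data, and the variable set varies by site):

```python
import math

# Hypothetical, illustrative coefficients; operational models are fit
# separately for each beach from its own monitoring history.
COEF = {
    "intercept": -3.0,
    "turbidity_ntu": 0.03,   # turbidity
    "rain_48h_mm": 0.04,     # antecedent rainfall
    "wave_height_m": 0.8,    # wave height
}

def exceedance_probability(turbidity_ntu, rain_48h_mm, wave_height_m):
    """Probability that E. coli exceeds the recreational standard."""
    z = (COEF["intercept"]
         + COEF["turbidity_ntu"] * turbidity_ntu
         + COEF["rain_48h_mm"] * rain_48h_mm
         + COEF["wave_height_m"] * wave_height_m)
    return 1.0 / (1.0 + math.exp(-z))

# A nowcast posts an advisory when the probability crosses a chosen threshold.
calm = exceedance_probability(5.0, 0.0, 0.2)
stormy = exceedance_probability(80.0, 30.0, 1.2)
```
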

  13. Predictive power of food web models based on body size decreases with trophic complexity.

    PubMed

    Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo

    2018-05-01

Food web models parameterised using body size show promise for predicting trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r² = 0.92), but with increasing complexity the predictive power decreased, with model IS consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects, and show that the model's shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models that explicitly include and explore indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.

  14. Multifactorial disease risk calculator: Risk prediction for multifactorial disease pedigrees.

    PubMed

    Campbell, Desmond D; Li, Yiming; Sham, Pak C

    2018-03-01

Construction of multifactorial disease models from epidemiological findings and their application to disease pedigrees for risk prediction is nontrivial for all but the simplest of cases. The Multifactorial Disease Risk Calculator is a web tool facilitating this. It provides a user-friendly interface, extending a reported methodology based on a liability-threshold model. Multifactorial disease models incorporating all of the following features in combination are handled: quantitative risk factors (including polygenic scores), categorical risk factors (including major genetic risk loci), stratified age-of-onset curves, and the partition of the population variance in disease liability into genetic, shared, and unique environment effects. It allows the application of such models to disease pedigrees. Pedigree-related outputs are (i) individual disease risk for pedigree members, (ii) n-year risk for unaffected pedigree members, and (iii) the disease pedigree's joint liability distribution. Risk prediction for each pedigree member is based on using the constructed disease model to appropriately weigh the evidence on disease risk available from personal attributes and family history. This evidence is used to construct the disease pedigree's joint liability distribution, from which lifetime and n-year risk can be predicted. Example disease models and pedigrees are provided at the website and are used in accompanying tutorials to illustrate the features available. The website is built on an R package which provides the functionality for pedigree validation, disease model construction, and risk prediction. Website: http://grass.cgs.hku.hk:3838/mdrc/current. © 2017 WILEY PERIODICALS, INC.
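The liability-threshold machinery underlying such a calculator can be sketched for a single individual: the disease prevalence fixes a threshold on a standard-normal liability scale, and known risk factors shift the individual's expected liability. This is a simplified illustration of the general idea, not the tool's implementation (which handles whole pedigrees jointly):

```python
from statistics import NormalDist

STD_NORMAL = NormalDist()

def liability_threshold_risk(prevalence, mean_shift, residual_sd):
    """Disease risk for an individual whose known risk factors shift expected
    liability by `mean_shift` (in population-SD units), with `residual_sd`
    of liability left unexplained."""
    threshold = STD_NORMAL.inv_cdf(1.0 - prevalence)  # population threshold
    return 1.0 - NormalDist(mean_shift, residual_sd).cdf(threshold)

baseline = liability_threshold_risk(0.01, 0.0, 1.0)  # average person
elevated = liability_threshold_risk(0.01, 1.0, 1.0)  # +1 SD of liability
```

For an average person the computed risk recovers the population prevalence, as the model requires; shifting the liability mean upward raises the risk nonlinearly.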

  15. Predicting cognitive function from clinical measures of physical function and health status in older adults.

    PubMed

    Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa

    2015-01-01

Current research suggests that the neuropathology of dementia, including brain changes leading to memory impairment and cognitive decline, is evident years before the onset of the disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk for developing dementia. Therefore, identifying biomarkers that can be easily assessed within the clinical setting and that predict cognitive decline is important. Early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: 1) based on baseline cognitive function, 2) based on the variables consistently selected in every cross-validation loop, and 3) a full model based on all 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47. This was smaller than that of the full model (115.33) and of the model based on baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. In summary, we built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. Restricting the model to these top variables reduced its complexity while also improving the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
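The L1-L2 regularized (elastic net) selection step can be sketched with a small coordinate-descent solver. This is a generic illustration on synthetic data with six predictors, two of them informative, not the study's code or variables:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2
    + alpha * (l1_ratio * ||b||_1 + 0.5 * (1 - l1_ratio) * ||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]  # partial residual excluding j
            rho = X[:, j] @ r_j / n
            denom = (X[:, j] @ X[:, j]) / n + alpha * (1.0 - l1_ratio)
            b[j] = soft_threshold(rho, alpha * l1_ratio) / denom
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(80)
coef = elastic_net(X, y)
# The L1 part drives the four uninformative coefficients to (near) zero,
# which is the variable-selection behaviour the study exploits.
```
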

  16. Thermophysical properties of liquid UO2, ZrO2 and corium by molecular dynamics and predictive models

    NASA Astrophysics Data System (ADS)

    Kim, Woong Kee; Shim, Ji Hoon; Kaviany, Massoud

    2017-08-01

    Predicting the fate of accident-melted nuclear fuel-cladding requires the understanding of the thermophysical properties which are lacking or have large scatter due to high-temperature experimental challenges. Using equilibrium classical molecular dynamics (MD), we predict the properties of melted UO2 and ZrO2 and compare them with the available experimental data and the predictive models. The existing interatomic potential models have been developed mainly for the polymorphic solid phases of these oxides, so they cannot be used to predict all the properties accurately. We compare and decipher the distinctions of those MD predictions using the specific property-related autocorrelation decays. The predicted properties are density, specific heat, heat of fusion, compressibility, viscosity, surface tension, and the molecular and electronic thermal conductivities. After the comparisons, we provide readily usable temperature-dependent correlations (including UO2-ZrO2 compounds, i.e. corium melt).

  17. A prediction and damage assessment model for snowmelt flood events in middle and high latitudes Region

    NASA Astrophysics Data System (ADS)

    Qiao, C.; Huang, Q.; Chen, T.; Zhang, X.

    2017-12-01

In the context of global warming, snowmelt flood events in mountainous areas of the middle and high latitudes are increasingly frequent and cause severe casualties and property damage. Prediction and risk assessment of snowmelt floods is therefore of great importance for water resources management and for flood warning and prevention. Based on remote sensing and GIS techniques, the relationships among the variables influencing snowmelt floods, such as snow area, snow depth, air temperature, precipitation, topography and land cover, are analyzed, and a prediction and damage assessment model for snowmelt floods is developed. The model analyzes and predicts the flood submerged area, flood depth, flood grade, and the damage to different underlying surfaces in the study area over a given time period, based on estimates of the snowmelt amount, the snowmelt runoff, and the direction and velocity of the flood. It was then used to predict a snowmelt flood event in the Ertis River Basin in northern Xinjiang, China, between March and June 2005, and to assess its damages, including damage to roads, transmission lines and settlements caused by the floods and possible landslides, using hydrological and meteorological data, snow parameter data, DEM data and land use data. A comparison between the model predictions and observations, including flood measurements and disaster loss data, suggests that the model performs well in predicting the strength and impact area of a snowmelt flood and in assessing its damage. The model should be helpful for prediction and damage assessment of spring snowmelt flood events in mountainous areas of the middle and high latitudes, which has great social and economic significance because it provides a relatively reliable method for snowmelt flood prediction and can reduce the damage caused by snowmelt floods.
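Snowmelt amount, the first quantity such a model needs, is commonly estimated from air temperature with a temperature-index (degree-day) relation. A minimal sketch with a hypothetical degree-day factor (the abstract does not state which melt formulation the authors used):

```python
def degree_day_melt(daily_mean_temps_c, ddf=4.0, t_base=0.0):
    """Daily melt depth in mm water equivalent: melt = ddf * max(T - T_base, 0).
    ddf (mm per deg C per day) is a hypothetical value; in practice it is
    calibrated per basin and snowpack state."""
    return [ddf * max(t - t_base, 0.0) for t in daily_mean_temps_c]

# A spring warming spell: melt accumulates only on above-freezing days.
melt = degree_day_melt([-5.0, -1.0, 2.0, 6.0])
total_mm = sum(melt)
```

The daily melt series would then feed a runoff-routing step that propagates the flood wave over the DEM.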

  18. Clinical Prediction Model for Time in Therapeutic Range While on Warfarin in Newly Diagnosed Atrial Fibrillation.

    PubMed

    Williams, Brent A; Evans, Michael A; Honushefsky, Ashley M; Berger, Peter B

    2017-10-12

Though warfarin has historically been the primary oral anticoagulant for stroke prevention in newly diagnosed atrial fibrillation (AF), several new direct oral anticoagulants may be preferred when anticoagulation control with warfarin is expected to be poor. This study developed a prediction model for time in therapeutic range (TTR) among newly diagnosed AF patients on newly initiated warfarin as a tool to assist decision making between warfarin and direct oral anticoagulants. This electronic medical record-based, retrospective study included newly diagnosed, nonvalvular AF patients with no recent warfarin exposure receiving primary care services through a large healthcare system in rural Pennsylvania. TTR was estimated as the percentage of time international normalized ratio (INR) measurements were between 2.0 and 3.0 during the first year following warfarin initiation. Candidate predictors of TTR were chosen from data elements collected during usual clinical care. A TTR prediction model was developed and temporally validated, and its predictive performance was compared with the SAMe-TT₂R₂ score (sex, age, medical history, treatment, tobacco, race) using R² and c-statistics. A total of 7877 newly diagnosed AF patients met study inclusion criteria. Median (interquartile range) TTR within the first year of starting warfarin was 51% (32, 67). Of 85 candidate predictors evaluated, 15 were included in the final validated model, with an R² of 15.4%. The proposed model showed better predictive performance than the SAMe-TT₂R₂ score (R² = 3.0%). The proposed prediction model may assist decision making on the proper mode of oral anticoagulant among newly diagnosed AF patients. However, predicting TTR on warfarin remains challenging. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
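TTR as defined here (percentage of time the INR lies between 2.0 and 3.0) is conventionally computed by linear interpolation between successive INR measurements, the Rosendaal method; a sketch:

```python
def time_in_therapeutic_range(days, inrs, low=2.0, high=3.0):
    """Percent of follow-up time with INR in [low, high], assuming the INR
    varies linearly between successive measurements (Rosendaal method)."""
    in_range, total = 0.0, 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        lo_i, hi_i = min(i0, i1), max(i0, i1)
        if hi_i == lo_i:
            frac = 1.0 if low <= i0 <= high else 0.0
        else:
            # Fraction of the interpolated segment lying inside [low, high]
            overlap = max(0.0, min(hi_i, high) - max(lo_i, low))
            frac = overlap / (hi_i - lo_i)
        in_range += frac * span
        total += span
    return 100.0 * in_range / total

# INR climbing from 1.0 to 3.0 over 10 days (half in range), then drifting
# from 3.0 down to 2.5 over the next 10 days (fully in range).
ttr = time_in_therapeutic_range([0, 10, 20], [1.0, 3.0, 2.5])
```
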

  19. Human and server docking prediction for CAPRI round 30-35 using LZerD with combined scoring functions.

    PubMed

    Peterson, Lenna X; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke

    2017-03-01

We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriately soft representation of the protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which is an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP, as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues' spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, that is, whether or not the pool includes near-native models, can be predicted from the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, the findings on scoring functions are expected to be universally applicable to other docking methods. Proteins 2017; 85:513-527. © 2016 Wiley Periodicals, Inc.

  20. Prediction of MeV electron fluxes throughout the outer radiation belt using multivariate autoregressive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.

The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. In order to provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Furthermore, we selected model explanatory parameters from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L ≥ 4.8 and L ≥ 5.6, the electron flux is predicted most accurately by also including the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the Kp index is better than Dst. Finally, a test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.

  1. Prediction of MeV electron fluxes throughout the outer radiation belt using multivariate autoregressive models

    DOE PAGES

    Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.; ...

    2015-12-22

The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. In order to provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Furthermore, we selected model explanatory parameters from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L ≥ 4.8 and L ≥ 5.6, the electron flux is predicted most accurately by also including the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the Kp index is better than Dst. Finally, a test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.

  2. Prediction of MeV electron fluxes throughout the outer radiation belt using multivariate autoregressive models

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.; Spence, Harlan E.

    2015-12-01

The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. To provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Model explanatory parameters were selected from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L ≥ 4.8 and L ≥ 5.6, the electron flux is predicted most accurately by also including the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the Kp index is better than Dst. A test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.
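The core of such a scheme, an autoregressive state propagated forward and corrected by a Kalman filter as new flux measurements arrive, can be sketched in scalar form. The AR coefficient and noise variances below are illustrative placeholders, not the fitted values from these papers:

```python
def kalman_ar1(observations, phi=0.9, q=0.1, r=0.2):
    """One-step-ahead prediction of an AR(1) state (e.g., log-flux anomaly)
    with a scalar Kalman filter. phi, q (process noise) and r (measurement
    noise) are hypothetical values; in practice they are fit to data."""
    x, p = 0.0, 1.0  # state estimate and its variance
    one_step_predictions = []
    for z in observations:
        # Predict: AR(1) propagation of state and variance
        x, p = phi * x, phi * phi * p + q
        one_step_predictions.append(x)
        # Update: blend the prediction with the new observation
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return one_step_predictions

# Feeding a steady elevated signal: predictions rise toward phi * signal.
preds = kalman_ar1([1.0] * 50)
```

Multi-day alerts as in the papers come from iterating the prediction step several times without updates, at each L shell.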

  3. RACER a Coarse-Grained RNA Model for Capturing Folding Free Energy in Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Cheng, Sara; Bell, David; Ren, Pengyu

RACER is a coarse-grained RNA model that can be used in molecular dynamics simulations to predict native structures and sequence-specific variation of the folding free energy of various RNA structures. RACER is capable of accurate prediction of the native structures of duplexes and hairpins (average RMSD of 4.15 angstroms), and it captures sequence-specific variation of free energy in excellent agreement with experimentally measured stabilities (r² = 0.98). The RACER model implements a new effective non-bonded potential and a re-parameterization of the hydrogen bond and Debye-Huckel potentials. Insights from the RACER model include the importance of treating pairing and stacking interactions separately in order to distinguish folded and unfolded states, and the identification of hydrogen bonding, base stacking, and electrostatic interactions as essential driving forces for RNA folding. Future applications of the RACER model include predicting free energy landscapes of more complex RNA structures and use of RACER in multiscale simulations.

  4. Optimal plant nitrogen use improves model representation of vegetation response to elevated CO2

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Kern, Melanie; Engel, Jan; Zaehle, Sönke

    2017-04-01

Existing global vegetation models often cannot accurately represent observed ecosystem behaviour under transient conditions such as elevated atmospheric CO2, a problem that can be attributed to an inflexibility in model representation of plant responses. Plant optimality concepts have been proposed as a solution to this problem as they offer a way to represent plastic plant responses in complex models. Here we present a novel, next generation vegetation model which includes optimal nitrogen allocation to and within the canopy as well as optimal biomass allocation between above- and belowground components in response to nutrient and water availability. The underlying hypothesis is that plants adjust their use of nitrogen in response to environmental conditions and nutrient availability in order to maximise biomass growth. We show that for two FACE (Free Air CO2 enrichment) experiments, the Duke forest and Oak Ridge forest sites, the model can better predict vegetation responses over the duration of the experiment when optimal processes are included. Specifically, under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration as well as increased biomass allocation to fine roots, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry predict a quick onset of N limitation.

  5. Comparison of Models and Whole-Genome Profiling Approaches for Genomic-Enabled Prediction of Septoria Tritici Blotch, Stagonospora Nodorum Blotch, and Tan Spot Resistance in Wheat.

    PubMed

    Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E

    2017-07-01

The leaf spotting diseases in wheat, which include Septoria tritici blotch (STB), Stagonospora nodorum blotch (SNB), and tan spot (TS), pose challenges to breeding programs in selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection, which uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of the least-squares (LS) approach with genomic-enabled prediction models including genomic best linear unbiased predictor (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces with markers (RKHS-M), as well as a pedigree-based model (RKHS-P) and RKHS with markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP the highest; the genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches, genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq), for prediction. While GBS markers performed slightly better than DArTseq, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.
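Genomic prediction with GBLUP is equivalent to ridge regression on the marker matrix (RR-BLUP): marker effects are shrunk jointly, and genomic estimated breeding values for unphenotyped lines follow by multiplying their markers through. A generic sketch on simulated 0/1/2 allele counts, not the study's data or software:

```python
import numpy as np

def rr_blup(markers_train, y_train, markers_new, lam=1.0):
    """Ridge-regression BLUP: shrunken marker effects give genomic estimated
    breeding values (GEBVs) for new, unphenotyped lines. lam plays the role
    of the variance-component ratio and is a hypothetical value here."""
    n, p = markers_train.shape
    effects = np.linalg.solve(
        markers_train.T @ markers_train + lam * np.eye(p),
        markers_train.T @ y_train,
    )
    return markers_new @ effects

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(120, 50)).astype(float)  # 0/1/2 allele counts
true_effects = rng.standard_normal(50) * 0.3
g = M @ true_effects                                   # true breeding values
y = g + rng.standard_normal(120)                       # phenotypes with noise
gebv = rr_blup(M[:100], y[:100], M[100:])              # predict held-out lines
accuracy = np.corrcoef(gebv, g[100:])[0, 1]            # prediction accuracy
```

The correlation between predicted and true breeding values on held-out lines is the "prediction accuracy" reported in studies like the one above.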

6. Estimation of a Model's Marginal Likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than estimates from the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
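The behaviour of these estimators can be illustrated on a toy conjugate model where the marginal likelihood is known exactly: y ~ N(θ, 1) with prior θ ~ N(0, 1), so the marginal is m(y) = N(y; 0, 2). The sketch below shows AME (likelihood averaged over prior draws) and HME (harmonic mean over posterior draws); HME's well-known instability is precisely what motivates stabilized and thermodynamic-integration variants like those compared in the study:

```python
import math
import random

random.seed(1)

def log_lik(theta, y=1.0):
    # Likelihood of the single observation y ~ N(theta, 1)
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * (y - theta) ** 2

N = 50_000

# Arithmetic mean estimator: average the likelihood over prior draws.
prior_draws = (random.gauss(0.0, 1.0) for _ in range(N))
ame = sum(math.exp(log_lik(t)) for t in prior_draws) / N

# Harmonic mean estimator: posterior for y=1 is theta | y ~ N(0.5, sqrt(0.5)).
post_draws = (random.gauss(0.5, math.sqrt(0.5)) for _ in range(N))
hme = N / sum(math.exp(-log_lik(t)) for t in post_draws)

# Exact marginal likelihood: N(1; 0, 2)
exact = math.exp(-0.5 * math.log(2.0 * math.pi * 2.0) - 1.0 / 4.0)
```

AME converges reliably here because the likelihood is bounded; the HME sum is dominated by rare posterior draws with tiny likelihood (its variance is infinite for this model), which is the instability the stabilized HME and TIE address.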

  7. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, acausal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis work was done after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To that end, a set of ad hoc experimental scenarios was designed, including different heating and control strategies, and simulations with the cryogenic circuit module of 4C were performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  8. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    PubMed

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

An important aspect of chemoinformatics and materials informatics is the use of machine learning algorithms to build Quantitative Structure-Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC can be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in materials informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
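The consensus loop at the heart of RANSAC is compact: fit a model to a random minimal sample, count the points it explains within a tolerance, and keep the model with the largest inlier set. A minimal line-fitting sketch (not the paper's QSAR workflow, which adds descriptor selection and applicability-domain checks on top):

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Robustly fit y = a*x + b: keep the model from a random two-point
    sample that gathers the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, no unique line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -25.0)]
(slope, intercept), inliers = ransac_line(pts)
```

Because the outliers never join a large consensus set, the recovered line ignores them entirely, which is the outlier-removal behaviour the paper exploits for QSAR model building.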

  9. V&V framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard G.; Maniaci, David Charles; Naughton, Jonathan W.

    2015-09-01

    A Verification and Validation (V&V) framework is presented for the development and execution of coordinated modeling and experimental programs to assess the predictive capability of computational models of complex systems through focused, well-structured, and formal processes. The elements of the framework are based on established V&V methodology developed by various organizations, including the Department of Energy, the National Aeronautics and Space Administration, the American Institute of Aeronautics and Astronautics, and the American Society of Mechanical Engineers. Four main topics are addressed: 1) program planning based on expert elicitation of the modeling physics requirements, 2) experimental design for model assessment, 3) uncertainty quantification for experimental observations and computational model simulations, and 4) assessment of the model predictive capability. The audience for this document includes program planners, modelers, experimentalists, V&V specialists, and customers of the modeling results.

  10. Instrument Landing System performance prediction

    DOT National Transportation Integrated Search

    1974-01-01

    Further achievements made in fiscal year 1973 on the development of an Instrument Landing System (ILS) performance prediction model are reported. These include ILS localizer scattering from generalized slanted rectangular, triangular and cyli...

  11. Qualitative simulation for process modeling and control

    NASA Technical Reports Server (NTRS)

    Dalle Molle, D. T.; Edgar, T. F.

    1989-01-01

    A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.
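    As a concrete (hypothetical) instance of the system class studied above, a first-order plant under PI control can be simulated numerically; with the gains chosen here the closed loop is overdamped and non-oscillatory, matching one of the qualitative behavior classes the abstract distinguishes. All parameter values are invented for illustration:

```python
def simulate_pi(tau=1.0, kp=2.0, ki=1.0, setpoint=1.0, dt=0.001, t_end=20.0):
    """Forward-Euler simulation of a first-order plant
    dx/dt = (u - x) / tau under PI control u = kp*e + ki*integral(e)."""
    x, integ, t, overshoot = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        e = setpoint - x
        integ += e * dt                 # integral of the error
        u = kp * e + ki * integ         # PI control law
        x += (u - x) / tau * dt         # plant update
        overshoot = max(overshoot, x - setpoint)
        t += dt
    return x, overshoot

# with these gains the closed-loop poles are real: no oscillation
x_final, overshoot = simulate_pi()
```

    A qualitative simulator would enumerate all such behaviors (monotone approach, overshoot, oscillation) without committing to numeric parameter values.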

  12. Preoperative predictive model of cervical lymph node metastasis combining fluorine-18 fluorodeoxyglucose positron-emission tomography/computerized tomography findings and clinical factors in patients with oral or oropharyngeal squamous cell carcinoma.

    PubMed

    Mochizuki, Yumi; Omura, Ken; Nakamura, Shin; Harada, Hiroyuki; Shibuya, Hitoshi; Kurabayashi, Toru

    2012-02-01

    This study aimed to construct a preoperative predictive model of cervical lymph node metastasis using fluorine-18 fluorodeoxyglucose positron-emission tomography/computerized tomography ((18)F-FDG PET/CT) findings in patients with oral or oropharyngeal squamous cell carcinoma (SCC). Forty-nine such patients undergoing preoperative (18)F-FDG PET/CT and neck dissection or lymph node biopsy were enrolled. Retrospective comparisons with spatial correlation between PET/CT findings and the anatomical sites, based on histopathological examinations of surgical specimens, were performed. We calculated a logistic regression model including the SUVmax-related variable. When using the optimal cutoff point criterion of probabilities calculated from the model that included either clinical factors and delayed-phase SUVmax ≥0.087 or clinical factors and maximum standardized uptake value (SUV) increasing rate (SUV-IR) ≥ 0.100, sensitivity, specificity, and accuracy increased significantly (87.5%, 65.7%, and 75.2%, respectively). The use of predictive models that include clinical factors together with delayed-phase SUVmax and SUV-IR improves preoperative nodal diagnosis. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Linking macroecology and community ecology: refining predictions of species distributions using biotic interaction networks.

    PubMed

    Staniczenko, Phillip P A; Sivasubramaniam, Prabu; Suttle, K Blake; Pearson, Richard G

    2017-06-01

    Macroecological models for predicting species distributions usually only include abiotic environmental conditions as explanatory variables, despite knowledge from community ecology that all species are linked to other species through biotic interactions. This disconnect is largely due to the different spatial scales considered by the two sub-disciplines: macroecologists study patterns at large extents and coarse resolutions, while community ecologists focus on small extents and fine resolutions. A general framework for including biotic interactions in macroecological models would help bridge this divide, as it would allow for rigorous testing of the role that biotic interactions play in determining species ranges. Here, we present an approach that combines species distribution models with Bayesian networks, which enables the direct and indirect effects of biotic interactions to be modelled as propagating conditional dependencies among species' presences. We show that including biotic interactions in distribution models for species from a California grassland community results in better range predictions across the western USA. This new approach will be important for improving estimates of species distributions and their dynamics under environmental change. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
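    The abstract's idea of propagating conditional dependencies among species' presences can be illustrated with a toy two-link Bayesian chain; the probabilities below are invented, whereas the paper's networks are fit to actual community data:

```python
def marginal(p_parent, p_given_present, p_given_absent):
    """Marginalize a species' presence probability over its interaction
    partner: P(A) = P(A|B)*P(B) + P(A|not B)*(1 - P(B))."""
    return p_given_present * p_parent + p_given_absent * (1.0 - p_parent)

# toy chain B -> A -> C: B's probability comes from an abiotic SDM,
# the conditional tables encode (facilitative) biotic interactions
p_b = 0.6                        # abiotic SDM output for species B
p_a = marginal(p_b, 0.8, 0.2)    # A is more likely where B occurs
p_c = marginal(p_a, 0.7, 0.1)    # C depends on A, so B acts indirectly
```

    Chaining `marginal` like this is how a direct interaction (B on A) produces the indirect effect (B on C) the abstract mentions.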

  14. New Symptom-Based Predictive Tool for Survival at Seven and Thirty Days Developed by Palliative Home Care Teams

    PubMed Central

    Bescos, Mar; Barcons, Miquel; Torrubia, Pilar; Trujillano, Javier; Requena, Antonio

    2014-01-01

Aim: This study sought to develop models to predict survival at 7 and 30 days based on symptoms detected by palliative home care teams (PHCTs). Materials and methods: This prospective analytic study included a 6-month recruitment period with patient monitoring until death or 180 days after recruitment. The inclusion criteria consisted of age greater than 18 years, advanced cancer, and treatment provided by participating PHCTs between April and July 2009. The study variables included death at 7 or 30 days, survival time, age, gender, place of residence, type of tumor and extension, presence of 11 signs and symptoms measured with a 0–3 Likert scale, functional and cognitive status, and use of a subcutaneous butterfly needle. The statistics applied included a descriptive analysis according to the percentage or mean ± standard deviation. For symptom comparison between surviving and nonsurviving patients, the χ2 test was used. Classification and regression tree (CART) methodology was used for model development. An internal validation system (cross-validation with 10 partitions) was used to ensure generalization of the models. The area under the receiver operating characteristics (ROC) curve was calculated (with a 95% confidence interval) to assess the validation of the models. Results: A total of 698 patients were included. The mean age of the patients was 73.7±12 years, and 60.3% were male. The most frequent type of neoplasm was digestive (37.6%). The mean Karnofsky score was 51.8±14, the patients' cognitive status according to the Pfeiffer test was 2.6±4 errors, and 8.3% of patients required a subcutaneous butterfly needle. Each model provided 8 decision rules with a probability assignment range between 2.2% and 99.1%. The model used to predict the probability of death at 7 days included the presence of anorexia and dysphagia and the level of consciousness, and this model produced areas under the curve (AUCs) of 0.88 (0.86–0.90) and 0.81 (0.79–0.83). 
The model used to predict the probability of death at 30 days included the presence of asthenia and anorexia and the level of consciousness, and this model produced AUCs of 0.78 (0.77–0.80) and 0.77 (0.75–0.79). Conclusion: For patients with advanced cancer treated by PHCTs, the use of classification schemes and decision trees based on specific symptoms can help clinicians predict survival at 7 and 30 days. PMID:24922117
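    The CART methodology mentioned above grows trees by repeatedly choosing impurity-minimizing splits. A sketch of a single such split on a hypothetical 0-3 symptom score (toy data, not the study's) could look like:

```python
def gini(labels):
    """Gini impurity of a binary label set: 2*p*(1-p)."""
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """One CART step: pick the threshold on a single ordinal feature
    (e.g. a 0-3 symptom score) minimizing weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(xs)
        if w < best[1]:
            best = (t, w)
    return best

# toy data: symptom severity 0-3 vs. death within 7 days (1 = died)
scores = [0, 0, 1, 1, 2, 2, 3, 3]
died   = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, impurity = best_split(scores, died)
```

    A full CART implementation recurses on each side of the split and then prunes; the leaves become the decision rules with attached probabilities described in the abstract.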

  15. The Impact of Different Complexity on Numerical Weather Predictions within the Coupled Global Online Modeling System

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Grell, G. A.; McKeen, S. A.; Ahmadov, R.

    2017-12-01

    The global Flow-following finite-volume Icosahedral Model (FIM), which was developed in the Global Systems Division of NOAA/ESRL, and the Finite-volume cubed-sphere dynamical core (FV3), developed by GFDL, have been coupled online with aerosol and gas-phase chemistry schemes (FIM-Chem and FV3-Chem). Within the aerosol and chemistry modules, the models handle wet and dry deposition, chemical reactions, aerosol direct and semi-direct effects, anthropogenic emissions, biogenic emissions, biomass burning, and dust and sea-salt emissions. They are able to provide chemical weather predictions at various spatial resolutions and with different levels of complexity. FIM-Chem is also able to quantify the impact of aerosols on numerical weather predictions (NWP). Currently, three different chemical schemes have been coupled with the FIM model. The simplest aerosol modules are from the GOCART model with its simplified parameterization of sulfur/sulfate chemistry. The photochemical gas-phase mechanism RACM was included to determine the impact of additional complexity on the aerosol and gas simulations. We have also implemented a more sophisticated aerosol scheme that includes secondary organic aerosols (SOA) based on the VBS approach. The model performance has been evaluated by comparison with the ATom-1 observations. FIM-Chem is able to reproduce many observed aerosol and gas features very well. A five-day NWP run at 120 km horizontal resolution using FIM-Chem was performed for the end of July 2016 to quantify the impact of the three different chemical schemes on weather forecasts. Compared to a meteorological run that excludes the model chemical schemes, and is driven only by background AODs from the GFS model, the 5-day forecast results show a significant impact on weather predictions when the prognostic aerosol schemes are included. This includes convective precipitation, surface temperature, and 700 hPa air temperature. 
We also use FIM-Chem to investigate the 2012 South American Biomass Burning Analysis (SAMBBA) campaign period to determine whether more complex chemistry provides benefits for global numerical weather prediction.

  16. WIFIRE Data Model and Catalog for Wildfire Data and Tools

    NASA Astrophysics Data System (ADS)

    Altintas, I.; Crawl, D.; Cowart, C.; Gupta, A.; Block, J.; de Callafon, R.

    2014-12-01

    The WIFIRE project (wifire.ucsd.edu) is building an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. WIFIRE may be used by wildfire management authorities in the future to predict wildfire rate of spread and direction, and to assess the effectiveness of high-density sensor networks in improving fire and weather predictions. WIFIRE has created a data model for wildfire resources including sensed and archived data, sensors, satellites, cameras, modeling tools, workflows and social information including Twitter feeds. This data model and the associated wildfire resource catalog include a detailed description of the HPWREN sensor network, SDG&E's Mesonet, and NASA MODIS. In addition, the WIFIRE data model describes how to integrate the data from multiple heterogeneous sources to provide detailed fire-related information. The data catalog describes 'Observables' captured by each instrument using multiple ontologies including OGC SensorML and NASA SWEET. Observables include measurements such as wind speed, air temperature, and relative humidity, as well as their accuracy and resolution. We have implemented a REST service for publishing to and querying from the catalog using the Web Application Description Language (WADL). We are creating web-based user interfaces and mobile device apps that use the REST interface for dissemination to the wildfire modeling community and project partners covering academic, private, and government laboratories while generating value for emergency officials and the general public. Additionally, the Kepler scientific workflow system is instrumented to interact with this data catalog to access real-time streaming and archived wildfire data and stream it into dynamic data-driven wildfire models at scale.

  17. Elastic properties of graphene: A pseudo-beam model with modified internal bending moment and its application

    NASA Astrophysics Data System (ADS)

    Xia, Z. M.; Wang, C. G.; Tan, H. F.

    2018-04-01

    A pseudo-beam model with a modified internal bending moment is presented to predict the elastic properties of graphene, including the Young's modulus and Poisson's ratio. To overcome a drawback of existing molecular structural mechanics models, which account only for pure bending (a constant bending moment), the presented model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, analytical predictions of the Young's modulus and Poisson's ratio of graphene are derived from the strain energies using Castigliano's second theorem. The elastic properties of graphene are then calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is used to study the twisting-wrinkling characteristics of annular graphene. Owing to the modified internal bending moment, the wrinkling behavior of the graphene sheet is predicted accurately. The obtained results show that the pseudo-beam model predicts the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.
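    For reference, the strain-energy route to the elastic constants rests on Castigliano's second theorem; in its standard beam-theory form (a textbook result, not specific to this paper):

```latex
\delta_i \;=\; \frac{\partial U}{\partial F_i},
\qquad
U \;=\; \int_0^L \frac{M(x)^2}{2\,EI}\,\mathrm{d}x
```

    where $U$ is the strain energy, $F_i$ a generalized force, $\delta_i$ the displacement conjugate to $F_i$, $M(x)$ the bending moment, and $EI$ the bending stiffness of the (pseudo-)beam.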

  18. Exploring Human Diseases and Biological Mechanisms by Protein Structure Prediction and Modeling.

    PubMed

    Wang, Juexin; Luttrell, Joseph; Zhang, Ning; Khan, Saad; Shi, NianQing; Wang, Michael X; Kang, Jing-Qiong; Wang, Zheng; Xu, Dong

    2016-01-01

    Protein structure prediction and modeling provide tools for understanding protein function by computationally constructing protein structures from amino acid sequences and analyzing them. With the help of protein prediction tools and web servers, users can obtain three-dimensional protein structure models and gain insight into the functions of the proteins. In this chapter, we provide several examples of such studies. For example, structure modeling methods were used to investigate the relation between mutation-induced protein misfolding and human diseases including epilepsy and leukemia. Protein structure prediction and modeling were also applied to nucleotide-gated channels and their interaction interfaces to investigate their roles in brain and heart cells. In molecular mechanism studies of plants, the rice salinity tolerance mechanism was studied via structure modeling of crucial proteins identified by systems biology analysis; trait-associated protein-protein interactions were modeled, which sheds some light on the roles of mutations in soybean oil/protein content. In the age of precision medicine, we believe protein structure prediction and modeling will play increasingly important roles in investigating the biomedical mechanisms of diseases and in drug design.

  19. Claims-based risk model for first severe COPD exacerbation.

    PubMed

    Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Schatz, Michael; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Merrigan, J F Philip; Duh, Mei Sheng

    2018-02-01

    To develop and validate a predictive model for first severe chronic obstructive pulmonary disease (COPD) exacerbation using health insurance claims data and to validate the risk measure of controller medication to total COPD treatment (controller and rescue) ratio (CTR). A predictive model was developed and validated in 2 managed care databases: Truven Health MarketScan database and Reliant Medical Group database. This secondary analysis assessed risk factors, including CTR, during the baseline period (Year 1) to predict risk of severe exacerbation in the at-risk period (Year 2). Patients with COPD who were 40 years or older and who had at least 1 COPD medication dispensed during the year following COPD diagnosis were included. Subjects with severe exacerbations in the baseline year were excluded. Risk factors in the baseline period were included as potential predictors in multivariate analysis. Performance was evaluated using C-statistics. The analysis included 223,824 patients. The greatest risk factors for first severe exacerbation were advanced age, chronic oxygen therapy usage, COPD diagnosis type, dispensing of 4 or more canisters of rescue medication, and having 2 or more moderate exacerbations. A CTR of 0.3 or greater was associated with a 14% lower risk of severe exacerbation. The model performed well with C-statistics, ranging from 0.711 to 0.714. This claims-based risk model can predict the likelihood of first severe COPD exacerbation. The CTR could also potentially be used to target populations at greatest risk for severe exacerbations. This could be relevant for providers and payers in approaches to prevent severe exacerbations and reduce costs.
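    The controller-to-total ratio (CTR) validated in the study is a simple fraction of medication dispensings. A sketch of the computation (the function name and the empty-history handling are illustrative; the 0.3 cut-off is the one reported in the abstract):

```python
def controller_to_total_ratio(controller_fills, rescue_fills):
    """CTR = controller dispensings / all COPD dispensings
    (controller + rescue); None if no COPD medication was dispensed."""
    total = controller_fills + rescue_fills
    return controller_fills / total if total else None

# toy claims history: 3 controller and 7 rescue dispensings in Year 1
ctr = controller_to_total_ratio(3, 7)
lower_risk = ctr is not None and ctr >= 0.3   # abstract's 0.3 threshold
```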

  20. Fuselage boundary-layer refraction of fan tones radiated from an installed turbofan aero-engine.

    PubMed

    Gaffney, James; McAlpine, Alan; Kingan, Michael J

    2017-03-01

    A distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is extended to include the refraction effects caused by the fuselage boundary layer. The model is a simple representation of an installed turbofan, where fan tones are represented in terms of spinning modes radiated from a semi-infinite circular duct, and the aircraft's fuselage is represented by an infinitely long, rigid cylinder. The distributed source is a disk, formed by integrating infinitesimal volume sources located on the intake duct termination. The cylinder is located adjacent to the disk. There is uniform axial flow, aligned with the axis of the cylinder, everywhere except close to the cylinder, where there is a constant-thickness boundary layer. The aim is to predict the near-field acoustic pressure, and in particular the pressure on the cylindrical fuselage, which is relevant to assessing cabin noise. Thus no far-field approximations are included in the modelling. The effect of the boundary layer is quantified by calculating the area-averaged mean square pressure over the cylinder's surface with and without the boundary layer included in the prediction model. The sound propagation through the boundary layer is calculated by solving the Pridmore-Brown equation. Results from the theoretical method show that the boundary layer has a significant effect on the sound pressure levels predicted on the cylindrical fuselage due to fan tones radiated from an installed turbofan aero-engine.

  1. Dynamic linear models using the Kalman filter for early detection and early warning of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.

    2015-12-01

    Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
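    A minimal scalar version of the early-detection idea, a local-level dynamic linear model whose one-step-ahead forecast envelope flags anomalous weeks, could be sketched as follows; the variances and case counts are invented, and the paper's models additionally carry trend, seasonal, and regression components:

```python
def kalman_local_level(ys, q=5.0, r=10.0, m0=0.0, p0=100.0):
    """Scalar local-level dynamic linear model filtered with the Kalman
    recursions; q and r are process and observation variances. Returns
    one-step-ahead forecasts and forecast variances (the envelope)."""
    m, p = m0, p0
    forecasts, variances = [], []
    for y in ys:
        p_pred = p + q                   # predict (random-walk state)
        forecasts.append(m)
        variances.append(p_pred + r)
        k = p_pred / (p_pred + r)        # Kalman gain
        m += k * (y - m)                 # update with this week's count
        p = (1 - k) * p_pred
    return forecasts, variances

weekly_cases = [10, 12, 11, 13, 12, 40]  # toy counts; last week spikes
f, v = kalman_local_level(weekly_cases)
flags = [abs(y - fi) > 1.96 * vi ** 0.5  # outside the 95% envelope?
         for y, fi, vi in zip(weekly_cases, f, v)]
```

    Observations falling outside the envelope play the role of detected outbreaks; missing weeks are handled naturally by skipping the update step, which is one of the robustness benefits the abstract notes.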

  2. Development of a screening tool using electronic health records for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose detection in the Slovenian population.

    PubMed

    Štiglic, G; Kocbek, P; Cilar, L; Fijačko, N; Stožer, A; Zaletel, J; Sheikh, A; Povalej Bržan, P

    2018-05-01

    To develop and validate a simplified screening test for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose for the Slovenian population (SloRisk) to be used in the general population. Data on 11 391 people were collected from the electronic health records of comprehensive medical examinations in five Slovenian healthcare centres. Fasting plasma glucose as well as information related to the Finnish Diabetes Risk Score questionnaire, FINDRISC, were collected for 2073 people to build predictive models. Bootstrapping-based evaluation was used to estimate the area under the receiver-operating characteristic curve performance metric of two proposed logistic regression models as well as the Finnish Diabetes Risk Score model both at recommended and at alternative cut-off values. The final model contained five questions for undiagnosed Type 2 diabetes prediction and achieved an area under the receiver-operating characteristic curve of 0.851 (95% CI 0.850-0.853). The impaired fasting glucose prediction model included six questions and achieved an area under the receiver-operating characteristic curve of 0.840 (95% CI 0.839-0.840). There were four questions that were included in both models (age, sex, waist circumference and blood sugar history), with physical activity selected only for undiagnosed Type 2 diabetes and questions on family history and hypertension drug use selected only for the impaired fasting glucose prediction model. This study proposes two simplified models based on FINDRISC questions for screening of undiagnosed Type 2 diabetes and impaired fasting glucose in the Slovenian population. A significant improvement in performance was achieved compared with the original FINDRISC questionnaire. Both models include waist circumference instead of BMI. © 2018 Diabetes UK.
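    The study's evaluation metric, the area under the receiver-operating characteristic curve, can be computed from the Mann-Whitney statistic and bootstrapped as sketched below; the scores and labels are toy values, not study data:

```python
import random

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a
    random positive case outscores a random negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc(scores, labels, n_boot=500, seed=0):
    """Resample subjects with replacement and recompute the AUC,
    mirroring a bootstrapping-based evaluation."""
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        s = [scores[i] for i in idx]
        l = [labels[i] for i in idx]
        if len(set(l)) == 2:             # both classes must be present
            stats.append(auc(s, l))
    stats.sort()
    return stats[len(stats) // 2]        # median bootstrap AUC

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # model outputs
labels = [1,   1,   0,   1,   0,    0,   1,   0]     # true diagnoses
point_auc = auc(scores, labels)
median_auc = bootstrap_auc(scores, labels)
```

    The spread of the bootstrap statistics, rather than the median alone, is what yields confidence intervals like the 0.850-0.853 reported above.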

  3. An Integrated Modeling Framework Forecasting Ecosystem Exposure-- A Systems Approach to the Cumulative Impacts of Multiple Stressors

    NASA Astrophysics Data System (ADS)

    Johnston, J. M.

    2013-12-01

    Freshwater habitats provide fishable, swimmable and drinkable resources and are a nexus of geophysical and biological processes. These processes in turn influence the persistence and sustainability of populations, communities and ecosystems. Climate change and land-use change encompass numerous potential stressors, including the introduction of toxic contaminants, invasive species, and disease, in addition to physical drivers such as temperature and hydrologic regime. A systems approach that includes the scientific and technological basis for assessing the health of ecosystems is needed to effectively protect human health and the environment. The Integrated Environmental Modeling Framework 'iemWatersheds' has been developed as a consistent and coherent means of forecasting the cumulative impact of co-occurring stressors. The Framework consists of three facilitating technologies: Data for Environmental Modeling (D4EM), which automates the collection and standardization of input data; the Framework for Risk Assessment of Multimedia Environmental Systems (FRAMES), which manages the flow of information between linked models; and the Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE), which provides post-processing and analysis of model outputs, including uncertainty and sensitivity analysis. 
Five models are linked within the Framework to provide multimedia simulation capabilities for hydrology and water quality processes: the Soil Water Assessment Tool (SWAT) predicts surface water and sediment runoff and associated contaminants; the Watershed Mercury Model (WMM) predicts mercury runoff and loading to streams; the Water quality Analysis and Simulation Program (WASP) predicts water quality within the stream channel; the Habitat Suitability Index (HSI) model scores physicochemical habitat quality for individual fish species; and the Bioaccumulation and Aquatic System Simulator (BASS) predicts fish growth, population dynamics and bioaccumulation of toxic substances. The capability of the Framework to address cumulative impacts will be demonstrated for freshwater ecosystem services and mountaintop mining.

  4. The value of daily platelet counts for predicting dengue shock syndrome: Results from a prospective observational study of 2301 Vietnamese children with dengue.

    PubMed

    Lam, Phung Khanh; Ngoc, Tran Van; Thu Thuy, Truong Thi; Hong Van, Nguyen Thi; Nhu Thuy, Tran Thi; Hoai Tam, Dong Thi; Dung, Nguyen Minh; Hanh Tien, Nguyen Thi; Thanh Kieu, Nguyen Tan; Simmons, Cameron; Wills, Bridget; Wolbers, Marcel

    2017-04-01

    Dengue is the most important mosquito-borne viral infection to affect humans. Although it usually manifests as a self-limited febrile illness, complications may occur as the fever subsides. A systemic vascular leak syndrome that sometimes progresses to life-threatening hypovolaemic shock is the most serious complication seen in children, typically accompanied by haemoconcentration and thrombocytopenia. Robust evidence on risk factors, especially features present early in the illness course, for progression to dengue shock syndrome (DSS) is lacking. Moreover, the potential value of incorporating serial haematocrit and platelet measurements in prediction models has never been assessed. We analyzed data from a prospective observational study of Vietnamese children aged 5-15 years admitted with clinically suspected dengue to the Hospital for Tropical Diseases in Ho Chi Minh City between 2001 and 2009. The analysis population comprised all children with laboratory-confirmed dengue enrolled between days 1-4 of illness. Logistic regression was the main statistical model for all univariate and multivariable analyses. The prognostic value of daily haematocrit levels and platelet counts was assessed using graphs and separate regression models fitted on each day of illness. Among the 2301 children included in the analysis, 143 (6%) progressed to DSS. Significant baseline risk factors for DSS included a history of vomiting, higher temperature, a palpable liver, and a lower platelet count. Prediction models that included serial daily platelet counts demonstrated better ability to discriminate patients who developed DSS from others than models based on enrolment information only. However, inclusion of daily haematocrit values did not improve prediction of DSS. Daily monitoring of platelet counts is important to help identify patients at high risk of DSS. 
Development of dynamic prediction models that incorporate signs, symptoms, and daily laboratory measurements, could improve DSS prediction and thereby reduce the burden on health services in endemic areas.

  5. Structural, Thermal, and Optical Performance (STOP) Modeling and Results for the James Webb Space Telescope Integrated Science Instrument Module

    NASA Technical Reports Server (NTRS)

    Gracey, Renee; Bartoszyk, Andrew; Cofie, Emmanuel; Comber, Brian; Hartig, George; Howard, Joseph; Sabatke, Derek; Wenzel, Greg; Ohl, Raymond

    2016-01-01

    The James Webb Space Telescope includes the Integrated Science Instrument Module (ISIM) element, which contains four science instruments (SIs) including a Guider. We performed extensive structural, thermal, and optical performance (STOP) modeling in support of all phases of ISIM development. In this paper, we focus on modeling and results associated with test and verification. ISIM's test program is bounded by ground environments, most notably the 1g and test-chamber thermal environments. This paper describes STOP modeling used to predict ISIM system performance in 0g and at various on-orbit temperature environments. The predictions are used to project results obtained during testing to on-orbit performance.

  6. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  7. Predicting community composition from pairwise interactions

    NASA Astrophysics Data System (ADS)

    Friedman, Jonathan; Higgins, Logan; Gore, Jeff

    The ability to predict the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While much of our intuition comes from such models, their predictive power has rarely been tested experimentally. To directly assess the predictive power of this approach, we constructed synthetic communities comprising up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to predict the composition of communities composed of more than 2 species. The pairwise competitions resulted in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be predicted for communities composed solely of such pairs. Our results demonstrate the predictive ability and utility of simple phenomenology, which enables accurate predictions in the absence of mechanistic details.
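    The gLV framework referenced above can be made concrete with a two-species competition sketch; the coefficients are invented, and weak mutual competition (off-diagonal terms below 1) gives the stable-coexistence outcome the abstract mentions:

```python
def glv_pair(r, a, x0, dt=0.01, steps=20000):
    """Forward-Euler integration of a two-species generalized
    Lotka-Volterra model: dx_i/dt = r_i * x_i * (1 - sum_j a_ij * x_j)."""
    x = list(x0)
    for _ in range(steps):
        dx = [r[i] * x[i] * (1.0 - sum(a[i][j] * x[j] for j in range(2)))
              for i in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
    return x

# weak mutual competition -> stable coexistence at x1 = x2 = 2/3,
# the solution of x1 + 0.5*x2 = 1 and 0.5*x1 + x2 = 1
growth = [1.0, 1.0]
interactions = [[1.0, 0.5],
                [0.5, 1.0]]
x_final = glv_pair(growth, interactions, x0=[0.1, 0.2])
```

    Raising an off-diagonal coefficient above 1 flips the same equations to exclusion or bistability, the other pairwise outcomes observed in the experiments.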

  8. Nomogram to predict successful placement in surgical subspecialty fellowships using applicant characteristics.

    PubMed

    Muffly, Tyler M; Barber, Matthew D; Karafa, Matthew T; Kattan, Michael W; Shniter, Abigail; Jelovsek, J Eric

    2012-01-01

    The purpose of the study was to develop a model that predicts an individual applicant's probability of successful placement into a surgical subspecialty fellowship program. Candidates who applied to surgical fellowships during a 3-year period were identified in a set of databases that included the electronic application materials. Of the 1281 applicants who were available for analysis, 951 applicants (74%) successfully placed into a colon and rectal surgery, thoracic surgery, vascular surgery, or pediatric surgery fellowship. The optimal final prediction model, which was based on a logistic regression, included 14 variables. This model, with a c statistic of 0.74, allowed for the determination of a useful estimate of the probability of placement for an individual candidate. Of the factors that are available at the time of fellowship application, 14 were used to predict accurately the proportion of applicants who will successfully gain a fellowship position. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. Genetic determinants of freckle occurrence in the Spanish population: Towards ephelides prediction from human DNA samples.

    PubMed

    Hernando, Barbara; Ibañez, Maria Victoria; Deserio-Cuesta, Julio Alberto; Soria-Navarro, Raquel; Vilar-Sastre, Inca; Martinez-Cadenas, Conrado

    2018-03-01

    Prediction of human pigmentation traits, one of the most differentiable externally visible characteristics among individuals, from biological samples represents a useful tool in the field of forensic DNA phenotyping. In spite of freckling being a relatively common pigmentation characteristic in Europeans, little is known about the genetic basis of this largely genetically determined phenotype in southern European populations. In this work, we explored the predictive capacity of eight freckle and sunlight sensitivity-related genes in 458 individuals (266 non-freckled controls and 192 freckled cases) from Spain. Four loci were associated with freckling (MC1R, IRF4, ASIP and BNC2), and female sex was also found to be a predictive factor for having a freckling phenotype in our population. After identifying the most informative genetic variants responsible for human ephelides occurrence in our sample set, we developed a DNA-based freckle prediction model using a multivariate regression approach. Once developed, the capabilities of the prediction model were tested by a repeated 10-fold cross-validation approach. The proportion of correctly predicted individuals using the DNA-based freckle prediction model was 74.13%. The implementation of sex into the DNA-based freckle prediction model slightly improved the overall prediction accuracy by 2.19% (76.32%). Further evaluation of the newly-generated prediction model was performed by assessing the model's performance in a new cohort of 212 Spanish individuals, reaching a classification success rate of 74.61%. Validation of this prediction model may be carried out in larger populations, including samples from different European populations. Further research to validate and improve this newly-generated freckle prediction model will be needed before its forensic application. 
Together with DNA tests already validated for eye and hair colour prediction, this freckle prediction model may lead to a substantially more detailed physical description of unknown individuals from DNA found at the crime scene. Copyright © 2017 Elsevier B.V. All rights reserved.
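
The repeated 10-fold cross-validation used to test the freckle prediction model can be sketched in a few lines. The fold-splitting below follows the standard scheme; the majority-class predictor is only a hypothetical stand-in for the study's multivariate regression model, and the class counts mirror the 266 controls and 192 cases reported above.

```python
import random

def k_fold_indices(n, k, rng):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_k_fold_accuracy(labels, k=10, repeats=5, seed=0):
    """Repeated k-fold CV of a trivial majority-class predictor.

    The real study fits a multivariate regression in each training fold;
    the majority-class rule here is only a stand-in to show the resampling.
    """
    rng = random.Random(seed)
    accuracies = []
    for _ in range(repeats):
        for test_idx in k_fold_indices(len(labels), k, rng):
            test_set = set(test_idx)
            train = [labels[i] for i in range(len(labels)) if i not in test_set]
            majority = max(set(train), key=train.count)  # "fit" on training fold
            test = [labels[i] for i in test_idx]
            accuracies.append(sum(y == majority for y in test) / len(test))
    return sum(accuracies) / len(accuracies)

# Toy cohort: 266 controls (0) and 192 cases (1), as in the Spanish sample.
labels = [0] * 266 + [1] * 192
acc = repeated_k_fold_accuracy(labels)
print(round(acc, 3))  # near 266/458 ≈ 0.581 for this majority-class baseline
```

The gap between a baseline like this (~58%) and the ~74-76% accuracy reported above is one way to read how much the genetic markers and sex actually contribute.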

  10. Development of Web tools to predict axillary lymph node metastasis and pathological response to neoadjuvant chemotherapy in breast cancer patients.

    PubMed

    Sugimoto, Masahiro; Takada, Masahiro; Toi, Masakazu

    2014-12-09

    Nomograms are a standard computational tool to predict the likelihood of an outcome using multiple available patient features. We have developed a more powerful data-mining methodology to predict axillary lymph node (AxLN) metastasis and response to neoadjuvant chemotherapy (NAC) in primary breast cancer patients. We developed websites to use these tools. The tools calculate the probability of AxLN metastasis (AxLN model) and pathological complete response to NAC (NAC model). As a calculation algorithm, we employed a decision tree-based prediction model known as the alternative decision tree (ADTree), which is an analog development of if-then type decision trees. An ensemble technique was used to combine multiple ADTree predictions, resulting in higher generalization abilities and robustness against missing values. The AxLN model was developed with training datasets (n=148) and test datasets (n=143), and validated using an independent cohort (n=174), yielding an area under the receiver operating characteristic curve (AUC) of 0.768. The NAC model was developed and validated with n=150 and n=173 datasets from a randomized controlled trial, yielding an AUC of 0.787. The AxLN and NAC models require users to input up to 17 and 16 variables, respectively. These include pathological features, such as human epidermal growth factor receptor 2 (HER2) status, and imaging findings. Each input variable has an option of "unknown," to facilitate prediction for cases with missing values. The websites developed facilitate the use of these tools, and serve as a database for accumulating new datasets.

  11. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke.

    PubMed

    Smith, Eric E; Shobha, Nandavar; Dai, David; Olson, DaiWai M; Reeves, Mathew J; Saver, Jeffrey L; Hernandez, Adrian F; Peterson, Eric D; Fonarow, Gregg C; Schwamm, Lee H

    2013-01-28

    We aimed to derive and validate a single risk score for predicting death from ischemic stroke (IS), intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH). Data from 333 865 stroke patients (IS, 82.4%; ICH, 11.2%; SAH, 2.6%; uncertain type, 3.8%) in the Get With The Guidelines-Stroke database were used. In-hospital mortality varied greatly according to stroke type (IS, 5.5%; ICH, 27.2%; SAH, 25.1%; unknown type, 6.0%; P<0.001). The patients were randomly divided into derivation (60%) and validation (40%) samples. Logistic regression was used to determine the independent predictors of mortality and to assign point scores for a prediction model in the overall population and in the subset with the National Institutes of Health Stroke Scale (NIHSS) recorded (37.1%). The c statistic, a measure of how well the models discriminate the risk of death, was 0.78 in the overall validation sample and 0.86 in the model including NIHSS. The model with NIHSS performed nearly as well in each stroke type as in the overall model including all types (c statistics for IS alone, 0.85; for ICH alone, 0.83; for SAH alone, 0.83; uncertain type alone, 0.86). The calibration of the model was excellent, as demonstrated by plots of observed versus predicted mortality. A single prediction score for all stroke types can be used to predict risk of in-hospital death following stroke admission. Incorporation of NIHSS information substantially improves this predictive accuracy.

  12. Oral LD50 toxicity modeling and prediction of per- and polyfluorinated chemicals on rat and mouse.

    PubMed

    Bhhatarai, Barun; Gramatica, Paola

    2011-05-01

    Quantitative structure-activity relationship (QSAR) analyses were performed using the LD50 oral toxicity data of per- and polyfluorinated chemicals (PFCs) on rodents: rat and mouse. PFCs are studied under the EU project CADASTER, which uses the available experimental data for prediction and prioritization of toxic chemicals for risk assessment using in silico tools. The methodology presented here applies chemometric analysis to the existing experimental data and predicts the toxicity of new compounds. QSAR analyses were performed on the available 58 mouse and 50 rat LD50 oral data using multiple linear regression (MLR) based on theoretical molecular descriptors selected by a genetic algorithm (GA). Training and prediction sets were prepared a priori from the available experimental datasets in terms of structure and response. These sets were used to derive statistically robust and predictive (both internally and externally) models. The structural applicability domain (AD) of the models was verified on 376 per- and polyfluorinated chemicals, including those in the REACH preregistration list. The rat and mouse endpoints were predicted by each model for the studied compounds, and finally 30 compounds, all perfluorinated, were prioritized as most important for experimental toxicity analysis under the project. In addition, a cumulative study of compounds within the AD of all four models, including two earlier published models for rodent LC50, was performed, and the cumulative toxicity trend was observed using principal component analysis (PCA). The similarities and differences observed in terms of descriptors, and the chemical/mechanistic meaning encoded by the descriptors to prioritize the most toxic compounds, are highlighted.

  13. Aerodynamic analysis of the Darrieus wind turbines including dynamic-stall effects

    NASA Astrophysics Data System (ADS)

    Paraschivoiu, Ion; Allet, Azeddine

    Experimental data for a 17-m wind turbine are compared with aerodynamic performance predictions obtained with two dynamic stall methods which are based on numerical correlations of the dynamic stall delay with the pitch rate parameter. Unlike the Gormont (1973) model, the MIT model predicts that dynamic stall does not occur in the downwind part of the turbine, although it does exist in the upwind zone. The Gormont model is shown to overestimate the aerodynamic coefficients relative to the MIT model. The MIT model is found to accurately predict the dynamic-stall regime, which is characterized by a plateau oscillating near values of the experimental data for the rotor power vs wind speed at the equator.

  14. Covariant spectator theory of np scattering: Deuteron quadrupole moment

    DOE PAGES

    Gross, Franz

    2015-01-26

    The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high-precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions to order N3LO.

  15. Anisotropic constitutive modeling for nickel base single crystal superalloys using a crystallographic approach

    NASA Technical Reports Server (NTRS)

    Stouffer, D. C.; Sheh, M. Y.

    1988-01-01

    A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for single crystal superalloy Rene N4 at 982 C.

  16. The creation and evaluation of a model to simulate the probability of conception in seasonal-calving pasture-based dairy heifers.

    PubMed

    Fenlon, Caroline; O'Grady, Luke; Butler, Stephen; Doherty, Michael L; Dunnion, John

    2017-01-01

    Herd fertility in pasture-based dairy farms is a key driver of farm economics. Models for predicting nulliparous reproductive outcomes are rare, but age, genetics, weight, and BCS have been identified as factors influencing heifer conception. The aim of this study was to create a simulation model of heifer conception to service with thorough evaluation. Artificial Insemination service records from two research herds and ten commercial herds were provided to build and evaluate the models. All were managed as spring-calving pasture-based systems. The factors studied were related to age, genetics, and time of service. The data were split into training and testing sets and bootstrapping was used to train the models. Logistic regression (with and without random effects) and generalised additive modelling were selected as the model-building techniques. Two types of evaluation were used to test the predictive ability of the models: discrimination and calibration. Discrimination, which includes sensitivity, specificity, accuracy and ROC analysis, measures a model's ability to distinguish between positive and negative outcomes. Calibration measures the accuracy of the predicted probabilities with the Hosmer-Lemeshow goodness-of-fit, calibration plot and calibration error. After data cleaning and the removal of services with missing values, 1396 services remained to train the models and 597 were left for testing. Age, breed, genetic predicted transmitting ability for calving interval, month and year were significant in the multivariate models. The regression models also included an interaction between age and month. Year within herd was a random effect in the mixed regression model. Overall prediction accuracy was between 77.1% and 78.9%. All three models had very high sensitivity, but low specificity. The two regression models were very well-calibrated. The mean absolute calibration errors were all below 4%. 
Because the models were not adept at identifying unsuccessful services, they are not suggested for use in predicting the outcome of individual heifer services. Instead, they are useful for the comparison of services with different covariate values or as sub-models in whole-farm simulations. The mixed regression model was identified as the best model for prediction, as the random effects can be ignored and the other variables can be easily obtained or simulated.
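
The discrimination metrics named in this record (sensitivity, specificity, accuracy) follow directly from confusion-matrix counts. A minimal sketch, using hypothetical service outcomes and predictions chosen to mirror the high-sensitivity, low-specificity pattern the study reports:

```python
def discrimination_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy for binary outcomes (1 = conception)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # hits
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # correct rejections
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # misses
    return {
        "sensitivity": tp / (tp + fn),       # share of successes found
        "specificity": tn / (tn + fp),       # share of failures found
        "accuracy": (tp + tn) / len(y_true),
    }

# Hypothetical services: mostly successful, and a model that rarely predicts
# failure, as with the models described above.
y_true = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
m = discrimination_metrics(y_true, y_pred)
print(m)  # sensitivity 1.0, specificity ~0.33, accuracy 0.8
```

A model can thus look strong on overall accuracy while, as the authors note, remaining poor at identifying the unsuccessful services.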

  17. New techniques for modeling the reliability of reactor pressure vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.

    1985-12-01

    In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as inservice inspection to detect flaws, random positioning of flaws within the vessel wall's thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithm for simulating flaws has been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RTNDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses. 21 refs.

  18. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including support vector machine, extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust.

  19. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including support vector machine, extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust. PMID:27551829

  20. The Arctic Predictability and Prediction on Seasonal-to-Interannual TimEscales (APPOSITE) project: a summary

    NASA Astrophysics Data System (ADS)

    Hawkins, Ed; Day, Jonny; Tietsche, Steffen

    2016-04-01

    Recent years have seen significant developments in seasonal-to-interannual timescale climate prediction capabilities. However, until recently the potential of such systems to predict Arctic climate had not been assessed. We describe a multi-model predictability experiment which was run as part of the Arctic Predictability and Prediction On Seasonal to Inter-annual TimEscales (APPOSITE) project. The main goal of APPOSITE was to quantify the timescales on which Arctic climate is predictable. In order to achieve this, a coordinated set of idealised initial-value predictability experiments, with seven general circulation models, was conducted. This was the first model intercomparison project designed to quantify the predictability of Arctic climate on seasonal to inter-annual timescales. Here we provide a summary and update of the project's results which include: (1) quantifying the predictability of Arctic climate, especially sea ice; (2) the state-dependence of this predictability, finding that extreme years are potentially more predictable than neutral years; (3) analysing a spring 'predictability barrier' to skillful forecasts; (4) initial sea ice thickness information provides much of the skill for summer forecasts; (5) quantifying the sources of error growth and uncertainty in Arctic predictions. The dataset is now publicly available.

  1. Analysis of model development strategies: predicting ventral hernia recurrence.

    PubMed

    Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K

    2016-11-01

    There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). 
Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
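
The C-statistic used in this record to judge the models can be computed as a pairwise concordance. The sketch below uses the binary-outcome simplification (the fraction of event/non-event pairs ranked correctly by predicted risk); Harrell's C for censored time-to-event data, as used in the study, generalizes this idea. The outcomes and risks shown are hypothetical.

```python
def c_statistic(y_true, risk):
    """Concordance for binary outcomes: the fraction of (event, non-event)
    pairs in which the event case received the higher predicted risk.
    Tied risks count as half-concordant."""
    concordant = ties = pairs = 0
    for yi, ri in zip(y_true, risk):
        for yj, rj in zip(y_true, risk):
            if yi == 1 and yj == 0:       # one recurrence, one non-recurrence
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

# Hypothetical recurrence outcomes and model-predicted risks.
y = [1, 1, 1, 0, 0, 0, 0]
risk = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(round(c_statistic(y, risk), 3))  # 0.917
```

A value of 0.5 means the risks rank patients no better than chance, which is why the external-validation values near 0.56 reported above amount to a failure of the models to generalize.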

  2. Intelligent postoperative morbidity prediction of heart disease using artificial intelligence techniques.

    PubMed

    Hsieh, Nan-Chen; Hung, Lun-Ping; Shih, Chun-Che; Keh, Huan-Chao; Chan, Chien-Hui

    2012-06-01

    Endovascular aneurysm repair (EVAR) is an advanced minimally invasive surgical technology that is helpful for reducing patients' recovery time, postoperative morbidity, and mortality. This study proposes an ensemble model to predict postoperative morbidity after EVAR. The ensemble model was developed using a training set of consecutive patients who underwent EVAR between 2000 and 2009. All data required for prediction modeling, including patient demographics, preoperative comorbidities, and complications as outcome variables, were collected prospectively and entered into a clinical database. A discretization approach was used to categorize numerical values into an informative feature space. Then, the Bayesian network (BN), artificial neural network (ANN), and support vector machine (SVM) were adopted as base models, and stacking was used to combine the multiple models. The research outcomes consisted of an ensemble model to predict postoperative morbidity after EVAR, the occurrence of postoperative complications recorded prospectively, and causal-effect knowledge from BNs with the Markov blanket concept.

  3. A productivity model for parasitized, multibrooded songbirds

    USGS Publications Warehouse

    Powell, L.A.; Knutson, M.G.

    2006-01-01

    We present an enhancement of a simulation model to predict annual productivity for Wood Thrushes (Hylocichla mustelina) and American Redstarts (Setophaga ruticilla); the model includes effects of Brown-headed Cowbird (Molothrus ater) parasitism. We used species-specific data from the Driftless Area Ecoregion of Wisconsin, Minnesota, and Iowa to parameterize the model as a case study. The simulation model predicted annual productivity of 2.03 ± 1.60 SD for Wood Thrushes and 1.56 ± 1.31 SD for American Redstarts. Our sensitivity analysis showed that high parasitism lowered Wood Thrush annual productivity more than American Redstart productivity, even though parasitism affected individual nests of redstarts more severely. Annual productivity predictions are valuable for habitat managers, but productivity is not easily obtained from field studies. Our model provides a useful means of integrating complex life history parameters to predict productivity for songbirds that experience nest parasitism. © The Cooper Ornithological Society 2006.

  4. Mammographic density, breast cancer risk and risk prediction

    PubMed Central

    Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane

    2007-01-01

    In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. PMID:18190724

  5. UNRES server for physics-based coarse-grained simulations and prediction of protein structure, dynamics and thermodynamics.

    PubMed

    Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam

    2018-04-30

    A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, named the UNRES server, is presented. In contrast to most protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange, and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes the protein sequence and, optionally, restraints from secondary-structure prediction or small-angle x-ray scattering data, as well as the simulation type and parameters, which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links, can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectories/ensembles); however, all output files can be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.

  6. COMPASS: A computational model to predict changes in MMSE scores 24-months after initial assessment of Alzheimer's disease.

    PubMed

    Zhu, Fan; Panwar, Bharat; Dodge, Hiroko H; Li, Hongdong; Hampstead, Benjamin M; Albin, Roger L; Paulson, Henry L; Guan, Yuanfang

    2016-10-05

    We present COMPASS, a COmputational Model to Predict the development of Alzheimer's diSease Spectrum, to model Alzheimer's disease (AD) progression. This was the best-performing method in a recent crowdsourcing benchmark study, the DREAM Alzheimer's Disease Big Data challenge, for predicting changes in Mini-Mental State Examination (MMSE) scores over 24 months using standardized data. In the present study, we conducted three additional analyses beyond the DREAM challenge question to improve the clinical contribution of our approach, including: (1) adding pre-validated baseline cognitive composite scores of ADNI-MEM and ADNI-EF, (2) identifying subjects with significant declines in MMSE scores, and (3) incorporating SNPs of the top 10 genes connected to APOE identified from a functional-relationship network. For (1), we significantly improved predictive accuracy, especially for the Mild Cognitive Impairment (MCI) group. For (2), we achieved an area under the ROC curve of 0.814 in predicting significant MMSE decline: our model has 100% precision at 5% recall, and 91% accuracy at 10% recall. For (3), the "genetic only" model has a Pearson's correlation of 0.15 for predicting progression in the MCI group. Even though the addition of this limited genetic model to COMPASS did not improve prediction of progression in the MCI group, the predictive ability of the SNP information extended beyond the well-known APOE allele.

  7. Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions

    PubMed Central

    Fox, Naomi J.; Marion, Glenn; Davidson, Ross S.; White, Piran C. L.; Hutchings, Michael R.

    2012-01-01

    Simple Summary: Parasitic helminths represent one of the most pervasive challenges to livestock, and their intensity and distribution will be influenced by climate change. There is a need for long-term predictions to identify potential risks and highlight opportunities for control. We explore the approaches to modelling future helminth risk to livestock under climate change. One of the limitations to model creation is the lack of purpose-driven data collection. We also conclude that models need to include a broad view of the livestock system to generate meaningful predictions.

    Abstract: Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long-term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long-term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. 
By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed. PMID:26486780

  8. A MELD-based model to determine risk of mortality among patients with acute variceal bleeding.

    PubMed

    Reverter, Enric; Tandon, Puneeta; Augustin, Salvador; Turon, Fanny; Casu, Stefania; Bastiampillai, Ravin; Keough, Adam; Llop, Elba; González, Antonio; Seijo, Susana; Berzigotti, Annalisa; Ma, Mang; Genescà, Joan; Bosch, Jaume; García-Pagán, Joan Carles; Abraldes, Juan G

    2014-02-01

    Patients with cirrhosis with acute variceal bleeding (AVB) have high mortality rates (15%-20%). Previously described models are seldom used to determine prognoses of these patients, partially because they have not been validated externally and because they include subjective variables, such as bleeding during endoscopy and Child-Pugh score, which are evaluated inconsistently. We aimed to improve determination of risk for patients with AVB. We analyzed data collected from 178 patients with cirrhosis (Child-Pugh scores of A, B, and C: 15%, 57%, and 28%, respectively) and esophageal AVB who received standard therapy from 2007 through 2010. We tested the performance (discrimination and calibration) of previously described models, including the model for end-stage liver disease (MELD), and developed a new MELD calibration to predict the mortality of patients within 6 weeks of presentation with AVB. MELD-based predictions were validated in cohorts of patients from Canada (n = 240) and Spain (n = 221). Among study subjects, the 6-week mortality rate was 16%. MELD was the best model in terms of discrimination; it was recalibrated to predict the 6-week mortality rate with logistic regression (logit, -5.312 + 0.207 • MELD; bootstrapped R², 0.3295). MELD values of 19 or greater predicted 20% or greater mortality, whereas MELD scores less than 11 predicted less than 5% mortality. The model performed well for patients from Canada at all risk levels. In the Spanish validation set, in which all patients were treated with banding ligation, MELD predictions were accurate up to the 20% risk threshold. We developed a MELD-based model that accurately predicts mortality among patients with AVB, based on objective variables available at admission. This model could be useful to evaluate the efficacy of new therapies and stratify patients in randomized trials. Copyright © 2014 AGA Institute. Published by Elsevier Inc. All rights reserved.
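    The recalibrated model above is an ordinary logistic curve, so the reported coefficients translate directly into a predicted probability. A minimal sketch (coefficients are taken from the abstract; the function name is ours):

```python
import math

def meld_6week_mortality(meld: float) -> float:
    """Predicted 6-week mortality after acute variceal bleeding,
    using the recalibration reported above:
    logit(p) = -5.312 + 0.207 * MELD."""
    logit = -5.312 + 0.207 * meld
    return 1.0 / (1.0 + math.exp(-logit))
```

    Consistent with the thresholds in the abstract, a MELD of 19 gives a predicted mortality of about 20%, and a MELD below 11 gives under 5%.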

  9. A Comparative Study of Some Dynamic Stall Models

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Kaza, K. R. V.

    1987-01-01

    Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, ease of implementation, and feasibility in developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction to the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above static stall angles, and the time history effects are accounted for through the Wagner function. In both models the lift and moment coefficients for angles of attack below stall are obtained from tabular data for a given Mach number and angle of attack. In model C, referred to as the ONERA model, the lift and moment coefficients are given in the form of two differential equations, one for angles below stall and the other for angles above stall. The parameters of those equations are nonlinear functions of the angle of attack.

  10. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
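    The spatial SIMEX procedure itself is specific to this paper, but the underlying SIMEX idea is easy to sketch for classical additive measurement error: refit the naive estimator at successively inflated error levels, then extrapolate the trend back to the error-free case (lambda = -1). A toy illustration on synthetic data (this is the generic, non-spatial variant, and all names and settings here are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: y = beta * x + noise, but we only observe w = x + u.
beta_true, sigma_u = 2.0, 0.5
n = 20000
x = rng.normal(size=n)
w = x + rng.normal(scale=sigma_u, size=n)
y = beta_true * x + rng.normal(scale=0.1, size=n)

def ols_slope(w, y):
    """Naive OLS slope of y on the error-prone predictor w."""
    return np.cov(w, y, bias=True)[0, 1] / np.var(w)

# SIMEX: add extra error at levels lambda, fit the naive estimator,
# then extrapolate the trend back to lambda = -1 (no error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lambdas:
    sims = [ols_slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
            for _ in range(20)]
    estimates.append(np.mean(sims))

# Quadratic extrapolation to lambda = -1
coef = np.polyfit(lambdas, estimates, 2)
beta_simex = np.polyval(coef, -1.0)
beta_naive = estimates[0]
```

    The naive slope is attenuated toward zero by the measurement error; the extrapolated SIMEX estimate recovers most of that bias.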

  11. Does my patient have chronic Chagas disease? Development and temporal validation of a diagnostic risk score.

    PubMed

    Brasil, Pedro Emmanuel Alvarenga Americano do; Xavier, Sergio Salles; Holanda, Marcelo Teixeira; Hasslocher-Moreno, Alejandro Marcel; Braga, José Ueleres

    2016-01-01

    With the globalization of Chagas disease, inexperienced health care providers may have difficulties in identifying which patients should be examined for this condition. This study aimed to develop and validate a diagnostic clinical prediction model for chronic Chagas disease. This diagnostic cohort study included consecutive volunteers suspected to have chronic Chagas disease. The clinical information was blindly compared to serological test results, and a logistic regression model was fit and validated. The development cohort included 602 patients, and the validation cohort included 138 patients. The Chagas disease prevalence was 19.9%. Sex, age, referral from a blood bank, history of living in a rural area, recognizing the kissing bug, systemic hypertension, number of siblings with Chagas disease, number of relatives with a history of stroke, ECG with low voltage, anterosuperior divisional block, pathologic Q wave, right bundle branch block, and any kind of extrasystole were included in the final model. Calibration and discrimination in the development and validation cohorts (ROC AUC 0.904 and 0.912, respectively) were good. Sensitivity and specificity analyses showed that specificity reaches at least 95% above a predicted risk of 43%, while sensitivity is at least 95% below a predicted risk of 7%. Net benefit decision curves favor the model across all thresholds. A nomogram and an online calculator (available at http://shiny.ipec.fiocruz.br:3838/pedrobrasil/chronic_chagas_disease_prediction/) were developed to aid in individual risk estimation.

  12. Vfold: a web server for RNA structure and folding thermodynamics prediction.

    PubMed

    Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie

    2014-01-01

    The ever-increasing discovery of non-coding RNAs leads to unprecedented demand for the accurate modeling of RNA folding, including the predictions of two-dimensional (base pair) and three-dimensional all-atom structures and folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with the different intra-loop mismatches, and evaluates the free energies using the experimental parameters for the base stacks and the loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model) for the loops. To predict the three-dimensional structure, the server assembles the motif scaffolds using structure templates extracted from the known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user-friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at "http://rna.physics.missouri.edu".
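    The Vfold energy model (experimental stacking parameters plus coarse-grained loop entropies) cannot be reproduced in a few lines, but the general shape of two-dimensional structure prediction by dynamic programming can be illustrated with the classic Nussinov base-pair maximization algorithm. This is a textbook stand-in for illustration only, not the Vfold model:

```python
def nussinov(seq, min_loop=3):
    """Maximum base-pair count by dynamic programming (Nussinov).
    min_loop enforces a minimal hairpin loop length."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:        # k pairs with j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

    For the hairpin "GGGAAACCC" this finds three G-C pairs closing an AAA loop; free-energy minimization, as in Vfold, replaces the pair count with a thermodynamic score over the same recursion structure.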

  13. Assessment of turbulent models for scramjet flowfields

    NASA Technical Reports Server (NTRS)

    Sindir, M. M.; Harsha, P. T.

    1982-01-01

    The behavior of several turbulence models applied to the prediction of scramjet combustor flows is described. These models include the basic two-equation model, the multiple dissipation length scale variant of the two-equation model, and the algebraic stress model (ASM). Predictions were made of planar backward-facing step flows and axisymmetric sudden expansion flows using each of these approaches. The formulation of each of these models is discussed, and the application of the different approaches to supersonic flows is described. A modified version of the ASM is found to provide the best prediction of the planar backward-facing step flow in the region near the recirculation zone, while the basic ASM provides the best results downstream of the recirculation. Aspects of the interaction of numerical modeling and turbulence modeling as they affect the assessment of turbulence models are discussed.

  14. Prediction of early weight gain during psychotropic treatment using a combinatorial model with clinical and genetic markers.

    PubMed

    Vandenberghe, Frederik; Saigí-Morgui, Núria; Delacrétaz, Aurélie; Quteineh, Lina; Crettol, Séverine; Ansermot, Nicolas; Gholam-Rezaee, Mehdi; von Gunten, Armin; Conus, Philippe; Eap, Chin B

    2016-12-01

    Psychotropic drugs can induce significant (>5%) weight gain (WG) as early as 1 month into treatment, which is a good predictor of major WG at 3 and 12 months. The large interindividual variability of drug-induced WG can be explained in part by genetic and clinical factors. The aim of this study was to determine whether extensive analysis of genes, in addition to clinical factors, can improve prediction of patients at risk for more than 5% WG at 1 month of treatment. Data were obtained from a 1-year naturalistic longitudinal study, with weight monitoring during weight-inducing psychotropic treatment. A total of 248 Caucasian psychiatric patients, with at least baseline and 1-month weight measures, and with compliance ascertained, were included. Results were tested for replication in a second cohort including 32 patients. Age and baseline BMI were associated significantly with strong WG. The area under the curve (AUC) of the final model including genetic (18 genes) and clinical variables was significantly greater than that of the model including clinical variables only (AUCfinal: 0.92, AUCclinical: 0.75, P<0.0001). Prediction accuracy increased by 17% with genetic markers (Accuracyfinal: 87%), indicating that six patients must be genotyped to avoid one misclassified patient. The validity of the final model was confirmed in a replication cohort. Patients predicted before treatment as having more than 5% WG after 1 month of treatment had 4.4% more WG over 1 year than patients predicted to have up to 5% WG (P≤0.0001). These results may help to implement genetic testing before starting psychotropic drug treatment to identify patients at risk of important WG.
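    The model comparison above rests on the area under the ROC curve, which has a convenient rank interpretation: the probability that a randomly chosen case outscores a randomly chosen non-case. That makes a dependency-free sketch straightforward; the data below are synthetic, not the study's, and the score names are ours:

```python
import random

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: probability that a
    randomly chosen positive outscores a randomly chosen negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
labels = [1 if random.random() < 0.3 else 0 for _ in range(600)]
noise = [random.gauss(0, 1) for _ in range(600)]
# A weaker "clinical-only" score and a stronger "clinical + genetic" score
clinical = [0.5 * l + e for l, e in zip(labels, noise)]
combined = [1.5 * l + e for l, e in zip(labels, noise)]
```

    With these settings, adding the stronger signal lifts the AUC, mirroring the study's reported gain from adding genetic markers to the clinical model.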

  15. Sound transmission in the chest under surface excitation - An experimental and computational study with diagnostic applications

    PubMed Central

    Peng, Ying; Dai, Zoujun; Mansy, Hansen A.; Sandler, Richard H.; Balk, Robert A.; Royston, Thomas J.

    2014-01-01

    Chest physical examination often includes performing chest percussion, which involves introducing sound stimulus to the chest wall and detecting an audible change. This approach relies on observations that underlying acoustic transmission, coupling, and resonance patterns can be altered by chest structure changes due to pathologies. More accurate detection and quantification of these acoustic alterations may provide further useful diagnostic information. To elucidate the physical processes involved, a realistic computer model of sound transmission in the chest is helpful. In the present study, a computational model was developed and validated by comparing its predictions with results from animal and human experiments which involved applying acoustic excitation to the anterior chest while detecting skin vibrations at the posterior chest. To investigate the effect of pathology on sound transmission, the computational model was used to simulate the effects of pneumothorax on sounds introduced at the anterior chest and detected at the posterior. Model predictions and experimental results showed similar trends. The model also predicted wave patterns inside the chest, which may be used to assess results of elastography measurements. Future animal and human tests may expand the predictive power of the model to include acoustic behavior for a wider range of pulmonary conditions. PMID:25001497

  16. Self-Regulation and Recall: Growth Curve Modeling of Intervention Outcomes for Older Adults

    PubMed Central

    West, Robin L.; Hastings, Erin C.

    2013-01-01

    Memory training has often been supported as a potential means to improve performance for older adults. Less often studied are the characteristics of trainees that benefit most from training. Using a self-regulatory perspective, the current project examined a latent growth curve model to predict training-related gains for middle-aged and older adult trainees from individual differences (e.g., education), information processing skills (strategy use) and self-regulatory factors such as self-efficacy, control, and active engagement in training. For name recall, a model including strategy usage and strategy change as predictors of memory gain, along with self-efficacy and self-efficacy change, showed comparable fit to a more parsimonious model including only self-efficacy variables as predictors. The best fit to the text recall data was a model focusing on self-efficacy change as the main predictor of memory change, and that model showed significantly better fit than a model also including strategy usage variables as predictors. In these models, overall performance was significantly predicted by age and memory self-efficacy, and subsequent training-related gains in performance were best predicted directly by change in self-efficacy (text recall), or indirectly through the impact of active engagement and self-efficacy on gains (name recall). These results underscore the benefits of targeting self-regulatory factors in intervention programs designed to improve memory skills. PMID:21604891

  17. Self-regulation and recall: growth curve modeling of intervention outcomes for older adults.

    PubMed

    West, Robin L; Hastings, Erin C

    2011-12-01

    Memory training has often been supported as a potential means to improve performance for older adults. Less often studied are the characteristics of trainees that benefit most from training. Using a self-regulatory perspective, the current project examined a latent growth curve model to predict training-related gains for middle-aged and older adult trainees from individual differences (e.g., education), information processing skills (strategy use) and self-regulatory factors such as self-efficacy, control, and active engagement in training. For name recall, a model including strategy usage and strategy change as predictors of memory gain, along with self-efficacy and self-efficacy change, showed comparable fit to a more parsimonious model including only self-efficacy variables as predictors. The best fit to the text recall data was a model focusing on self-efficacy change as the main predictor of memory change, and that model showed significantly better fit than a model also including strategy usage variables as predictors. In these models, overall performance was significantly predicted by age and memory self-efficacy, and subsequent training-related gains in performance were best predicted directly by change in self-efficacy (text recall), or indirectly through the impact of active engagement and self-efficacy on gains (name recall). These results underscore the benefits of targeting self-regulatory factors in intervention programs designed to improve memory skills.

  18. Predictive Modeling of Risk Factors and Complications of Cataract Surgery

    PubMed Central

    Gaskin, Gregory L; Pershing, Suzann; Cole, Tyler S; Shah, Nigam H

    2016-01-01

    Purpose To quantify the relationship between aggregated preoperative risk factors and cataract surgery complications, as well as to build a model predicting outcomes on an individual level, given a constellation of demographic, baseline, preoperative, and intraoperative patient characteristics. Setting Stanford Hospital and Clinics between 1994 and 2013. Design Retrospective cohort study. Methods Patients age 40 or older who received cataract surgery between 1994 and 2013. Risk factors, complications, and demographic information were extracted from the Electronic Health Record (EHR), based on International Classification of Diseases, 9th edition (ICD-9) codes, Current Procedural Terminology (CPT) codes, drug prescription information, and text data mining using natural language processing. We used a bootstrapped least absolute shrinkage and selection operator (LASSO) model to identify highly predictive variables. We built random forest classifiers for each complication to create predictive models. Results Our data corroborated existing literature on postoperative complications, including the association of intraoperative complications, complex cataract surgery, black race, and/or prior eye surgery with an increased risk of any postoperative complication. We also found a number of other, less well-described risk factors, including systemic diabetes mellitus, young age (<60 years old), and hyperopia as risk factors for complex cataract surgery and intra- and postoperative complications. Our predictive models based on aggregated risk factors outperformed existing published models. Conclusions The constellations of risk factors and complications described here can guide new avenues of research and provide specific, personalized risk assessment for a patient considering cataract surgery. The predictive capacity of our models can enable risk stratification of patients, which has utility as a teaching tool as well as informing quality/value-based reimbursements. PMID:26692059
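    The variable-selection step described above relies on the fact that an L1 penalty drives the coefficients of uninformative predictors exactly to zero. A minimal coordinate-descent LASSO on synthetic data, as a toy stand-in for the study's bootstrapped LASSO (without the bootstrap or the subsequent random forest stage):

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the core of the L1 update."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO by coordinate descent; columns of X assumed standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding predictor j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam)
    return beta

# Synthetic data: only 2 of 6 predictors actually matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
X = (X - X.mean(0)) / X.std(0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

beta = lasso_cd(X, y, lam=0.15)
selected = np.flatnonzero(np.abs(beta) > 1e-6)
```

    The penalty zeroes out the four noise predictors while keeping (slightly shrunken) estimates for the two informative ones, which is exactly the screening behavior exploited before fitting the classifiers.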

  19. Predicting tibiotalar and subtalar joint angles from skin-marker data with dual-fluoroscopy as a reference standard.

    PubMed

    Nichols, Jennifer A; Roach, Koren E; Fiorentino, Niccolo M; Anderson, Andrew E

    2016-09-01

    Evidence suggests that the tibiotalar and subtalar joints provide near six degree-of-freedom (DOF) motion. Yet, kinematic models frequently assume one DOF at each of these joints. In this study, we quantified the accuracy of kinematic models to predict joint angles at the tibiotalar and subtalar joints from skin-marker data. Models included 1 or 3 DOF at each joint. Ten asymptomatic subjects, screened for deformities, performed 1.0m/s treadmill walking and a balanced, single-leg heel-rise. Tibiotalar and subtalar joint angles calculated by inverse kinematics for the 1 and 3 DOF models were compared to those measured directly in vivo using dual-fluoroscopy. Results demonstrated that, for each activity, the average errors in tibiotalar joint angles predicted by the 1 DOF model were significantly smaller than those predicted by the 3 DOF model for inversion/eversion and internal/external rotation. In contrast, neither model consistently demonstrated smaller errors when predicting subtalar joint angles. Additionally, neither model could accurately predict discrete angles for the tibiotalar and subtalar joints on a per-subject basis. Differences between model predictions and dual-fluoroscopy measurements were highly variable across subjects, with joint angle errors in at least one rotation direction surpassing 10° for 9 out of 10 subjects. Our results suggest that both the 1 and 3 DOF models can predict trends in tibiotalar joint angles on a limited basis. However, as currently implemented, neither model can predict discrete tibiotalar or subtalar joint angles for individual subjects. Inclusion of subject-specific attributes may improve the accuracy of these models. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Modelling Influence and Opinion Evolution in Online Collective Behaviour

    PubMed Central

    Gend, Pascal; Rentfrow, Peter J.; Hendrickx, Julien M.; Blondel, Vincent D.

    2016-01-01

    Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants’ past behaviour. Several situations reflecting data availability are compared. When the data is scarce, the data from previous participants is used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed. The measure is based on a specific control experiment. More than two-thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection. PMID:27336834
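    Consensus models of the kind described above revise a judgment as a convex combination of the previous opinion and the social signal, weighted by the individual's influenceability. A minimal sketch (the exact update rule in the paper may differ; `alpha` is our name for the influenceability parameter):

```python
def revise(own, social, alpha):
    """One judgment revision: alpha in [0, 1] is the influenceability.
    alpha = 0 ignores the social signal; alpha = 1 adopts it outright."""
    return (1 - alpha) * own + alpha * social

def step(opinions, alphas):
    """Each participant revises toward the current group mean."""
    mean = sum(opinions) / len(opinions)
    return [revise(x, mean, a) for x, a in zip(opinions, alphas)]

opinions = [0.0, 0.5, 1.0]
alphas = [0.2, 0.5, 0.9]   # heterogeneous influenceability
for _ in range(100):
    opinions = step(opinions, alphas)
```

    Because every influenceability is strictly positive, repeated revision contracts the spread of opinions geometrically toward a consensus value; the less influenceable participants pull that value toward their initial judgments.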

  1. Plant water potential improves prediction of empirical stomatal models.

    PubMed

    Anderegg, William R L; Wolf, Adam; Arango-Velez, Adriana; Choat, Brendan; Chmura, Daniel J; Jansen, Steven; Kolb, Thomas; Li, Shan; Meinzer, Frederick; Pita, Pilar; Resco de Dios, Víctor; Sperry, John S; Wolfe, Brett T; Pacala, Stephen

    2017-01-01

    Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.

  2. NASA Iced Aerodynamics and Controls Current Research

    NASA Technical Reports Server (NTRS)

    Addy, Gene

    2009-01-01

    This slide presentation reviews the state of current research in the area of aerodynamics and aircraft control in icing conditions by the Aviation Safety Program, part of the Integrated Resilient Aircraft Controls Project (IRAC). Included in the presentation is an overview of the modeling efforts. The objective of the modeling is to develop experimental and computational methods to model and predict aircraft response during adverse flight conditions, including icing. The aircraft icing modeling efforts include Ice-Contaminated Aerodynamics Modeling, which examines the effects of ice contamination on aircraft aerodynamics, including CFD modeling of ice-contaminated aircraft aerodynamics, and Advanced Ice Accretion Process Modeling, which examines the physics of ice accretion and works on computational modeling of ice accretions. The IRAC testbed, a Generic Transport Model (GTM), and its use in the investigation of the effects of icing on its aerodynamics are also reviewed. This work has led to a more thorough understanding of icing physics and ice accretion for airframes, theoretical and empirical models of both, advanced 3D ice accretion prediction codes, CFD methods for iced aerodynamics, and a better understanding of iced-aircraft aerodynamics and its effects on control surface effectiveness.

  3. Prediction of BP reactivity to talking using hybrid soft computing approaches.

    PubMed

    Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar

    2014-01-01

    High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is important in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least squares support vector machine (LS-SVM) models to remove the multicollinearity effect among anthropometric predictor variables. The statistical tests in terms of coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity as compared to the other models. This assessment demonstrates the importance and advantages of PCA-fused prediction models for the prediction of biological variables.
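    The role of PCA in these hybrid models is to replace correlated predictors (weight and BMI, for instance, are strongly related) with uncorrelated component scores before fitting the learner. A small illustration using ordinary least squares in place of the paper's ANN/ANFIS/LS-SVM stages (the data and all names here are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two highly collinear predictors plus one independent predictor.
n = 300
base = rng.normal(size=n)
X = np.column_stack([base + 0.05 * rng.normal(size=n),
                     base + 0.05 * rng.normal(size=n),
                     rng.normal(size=n)])
y = X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

# PCA: project standardized predictors onto the leading components.
Xs = (X - X.mean(0)) / X.std(0)
cov = np.cov(Xs, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
order = np.argsort(eigval)[::-1]
components = eigvec[:, order[:2]]           # keep 2 of 3 components
Z = Xs @ components                         # decorrelated scores

# Ordinary least squares on the (uncorrelated) component scores
coef, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
pred = Z @ coef + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    The component scores are uncorrelated by construction, so the downstream fit no longer suffers from the unstable coefficients that collinearity causes, while predictive accuracy is preserved.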

  4. Predicting Microstructure and Microsegregation in Multicomponent Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Yan, Xinyan; Ding, Ling; Chen, ShuangLin; Xie, Fanyou; Chu, M.; Chang, Y. Austin

    Accurate predictions of microstructure and microsegregation in metallic alloys are highly important for applications such as alloy design and process optimization. Restricted assumptions concerning the phase diagram could easily lead to erroneous predictions. The best approach is to couple microsegregation modeling with phase diagram computations. A newly developed numerical model for the prediction of microstructure and microsegregation in multicomponent alloys during dendritic solidification was introduced. The micromodel is directly coupled with phase diagram calculations using a user-friendly and robust phase diagram calculation engine, PANDAT. Solid state back diffusion, undercooling and coarsening effects are included in this model, and the experimentally measured cooling curves are used as the inputs to carry out the calculations. This model has been used to predict the microstructure and microsegregation in two multicomponent aluminum alloys, 2219 and 7050. The calculated values were confirmed using results obtained from directional solidification.
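    The baseline that micromodels of this kind extend is the classic Scheil-Gulliver relation, which assumes no back diffusion in the solid and complete mixing in the liquid; the model described above adds back diffusion, undercooling, and coarsening on top of the phase-diagram coupling. The baseline relation, as a sketch (this is the textbook limiting case, not the paper's model):

```python
def scheil_solid_conc(c0, k, fs):
    """Solid concentration at solid fraction fs under Scheil-Gulliver
    assumptions: Cs = k * C0 * (1 - fs)**(k - 1).
    c0: nominal alloy composition; k: equilibrium partition coefficient
    (k < 1 for a solute that segregates to the liquid)."""
    return k * c0 * (1.0 - fs) ** (k - 1.0)
```

    For k < 1 the last solid to form is strongly enriched in solute, which is the microsegregation that models like the one above quantify more realistically by relaxing the no-back-diffusion assumption.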

  5. Longitudinal predictors of high school completion.

    PubMed

    Barry, Melissa; Reschly, Amy L

    2012-06-01

    This longitudinal study examined predictors of dropout assessed in elementary school. Student demographic data, achievement, attendance, and ratings of behavior from the Behavior Assessment System for Children were used to predict dropout and completion. Two models, which varied on student sex and race, predicted dropout at rates ranging from 75% to 88%. Model A, which included the Behavioral Symptoms Index, School Problems composite, Iowa Tests of Basic Skills battery, and teacher ratings of student work habits, best predicted female and African American dropouts. Model B, which comprised the Adaptive Skills composite, the Externalizing composite, the School Problems composite, referral for a student support team meeting, and sex, was more accurate for predicting Caucasian dropouts. Both models demonstrated the same hit rates for predicting male dropouts. Recommendations for early warning indicators and linking predictors with interventions are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  6. The development of a probabilistic approach to forecast coastal change

    USGS Publications Warehouse

    Lentz, Erika E.; Hapke, Cheryl J.; Rosati, Julie D.; Wang, Ping; Roberts, Tiffany M.

    2011-01-01

    This study demonstrates the applicability of a Bayesian probabilistic model as an effective tool in predicting post-storm beach changes along sandy coastlines. Volume change and net shoreline movement are modeled for two study sites at Fire Island, New York in response to two extratropical storms in 2007 and 2009. Both study areas include modified areas adjacent to unmodified areas in morphologically different segments of coast. Predicted outcomes are evaluated against observed changes to test model accuracy and uncertainty along 163 cross-shore transects. Results show strong agreement in the cross validation of predictions vs. observations, with 70-82% accuracies reported. Although no consistent spatial pattern in inaccurate predictions could be determined, the highest prediction uncertainties appeared in locations that had been recently replenished. Further testing and model refinement are needed; however, these initial results show that Bayesian networks have the potential to serve as important decision-support tools in forecasting coastal change.

  7. Predictors of cultural capital on science academic achievement at the 8th grade level

    NASA Astrophysics Data System (ADS)

    Misner, Johnathan Scott

    The purpose of the study was to determine if students' cultural capital is a significant predictor of 8th grade science achievement test scores in urban locales. Cultural capital refers to the knowledge used and gained by the dominant class, which allows social and economic mobility. Cultural capital variables include magazines at home and parental education level. Other variables analyzed include socioeconomic status (SES), gender, and English language learner (ELL) status. This non-experimental study analyzed the results of the 2011 eighth grade science National Assessment of Educational Progress (NAEP). The researcher analyzed the data using a multivariate stepwise regression analysis. The researcher concluded that the addition of cultural capital factors significantly increased the predictive power of the model in which magazines in the home, gender, ELL classification, parental education level, and SES were the independent variables and science achievement was the dependent variable. For alpha = 0.05, the overall test for the model produced an R² value of 0.232; the model therefore predicted 23.2% of the variance in science achievement results. Other major findings include: higher measures of home resources predicted higher 2011 NAEP eighth grade science achievement; males were predicted to have higher 2011 NAEP eighth grade science achievement; students classified as ELL were predicted to score lower on the 2011 NAEP eighth grade science achievement; higher parental education predicted higher NAEP eighth grade science achievement; lower measures of SES predicted lower 2011 NAEP eighth grade science achievement. This study contributed to the research in this field by identifying cultural capital factors that have statistical significance in predicting eighth grade science achievement results, which can lead to strategies to help improve science academic achievement among underserved populations.

  8. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.

    PubMed

    Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M

    2015-09-01

    Predicting the secondary structure of RNA is an intermediate step in predicting RNA three-dimensional structure. Determining RNA secondary structure from sequence commonly uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict the free energy contributions of dinucleotide bulges. To determine whether a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested that the energy contributions of dinucleotide bulges are sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of the nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and the new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
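    The penalty scheme described in the abstract can be sketched as a small lookup function. The free energy values are those reported above; the function name and input encoding are illustrative, not taken from the paper's software.

```python
# Sequence-dependent free energy penalty for a dinucleotide bulge,
# per the values quoted in the abstract. Bulge is given 5'->3'.
PURINES = {"A", "G"}

def bulge_penalty(bulge, au_adjacent=False, gu_adjacent=False):
    """Free energy penalty (kcal/mol) of a dinucleotide bulge."""
    n1, n2 = bulge[0], bulge[1]
    if n1 in PURINES and n2 in PURINES:
        dg = 3.06            # two purines
    elif n1 not in PURINES and n2 not in PURINES:
        dg = 2.93            # two pyrimidines
    elif n1 in PURINES:
        dg = 2.71            # 5'-purine-pyrimidine-3'
    else:
        dg = 2.41            # 5'-pyrimidine-purine-3'
    if au_adjacent:
        dg += 0.45           # penalty for an adjacent A-U pair
    if gu_adjacent:
        dg -= 0.28           # bonus for an adjacent G-U pair
    return dg
```

    For example, a 5'-AA-3' bulge flanked by an A-U pair would be assigned 3.06 + 0.45 = 3.51 kcal/mol under this scheme.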

  9. Pavement Performance : Approaches Using Predictive Analytics

    DOT National Transportation Integrated Search

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  10. Epidemic patch models applied to pandemic influenza: contact matrix, stochasticity, robustness of predictions.

    PubMed

    Lunelli, Antonella; Pugliese, Andrea; Rizzo, Caterina

    2009-07-01

    Due to the recent emergence of the H5N1 virus, the modelling of pandemic influenza has become a relevant issue. Here we present an SEIR model formulated to simulate a possible outbreak in Italy, analysing its structure and, more generally, the effect of including specific details in a model. These details concern population heterogeneities, such as age and spatial distribution, as well as stochasticity, which regulates the epidemic dynamics when the number of infectives is low. We discuss and motivate the specific modelling choices made when building the model and investigate how the model details influence the predicted dynamics. Our analysis may help in deciding which elements of complexity are worth including in the design of a deterministic model for pandemic influenza, balancing, on the one hand, keeping the model computationally efficient and the number of parameters low and, on the other, maintaining the necessary realistic features.
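    A minimal deterministic SEIR backbone of the kind discussed above can be sketched in a few lines. The parameter values and population size here are illustrative only, not the values fitted for the Italian scenario.

```python
# Forward-Euler integration of the deterministic SEIR equations:
#   S -> E at rate beta*S*I/N, E -> I at rate sigma*E, I -> R at rate gamma*I.
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Return the final (S, E, I, R) compartment sizes."""
    s, e, i, r = float(s0), float(e0), float(i0), float(r0)
    n = s + e + i + r
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i / n * dt   # new infections
        new_infectious = sigma * e * dt       # end of latency
        new_recovered = gamma * i * dt        # recoveries
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r

# Illustrative run: R0 = beta/gamma = 1.5, 2-day latency, 3-day infectious period.
s, e, i, r = seir(beta=0.5, sigma=1/2, gamma=1/3,
                  s0=9_990, e0=0, i0=10, r0=0, days=300)
```

    A stochastic counterpart, as the abstract notes, would replace each deterministic flow with a random draw, which matters when the number of infectives is low.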

  11. Sociodemographic Factors Associated With Changes in Successful Aging in Spain: A Follow-Up Study.

    PubMed

    Domènech-Abella, Joan; Perales, Jaime; Lara, Elvira; Moneta, Maria Victoria; Izquierdo, Ana; Rico-Uribe, Laura Alejandra; Mundó, Jordi; Haro, Josep Maria

    2017-06-01

    Successful aging (SA) refers to maintaining well-being in old age. Several definitions or models of SA exist (biomedical, psychosocial, and mixed). We examined the longitudinal association between various SA models and sociodemographic factors, and analyzed the patterns of change within these models. This was a nationally representative follow-up study in Spain including 3,625 individuals aged ≥50 years, of whom 1,970 were interviewed after 3 years. Linear regression models were used to analyze the survey data. Age, sex, and occupation predicted SA in the biomedical model, while marital status, educational level, and urbanicity predicted SA in the psychosocial model. The remaining models included different sets of these predictors as significant. In the psychosocial model, individuals tended to improve over time, but this was not the case in the biomedical model. The biomedical and psychosocial components of SA need to be addressed specifically to achieve the best aging trajectories.

  12. Sensitivity of Above-Ground Biomass Estimates to Height-Diameter Modelling in Mixed-Species West African Woodlands

    PubMed Central

    Aynekulu, Ermias; Pitkänen, Sari; Packalen, Petteri

    2016-01-01

    It has been suggested that above-ground biomass (AGB) inventories should include tree height (H), in addition to diameter (D). As H is a difficult variable to measure, H-D models are commonly used to predict H. We tested a number of approaches for H-D modelling, including additive terms which increased the complexity of the model, and observed how differences in tree-level predictions of H propagated to plot-level AGB estimations. We were especially interested in detecting whether the choice of method can lead to bias. The compared approaches listed in the order of increasing complexity were: (B0) AGB estimations from D-only; (B1) involving also H obtained from a fixed-effects H-D model; (B2) involving also species; (B3) including also between-plot variability as random effects; and (B4) involving multilevel nested random effects for grouping plots in clusters. In light of the results, the modelling approach affected the AGB estimation significantly in some cases, although differences were negligible for some of the alternatives. The most important differences were found between including H or not in the AGB estimation. We observed that AGB predictions without H information were very sensitive to the environmental stress parameter (E), which can induce a critical bias. Regarding the H-D modelling, the most relevant effect was found when species was included as an additive term. We presented a two-step methodology, which succeeded in identifying the species for which the general H-D relation was relevant to modify. Based on the results, our final choice was the single-level mixed-effects model (B3), which accounts for the species but also for the plot random effects reflecting site-specific factors such as soil properties and degree of disturbance. PMID:27367857
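    The simplest of the compared approaches, a fixed-effects H-D model (B1 above), can be sketched as a log-log power-law fit. The tree data below are made up for illustration; the paper's models also add species terms and plot-level random effects, which this sketch omits.

```python
import math

# Fixed-effects height-diameter model fitted as a power law, H = exp(a) * D**b,
# via ordinary least squares on the log-log scale.
def fit_power_law(d_obs, h_obs):
    """Return (a, b) from OLS on ln(H) = a + b*ln(D)."""
    x = [math.log(d) for d in d_obs]
    y = [math.log(h) for h in h_obs]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

def predict_height(d, a, b):
    """Predicted height (m) for diameter d (cm)."""
    return math.exp(a) * d ** b

# Hypothetical sample plot: diameters (cm) and measured heights (m).
d_obs = [5, 10, 15, 20, 30]
h_obs = [4.0, 6.5, 8.2, 9.6, 11.8]
a, b = fit_power_law(d_obs, h_obs)
```

    Predicted heights from such a model would then feed the tree-level AGB equation in place of missing field measurements, which is exactly where prediction errors can propagate to plot-level AGB.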

  13. Predicting hydrological response to forest changes by simple statistical models: the selection of the best indicator of forest changes with a hydrological perspective

    NASA Astrophysics Data System (ADS)

    Ning, D.; Zhang, M.; Ren, S.; Hou, Y.; Yu, L.; Meng, Z.

    2017-01-01

    Forests play an important role in the hydrological cycle, and forest changes inevitably affect runoff across multiple spatial scales. The selection of a suitable indicator of forest change is essential for predicting forest-related hydrological responses. This study used the Meijiang River, one of the headwaters of Poyang Lake, as an example to identify the best indicator of forest change for predicting forest change-induced hydrological responses. Correlation analysis was conducted first to detect the relationships between monthly runoff and its predictive variables, including antecedent monthly precipitation and indicators of forest change (forest coverage and the vegetation indices EVI, NDVI, and NDWI); using the predictive variables most correlated with monthly runoff, multiple linear regression models were then developed. The best-performing model identified in this study included two independent variables: antecedent monthly precipitation and NDWI. This indicates that NDWI is the best indicator of forest change for hydrological prediction, while forest coverage, the most commonly used indicator of forest change, is insignificantly related to monthly runoff. This highlights the value of vegetation indices such as NDWI for indicating forest changes in hydrological studies. The study provides an efficient way to quantify the hydrological impact of large-scale forest changes in the Meijiang River watershed, which is crucial for downstream water resource management and ecological protection in the Poyang Lake basin.
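    The two-variable regression described above has the form runoff = b0 + b1·precip + b2·NDWI. A minimal sketch of fitting it by least squares follows; the data and coefficients are synthetic, not the paper's fitted values.

```python
# Multiple linear regression via the normal equations, solved with
# Gaussian elimination (pure Python, small design matrices only).
def fit_ols(rows, y):
    """Return [intercept, b1, b2, ...] for least-squares fit of y on rows."""
    X = [[1.0] + list(r) for r in rows]   # prepend intercept column
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    v = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                  # elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (v[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic example: runoff generated exactly as 5 + 0.8*precip + 30*NDWI.
precip = [50, 80, 120, 60, 150, 90, 110, 70]
ndwi = [0.10, 0.20, 0.35, 0.15, 0.40, 0.25, 0.30, 0.18]
runoff = [5 + 0.8 * p + 30 * n for p, n in zip(precip, ndwi)]
beta = fit_ols(list(zip(precip, ndwi)), runoff)
```

    Because the synthetic runoff is an exact linear combination, the fit recovers the generating coefficients; with real data the residuals would drive model comparison as in the study.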

  14. Predictor characteristics necessary for building a clinically useful risk prediction model: a simulation study.

    PubMed

    Schummers, Laura; Himes, Katherine P; Bodnar, Lisa M; Hutcheon, Jennifer A

    2016-09-21

    Compelled by the intuitive appeal of predicting each individual patient's risk of an outcome, there is a growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke's r²) for each model. Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20% and odds ratio ≥8, or three predictors with prevalence ≥10% and odds ratios ≥4). Area under the receiver operating characteristic curve values of >0.8 were necessary to achieve reasonable risk stratification capacity. Our findings provide a guide for researchers to estimate the expected performance of a prediction model, before the model has been built, from the characteristics of the available predictors.
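    The core simulation step above, generating a binary predictor with a pre-set prevalence and univariable odds ratio, can be sketched as follows. The baseline risk and parameter values are illustrative, not those of the preeclampsia cohort.

```python
import random

# Simulate (predictor, outcome) pairs where the predictor has a chosen
# prevalence and multiplies the baseline odds of the outcome by odds_ratio.
def simulate(n, prevalence, odds_ratio, baseline_risk, seed=1):
    random.seed(seed)
    base_odds = baseline_risk / (1 - baseline_risk)
    data = []
    for _ in range(n):
        x = 1 if random.random() < prevalence else 0
        odds = base_odds * (odds_ratio if x else 1.0)
        p = odds / (1 + odds)
        y = 1 if random.random() < p else 0
        data.append((x, y))
    return data

def empirical_or(data):
    """Cross-product odds ratio from the 2x2 table of (x, y) pairs."""
    a = sum(1 for x, y in data if x and y)          # exposed cases
    b = sum(1 for x, y in data if x and not y)      # exposed non-cases
    c = sum(1 for x, y in data if not x and y)      # unexposed cases
    d = sum(1 for x, y in data if not x and not y)  # unexposed non-cases
    return (a * d) / (b * c)

# One predictor at the "useful" threshold quoted above: prevalence 20%, OR 8.
data = simulate(n=200_000, prevalence=0.2, odds_ratio=8.0, baseline_risk=0.05)
```

    Repeating this for several simulated predictors and refitting the model is what lets the study map predictor characteristics to achievable discrimination.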

  15. Spatial Prediction of Coxiella burnetii Outbreak Exposure via Notified Case Counts in a Dose-Response Model.

    PubMed

    Brooke, Russell J; Kretzschmar, Mirjam E E; Hackert, Volker; Hoebe, Christian J P A; Teunis, Peter F M; Waller, Lance A

    2017-01-01

    We develop a novel approach to study a 2009 outbreak of Q fever in the Netherlands by combining a human dose-response model with geostatistical prediction to relate the probability of infection, and the associated probability of illness, to an effective dose of Coxiella burnetii. The spatial distribution of the 220 notified cases in the at-risk population is translated into a smooth spatial field of dose. Based on these symptomatic cases, the dose-response model predicts a median of 611 asymptomatic infections (95% range: 410, 1,084) for the 220 reported symptomatic cases in the at-risk population, i.e., 2.78 (95% range: 1.86, 4.93) asymptomatic infections for each reported case. The low attack rates observed during the outbreak range from (Equation is included in full-text article.) to (Equation is included in full-text article.). The estimated peak levels of exposure extend to the north-east from the point source, with an increasing proportion of asymptomatic infections further from the source. Our work combines established methodology from model-based geostatistics and dose-response modeling, allowing a novel approach to studying outbreaks. Unobserved infections and the spatially varying effective dose can be predicted using this flexible framework without assuming any underlying spatial structure of the outbreak process. Such predictions are important for targeting interventions during an outbreak, estimating future disease burden, and determining acceptable risk levels.
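    A common single-hit exponential dose-response curve illustrates the kind of link between effective dose and infection probability used in such analyses. The per-organism infection parameter r below is hypothetical, not the value fitted for C. burnetii in the paper.

```python
import math

# Single-hit exponential dose-response model: each organism independently
# causes infection with probability r, so P(infection | dose) = 1 - exp(-r*dose).
def p_infection(dose, r=0.01):
    return 1.0 - math.exp(-r * dose)

# Given a predicted dose surface (one effective dose per grid cell) and the
# at-risk population per cell, sum the expected number of infections.
def expected_infections(doses, population_per_cell, r=0.01):
    return sum(p_infection(d, r) * population_per_cell for d in doses)
```

    In the study's framework the dose surface itself comes from geostatistical prediction conditioned on the notified cases; this sketch only shows the dose-response half of the pipeline.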

  16. Applicability of linear regression equation for prediction of chlorophyll content in rice leaves

    NASA Astrophysics Data System (ADS)

    Li, Yunmei

    2005-09-01

    A modeling approach is used to assess the applicability of derived equations capable of predicting the chlorophyll content of rice leaves at a given view direction. Two radiative transfer models are used in the study: the PROSPECT model, operated at leaf level, and the FCR model, operated at canopy level. The study consisted of three steps: (1) simulation of bidirectional reflectance from canopies with different leaf chlorophyll contents, leaf area indices (LAI), and understorey configurations; (2) establishment of prediction relations for chlorophyll content by stepwise regression; and (3) assessment of the applicability of these relations. The results show that the accuracy of prediction is affected by understorey configuration; however, the accuracy improves greatly as LAI increases.

  17. ZY3-02 Laser Altimeter Footprint Geolocation Prediction

    PubMed Central

    Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui

    2017-01-01

    Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration based on ground infrared detectors, and it is difficult because the sample frequency of the ZY3-02 laser altimeter is 2 Hz and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model, with reference to the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments were conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, providing a reference for the geolocation prediction of future land laser detectors in other laser altimeter calibration tests. PMID:28934160

  18. ZY3-02 Laser Altimeter Footprint Geolocation Prediction.

    PubMed

    Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui

    2017-09-21

    Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration based on ground infrared detectors, and it is difficult because the sample frequency of the ZY3-02 laser altimeter is 2 Hz and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model, with reference to the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments were conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, providing a reference for the geolocation prediction of future land laser detectors in other laser altimeter calibration tests.

  19. Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-12-23

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
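    The threshold-model link at the heart of TGBLUP maps a continuous latent liability to the ordinal 1-5 disease scores by cutting it at fixed thresholds. The sketch below shows that link and the implied category probabilities under a standard-normal residual; the threshold values and liability means are illustrative, not the fitted ones.

```python
import bisect
import math

# Fixed liability thresholds cutting the latent scale into 5 ordinal categories.
THRESHOLDS = (-1.5, -0.5, 0.5, 1.5)

def liability_to_score(liability, thresholds=THRESHOLDS):
    """Map a latent liability to an ordinal score 1..len(thresholds)+1."""
    return bisect.bisect_right(thresholds, liability) + 1

def _norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def category_probs(mu, thresholds=THRESHOLDS):
    """P(score = k) when liability ~ Normal(mu, 1): differences of CDF cuts."""
    cuts = [0.0] + [_norm_cdf(t - mu) for t in thresholds] + [1.0]
    return [cuts[i + 1] - cuts[i] for i in range(len(thresholds) + 1)]
```

    In the full TGBLUP model the liability mean mu would itself be a genomic prediction (with G×E terms in the interaction models), and the thresholds would be estimated rather than fixed.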

  20. A Predictive Model of Student Loan Default at a Two-Year Community College

    ERIC Educational Resources Information Center

    Brown, Chanda Denea

    2015-01-01

    This study explored whether a predictive model of student loan default could be developed with data from an institution's three-year cohort default rate report. The study used borrower data provided by a large two-year community college. Independent variables under investigation included total undergraduate Stafford student loan debt, total number…
