Sample records for prediction model combining

  1. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination are explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The chaotic prediction model (SA) uses recently developed techniques of nonlinear dynamics to predict solar activity, and can predict activity only up to the predictability horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond it, chaotic predictions would theoretically be only as good as statistical predictions. Chaos theory thus places a fundamental limit on predictability.

  2. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
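
The simplest two techniques named above, SMA and WAM, can be sketched as follows. Fitting the WAM weights by unconstrained least squares against observations over a training period is an illustrative assumption; the paper does not specify its weight-estimation procedure here.

```python
import numpy as np

def simple_multimodel_average(predictions):
    """Simple Multi-model Average (SMA): unweighted mean across models.

    predictions: array of shape (n_models, n_times).
    """
    return np.mean(predictions, axis=0)

def weighted_average(predictions, observations):
    """Weighted Average Method (WAM) sketch: per-model weights fitted by
    least squares against observations over a training period."""
    # Solve min_w || predictions.T @ w - observations ||^2
    w, *_ = np.linalg.lstsq(predictions.T, observations, rcond=None)
    return predictions.T @ w, w
```

With a biased and an unbiased member, the fitted weights concentrate on the unbiased one, while SMA inherits half the bias.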

  3. Predicting Drug Combination Index and Simulating the Network-Regulation Dynamics by Mathematical Modeling of Drug-Targeted EGFR-ERK Signaling Pathway

    NASA Astrophysics Data System (ADS)

    Huang, Lu; Jiang, Yuyang; Chen, Yuzong

    2017-01-01

    Synergistic drug combinations enable enhanced therapeutics. Their discovery typically involves the measurement and assessment of drug combination index (CI), which can be facilitated by the development and applications of in-silico CI predictive tools. In this work, we developed and tested the ability of a mathematical model of drug-targeted EGFR-ERK pathway in predicting CIs and in analyzing multiple synergistic drug combinations against observations. Our mathematical model was validated against the literature reported signaling, drug response dynamics, and EGFR-MEK drug combination effect. The predicted CIs and combination therapeutic effects of the EGFR-BRaf, BRaf-MEK, FTI-MEK, and FTI-BRaf inhibitor combinations showed consistent synergism. Our results suggest that existing pathway models may be potentially extended for developing drug-targeted pathway models to predict drug combination CI values, isobolograms, and drug-response surfaces as well as to analyze the dynamics of individual and combinations of drugs. With our model, the efficacy of potential drug combinations can be predicted. Our method complements the developed in-silico methods (e.g. the chemogenomic profile and the statistically-inferenced network models) by predicting drug combination effects from the perspectives of pathway dynamics using experimental or validated molecular kinetic constants, thereby facilitating the collective prediction of drug combination effects in diverse ranges of disease systems.
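
The combination index that the model above predicts is conventionally defined by the Chou-Talalay (Loewe additivity) formula; this minimal function states that definition. Note the paper predicts CI from pathway dynamics rather than computing it from measured doses, so this is background, not the paper's method.

```python
def combination_index(d1, d2, Dx1, Dx2):
    """Chou-Talalay combination index at a fixed effect level.

    d1, d2: doses of the two drugs used together to reach the effect;
    Dx1, Dx2: doses of each drug alone producing the same effect.
    CI < 1 suggests synergy, CI ~ 1 additivity, CI > 1 antagonism.
    """
    return d1 / Dx1 + d2 / Dx2
```

For example, if each drug alone needs 4 dose units but 1 + 1 units in combination suffice, CI = 0.5, indicating synergy.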

  4. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them in predicting streamflow generated from a known hydrologic model ("abcd" model or VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all models, whereas MM-O always assigns higher weights to the candidate model that performed best during the calibration period. Applying the multimodel algorithms to predict streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
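
The idea of weighting models contingent on the predictor state can be sketched as below. Binning the state into terciles and using inverse-MSE weights within each bin are illustrative assumptions, not the published MM-1 algorithm.

```python
import numpy as np

def state_contingent_weights(preds, obs, state, n_bins=3):
    """Sketch of state-contingent multimodel combination: weight each
    model by its inverse mean-squared error computed within bins of a
    predictor state (e.g., terciles of antecedent conditions).

    preds: (n_models, n_times); obs, state: (n_times,).
    Returns the combined prediction over the training period.
    """
    edges = np.quantile(state, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(state, edges[1:-1]), 0, n_bins - 1)
    combined = np.empty_like(obs, dtype=float)
    for b in range(n_bins):
        mask = bins == b
        mse = np.mean((preds[:, mask] - obs[mask]) ** 2, axis=1)
        w = 1.0 / np.maximum(mse, 1e-12)   # guard against zero error
        w /= w.sum()
        combined[mask] = w @ preds[:, mask]
    return combined
```

When one candidate is accurate in a given state bin, nearly all weight in that bin flows to it; as errors grow comparable across models, the weights approach equality, mirroring the behavior reported above.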

  5. The Role of Multimodel Combination in Improving Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Li, W.

    2008-12-01

    Model errors are an inevitable part of any prediction exercise. One approach that is currently gaining attention for reducing model errors is optimally combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights can be derived for each model so that the resulting multimodel predictions offer improved predictability. In this study, we present a new approach to combine multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated using two different objective functions, to develop multimodel streamflow predictions. The performance of multimodel predictions is compared with individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model ('abcd' model or VIC model) with errors that are homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than multimodels under almost no model error. Under increased model error, the multimodel consistently performed better than the single model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow in comparison to the single model predictions.

  6. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  7. TOXICO-CHEMINFORMATICS AND QSAR MODELING OF ...

    EPA Pesticide Factsheets

    This abstract concludes that QSAR approaches combined with toxico-chemoinformatics descriptors can enhance predictive toxicology models.

  8. Predictors of transitions from single to multiple job holding: Results of a longitudinal study among employees aged 45-64 in the Netherlands.

    PubMed

    Bouwhuis, Stef; Geuskens, Goedele A; Boot, Cécile R L; Bongers, Paulien M; van der Beek, Allard J

    2017-08-01

    To construct prediction models for transitions to combination multiple job holding (MJH) (holding multiple jobs as an employee) and hybrid MJH (being an employee and self-employed) among employees aged 45-64. A total of 5187 employees in the Netherlands completed online questionnaires annually between 2010 and 2013. We applied logistic regression analyses with a backward elimination strategy to construct prediction models. Transitions to combination MJH and hybrid MJH were best predicted by a combination of factors including demographics, health and mastery, work characteristics, work history, skills and knowledge, social factors, and financial factors. Not having a permanent contract and a poor household financial situation predicted both transitions. Some predictors only predicted combination MJH (e.g., working part-time) or hybrid MJH (e.g., work-home interference). A wide variety of factors predict combination MJH and/or hybrid MJH. The prediction model approach allowed for the identification of predictors that have not been previously studied. © 2017 Wiley Periodicals, Inc.

  9. Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

    USGS Publications Warehouse

    Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

    2009-01-01

    This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. 
    Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.
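
The trimmed-mean combination described above (averaging only the central four or six predictions each day) is straightforward to sketch:

```python
import numpy as np

def trimmed_mean_ensemble(preds, keep=4):
    """Trimmed-mean ensemble: for each day (column), sort the model
    predictions and average only the central `keep` values, discarding
    the extremes on both sides."""
    s = np.sort(preds, axis=0)
    lo = (preds.shape[0] - keep) // 2
    return s[lo:lo + keep].mean(axis=0)
```

Because the extremes are discarded daily, a single badly behaved model on a given day cannot drag the ensemble prediction with it, which is one reason even weak members can safely contribute.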

  10. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first-order, one-variable grey differential equation model (abbreviated as the GM(1,1) model) from grey system theory with the Autoregressive Integrated Moving Average (ARIMA) time series model from statistical theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
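
A minimal sketch of the GM(1,1) component follows, using the standard construction (least squares on the whitened equation of the accumulated series); the ARIMA stage that the paper combines with it is omitted.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model sketch: fit dx1/dt + a*x1 = b on the
    accumulated series x1 = cumsum(x0), then difference the analytic
    solution back to the original scale. Returns fitted values for the
    sample plus `steps` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])
```

On near-exponential data the grey model tracks the series closely; the hybrid in the paper then lets ARIMA model what GM(1,1) leaves behind.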

  11. Incorporating single-side sparing in models for predicting parotid dose sparing in head and neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Lulin, E-mail: lulin.yuan@duke.edu; Wu, Q. Jackie; Yin, Fang-Fang

    2014-02-15

    Purpose: Sparing of a single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of the dose sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician-prescribed single-side parotid sparing preferences. The single-side sparing model was trained with the cases that had single-side sparing preferences, while the standard model was trained with the remainder of the cases. A receiver operating characteristic (ROC) analysis was performed to determine the best criterion separating the two case groups, using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes the single-side sparing into account by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement in prediction accuracy of the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19. For the bilateral sparing cases, the combined and the standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy for both models (p = 0.81). For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined model differ from actual values by only 2.2 Gy (p = 0.005). Similarly, the sum of residues between the modeled and the actual plan DVHs is the same for the bilateral sparing cases for both models (p = 0.67), while the standard model predicts significantly higher DVHs than the combined model for the single-side sparing cases (p = 0.01). Conclusions: The combined model for predicting parotid sparing that takes single-side sparing into account improves the prediction accuracy over the previous model.

  13. ACToR-AGGREGATED COMPUTATIONAL TOXICOLOGY ...

    EPA Pesticide Factsheets

    One goal of the field of computational toxicology is to predict chemical toxicity by combining computer models with biological and toxicological data.

  14. Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)

    NASA Astrophysics Data System (ADS)

    OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.

    2016-02-01

    Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high-skill predictions; however, even higher skill can be achieved if forecast model ensembles are combined correctly. While many methods for weighting models exist, choosing the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km² region, and day, given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach for producing extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models. The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach most often have the highest rank probability skill score.

  15. Rapid biochemical methane potential prediction of urban organic waste with near-infrared reflectance spectroscopy.

    PubMed

    Fitamo, T; Triolo, J M; Boldrin, A; Scheutz, C

    2017-08-01

    The anaerobic digestibility of various biomass feedstocks in biogas plants is determined with biochemical methane potential (BMP) assays. However, experimental BMP analysis is time-consuming and costly, which makes it challenging to optimise stock management and feeding to achieve improved biogas production. The aim of the present study is to develop a fast and reliable model based on near-infrared reflectance spectroscopy (NIRS) for the BMP prediction of urban organic waste (UOW). The model comprised 87 UOW samples. Additionally, 88 plant biomass samples were included to develop a combined model for predicting BMP. The coefficient of determination (R²) and root mean square error of prediction (RMSEP) of the UOW model were 0.88 and 44 mL CH₄/g VS, while those of the combined model were 0.89 and 50 mL CH₄/g VS. The two individual models performed better than the combined version. The BMP prediction with NIRS was satisfactory and moderately successful. Copyright © 2017 Elsevier Ltd. All rights reserved.
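
The two reported scores follow the standard definitions and can be reproduced as below (a sketch; the paper's exact calibration/validation split is not shown):

```python
import numpy as np

def r2_rmsep(y_true, y_pred):
    """Coefficient of determination (R²) and root mean square error of
    prediction (RMSEP) for a set of reference vs predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 1.0 - ss_res / ss_tot, rmsep
```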

  16. Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model

    NASA Astrophysics Data System (ADS)

    Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.

    2013-12-01

    This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy across radial distances from inside the local acceleration region to beyond geosynchronous orbit. The model coupling also makes it possible to avoid accounting for seed electron variations at the outer boundary. Furthermore, combining a convection code with the VERB and NARMAX models has the potential to provide even greater forecasting accuracy that is not limited to geostationary orbit but extends predictions across the entire outer radiation belt region.

  17. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    PubMed Central

    Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang

    2014-01-01

    In order to build a combined model that captures the variation of death toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, the Verhulst model was built from the number of road traffic accident deaths in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a combined prediction model with weight coefficients. The Shapley value method was applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the number of deaths from 2002 to 2011, and was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data but also quantify the degree of influence of each factor on the death toll, and had high accuracy as well as strong practicability. PMID:25610454

  18. Combined prediction model of death toll for road traffic accidents based on independent and dependent variables.

    PubMed

    Feng, Zhong-xiang; Lu, Shi-sheng; Zhang, Wei-hua; Zhang, Nan-nan

    2014-01-01

    In order to build a combined model that captures the variation of death toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, the Verhulst model was built from the number of road traffic accident deaths in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a combined prediction model with weight coefficients. The Shapley value method was applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the number of deaths from 2002 to 2011, and was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data but also quantify the degree of influence of each factor on the death toll, and had high accuracy as well as strong practicability.

  19. Combined heat transfer and kinetic models to predict cooking loss during heat treatment of beef meat.

    PubMed

    Kondjoyan, Alain; Oillic, Samuel; Portanguen, Stéphane; Gros, Jean-Bernard

    2013-10-01

    A heat transfer model was used to simulate the temperature in three dimensions inside the meat. This model was combined with first-order kinetic models to predict cooking losses. Identification of the parameters of the kinetic models and first validations were performed in a water bath. Afterwards, the performance of the combined model was determined in a fan-assisted oven under different air/steam conditions. Accurate knowledge of the heat transfer coefficient values and consideration of the retraction of the meat pieces are needed to predict meat temperature. This is important since the temperature at the center of the product is often used to determine the cooking time. The combined model was also able to predict cooking losses from meat pieces of different sizes subjected to different air/steam conditions. It was found that under the studied conditions, most of the water loss comes from the juice expelled by protein denaturation and contraction, not from evaporation. Copyright © 2013 Elsevier Ltd. All rights reserved.
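
The coupling of a temperature history (from the heat transfer model) with first-order loss kinetics can be sketched as below. The Arrhenius temperature dependence and all parameter values are illustrative assumptions, not the fitted constants of the paper.

```python
import numpy as np

def first_order_loss(t, temp_fn, L_eq=0.3, k_ref=0.01, Ea=80e3, T_ref=350.0):
    """First-order kinetic cooking-loss sketch driven by a temperature
    history: the loss fraction L relaxes toward an equilibrium value
    L_eq at an Arrhenius-type rate k(T). Integrated by explicit Euler.

    t: time grid (s); temp_fn: maps time to temperature (K).
    """
    R = 8.314  # gas constant, J/(mol K)
    L, out = 0.0, [0.0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T = temp_fn(t[i])
        k = k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
        L += dt * k * (L_eq - L)       # explicit Euler step
        out.append(L)
    return np.array(out)
```

At a constant hold temperature the numerical solution approaches the analytic curve L_eq(1 - exp(-kt)), so the loss rises monotonically toward its equilibrium value, as in the water-bath identification step.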

  20. A method for grounding grid corrosion rate prediction

    NASA Astrophysics Data System (ADS)

    Han, Juan; Du, Jingyi

    2017-06-01

    Because grounding grid corrosion involves a variety of factors, its prediction is complex and the data acquisition process is uncertain. We therefore propose a grounding grid corrosion rate prediction model that combines EAHP (extended AHP) with the fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each corrosion factor of the grounding grid; because different sample classes contribute differently to the corrosion rate, the nearness principle is then combined to predict the corrosion rate. Application results show that the model captures data variation well, improving its validity and yielding higher prediction precision.
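
The weighting step builds on the standard AHP eigenvector computation, which can be sketched as follows (the "extended" refinements and the fuzzy nearness classification are not reproduced here):

```python
import numpy as np

def ahp_weights(judgment):
    """Factor weights from the principal eigenvector of an AHP
    pairwise-comparison (judgment) matrix, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(judgment)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()
```

For a perfectly consistent judgment matrix (entry i,j equal to w_i/w_j) the method recovers the underlying weights exactly; in practice one would also check the consistency ratio before trusting the result.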

  1. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model reduces the prediction risk of a single model and improves prediction precision. The geographical distribution of the reference value of Chinese adults' QT dispersion was precisely mapped using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combinatorial model, and the reference value of QT dispersion of Chinese adults anywhere in China can be obtained from the geographical distribution map.

  2. Force Modelling in Orthogonal Cutting Considering Flank Wear Effect

    NASA Astrophysics Data System (ADS)

    Rathod, Kanti Bhikhubhai; Lalwani, Devdas I.

    2017-05-01

    In the present work, an attempt has been made to provide a predictive cutting force model for orthogonal cutting by combining two force models: a force model for a sharp tool extended to account for the edge radius, and a force model for a worn tool. The first force model is based on Oxley's predictive machining theory for orthogonal cutting; because Oxley's model assumes a perfectly sharp tool, the effect of the cutting edge radius (hone radius) is added and an improved model is presented. The second force model accounts for a worn tool (flank wear) and was proposed by Waldorf. Further, the developed combined force model is also used to predict flank wear width using an inverse approach. The performance of the developed combined total force model is compared with previously published results for AISI 1045 and AISI 4142 materials, and reasonably good agreement is found.

  3. Bi-objective integer programming for RNA secondary structure prediction with pseudoknots.

    PubMed

    Legendre, Audrey; Angel, Eric; Tahi, Fariza

    2018-01-15

    RNA structure prediction is an important field in bioinformatics, and numerous methods and tools have been proposed. Pseudoknots are specific motifs of RNA secondary structures that are difficult to predict. Almost all existing methods are based on a single model and return one solution, often missing the real structure. An alternative approach is to combine different models and return a (small) set of solutions, maximizing its quality and diversity in order to increase the probability that it contains the real structure. We propose here an original method for predicting RNA secondary structures with pseudoknots, based on integer programming. We developed a generic bi-objective integer programming algorithm that returns optimal and sub-optimal solutions while simultaneously optimizing two models. This algorithm was then applied to the combination of two known models of RNA secondary structure prediction, namely MEA and MFE. The resulting tool, called BiokoP, is compared with the other methods in the literature. The results show that the best solution (the structure with the highest F1-score) is, in most cases, given by BiokoP. Moreover, the results of BiokoP are homogeneous, regardless of the pseudoknot type or the presence or absence of pseudoknots: the F1-scores are always higher than 70% for any number of solutions returned. The results obtained by BiokoP show that combining the MEA and MFE models, as well as returning several optimal and several sub-optimal solutions, improves the prediction of secondary structures. One perspective of our work is to combine better mono-criterion models, in particular a model based on the comparative approach with the MEA and MFE models. This will lead to the future development of a new multi-objective algorithm for combining more than two models. BiokoP is available on the EvryRNA platform: https://EvryRNA.ibisc.univ-evry.fr .
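
    As a toy illustration of the bi-objective idea, the set of solutions worth returning is the Pareto front: candidates that no other candidate beats in both objectives at once. A brute-force sketch (candidate names and scores are invented, not BiokoP's algorithm):

```python
def pareto_front(solutions):
    """Return the non-dominated solutions when both objectives are maximized;
    a bi-objective solver returns (a subset of) this front."""
    front = []
    for name, (f, g) in solutions.items():
        dominated = any(f2 >= f and g2 >= g and (f2, g2) != (f, g)
                        for f2, g2 in solutions.values())
        if not dominated:
            front.append(name)
    return front

# Hypothetical candidate structures scored by two models (MEA-like, MFE-like).
solutions = {"s1": (5, 1), "s2": (4, 4), "s3": (1, 5), "s4": (3, 3)}
front = pareto_front(solutions)
```

Here "s4" is dominated by "s2" in both objectives and is dropped; the other three represent different trade-offs and are all worth returning.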

  4. Prediction of first-trimester preeclampsia: Relevance of the oxidative stress marker MDA in a combination model with PP-13, PAPP-A and beta-HCG.

    PubMed

    Asiltas, Burak; Surmen-Gur, Esma; Uncu, Gurkan

    2018-02-27

    Early diagnosis of preeclampsia (PE) is very important, and various parameters, individually or in combined models, are reported to be useful for the prediction of PE. The objective of this study is to investigate the predictive value of pregnancy-associated plasma protein-A (PAPP-A), placental protein-13 (PP-13), human chorionic gonadotropin (B-HCG), and the oxidative stress marker malondialdehyde (MDA), individually and in combination. Maternal sera of 38 cases with PE and 122 controls were collected for first trimester screening and tested for PAPP-A and B-HCG by chemiluminescence, for PP-13 by ELISA, and for MDA by high-performance liquid chromatography. Combined models of parameters were constituted as "MDA + PP-13", "PP-13 + PAPP-A + B-HCG" and "MDA + PP-13 + PAPP-A + B-HCG". The diagnostic performances of serum markers of preeclampsia were examined by nonparametric receiver operating characteristic (ROC) analysis. PP-13 levels were significantly lower (p < 0.001) and MDA levels were significantly higher (p < 0.001) in PE. The areas under the ROC curve (AUC) for MDA and PP-13 were greater than those for PAPP-A and B-HCG (p < 0.001). The AUCs of the combined models were significantly larger than those of individual parameters. The combined model "MDA + PP-13 + PAPP-A + B-HCG" exhibited the best predictive outcome with an AUC of 0.91 [95% CI 0.86-0.95], 97% [95% CI 86.2-99.9] sensitivity and 75% [95% CI 66.5-82.6] specificity, and was significantly different from the "PAPP-A + PP-13 + B-HCG" model, but similar to the "MDA + PP-13" model. Combined models consisting of various parameters of different origin may provide better predictive outcomes, and oxidative markers should be considered in combination with other placental biomarkers in the prediction of PE. Copyright © 2018 Elsevier B.V. All rights reserved.
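
    AUC values like those reported above can be computed without any curve fitting, as the Mann-Whitney U statistic: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with invented marker scores:

```python
def auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly
    (ties count one half). Equivalent to the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented combined-marker scores for PE cases (1) and controls (0).
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
```

With these toy numbers 12 of the 16 case-control pairs are ordered correctly, giving an AUC of 0.75.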

  5. A hadoop-based method to predict potential effective drug combination.

    PubMed

    Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing

    2014-01-01

    Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented data preprocessing and built support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drugs as the number of possible combinations grows exponentially in the future. The source code and datasets are available upon request.
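
    The MapReduce pattern this approach relies on can be shown in miniature with plain Python: a map phase emits key-value pairs and a reduce phase aggregates per key, and each side parallelizes naturally across a Hadoop cluster. The drug records below are invented, and the real pipeline maps gene-expression features rather than simple counts:

```python
from collections import defaultdict
from itertools import combinations

def map_phase(records):
    """Map: emit (drug_pair, 1) for every pair co-occurring in a record."""
    for drugs in records:
        for pair in combinations(sorted(drugs), 2):
            yield pair, 1

def reduce_phase(mapped):
    """Reduce: sum counts per key (done per key in parallel on a cluster)."""
    counts = defaultdict(int)
    for key, value in mapped:
        counts[key] += value
    return dict(counts)

# Hypothetical treatment records listing drugs given together.
records = [("aspirin", "warfarin"),
           ("aspirin", "warfarin", "statin"),
           ("statin", "aspirin")]
pair_counts = reduce_phase(map_phase(records))
```

On Hadoop the mapper and reducer run as separate distributed tasks; the shuffle stage between them groups values by key, which the dictionary stands in for here.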

  6. A Hadoop-Based Method to Predict Potential Effective Drug Combination

    PubMed Central

    Xiong, Yi; Xu, Qian; Wei, Dongqing

    2014-01-01

    Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented data preprocessing and built support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drugs as the number of possible combinations grows exponentially in the future. The source code and datasets are available upon request. PMID:25147789

  7. A combined-slip predictive control of vehicle stability with experimental verification

    NASA Astrophysics Data System (ADS)

    Jalali, Milad; Hashemi, Ehsan; Khajepour, Amir; Chen, Shih-ken; Litkouhi, Bakhtiar

    2018-02-01

    In this paper, a model predictive vehicle stability controller is designed based on a combined-slip LuGre tyre model. Variations in the lateral tyre forces due to changes in tyre slip ratios are considered in the prediction model of the controller. It is observed that the proposed combined-slip controller takes advantage of the more accurate tyre model and can adjust tyre slip ratios based on the lateral forces of the front axle. This results in an interesting closed-loop response that challenges the notion of braking only the wheels on one side of the vehicle in differential braking. The performance of the proposed controller is evaluated in software simulations and is compared to a similar pure-slip controller. Furthermore, experimental tests are conducted on a rear-wheel-drive electric Chevrolet Equinox equipped with differential brakes to evaluate the closed-loop response of the model predictive controller.

  8. A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shengzhi; Ming, Bo; Huang, Qiang

    Accurate prediction of NDVI (Normalized Difference Vegetation Index) is critically important, as it helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
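
    A compact sketch of the entropy weight idea: models whose performance varies more across cases carry more information and receive larger weights. The performance matrix below is invented, and the study's exact normalization may differ:

```python
import math

def entropy_weights(perf):
    """Entropy weights from a cases-by-models performance matrix
    (higher entries = better). Returns one weight per column (model)."""
    n = len(perf)
    weights = []
    for col in zip(*perf):
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(max(0.0, 1.0 - e))  # divergence; clamp FP noise
    s = sum(weights)
    return [w / s for w in weights]

# Invented accuracies of three models over three validation cases.
perf = [[0.90, 0.6, 0.5],
        [0.80, 0.6, 0.1],
        [0.85, 0.6, 0.9]]
w = entropy_weights(perf)
```

The constant second column carries no information and gets zero weight, while the most variable third column gets the largest weight.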

  9. Predicting combined sewer overflows chamber depth using artificial neural networks with rainfall radar data.

    PubMed

    Mounce, S R; Shepherd, W; Sailor, G; Shucksmith, J; Saul, A J

    2014-01-01

    Combined sewer overflows (CSOs) represent a common feature in combined urban drainage systems and are used to discharge excess water to the environment during heavy storms. To better understand the performance of CSOs, the UK water industry has installed a large number of monitoring systems that provide data for these assets. This paper presents research into the prediction of the hydraulic performance of CSOs using artificial neural networks (ANN) as an alternative to hydraulic models. Previous work has explored using an ANN model for the prediction of chamber depth using time series for depth and rain gauge data. Rainfall intensity data that can be provided by rainfall radar devices can be used to improve on this approach. Results are presented using real data from a CSO for a catchment in the North of England, UK. An ANN model trained with the pseudo-inverse rule was shown to be capable of predicting CSO depth with less than 5% error for predictions more than 1 hour ahead for unseen data. Such predictive approaches are important to the future management of combined sewer systems.
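
    The "pseudo-inverse rule" mentioned above can be illustrated with a single-hidden-layer network whose random hidden layer is fixed and whose output weights are solved in closed form via the Moore-Penrose pseudo-inverse. This is an extreme-learning-machine-style sketch on a synthetic depth signal, not the paper's network or data:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_pinv_net(X, y, hidden=40):
    """Random fixed hidden layer; output weights solved by pseudo-inverse."""
    W = rng.normal(scale=3.0, size=(X.shape[1], hidden))
    b = rng.uniform(-3.0, 3.0, size=hidden)
    H = np.tanh(X @ W + b)            # hidden activations
    beta = np.linalg.pinv(H) @ y      # closed-form least-squares solution
    return W, b, beta

def predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for chamber depth as a function of rainfall intensity.
X = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.5
err = float(np.mean((predict(train_pinv_net(X, y), X) - y) ** 2))
```

Because only the output layer is trained, fitting is a single linear solve rather than iterative backpropagation, which is the appeal of pseudo-inverse training.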

  10. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, with consequences for the economy and everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities quickly restore outages and limit their adverse consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology developed to improve outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to investigate the most effective variable combination for outage prediction purposes. Although the final variable combination is drawn from different weather models, this multi-weather-forcing, multi-statistical-model power outage prediction outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is the closest to the actual atmospheric state.
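
    The per-variable selection step can be sketched simply: for each variable, keep the weather model whose historical analyses best match observations, here judged by RMSE (model names are reused from the abstract, but all numbers are invented):

```python
import math

def best_source_per_variable(analyses, observed):
    """For each variable, pick the model with the lowest RMSE vs observations.
    analyses: {model: {variable: [values]}}; observed: {variable: [values]}."""
    choice = {}
    for var, obs in observed.items():
        def rmse(model):
            vals = analyses[model][var]
            return math.sqrt(sum((v - o) ** 2 for v, o in zip(vals, obs))
                             / len(obs))
        choice[var] = min(analyses, key=rmse)
    return choice

# Hypothetical wind-gust and precipitation analyses from two weather models.
analyses = {
    "WRF":  {"gust": [10, 12, 14], "precip": [1.0, 2.5, 0.2]},
    "RAMS": {"gust": [11, 15, 18], "precip": [1.1, 2.1, 0.4]},
}
observed = {"gust": [10, 13, 14], "precip": [1.2, 2.0, 0.5]}
choice = best_source_per_variable(analyses, observed)
```

The resulting mixed-source variable set is what feeds the downstream outage classifier.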

  11. Machine learning in updating predictive models of planning and scheduling transportation projects

    DOT National Transportation Integrated Search

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  12. Bootstrap study of genome-enabled prediction reliabilities using haplotype blocks across Nordic Red cattle breeds.

    PubMed

    Cuyabano, B C D; Su, G; Rosa, G J M; Lund, M S; Gianola, D

    2015-10-01

    This study compared the accuracy of genome-enabled prediction models using individual single nucleotide polymorphisms (SNP) or haplotype blocks as covariates when using either a single breed or a combined population of Nordic Red cattle. The main objective was to compare predictions of breeding values of complex traits using a combined training population with haplotype blocks against predictions using a single breed as training population and individual SNP as predictors. To compare the prediction reliabilities, bootstrap samples were taken from the test data set. With the bootstrapped samples of prediction reliabilities, we built and graphed confidence ellipses to allow comparisons. Finally, measures of statistical distance were used to calculate the gain in predictive ability. Our analyses are innovative in the context of assessment of predictive models, allowing a better understanding of prediction reliabilities and providing a statistical basis for assessing whether one prediction scenario is indeed more accurate than another. An ANOVA indicated that use of haplotype blocks produced significant gains mainly when Bayesian mixture models were used, but not when Bayesian BLUP was fitted to the data. Furthermore, when haplotype blocks were used to train prediction models in a combined Nordic Red cattle population, we obtained up to a statistically significant 5.5% average gain in prediction accuracy over predictions using individual SNP and training the model with a single breed. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
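
    A percentile-bootstrap sketch of how prediction reliabilities can be resampled from a test set to get comparison intervals (the accuracy statistic and data below are illustrative, not the study's confidence-ellipse construction):

```python
import random

def bootstrap_ci(pairs, stat, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for a statistic of (predicted, observed) pairs."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]  # resample with replacement
        stats.append(stat(sample))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def accuracy(sample):
    """Fraction of predictions within 0.5 units of the observation."""
    return sum(abs(p - o) <= 0.5 for p, o in sample) / len(sample)

# Hypothetical (predicted, observed) pairs for one prediction scenario.
pairs = [(0.1 * i, 0.1 * i + (0.3 if i % 3 else 0.8)) for i in range(60)]
lo, hi = bootstrap_ci(pairs, accuracy)
```

Two scenarios whose bootstrap intervals do not overlap give the kind of statistically grounded comparison the study describes.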

  13. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species specific and assemblage based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched well to those of collection sites. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species. This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.

  14. Large-scale exploration and analysis of drug combinations.

    PubMed

    Li, Peng; Huang, Chao; Fu, Yingxue; Wang, Jinan; Wu, Ziyin; Ru, Jinlong; Zheng, Chunli; Guo, Zihu; Chen, Xuetong; Zhou, Wei; Zhang, Wenjuan; Li, Yan; Chen, Jianxin; Lu, Aiping; Wang, Yonghua

    2015-06-15

    Drug combinations are a promising strategy for combating complex diseases by improving efficacy and reducing corresponding side effects. Currently, a widely studied problem in pharmacology is to predict effective drug combinations, either through empirical screening in the clinic or pure experimental trials. However, the large-scale prediction of drug combinations by a systems method is rarely considered. We report a systems pharmacology framework to predict drug combinations (PreDCs) built on a computational model, termed the probability ensemble approach (PEA), for analysis of both the efficacy and adverse effects of drug combinations. First, a Bayesian network integrated with a similarity algorithm is developed to model the combinations from drug molecular and pharmacological phenotypes, and the predictions are then assessed with both clinical efficacy and adverse effects. It is illustrated that PEA can predict the combination efficacy of drugs spanning different therapeutic classes with high specificity and sensitivity (AUC = 0.90), which was further validated by independent data or new experimental assays. PEA also evaluates the adverse effects (AUC = 0.95) quantitatively and detects the therapeutic indications for drug combinations. Finally, the PreDC database includes 1571 known and 3269 predicted optimal combinations as well as their potential side effects and therapeutic indications. The PreDC database is available at http://sm.nwsuaf.edu.cn/lsp/predc.php. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model, and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.

  16. Estimating community health needs against a Triple Aim background: What can we learn from current predictive risk models?

    PubMed

    Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk

    2015-05-01

    To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that are due to limitations in the FEM model.

  18. Analysis of Mining-Induced Subsidence Prediction by Exponent Knothe Model Combined with Insar and Leveling

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Liguo; Tang, Yixian; Zhang, Hong

    2018-04-01

    The principle of the exponent Knothe model is introduced in detail, and the variation of mining subsidence with time is analysed based on the formulas for subsidence, subsidence velocity and subsidence acceleration. Five scenes of radar images and six levelling measurements were collected to extract ground deformation characteristics of a coal mining area in this study. The unknown parameters of the exponent Knothe model were then estimated by combining levelling data with line-of-sight deformation information obtained by the InSAR technique. Comparing the fitting and prediction results obtained from combined InSAR and levelling data with those obtained from levelling alone shows that the accuracy of the combined approach is clearly better. Therefore, InSAR measurements can significantly improve the fitting and prediction accuracy of the exponent Knothe model.
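
    A sketch of fitting the two parameters of a Knothe-type time function, w(t) = w0(1 - e^(-ct)), to deformation observations by least squares (a coarse grid search on synthetic data; the paper's joint InSAR + levelling estimation is more sophisticated):

```python
import math

def knothe(t, w0, c):
    """Knothe-type time function: subsidence approaching final value w0."""
    return w0 * (1.0 - math.exp(-c * t))

def fit_knothe(times, obs):
    """Coarse grid search for (w0, c) minimizing the squared misfit."""
    best = None
    for i in range(100, 301):            # w0 from 1.00 to 3.00 m
        for j in range(1, 101):          # c from 0.01 to 1.00 per year
            w0, c = i * 0.01, j * 0.01
            sse = sum((knothe(t, w0, c) - o) ** 2
                      for t, o in zip(times, obs))
            if best is None or sse < best[0]:
                best = (sse, w0, c)
    return best[1], best[2]

# Synthetic observations generated with w0 = 2.0 m and c = 0.5 per year.
times = [0.5, 1.0, 2.0, 3.0, 5.0, 8.0]
obs = [knothe(t, 2.0, 0.5) for t in times]
w0_hat, c_hat = fit_knothe(times, obs)
```

With both levelling and InSAR in the misfit, the same search simply sums both data terms, which is how the extra observations tighten the parameter estimates.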

  19. Improving Flash Flood Prediction in Multiple Environments

    NASA Astrophysics Data System (ADS)

    Broxton, P. D.; Troch, P. A.; Schaffner, M.; Unkrich, C.; Goodrich, D.; Wagener, T.; Yatheendradas, S.

    2009-12-01

    Flash flooding is a major concern in many fast-responding headwater catchments. There are many efforts to model and predict these flood events, though it is not currently possible to adequately predict the nature of flash flood events with a single model; furthermore, many of these efforts do not even consider snow, which can, by itself or in combination with rainfall, cause destructive floods. The current research is aimed at broadening the applicability of flash flood modeling. Specifically, we will take a state-of-the-art flash flood model that is designed to work with warm season precipitation in arid environments, the KINematic runoff and EROSion model (KINEROS2), and combine it with a continuous subsurface flow model and an energy balance snow model. This should improve its predictive capacity in humid environments where lateral subsurface flow significantly contributes to streamflow, and it will make possible the prediction of flooding events that involve rain-on-snow or rapid snowmelt. By modeling changes in the hydrologic state of a catchment before a flood begins, we can also better understand the factors or combinations of factors that are necessary to produce large floods. Broadening the applicability of an already state-of-the-art flash flood model such as KINEROS2 is logical because flash floods can occur in all types of environments, and it may lead to better predictions, which are necessary to preserve life and property.

  20. Predicting the Impacts of Intravehicular Displays on Driving Performance with Human Performance Modeling

    NASA Technical Reports Server (NTRS)

    Mitchell, Diane Kuhl; Wojciechowski, Josephine; Samms, Charneta

    2012-01-01

    A challenge facing the U.S. National Highway Traffic Safety Administration (NHTSA), as well as international safety experts, is the need to educate car drivers about the dangers associated with performing distraction tasks while driving. Researchers working for the U.S. Army Research Laboratory have developed a technique for predicting the increase in mental workload that results when distraction tasks are combined with driving. They implement this technique using human performance modeling. They have predicted workload associated with driving combined with cell phone use. In addition, they have predicted the workload associated with driving military vehicles combined with threat detection. Their technique can be used by safety personnel internationally to demonstrate the dangers of combining distracter tasks with driving and to mitigate the safety risks.

  1. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations at different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combination of neural and computational modelling methodologies is at an early stage and requires further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. [Formulation of combined predictive indicators using logistic regression model in predicting sepsis and prognosis].

    PubMed

    Duan, Liwei; Zhang, Sheng; Lin, Zhaofen

    2017-02-01

    To explore the method and performance of using multiple indices to diagnose sepsis and to predict the prognosis of severe ill patients. Critically ill patients at first admission to intensive care unit (ICU) of Changzheng Hospital, Second Military Medical University, from January 2014 to September 2015 were enrolled if the following conditions were satisfied: (1) patients were 18-75 years old; (2) the length of ICU stay was more than 24 hours; (3) All records of the patients were available. Data of the patients was collected by searching the electronic medical record system. Logistic regression model was formulated to create the new combined predictive indicator and the receiver operating characteristic (ROC) curve for the new predictive indicator was built. The area under the ROC curve (AUC) for both the new indicator and original ones were compared. The optimal cut-off point was obtained where the Youden index reached the maximum value. Diagnostic parameters such as sensitivity, specificity and predictive accuracy were also calculated for comparison. Finally, individual values were substituted into the equation to test the performance in predicting clinical outcomes. A total of 362 patients (218 males and 144 females) were enrolled in our study and 66 patients died. The average age was (48.3±19.3) years old. (1) For the predictive model only containing categorical covariants [including procalcitonin (PCT), lipopolysaccharide (LPS), infection, white blood cells count (WBC) and fever], increased PCT, increased WBC and fever were demonstrated to be independent risk factors for sepsis in the logistic equation. The AUC for the new combined predictive indicator was higher than that of any other indictor, including PCT, LPS, infection, WBC and fever (0.930 vs. 0.661, 0.503, 0.570, 0.837, 0.800). The optimal cut-off value for the new combined predictive indicator was 0.518. 
Using the new indicator to diagnose sepsis, the sensitivity, specificity and diagnostic accuracy were 78.00%, 93.36% and 87.47%, respectively. One patient was randomly selected, and the clinical data were substituted into the probability equation for prediction. The calculated value was 0.015, below the cut-off value (0.518), indicating a predicted outcome of non-sepsis with an accuracy of 87.47%. (2) For the predictive model containing only continuous covariates, in the logistic model combining the acute physiology and chronic health evaluation II (APACHE II) score and the sequential organ failure assessment (SOFA) score to predict in-hospital death, both scores were independent risk factors for death. The AUC for the new predictive indicator was higher than that of the APACHE II score and the SOFA score (0.834 vs. 0.812, 0.813). The optimal cut-off value for the new combined predictive indicator in predicting in-hospital death was 0.236, and the corresponding sensitivity, specificity and diagnostic accuracy were 73.12%, 76.51% and 75.70%, respectively. One patient was randomly selected, and the APACHE II and SOFA scores were substituted into the probability equation for prediction. The calculated value was 0.570, above the cut-off value (0.236), indicating a predicted in-hospital death with an accuracy of 75.70%. The combined predictive indicator, formulated by logistic regression models, is superior to any single indicator in predicting sepsis or in-hospital death.
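
    The combination step described above can be sketched in a few lines: fit a logistic score over several markers, then choose the cut-off that maximises the Youden index (sensitivity + specificity - 1). This is a minimal illustration, not the study's fitted model; the coefficients, toy patients and variable names are invented.

```python
import math

def logistic_score(x, coefs, intercept):
    """Combined predictive indicator: logistic probability from markers."""
    z = intercept + sum(c * v for c, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def youden_cutoff(scores, labels):
    """Threshold maximising Youden's J = sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = 0.0, float("-inf")
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Invented (PCT, WBC, fever) triples with sepsis labels; the
# coefficients are illustrative, not those fitted in the study.
patients = [(0.2, 6, 0), (8.0, 15, 1), (0.1, 7, 0), (12.0, 18, 1)]
labels = [0, 1, 0, 1]
scores = [logistic_score(x, coefs=(0.3, 0.1, 1.2), intercept=-4.0)
          for x in patients]
cutoff = youden_cutoff(scores, labels)
```

    A new patient is then classified by comparing their computed score against the cut-off, as in the worked example in the abstract.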

  3. PDC-SGB: Prediction of effective drug combinations using a stochastic gradient boosting algorithm.

    PubMed

    Xu, Qian; Xiong, Yi; Dai, Hao; Kumari, Kotni Meena; Xu, Qin; Ou, Hong-Yu; Wei, Dong-Qing

    2017-03-21

    Combinatorial therapy is a promising strategy for combating complex diseases by improving efficacy and reducing side effects. To facilitate the identification of drug combinations in pharmacology, we proposed a new computational model, termed PDC-SGB, to predict effective drug combinations by integrating biological, chemical and pharmacological information based on a stochastic gradient boosting algorithm. To begin with, a set of 352 golden positive samples was collected from the public drug combination database. Then, a 732-dimensional feature vector incorporating biological, chemical and pharmaceutical information was constructed for each drug combination to describe its properties. To avoid overfitting, the maximum relevance & minimum redundancy (mRMR) method was applied to select informative features and remove redundant ones. Based on the selected features, three different types of classification algorithms were employed to build drug combination prediction models. Our results demonstrated that the model based on the stochastic gradient boosting algorithm yielded the best performance. Furthermore, the feature patterns of therapy were shown to have a powerful ability to discriminate effective drug combinations from non-effective ones. By analyzing various features, it is shown that features enriched in the golden positive samples can help predict novel drug combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
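
    The mRMR selection step can be illustrated with a greedy relevance-minus-redundancy ranking. A minimal sketch under simplifying assumptions: plain Pearson correlation stands in for the mutual-information criterion, and the feature names and values are invented.

```python
def pearson(a, b):
    """Pearson correlation; returns 0.0 when either column is constant."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def mrmr_rank(features, label, k):
    """Greedily pick k features: high relevance to the label,
    low mean correlation to the features already chosen."""
    chosen = []
    while len(chosen) < k:
        best, best_score = None, float("-inf")
        for name, col in features.items():
            if name in chosen:
                continue
            relevance = abs(pearson(col, label))
            redundancy = (sum(abs(pearson(col, features[c])) for c in chosen)
                          / len(chosen)) if chosen else 0.0
            if relevance - redundancy > best_score:
                best, best_score = name, relevance - redundancy
        chosen.append(best)
    return chosen

# Invented columns for four drug pairs; "chem2" duplicates "chem",
# so mRMR should rank it below the non-redundant "bio" feature.
feats = {
    "chem":  [0.1, 0.9, 0.2, 0.8],
    "chem2": [0.1, 0.9, 0.2, 0.8],
    "bio":   [0.5, 0.1, 0.6, 0.0],
}
label = [0, 1, 0, 1]   # effective (1) vs non-effective (0) combinations
ranked = mrmr_rank(feats, label, k=2)
```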

  4. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  5. Multiscale Modeling of Structurally-Graded Materials Using Discrete Dislocation Plasticity Models and Continuum Crystal Plasticity Models

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.

    2012-01-01

    A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.

  6. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
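
    The naive forward-model learner that PIAF improves upon can be sketched as a scalar recursive least-squares update. This is illustrative only (a scalar model, noise-free toy data, invented parameter names): it learns from raw observations, and is the kind of learner that would enter the delusional loop if its inputs were first filtered with an over-trusted forward model.

```python
def rls_scalar(pairs, w0=0.0, p0=100.0, noise_var=1.0):
    """Recursive least squares for a scalar forward model y ~ w*u.

    pairs: (motor command u, observed sensory consequence y).
    Returns the final estimate of w."""
    w, p = w0, p0                        # weight estimate and its variance
    for u, y in pairs:
        gain = p * u / (noise_var + p * u * u)
        w += gain * (y - w * u)          # correct by the prediction error
        p *= (1.0 - gain * u)            # shrink the uncertainty
    return w

# Noise-free toy data generated by a true forward model with w = 2.
true_w = 2.0
data = [(u, true_w * u) for u in (1.0, -0.5, 2.0, 0.7)]
w_hat = rls_scalar(data)
```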

  7. Real-time evaluation of polyphenol oxidase (PPO) activity in lychee pericarp based on weighted combination of spectral data and image features as determined by fuzzy neural network.

    PubMed

    Yang, Yi-Chao; Sun, Da-Wen; Wang, Nan-Nan; Xie, Anguo

    2015-07-01

    A novel method of using the hyperspectral imaging technique with a weighted combination of spectral data and image features determined by a fuzzy neural network (FNN) was proposed for real-time prediction of polyphenol oxidase (PPO) activity in lychee pericarp. Lychee images were obtained by a hyperspectral reflectance imaging system operating in the range of 400-1000 nm. A support vector machine-recursive feature elimination (SVM-RFE) algorithm was applied to eliminate variables with little or no predictive information from the full set of bands, resulting in a reduced set of optimal wavelengths. Spectral information at the optimal wavelengths and image color features were then used separately to develop calibration models for the prediction of PPO in pericarp during storage, and the results of the two models were compared. To improve the prediction accuracy, a decision strategy was developed based on a weighted combination of spectral data and image features, in which the weights were determined by FNN for a better estimation of PPO activity. The results showed that the combined decision model was the best among all of the calibration models, with high R² values of 0.9117 and 0.9072 and low RMSEs of 0.45% and 0.459% for calibration and prediction, respectively. These results demonstrate that the proposed weighted combined decision method has great potential for improving model performance. The proposed technique could be used for a better prediction of other internal and external quality attributes of fruits. Copyright © 2015 Elsevier B.V. All rights reserved.
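
    The decision strategy reduces to a weighted average of the two single-source predictions. A minimal sketch with invented numbers, using a fixed weight in place of the FNN-derived one:

```python
def fused_prediction(spectral_pred, image_pred, w_spectral=0.7):
    """Weighted combination of the two single-source predictions."""
    return w_spectral * spectral_pred + (1.0 - w_spectral) * image_pred

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Invented PPO-activity predictions from a spectral-only and an
# image-only model, plus invented reference values.
spectral = [0.42, 0.55, 0.30]
image = [0.50, 0.45, 0.40]
observed = [0.45, 0.52, 0.33]
fused = [fused_prediction(s, i) for s, i in zip(spectral, image)]
```

    On these toy numbers the fused prediction has lower RMSE than either single-source model, which is the behaviour the weighted combination is designed to deliver.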

  8. Reservoir water level forecasting using group method of data handling

    NASA Astrophysics Data System (ADS)

    Zaji, Amir Hossein; Bonakdari, Hossein; Gharabaghi, Bahram

    2018-06-01

    Accurately forecasted reservoir water levels are among the most vital data for efficient reservoir structure design and management. In this study, the group method of data handling is combined with the minimum description length method to develop a very practical and functional model for predicting reservoir water levels. The models' performance is evaluated using two groups of input combinations based on recent days and recent weeks. Four different input combinations are considered in total. The data collected from Chahnimeh#1 Reservoir in eastern Iran are used for model training and validation. To assess the models' applicability in practical situations, the models are made to predict a non-observed dataset for the nearby Chahnimeh#4 Reservoir. According to the results, input combinations (L, L-1) and (L, L-1, L-12) for recent days, with root-mean-squared errors (RMSE) of 0.3478 and 0.3767, respectively, outperform input combinations (L, L-7) and (L, L-7, L-14) for recent weeks, with RMSE of 0.3866 and 0.4378, respectively. Accordingly, (L, L-1) is selected as the best input combination for making 7-day ahead predictions of reservoir water levels.
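
    The recent-day versus recent-week input combinations are simply lag selections over the daily level series. A sketch of how the four datasets are assembled, using an invented toy series in place of the Chahnimeh#1 record:

```python
def make_dataset(levels, lags, horizon=7):
    """Build (inputs, targets): lagged levels -> level `horizon` days ahead."""
    rows, targets = [], []
    for t in range(max(lags), len(levels) - horizon):
        rows.append([levels[t - lag] for lag in lags])
        targets.append(levels[t + horizon])
    return rows, targets

# Invented periodic toy series standing in for daily water levels.
levels = [float(i % 20) for i in range(60)]

# Lag 0 is the current day L, so (0, 1) is (L, L-1) and (0, 7) is (L, L-7).
X_days, y_days = make_dataset(levels, lags=(0, 1))
X_weeks, y_weeks = make_dataset(levels, lags=(0, 7))
```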

  9. Interference Path Loss Prediction in A319/320 Airplanes Using Modulated Fuzzy Logic and Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha J.; Ely, Jay J.; Vahala, Linda L.

    2007-01-01

    In this paper, neural network (NN) modeling is combined with fuzzy logic to estimate Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data. Combining fuzzy logic and NN modeling is shown to improve estimates of measured data over estimates obtained with NN alone. A plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.

  10. Methodological issues in current practice may lead to bias in the development of biomarker combinations for predicting acute kidney injury.

    PubMed

    Meisner, Allison; Kerr, Kathleen F; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R

    2016-02-01

    Individual biomarkers of renal injury are only modestly predictive of acute kidney injury (AKI). Using multiple biomarkers has the potential to improve predictive capacity. In this systematic review, statistical methods of articles developing biomarker combinations to predict AKI were assessed. We identified and described three potential sources of bias (resubstitution bias, model selection bias, and bias due to center differences) that may compromise the development of biomarker combinations. Fifteen studies reported developing kidney injury biomarker combinations for the prediction of AKI after cardiac surgery (8 articles), in the intensive care unit (4 articles), or other settings (3 articles). All studies were susceptible to at least one source of bias and did not account for or acknowledge the bias. Inadequate reporting often hindered our assessment of the articles. We then evaluated, when possible (7 articles), the performance of published biomarker combinations in the TRIBE-AKI cardiac surgery cohort. Predictive performance was markedly attenuated in six out of seven cases. Thus, deficiencies in analysis and reporting are avoidable, and care should be taken to provide accurate estimates of risk prediction model performance. Hence, rigorous design, analysis, and reporting of biomarker combination studies are essential to realizing the promise of biomarkers in clinical practice.
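
    Resubstitution bias, the first source of bias above, is easy to reproduce: select the best of many noise "combinations" on one cohort and its apparent accuracy is optimistic, while performance on a fresh cohort falls back toward chance, mirroring the attenuation seen in the TRIBE-AKI re-evaluation. All data here are simulated noise.

```python
import random

def accuracy(scores, labels):
    """Fraction of cases where score > 0.5 matches the outcome."""
    return sum((s > 0.5) == bool(y) for s, y in zip(scores, labels)) / len(labels)

rng = random.Random(0)
labels_dev = [rng.randint(0, 1) for _ in range(40)]   # development cohort
labels_val = [rng.randint(0, 1) for _ in range(40)]   # validation cohort

# 200 candidate "biomarker combinations" that are pure noise.
candidates = [[rng.random() for _ in range(40)] for _ in range(200)]

# Select and evaluate on the same cohort (resubstitution).
best = max(candidates, key=lambda c: accuracy(c, labels_dev))
apparent = accuracy(best, labels_dev)     # optimistic by construction
validated = accuracy(best, labels_val)    # tends back toward ~0.5
```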

  11. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    PubMed

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.

  12. Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling.

    EPA Science Inventory

    Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling.Evans, M.V., Sawyer, M.E., Isaacs, K.K, and Wambaugh, J.With the development of efficient high-throughput (HT) in ...

  13. Prediction of clinical behaviour and treatment for cancers.

    PubMed

    Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola

    2003-01-01

    Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.

  14. Predictive model for survival in patients with gastric cancer.

    PubMed

    Goshayeshi, Ladan; Hoseini, Benyamin; Yousefli, Zahra; Khooie, Alireza; Etminani, Kobra; Esmaeilzadeh, Abbas; Golabpour, Amin

    2017-12-01

    Gastric cancer is one of the most prevalent cancers in the world. Characterized by poor prognosis, it is a frequent cancer in Iran. The aim of the study was to design a predictive model of survival time for patients suffering from gastric cancer. This was a historical cohort conducted between 2011 and 2016. The study population comprised 277 patients suffering from gastric cancer. Data were gathered from the Iranian Cancer Registry and the laboratory of Emam Reza Hospital in Mashhad, Iran. Patients or their relatives were interviewed where needed. Missing values were imputed by data mining techniques. Fifteen factors were analyzed. Survival was addressed as the dependent variable. The predictive model was then designed by combining a genetic algorithm with logistic regression, using Matlab 2014 software. Of the 277 patients, survival data were available for only 80 patients, whose data were used for designing the predictive model. The mean±SD number of missing values per patient was 4.43±0.41. The combined predictive model achieved 72.57% accuracy. Sex, birth year, age at diagnosis, age at diagnosis of the patients' family members, family history of gastric cancer, and family history of other gastrointestinal cancers were the six parameters associated with patient survival. The study revealed that imputing missing values by data mining techniques has good accuracy. It also revealed six parameters, extracted by the genetic algorithm, that affect the survival of patients with gastric cancer. Our combined predictive model, with its good accuracy, is appropriate for forecasting the survival of patients suffering from gastric cancer. We therefore suggest that policy makers and specialists apply it for the prediction of patients' survival.
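
    The genetic-algorithm step can be sketched as a search over feature bitmasks. This is a stand-in illustration: a cheap correlation-based fitness replaces the fitted logistic-regression accuracy used in the paper, and the features, labels and GA settings are all invented.

```python
import random

def pearson(a, b):
    """Pearson correlation; returns 0.0 when either column is constant."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def fitness(mask, features, label):
    """Mean |correlation| of the selected features with the outcome."""
    chosen = [col for bit, col in zip(mask, features) if bit]
    if not chosen:
        return -1.0
    return sum(abs(pearson(c, label)) for c in chosen) / len(chosen)

def evolve(features, label, pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    n = len(features)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, features, label), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # point mutation
                i = rng.randrange(n)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, features, label))

label = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]          # invented survival outcome
feats = [
    [0.0, 1.0, 0.0, 1.0, 1.0, 0.0],             # informative (tracks outcome)
    [0.0, 2.0, 0.0, 2.0, 2.0, 0.0],             # informative (scaled copy)
    [5.0, 5.0, 5.0, 5.0, 5.0, 5.0],             # constant, uninformative
    [5.0, 5.0, 5.0, 5.0, 5.0, 5.0],             # constant, uninformative
]
best_mask = evolve(feats, label)
```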

  15. Can We Predict Individual Combined Benefit and Harm of Therapy? Warfarin Therapy for Atrial Fibrillation as a Test Case

    PubMed Central

    Li, Guowei; Thabane, Lehana; Delate, Thomas; Witt, Daniel M.; Levine, Mitchell A. H.; Cheng, Ji; Holbrook, Anne

    2016-01-01

    Objectives To construct and validate a prediction model for individual combined benefit and harm outcomes (stroke with no major bleeding, major bleeding with no stroke, neither event, or both) in patients with atrial fibrillation (AF) with and without warfarin therapy. Methods Using the Kaiser Permanente Colorado databases, we included patients newly diagnosed with AF between January 1, 2005 and December 31, 2012 for model construction and validation. The primary outcome was a prediction model of the composite of stroke or major bleeding using polytomous logistic regression (PLR) modelling. The secondary outcome was a prediction model of all-cause mortality using Cox regression modelling. Results We included 9074 patients: 4537 warfarin users and 4537 non-users. In the derivation cohort (n = 4632), 136 strokes (2.94%), 280 major bleeding events (6.04%) and 1194 deaths (25.78%) occurred. In the prediction models, warfarin use was not significantly associated with risk of stroke, but increased the risk of major bleeding and decreased the risk of death. Both the PLR and Cox models were robust, internally and externally validated, and had acceptable performance. Conclusions In this study, we introduce a new methodology for predicting individual combined benefit and harm outcomes associated with warfarin therapy for patients with AF. Should this approach be validated in other patient populations, it has potential advantages over existing risk stratification approaches as a patient-physician aid for shared decision-making. PMID:27513986

  16. Modeling the Non-Linear Response of Fiber-Reinforced Laminates Using a Combined Damage/Plasticity Model

    NASA Technical Reports Server (NTRS)

    Schuecker, Clara; Davila, Carlos G.; Pettermann, Heinz E.

    2008-01-01

    The present work is concerned with modeling the non-linear response of fiber-reinforced polymer laminates. Recent experimental data suggest that the non-linearity is not only caused by matrix cracking but also by matrix plasticity due to shear stresses. To capture the effects of those two mechanisms, a model combining a plasticity formulation with continuum damage has been developed to simulate the non-linear response of laminates under plane stress states. The model is used to compare the predicted behavior of various laminate lay-ups to experimental data from the literature by looking at the degradation of axial modulus and Poisson's ratio of the laminates. The influence of residual curing stresses and the in-situ effect on the predicted response is also investigated. It is shown that predictions of the combined damage/plasticity model, in general, correlate well with the experimental data. The test data show that there are two different mechanisms that can have opposite effects on the degradation of the laminate Poisson's ratio, which is captured correctly by the damage/plasticity model. Residual curing stresses are found to have a minor influence on the predicted response for the cases considered here. Some open questions remain regarding the prediction of damage onset.

  17. Seasonal-to-decadal predictability in the Nordic Seas and Arctic with the Norwegian Climate Prediction Model

    NASA Astrophysics Data System (ADS)

    Counillon, Francois; Kimmritz, Madlen; Keenlyside, Noel; Wang, Yiguo; Bethke, Ingo

    2017-04-01

    The Norwegian Climate Prediction Model combines the Norwegian Earth System Model and the Ensemble Kalman Filter data assimilation method. The prediction skills of different versions of the system (with 30 members) are tested in the Nordic Seas and the Arctic region. Comparing the hindcasts branched from an SST-only assimilation run with a free ensemble run of 30 members, we are able to dissociate the predictability rooted in the external forcing from the predictability harvested from SST-derived initial conditions. The latter adds predictability in the North Atlantic subpolar gyre and the Nordic Seas regions, and overall there is very little degradation or forecast drift. Combined assimilation of SST and T-S profiles further improves the prediction skill in the Nordic Seas and into the Arctic. These improvements lead to multi-year predictability in the high latitudes. Ongoing developments in strongly coupled assimilation (ocean and sea ice) of ice concentration in idealized twin experiments will be shown, as a way to further enhance prediction skill in the Arctic.

  18. Comparison of five modelling techniques to predict the spatial distribution and abundance of seabirds

    USGS Publications Warehouse

    O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite

    2012-01-01

    Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest and spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
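
    The ensemble prediction reduces to averaging per-cell occurrence probabilities across the fitted techniques. A minimal sketch with invented probabilities standing in for the five fitted models:

```python
def ensemble(prob_maps):
    """prob_maps: dict technique -> list of per-cell probabilities.
    Returns the per-cell mean across techniques."""
    cells = len(next(iter(prob_maps.values())))
    return [sum(m[i] for m in prob_maps.values()) / len(prob_maps)
            for i in range(cells)]

# Invented occurrence probabilities for three grid cells from three
# of the techniques (the paper combines five).
maps = {
    "glm": [0.2, 0.8, 0.5],
    "gam": [0.3, 0.7, 0.4],
    "rf":  [0.1, 0.9, 0.6],
}
combined = ensemble(maps)
```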

  19. Predictive discomfort in single- and combined-axis whole-body vibration considering different seated postures.

    PubMed

    DeShaw, Jonathan; Rahmatalla, Salam

    2014-08-01

    The aim of this study was to develop a predictive discomfort model for single-axis, 3-D, and 6-D combined-axis whole-body vibration of seated occupants considering different postures. Non-neutral postures in seated whole-body vibration play a significant role in the resulting level of perceived discomfort and potential long-term injury. The current international standards address contact points but not postures. The proposed model computes discomfort on the basis of the static deviation of human joints from their neutral positions and how fast humans rotate their joints under vibration. Four seated postures were investigated. For practical implications, the coefficients of the predictive discomfort model were mapped onto the Borg scale with psychophysical data from 12 volunteers in different vibration conditions (single-axis random fore-aft, lateral, and vertical, and two magnitudes of 3-D). The model was tested under two magnitudes of 6-D vibration. Significant correlations (R = .93) were found between the predictive discomfort model and the reported discomfort with different postures and vibrations. ISO 2631-1 correlated very well with discomfort (R² = .89) but was not able to predict the effect of posture. Human discomfort in seated whole-body vibration with different non-neutral postures can be closely predicted by a combination of static posture and the angular velocities of the joints. The predictive discomfort model can assist ergonomists and human factors researchers in designing safer environments for seated operators under vibration. The model can be integrated with advanced computer biomechanical models to investigate the complex interaction between posture and vibration.

  20. Predictions of heading date in bread wheat (Triticum aestivum L.) using QTL-based parameters of an ecophysiological model

    PubMed Central

    Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent

    2014-01-01

    Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (V_sat and P_base, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting V_sat and P_base with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based V_sat and P_base estimates were able to predict the heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation in heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833

  1. Grand European and Asian-Pacific multi-model seasonal forecasts: maximization of skill and of potential economical value to end-users

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Felice, Matteo De; Catalano, Franco; Lee, June-Yi; Wang, Bin; Lee, Doo Young; Yoo, Jin-Ho; Weisheimer, Antje

    2018-04-01

    Multi-model ensembles (MMEs) are powerful tools in dynamical climate prediction as they account for the overconfidence and the uncertainties related to single-model ensembles. Previous works suggested that the potential benefit that can be expected from using an MME amplifies with the increase of the independence of the contributing Seasonal Prediction Systems. In this work we combine the two MME Seasonal Prediction Systems (SPSs) independently developed by the European (ENSEMBLES) and the Asian-Pacific (APCC/CliPAS) communities. To this aim, all the possible multi-model combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from APCC/CliPAS have been evaluated. The grand ENSEMBLES-APCC/CliPAS MME significantly enhances the skill in predicting 2m temperature and precipitation compared to previous estimates from the contributing MMEs. Our results show that, in general, the better combinations of SPSs are obtained by mixing ENSEMBLES and APCC/CliPAS models and that only a limited number of SPSs is required to obtain the maximum performance. The number and selection of models that perform best usually differ depending on the region/phenomenon under consideration, so that all models are useful in some cases. It is shown that the incremental performance contribution tends to be higher when adding one model from ENSEMBLES to APCC/CliPAS MMEs and vice versa, confirming that the benefit of using MMEs amplifies with the increase of the independence of the contributing models. To verify the above results in a real-world application, the Grand ENSEMBLES-APCC/CliPAS MME is used to predict retrospective energy demand over Italy as provided by TERNA (Italian Transmission System Operator) for the period 1990-2007. The results demonstrate the useful application of MME seasonal predictions for energy demand forecasting over Italy. 
It is shown a significant enhancement of the potential economic value of forecasting energy demand when using the better combinations from the Grand MME by comparison to the maximum value obtained from the better combinations of each of the two contributing MMEs. The above results demonstrate for the first time the potential of the Grand MME to significantly contribute in obtaining useful predictions at the seasonal time-scale.
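    The exhaustive combination search described above can be sketched in a few lines. Everything below is a synthetic stand-in (invented model names, random data, anomaly correlation of the ensemble mean as the skill metric), not ENSEMBLES or APCC/CliPAS output:

```python
# Sketch of an exhaustive multi-model-ensemble (MME) combination search:
# evaluate every subset of contributing models by the skill of its
# ensemble-mean forecast. All data here are synthetic stand-ins.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
obs = rng.standard_normal(200)                      # "observed" anomalies
# Five synthetic models: common signal plus model-specific noise
models = {f"M{i}": obs + (0.5 + 0.3 * i) * rng.standard_normal(200)
          for i in range(5)}

def skill(names):
    """Anomaly correlation of the ensemble mean of the named models."""
    mean = np.mean([models[n] for n in names], axis=0)
    return np.corrcoef(mean, obs)[0, 1]

# Rank every possible combination, as in the exhaustive search above
ranked = sorted((skill(c), c)
                for k in range(1, len(models) + 1)
                for c in combinations(models, k))
best_skill, best_combo = ranked[-1]
print(best_combo, round(best_skill, 3))
```

Because the candidate set includes every single-model "combination", the best subset is guaranteed to score at least as well as the best individual model.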

  2. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). To further model the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that include some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercept of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. For the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.

  3. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). To further model the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that include some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercept of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. For the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023

  4. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
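    As a toy illustration of merging diverse, independent predictions into one unified ground-motion estimate, the sketch below fuses Gaussian estimates by precision weighting. This is a generic Bayesian combination under strong assumptions (independent, Gaussian errors in log intensity), not the authors' actual EEW algorithm, and all numbers are invented:

```python
# Precision-weighted fusion of independent Gaussian predictions of log
# shaking intensity at one site. A textbook Bayesian combination used
# here only to illustrate the idea; values are made up.
import math

def fuse(estimates):
    """Combine (mean, sigma) Gaussian estimates; return posterior (mean, sigma)."""
    precisions = [1.0 / s ** 2 for _, s in estimates]
    total = sum(precisions)
    mean = sum(m * p for (m, _), p in zip(estimates, precisions)) / total
    return mean, math.sqrt(1.0 / total)

# Hypothetical point-source, finite-fault, and direct ground-motion estimates
mean, sigma = fuse([(4.2, 0.8), (4.6, 0.5), (4.4, 0.6)])
print(round(mean, 2), round(sigma, 2))
```

The fused uncertainty is always smaller than that of the sharpest single estimate, which is the formal payoff of combining independent predictions.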

  5. Testing of the European Union exposure-response relationships and annoyance equivalents model for annoyance due to transportation noises: The need of revised exposure-response relationships and annoyance equivalents model.

    PubMed

    Gille, Laure-Anne; Marquis-Favre, Catherine; Morel, Julien

    2016-09-01

    An in situ survey was performed in 8 French cities in 2012 to study the annoyance due to combined transportation noises. As the European Commission recommends using the exposure-response relationships suggested by Miedema and Oudshoorn [Environmental Health Perspectives, 2001] to predict annoyance due to a single transportation noise, these exposure-response relationships were tested using the annoyance due to each transportation noise measured during the French survey. These relationships only enabled a good prediction of the percentages of people highly annoyed by road traffic noise. For the percentages of people annoyed and a little annoyed by road traffic noise, the quality of prediction is weak. For aircraft and railway noises, the prediction of annoyance is not satisfactory either. As a consequence, the annoyance equivalents model of Miedema [The Journal of the Acoustical Society of America, 2004], based on these exposure-response relationships, did not enable a good prediction of annoyance due to combined transportation noises. Local exposure-response relationships were derived, following the full computation suggested by Miedema and Oudshoorn [Environmental Health Perspectives, 2001]. They led to a better estimation of the annoyance due to each transportation noise in the French cities. A new version of the annoyance equivalents model was proposed using these new exposure-response relationships. This model enabled a better prediction of the total annoyance due to the combined transportation noises. These results therefore encourage improving annoyance prediction for noise in isolation with local or revised exposure-response relationships, which will also contribute to improving annoyance modeling for combined noises. With this aim in mind, a methodology is proposed to consider noise sensitivity in the exposure-response relationships and in the annoyance equivalents model. The results showed that taking this variable into account did not improve either the exposure-response relationships or the annoyance equivalents model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Middle and long-term prediction of UT1-UTC based on combination of Gray Model and Autoregressive Integrated Moving Average

    NASA Astrophysics Data System (ADS)

    Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing

    2017-02-01

    UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep-space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1, 1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. First, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tidal terms to obtain UT1R-TAI data. Periodic terms are then estimated by least squares and removed to obtain UT2R-TAI. The linear terms of the UT2R-TAI data are modeled by GM(1, 1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed with the combined GM(1, 1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction, and the Earth's zonal harmonic tidal correction. The results show that the proposed model predicts UT1-UTC effectively, with higher middle- and long-term (32 to 360 days) accuracy than LS + AR, LS + MAR and WLS + MAR.
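    A minimal GM(1, 1) component of the kind used for the trend term can be sketched as follows; the ARIMA step for the residuals is omitted, and the series is a synthetic exponential trend rather than real UT1R/UT2R data:

```python
# Minimal GM(1,1) grey model: accumulate the series, fit the whitened
# equation by least squares, and forecast from the closed-form response.
import numpy as np

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) grey model to x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat)                      # de-accumulate
    return x0_hat[len(x0) - 1:]                   # out-of-sample predictions

# A smooth near-exponential trend: the regime GM(1,1) is designed for
series = 10.0 * np.exp(0.05 * np.arange(20))
print(np.round(gm11_forecast(series, 3), 2))
```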

  7. Labor estimation by informational objective assessment (LEIOA) for preterm delivery prediction.

    PubMed

    Malaina, Iker; Aranburu, Larraitz; Martínez, Luis; Fernández-Llebrez, Luis; Bringas, Carlos; De la Fuente, Ildefonso M; Pérez, Martín Blás; González, Leire; Arana, Itziar; Matorras, Roberto

    2018-05-01

    To introduce LEIOA, a new screening method to forecast which patients admitted to the hospital because of suspected threatened premature delivery will give birth in < 7 days, so that it can be used to assist in prognosis and treatment jointly with other clinical tools. From 2010 to 2013, 286 tocographies from women with gestational ages between 24 and 37 weeks were collected and studied. We then developed a new predictive model based on uterine contractions, which combines the Generalized Hurst Exponent and the Approximate Entropy by logistic regression (the LEIOA model). We compared it with a model using exclusively obstetric variables and then joined both to evaluate the gain. Finally, a cross-validation was performed. The combination of LEIOA with the medical model resulted in an average increase in predictive values of 12% with respect to the medical model alone, giving a sensitivity of 0.937, a specificity of 0.747, a positive predictive value of 0.907 and a negative predictive value of 0.819. In addition, adding LEIOA reduced the percentage of cases incorrectly classified by the medical model by almost 50%. Given the significant increase in predictive parameters and the reduction of incorrectly classified cases when LEIOA was combined with the medical variables, we conclude that it could be a very useful tool to improve the estimation of the immediacy of preterm delivery.

  8. Comparing Nonsynergy Gamma Models and Interaction Models To Predict Growth of Emetic Bacillus cereus for Combinations of pH and Water Activity Values ▿

    PubMed Central

    Biesta-Peters, Elisabeth G.; Reij, Martine W.; Zwietering, Marcel H.; Gorris, Leon G. M.

    2011-01-01

    This research aims to test the absence (gamma hypothesis) or occurrence of synergy between two growth-limiting factors, i.e., pH and water activity (aw), using a systematic approach for model selection. In this approach, preset criteria were used to evaluate the performance of models. Such a systematic approach is required to be confident in the correctness of the individual components of the combined (synergy) models. With Bacillus cereus F4810/72 as the test organism, estimated growth boundaries for the aw-lowering solutes NaCl, KCl, and glucose were 1.13 M, 1.13 M, and 1.68 M, respectively. The accompanying aw values were 0.954, 0.956, and 0.961, respectively, indicating that equal aw values result in similar effects on growth. Of the 12 models evaluated using the preset criteria, the model of J. H. T. Luong (Biotechnol. Bioeng. 27:280–285, 1985) was the best at describing the effect of aw on growth. This aw model and the previously selected pH model were combined into a gamma model and into two synergy models. None of the three models was able to describe the combined pH and aw conditions sufficiently well to satisfy the preset criteria. The best matches between predicted and experimental data were obtained with the gamma model, followed by the synergy model of Y. Le Marc et al. (Int. J. Food Microbiol. 73:219–237, 2002). No combination of models was found that could correctly predict the impact of both individual and combined hurdles. Consequently, in this case we could neither prove the existence of synergy nor falsify the gamma hypothesis. PMID:21705525

  9. Automatically updating predictive modeling workflows support decision-making in drug design.

    PubMed

    Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O

    2016-09-01

    Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.

  10. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even harder for decision makers to sift out the relevant information. In this study, multiple sources of streamflow forecast information are aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, are compared, and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
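    The benefit of pooling divergent predictive distributions can be illustrated with a simple equal-weight linear pool scored by the empirical CRPS. This sketch uses synthetic forecast samples, not the Sihl River data, and plain equal-weight pooling rather than the fitted BMA/NGR/BLP weights:

```python
# Pool two predictive distributions for one streamflow value and score
# each with the empirical (energy-form) CRPS estimator. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)

def crps(samples, obs):
    """Empirical CRPS: E|X - obs| - 0.5 * E|X - X'| over forecast samples X."""
    s = np.asarray(samples, dtype=float)
    return np.mean(np.abs(s - obs)) - 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))

obs = 100.0                                   # verifying streamflow value
fcst_a = rng.normal(90.0, 15.0, 1500)         # biased-low, wide forecast
fcst_b = rng.normal(112.0, 10.0, 1500)        # biased-high, sharp forecast
pooled = np.concatenate([fcst_a, fcst_b])     # equal-weight linear pool

for name, f in [("A", fcst_a), ("B", fcst_b), ("pooled", pooled)]:
    print(name, round(crps(f, obs), 2))
```

With two forecasts biased to opposite sides of the observation, the pooled distribution scores better than either member, which is the basic motivation for combination.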

  11. Advanced Performance Modeling with Combined Passive and Active Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dovrolis, Constantine; Sim, Alex

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data-intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.

  12. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration with the values predicted by the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. The standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared with estimation using unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
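    The model-adjustment idea, regressing local observations on regional-model predictions so the fitted line corrects future regional predictions, can be sketched for a single constituent. The data below are synthetic, not the Tennessee records:

```python
# Single-variable model adjustment: fit observed ~ regional prediction on
# local data, then use the fitted line as the adjusted predictor. Synthetic.
import numpy as np

rng = np.random.default_rng(3)
regional = rng.uniform(1, 10, 45)                          # regional-model loads (45 storms)
observed = 0.6 * regional + 1.5 + rng.normal(0, 0.5, 45)   # local truth with bias

slope, intercept = np.polyfit(regional, observed, 1)
adjusted = slope * regional + intercept

def sse(pred):
    """Sum of squared errors against the local observations."""
    return float(np.sum((pred - observed) ** 2))

print(round(sse(regional), 1), round(sse(adjusted), 1))
```

Since the unadjusted prediction is itself an affine function of `regional`, the least-squares adjustment can never fit the local data worse, and it removes the systematic bias here.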

  13. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
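    The two-phase design above can be sketched with a winner-take-all competitive pass that places the RBF centres, followed by a least-squares fit of the output layer. The 1-D synthetic target below stands in for real QSAR activity data:

```python
# Phase 1: simple competitive learning (winner-take-all) places RBF centres.
# Phase 2: Gaussian RBF layer with linear least-squares output weights.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0])                              # stand-in "biological activity"

centres = rng.uniform(-3, 3, (8, 1))
lr = 0.1
for _ in range(20):                              # competitive-learning epochs
    for x in X:
        w = np.argmin(np.linalg.norm(centres - x, axis=1))
        centres[w] += lr * (x - centres[w])      # move winner toward sample

def design(X, centres, width=0.8):
    """Gaussian RBF design matrix."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

W = np.linalg.lstsq(design(X, centres), y, rcond=None)[0]
pred = design(X, centres) @ W
print(round(float(np.mean((pred - y) ** 2)), 4))
```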

  14. Grand European and Asian-Pacific multi-model seasonal forecasts: maximization of skill and of potential economical value to end-users

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; De Felice, M.; Catalano, F.; Lee, J. Y.; Wang, B.; Lee, D. Y.; Yoo, J. H.; Weisheimer, A.

    2017-12-01

    By initiating a novel cooperation between the European and Asian-Pacific climate-prediction communities, this work demonstrates the potential of gathering their Multi-Model Ensembles (MMEs) together to obtain useful climate predictions at the seasonal time-scale. MMEs are powerful tools in dynamical climate prediction, as they account for the overconfidence and the uncertainties of single-model ensembles, and increasing benefit is expected with the independence of the contributing Seasonal Prediction Systems (SPSs). In this work we combine the two MME SPSs independently developed by the European (ENSEMBLES) and Asian-Pacific (APCC/CliPAS) communities by establishing an unprecedented partnership. To this end, all possible MME combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from APCC/CliPAS have been evaluated. The Grand ENSEMBLES-APCC/CliPAS MME significantly enhances the skill in predicting 2m temperature and precipitation. Our results show that, in general, the best combinations of SPSs are obtained by mixing ENSEMBLES and APCC/CliPAS models, and that only a limited number of SPSs is required to obtain the maximum performance. The selection of the best-performing models usually differs with the region/phenomenon under consideration, so that all models are useful in some cases. The incremental performance contribution tends to be higher when adding one model from ENSEMBLES to APCC/CliPAS MMEs and vice versa, confirming that the benefit of using MMEs amplifies with the independence of the contributing models. To verify the above results in a real-world application, the Grand MME is used to predict energy demand over Italy, as provided by TERNA (the Italian Transmission System Operator) for the period 1990-2007. The results demonstrate the useful application of MME seasonal predictions for energy-demand forecasting over Italy. A significant enhancement of the potential economic value of forecasting energy demand is shown when using the best combinations from the Grand MME, by comparison with the maximum value obtained from the best combinations of each of the two contributing MMEs. These results are discussed in a Clim Dyn paper (Alessandri et al., 2017; doi:10.1007/s00382-016-3372-4).

  15. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  16. Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool

    EPA Science Inventory

    The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...

  17. Biomarkers for predicting type 2 diabetes development-Can metabolomics improve on existing biomarkers?

    PubMed

    Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B; Bergström, Göran

    2017-01-01

    The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC) (0.794 (95% confidence interval [0.738-0.850]) and 0.808 [0.749-0.867], respectively), with the standalone metabolomics model based on nine fasting plasma markers having a lower predictive power (0.657 [0.577-0.736]). Prediction based on non-blood-based measures was 0.638 [0.565-0.711]. Established measures of T2D risk remain the best predictors of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model.

  18. Biomarkers for predicting type 2 diabetes development—Can metabolomics improve on existing biomarkers?

    PubMed Central

    Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B.; Bergström, Göran

    2017-01-01

    Aim The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Methods Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Results Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC) (0.794 (95% confidence interval [0.738–0.850]) and 0.808 [0.749–0.867], respectively), with the standalone metabolomics model based on nine fasting plasma markers having a lower predictive power (0.657 [0.577–0.736]). Prediction based on non-blood-based measures was 0.638 [0.565–0.711]. Conclusions Established measures of T2D risk remain the best predictors of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model. PMID:28692646

  19. A Combined Pharmacokinetic and Radiologic Assessment of Dynamic Contrast-Enhanced Magnetic Resonance Imaging Predicts Response to Chemoradiation in Locally Advanced Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semple, Scott; Harry, Vanessa N.; Parkin, David E.

    2009-10-01

    Purpose: To investigate the combination of pharmacokinetic and radiologic assessment of dynamic contrast-enhanced magnetic resonance imaging (MRI) as an early response indicator in women receiving chemoradiation for advanced cervical cancer. Methods and Materials: Twenty women with locally advanced cervical cancer were included in a prospective cohort study. Dynamic contrast-enhanced MRI was carried out before chemoradiation, after 2 weeks of therapy, and at the conclusion of therapy using a 1.5-T MRI scanner. Radiologic assessment of uptake parameters was obtained from the resultant intensity curves. Pharmacokinetic analysis using a multicompartment model was also performed. General linear modeling was used to combine radiologic and pharmacokinetic parameters, which were correlated with eventual response as determined by change in MRI tumor size and conventional clinical response. A subgroup of 11 women underwent repeat pretherapy MRI to test pharmacokinetic reproducibility. Results: Pretherapy radiologic parameters and the pharmacokinetic parameter Ktrans correlated with response (p < 0.01). General linear modeling demonstrated that a combination of radiologic and pharmacokinetic assessments before therapy was able to predict more than 88% of the variance of response. Reproducibility of pharmacokinetic modeling was confirmed. Conclusions: A combination of radiologic assessment with pharmacokinetic modeling applied to dynamic MRI before the start of chemoradiation improves the predictive power of either by more than 20%. The potential improvements in therapy response prediction using this type of combined analysis of dynamic contrast-enhanced MRI may aid in the development of more individualized, effective therapy regimens for this patient group.

  20. Study of clusters and hypernuclei production within PHSD+FRIGA model

    NASA Astrophysics Data System (ADS)

    Kireyeu, Viktar; Le Fèvre, Arnaud; Bratkovskaya, Elena

    2017-03-01

    We report results of the dynamical modelling of cluster formation with the new combined PHSD+FRIGA model at Nuclotron and NICA energies. The FRIGA clusterization algorithm, which can be applied to transport models, is based on the simulated annealing technique to obtain the most bound configuration of fragments and nucleons. The PHSD+FRIGA model is able to predict isotope yields as well as hypernucleus production. Based on present predictions of the combined model, we study the possibility of detecting such clusters and hypernuclei in the BM@N and MPD/NICA detectors.

  1. Study of Clusters and Hypernuclei production within PHSD+FRIGA model

    NASA Astrophysics Data System (ADS)

    Kireyeu, V.; Le Fèvre, A.; Bratkovskaya, E.

    2017-01-01

    We report results of the dynamical modelling of cluster formation with the new combined PHSD+FRIGA model at Nuclotron and NICA energies. The FRIGA clusterisation algorithm, which can be applied to transport models, is based on the simulated annealing technique to obtain the most bound configuration of fragments and nucleons. The PHSD+FRIGA model is able to predict isotope yields as well as hypernucleus production. Based on present predictions of the combined model, we study the possibility of detecting such clusters and hypernuclei in the BM@N and MPD/NICA detectors.

  2. Rapid prediction of chemical metabolism by human UDP-glucuronosyltransferase isoforms using quantum chemical descriptors derived with the electronegativity equalization method.

    PubMed

    Sorich, Michael J; McKinnon, Ross A; Miners, John O; Winkler, David A; Smith, Paul A

    2004-10-07

    This study aimed to evaluate in silico models based on quantum chemical (QC) descriptors derived using the electronegativity equalization method (EEM) and to assess the use of QC properties to predict chemical metabolism by human UDP-glucuronosyltransferase (UGT) isoforms. Various EEM-derived QC molecular descriptors were calculated for known UGT substrates and nonsubstrates. Classification models were developed using support vector machine and partial least squares discriminant analysis. In general, the most predictive models were generated with the support vector machine. Combining QC and 2D descriptors (from previous work) using a consensus approach resulted in a statistically significant improvement in predictivity (to 84%) over both the QC and 2D models and the other methods of combining the descriptors. EEM-derived QC descriptors were shown to be both highly predictive and computationally efficient. It is likely that EEM-derived QC properties will be generally useful for predicting ADMET and physicochemical properties during drug discovery.

  3. Combining inferences from models of capture efficiency, detectability, and suitable habitat to classify landscapes for conservation of threatened bull trout

    USGS Publications Warehouse

    Peterson, J.; Dunham, J.B.

    2003-01-01

Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
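The core of combining a model-based prior with survey data is a Bayes-rule update of the presence probability after surveys that fail to detect the species. A minimal sketch, assuming independent surveys with a constant per-survey detection probability (illustrative values, not the bull trout models themselves):

```python
def posterior_presence(prior, detect_prob, n_surveys):
    """Posterior probability that a site is occupied after n_surveys
    surveys with no detections, given the model-based prior probability
    of presence and the per-survey detection probability."""
    miss = (1.0 - detect_prob) ** n_surveys   # P(no detection | present)
    return prior * miss / (prior * miss + (1.0 - prior))
```

Each failed survey shrinks the presence probability, so the number of surveys needed to push the posterior below a chosen threshold quantifies the sampling effort the combined approach trades off against.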

  4. Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.

    PubMed

    Deng, Shangkun; Sakurai, Akito

    2014-01-01

Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate buy and sell signals by combining the MKL prediction with the relative strength index (RSI). The new hybrid implementation is applied to EUR/USD trading, the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources, and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts returns based on a technical indicator, the moving average convergence divergence (MACD). Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.

  5. Comparing models of the combined-stimulation advantage for speech recognition.

    PubMed

    Micheyl, Christophe; Oxenham, Andrew J

    2012-05-01

    The "combined-stimulation advantage" refers to an improvement in speech recognition when cochlear-implant or vocoded stimulation is supplemented by low-frequency acoustic information. Previous studies have been interpreted as evidence for "super-additive" or "synergistic" effects in the combination of low-frequency and electric or vocoded speech information by human listeners. However, this conclusion was based on predictions of performance obtained using a suboptimal high-threshold model of information combination. The present study shows that a different model, based on Gaussian signal detection theory, can predict surprisingly large combined-stimulation advantages, even when performance with either information source alone is close to chance, without involving any synergistic interaction. A reanalysis of published data using this model reveals that previous results, which have been interpreted as evidence for super-additive effects in perception of combined speech stimuli, are actually consistent with a more parsimonious explanation, according to which the combined-stimulation advantage reflects an optimal combination of two independent sources of information. The present results do not rule out the possible existence of synergistic effects in combined stimulation; however, they emphasize the possibility that the combined-stimulation advantages observed in some studies can be explained simply by non-interactive combination of two information sources.
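The optimal, non-interactive combination the paper describes follows Gaussian signal detection theory: for two independent cues, sensitivities add in quadrature, d'_comb = sqrt(d'_a^2 + d'_e^2). A small sketch, assuming an unbiased yes/no task so that proportion correct Pc = Φ(d'/2); this is a generic SDT illustration, not the paper's exact model:

```python
from statistics import NormalDist

def dprime_from_pc(pc):
    """d' for an unbiased yes/no task: Pc = Phi(d'/2), so d' = 2*z(Pc)."""
    return 2.0 * NormalDist().inv_cdf(pc)

def combined_pc(pc_a, pc_e):
    """Predicted percent correct when two independent cues (e.g. acoustic
    and electric speech information) are combined optimally:
    d'_comb = sqrt(d'_a^2 + d'_e^2), without any synergistic interaction."""
    d_comb = (dprime_from_pc(pc_a) ** 2 + dprime_from_pc(pc_e) ** 2) ** 0.5
    return NormalDist().cdf(d_comb / 2.0)
```

Note that two near-chance cues already yield a combined score above either alone, which is the paper's point: apparent "super-additivity" can arise from purely independent combination.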

  6. Combination of acoustical radiosity and the image source method.

    PubMed

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn

    2013-06-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy.

  7. Research on light rail electric load forecasting based on ARMA model

    NASA Astrophysics Data System (ADS)

    Huang, Yifan

    2018-04-01

The article compares a variety of time series models in view of the characteristics of power load forecasting and establishes a light rail electric load forecasting model based on the ARMA model. The model is then used to forecast the load of a light rail system, and the results show that the prediction accuracy of the model is high.
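As an illustration of the ARMA family used here, the simplest member, an AR(1) model, can be fitted and iterated forward in a few lines. This is a toy pure-Python sketch on a zero-mean series; a real load forecast would fit a full ARMA model, e.g. by maximum likelihood.

```python
def fit_ar1(series):
    """Estimate phi for a zero-mean AR(1) model x[t] = phi*x[t-1] + e[t]
    by least squares (the lag-1 Yule-Walker estimate)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def forecast_ar1(last_value, phi, steps):
    """Iterate the fitted model forward to produce multi-step forecasts."""
    out = []
    for _ in range(steps):
        last_value = phi * last_value
        out.append(last_value)
    return out
```

On a noiseless AR(1) series the estimate recovers phi exactly; on real load data the same two steps (fit, then iterate) produce the point forecasts.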

  8. A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.

    PubMed

    Marquis-Favre, Catherine; Morel, Julien

    2015-07-21

Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight the potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment in which participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment was to gain further understanding of the effects on annoyance of acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term were the best predictors for the two combined noise sources under study, even with large differences in sound pressure level. These results reinforce the need to focus on perceptual models and to improve the prediction of partial annoyances.

  9. Predicting survival across chronic interstitial lung disease: the ILD-GAP model.

    PubMed

    Ryerson, Christopher J; Vittinghoff, Eric; Ley, Brett; Lee, Joyce S; Mooney, Joshua J; Jones, Kirk D; Elicker, Brett M; Wolters, Paul J; Koth, Laura L; King, Talmadge E; Collard, Harold R

    2014-04-01

Risk prediction is challenging in chronic interstitial lung disease (ILD) because of heterogeneity in disease-specific and patient-specific variables. Our objective was to determine whether mortality is accurately predicted in patients with chronic ILD using the GAP model, a clinical prediction model based on sex, age, and lung physiology that was previously validated in patients with idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis (n=307), chronic hypersensitivity pneumonitis (n=206), connective tissue disease-associated ILD (n=281), idiopathic nonspecific interstitial pneumonia (n=45), or unclassifiable ILD (n=173) were selected from an ongoing database (N=1,012). Performance of the previously validated GAP model was compared with that of novel prediction models in each ILD subtype and in the combined cohort. Patients with follow-up pulmonary function data were used for longitudinal model validation. The GAP model had good performance in all ILD subtypes (c-index, 74.6 in the combined cohort), which was maintained at all stages of disease severity and during follow-up evaluation. The GAP model had performance similar to that of the alternative prediction models. A modified ILD-GAP index was developed for application across all ILD subtypes to provide disease-specific survival estimates using a single risk prediction model. This was done by adding a disease subtype variable that accounted for the better adjusted survival in connective tissue disease-associated ILD, chronic hypersensitivity pneumonitis, and idiopathic nonspecific interstitial pneumonia. The GAP model accurately predicts risk of death in chronic ILD. The ILD-GAP model accurately predicts mortality in major chronic ILD subtypes and at all stages of disease.

  10. Human and Server Docking Prediction for CAPRI Round 30–35 Using LZerD with Combined Scoring Functions

    PubMed Central

    Peterson, Lenna X.; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke

    2016-01-01

    We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriate soft representation of protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which adds an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues’ spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, i.e. whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, findings on scoring functions are expected to be universally applicable to other docking methods. PMID:27654025
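One common way to combine several knowledge-based scoring functions, sketched below, is to standardize (z-score) each function across the pool of docking models and rank by the sum. This is an illustrative scheme for score combination, not the authors' specific PRESCO-based combination.

```python
import statistics

def zscores(xs):
    """Standardize a list of scores using the population standard deviation."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def rank_models(score_lists):
    """Rank candidate docking models by the sum of z-scored values from
    several scoring functions (lower combined score = better, assuming
    each raw score is energy-like)."""
    per_function = [zscores(s) for s in score_lists]
    combined = [sum(col) for col in zip(*per_function)]
    return sorted(range(len(combined)), key=combined.__getitem__)
```

Standardizing first keeps one scoring function with a large numeric range from dominating the consensus; the correlation between functions' rankings can then be inspected, as the authors do, to judge whether the pool contains near-native models.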

  11. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures (-25 °C to 45 °C).
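The iterative coupling of the three sub-models through temperature-dependent parameters can be sketched as a fixed-point loop: the electrical sub-model's parameters depend on the core temperature, which depends on the generated heat, which depends on the electrical state. All parameter values below are illustrative placeholders, not the identified cell parameters.

```python
def coupled_electro_thermal(current, t_ambient, r0=0.01, alpha=-1e-4,
                            r_thermal=2.0, tol=1e-6, max_iter=100):
    """Toy fixed-point iteration between an electrical sub-model
    (temperature-dependent resistance), a heat-generation sub-model
    (I^2 * R), and a steady-state thermal sub-model. Returns the
    converged core temperature in degrees C."""
    t_core = t_ambient
    for _ in range(max_iter):
        r = r0 * (1.0 + alpha * (t_core - 25.0))  # electrical sub-model
        q = current ** 2 * r                       # heat-generation sub-model
        t_new = t_ambient + r_thermal * q          # thermal sub-model
        if abs(t_new - t_core) < tol:
            return t_new
        t_core = t_new
    return t_core
```

In the paper's real-time setting the same idea applies per time step, with the full electrochemical model in place of the toy resistance law.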

  12. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour); observational data alone are thus nearly useless for runoff prediction, and weather predictions are required. Unfortunately, radar nowcasting methods cannot provide long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually of limited reliability because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network providing the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all candidate weather input variables provided by the WRF model. We explore different lead times to evaluate the model's reliability for longer-term predictions, as well as different time lags to see how past information can improve results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.

  13. Mathematical Modeling and Optimizing of in Vitro Hormonal Combination for G × N15 Vegetative Rootstock Proliferation Using Artificial Neural Network-Genetic Algorithm (ANN-GA)

    PubMed Central

    Arab, Mohammad M.; Yadollahi, Abbas; Ahmadi, Hamed; Eftekhari, Maliheh; Maleki, Masoud

    2017-01-01

The efficiency of a hybrid method combining artificial neural networks (ANNs) as a modeling tool with genetic algorithms (GAs) for optimizing the input variables used in ANN modeling was assessed. As a new technique, it was applied to the prediction and optimization of plant hormone concentrations and combinations for in vitro proliferation of the Garnem (G × N15) rootstock as a case study. The optimal hormone combination was sought by modeling the effects of various concentrations of cytokinin–auxin combinations, i.e., BAP, KIN, TDZ, IBA, and NAA (inputs), on four growth parameters (outputs): micro-shoot number per explant, micro-shoot length, developed callus weight (CW) and the quality index (QI) of plantlets. Statistical measures such as R2 (coefficient of determination) showed considerably high prediction accuracy for the ANN models, i.e., micro-shoot number: R2 = 0.81, micro-shoot length: R2 = 0.87, CW: R2 = 0.88, QI: R2 = 0.87. According to the results, among the input variables, BAP (19.3), KIN (9.64), and IBA (2.63) showed the highest values of the variable sensitivity ratio for proliferation rate. The GA showed that media containing 1.02 mg/l BAP in combination with 0.098 mg/l IBA could lead to the optimal proliferation rate (10.53) for the G × N15 rootstock. Another objective of the present study was to compare the performance of the predicted and optimized cytokinin–auxin combination with the best optimized concentrations obtained in our other experiments. Considering three growth parameters (micro-shoot length, micro-shoot number, and proliferation rate), the last treatment was found to be superior to the rest of the treatments for G × N15 rootstock in vitro multiplication. The very small difference between the ANN-predicted and experimental data confirmed the high capability of the ANN-GA method in predicting new optimized protocols for plant in vitro propagation. PMID:29163583
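The GA stage, which searches the input space of a trained surrogate model for the optimum, can be sketched as follows. This is a toy real-coded GA maximizing a stand-in quadratic response surface; in the study, the fitness function would be the trained ANN evaluated on candidate hormone concentrations, and all GA settings here are illustrative.

```python
import random

def ga_maximize(f, bounds, pop_size=30, gens=80, seed=7):
    """Minimal real-coded GA: tournament selection, blend crossover, and
    bounded Gaussian mutation. `f` plays the role of the trained surrogate
    model (e.g. an ANN); `bounds` gives (lo, hi) per input variable."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if f(a) >= f(b) else b

    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]  # crossover
            child = [min(max(x + rng.gauss(0.0, 0.05 * (hi - lo)), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]         # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=f)
```

Because the GA only queries the surrogate, it can explore hormone combinations never tested in the lab, which is what makes the ANN-GA pairing useful for proposing new protocols.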

  14. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the entire distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict carbon monoxide (CO) concentration distributions at a traffic intersection, the Income Tax Office (ITO) in Delhi, where the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.), and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d = 0.91) in the 10-95 percentile range. A regulatory compliance criterion is also developed to estimate the probability that hourly CO concentrations exceed the National Ambient Air Quality Standards (NAAQS) of India.

  15. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    NASA Astrophysics Data System (ADS)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

Micromechanical modeling is used to predict a material's tensile flow curve behavior from its microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the approach developed in this work attempts to overcome specific limitations of both. It combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures with different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
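The second step, combining per-phase flow curves by the rule of mixtures, can be sketched as follows. Hollomon-type power-law curves stand in for the dislocation-based hardening law, and all constants are hypothetical, not fitted values from the paper.

```python
def dual_phase_stress(strain, f_martensite):
    """Rule-of-mixtures flow stress for a dual-phase steel: a volume-
    fraction-weighted sum of the ferrite and martensite flow curves.
    Hollomon curves sigma = K * eps^n with illustrative constants."""
    sigma_ferrite = 300.0 * strain ** 0.25      # softer, higher hardening
    sigma_martensite = 1500.0 * strain ** 0.10  # harder phase
    return ((1.0 - f_martensite) * sigma_ferrite
            + f_martensite * sigma_martensite)
```

Sweeping `f_martensite` reproduces the expected trend that increasing the martensite fraction raises the composite flow stress at a given strain.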

  16. Process Optimization of Dual-Laser Beam Welding of Advanced Al-Li Alloys Through Hot Cracking Susceptibility Modeling

    NASA Astrophysics Data System (ADS)

    Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra

    2016-07-01

    Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.

  17. Combining Gene Signatures Improves Prediction of Breast Cancer Survival

    PubMed Central

    Zhao, Xi; Naume, Bjørn; Langerød, Anita; Frigessi, Arnoldo; Kristensen, Vessela N.; Børresen-Dale, Anne-Lise; Lingjærde, Ole Christian

    2011-01-01

    Background Several gene sets for prediction of breast cancer survival have been derived from whole-genome mRNA expression profiles. Here, we develop a statistical framework to explore whether combination of the information from such sets may improve prediction of recurrence and breast cancer specific death in early-stage breast cancers. Microarray data from two clinically similar cohorts of breast cancer patients are used as training (n = 123) and test set (n = 81), respectively. Gene sets from eleven previously published gene signatures are included in the study. Principal Findings To investigate the relationship between breast cancer survival and gene expression on a particular gene set, a Cox proportional hazards model is applied using partial likelihood regression with an L2 penalty to avoid overfitting and using cross-validation to determine the penalty weight. The fitted models are applied to an independent test set to obtain a predicted risk for each individual and each gene set. Hierarchical clustering of the test individuals on the basis of the vector of predicted risks results in two clusters with distinct clinical characteristics in terms of the distribution of molecular subtypes, ER, PR status, TP53 mutation status and histological grade category, and associated with significantly different survival probabilities (recurrence: p = 0.005; breast cancer death: p = 0.014). Finally, principal components analysis of the gene signatures is used to derive combined predictors used to fit a new Cox model. This model classifies test individuals into two risk groups with distinct survival characteristics (recurrence: p = 0.003; breast cancer death: p = 0.001). The latter classifier outperforms all the individual gene signatures, as well as Cox models based on traditional clinical parameters and the Adjuvant! Online for survival prediction. Conclusion Combining the predictive strength of multiple gene signatures improves prediction of breast cancer survival. 
The presented methodology is broadly applicable to breast cancer risk assessment using any newly identified gene set. PMID:21423775

  18. The discomfort produced by noise and whole-body vertical vibration presented separately and in combination.

    PubMed

    Huang, Yu; Griffin, Michael J

    2014-01-01

This study investigated the prediction of the discomfort caused by simultaneous noise and vibration from the discomfort caused by noise and the discomfort caused by vibration when they are presented separately. A total of 24 subjects used absolute magnitude estimation to report their discomfort caused by seven levels of noise (70-88 dBA SEL), seven magnitudes of vibration (0.146-2.318 m·s^-1.75) and all 49 possible combinations of these noise and vibration stimuli. Vibration did not significantly influence judgements of noise discomfort, but noise reduced vibration discomfort by an amount that increased with increasing noise level, consistent with a 'masking effect' of noise on judgements of vibration discomfort. Both a multiple linear regression model and a root-sums-of-squares model predicted the discomfort caused by combined noise and vibration, but the root-sums-of-squares model is more convenient and provided the more accurate prediction.
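The root-sums-of-squares model is simple enough to state directly: total discomfort is predicted from the two single-stimulus discomfort ratings alone.

```python
def combined_discomfort_rss(psi_noise, psi_vibration):
    """Root-sums-of-squares prediction of the total discomfort caused by
    combined noise and vibration, from the discomfort ratings of the
    stimuli presented separately."""
    return (psi_noise ** 2 + psi_vibration ** 2) ** 0.5
```

The prediction is always at least as large as the larger single-stimulus rating but smaller than their arithmetic sum, matching the intuition that the two discomforts combine without simply adding.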

  19. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.

  20. Predicting Jakarta composite index using hybrid of fuzzy time series and support vector regression models

    NASA Astrophysics Data System (ADS)

    Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin

    2018-03-01

The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on 1286 days of JCI historical data and aims to predict the value of JCI one day ahead. The prediction is done in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined model is compared with that of a single-stage model using SVR only, with the same ten technical indicators as input for each model.

  1. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.

  2. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

This work presents an approach to predicting loss of control, with the goal of providing the pilot a decision aid focused on keeping the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm, and an adaptive prediction method that estimates Markov model parameters in real time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.

  3. An improved Multimodel Approach for Global Sea Surface Temperature Forecasts

    NASA Astrophysics Data System (ADS)

    Khan, M. Z. K.; Mehrotra, R.; Sharma, A.

    2014-12-01

    The concept of ensemble combinations for formulating improved climate forecasts has gained popularity in recent years. However, many climate models share similar physics or modeling processes, which may lead to similar (or strongly correlated) forecasts. Recent approaches for combining forecasts that take into consideration differences in model accuracy over space and time have either ignored the similarity of forecasts among the models or followed a pairwise dynamic combination approach. Here we present a basis for combining model predictions, illustrating the improvements that can be achieved if procedures for factoring in inter-model dependence are utilised. The utility of the approach is demonstrated by combining sea surface temperature (SST) forecasts from five climate models over the period 1960-2005. The variable of interest, the monthly global sea surface temperature anomaly (SSTA) on a 5° × 5° latitude-longitude grid, is predicted three months in advance to demonstrate the utility of the proposed algorithm. Results indicate that the proposed approach offers consistent and significant improvements for the majority of grid points compared to the case where the dependence among the models is ignored. Therefore, the proposed approach of combining multiple models while taking into account their interdependence provides an attractive alternative for obtaining improved climate forecasts. In addition, an approach to combine seasonal forecasts from multiple climate models with varying periods of availability is also demonstrated.
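For two forecasts, the minimum-variance combination weights have a closed form that depends on the error covariance; setting the covariance term to zero recovers the weights one would use if the models were (wrongly) assumed independent. A sketch with synthetic data, where model B's errors are deliberately correlated with model A's (all numbers illustrative, not the paper's SST data):

```python
import random, statistics

def min_var_weights(e1, e2):
    """Minimum-variance weights for combining two forecasts whose past
    errors are e1 and e2. The covariance term c is what accounts for
    inter-model dependence (c = 0 gives the independence weights)."""
    v1, v2 = statistics.variance(e1), statistics.variance(e2)
    m1, m2 = statistics.mean(e1), statistics.mean(e2)
    c = sum((a - m1) * (b - m2) for a, b in zip(e1, e2)) / (len(e1) - 1)
    w1 = (v2 - c) / (v1 + v2 - 2 * c)
    return w1, 1.0 - w1

random.seed(2)
truth = [random.uniform(-1.0, 1.0) for _ in range(500)]
eA = [random.gauss(0.0, 0.3) for _ in truth]            # model A: smaller errors
eB = [0.8 * e + random.gauss(0.0, 0.4) for e in eA]     # model B: correlated, larger
fA = [t + e for t, e in zip(truth, eA)]
fB = [t + e for t, e in zip(truth, eB)]
w1, w2 = min_var_weights(eA, eB)
combined = [w1 * a + w2 * b for a, b in zip(fA, fB)]
```

Because the combination accounts for the shared error component, it leans heavily on the better model yet still achieves an in-sample error variance no worse than either member, which is the basic mechanism behind dependence-aware multi-model combination.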

  4. Mathematical Modeling Of A Nuclear/Thermionic Power Source

    NASA Technical Reports Server (NTRS)

    Vandersande, Jan W.; Ewell, Richard C.

    1992-01-01

    This report discusses mathematical modeling to predict the performance and lifetime of a spacecraft power source that is an integrated combination of a nuclear-fission reactor and thermionic converters. Details of the nuclear reaction, thermal conditions in the core, and thermionic performance are combined with a model of fuel swelling.

  5. Predicting the impact of combined therapies on myeloma cell growth using a hybrid multi-scale agent-based model.

    PubMed

    Ji, Zhiwei; Su, Jing; Wu, Dan; Peng, Huiming; Zhao, Weiling; Nlong Zhao, Brian; Zhou, Xiaobo

    2017-01-31

    Multiple myeloma is a malignant, still-incurable plasma cell disorder. This is due to refractory disease relapse, immune impairment, and the development of multi-drug resistance. The growth of malignant plasma cells depends on the bone marrow (BM) microenvironment and on evasion of the host's anti-tumor immune response. Hence, we hypothesized that targeting the tumor-stromal cell interaction and the endogenous immune system in the BM will potentially improve the treatment response of multiple myeloma (MM). We therefore propose a computational simulation of myeloma development in its complex microenvironment, which includes immune cell components and bone marrow stromal cells, and predict the effects of combined multi-drug treatment on myeloma cell growth. We constructed a hybrid multi-scale agent-based model (HABM) that combines an ODE system with an agent-based model (ABM). The ODE system was used to model the dynamic changes of intracellular signal transduction, and the ABM to model the cell-cell interactions between stromal cells, tumor cells, and immune components in the BM. This model simulated myeloma growth in the bone marrow microenvironment and revealed the important role of the immune system in this process. The predicted outcomes were consistent with experimental observations from previous studies. Moreover, we applied the model to predict the treatment effects of three key therapeutic drugs used for MM, and found that the combination of these three drugs potentially suppresses the growth of myeloma cells and reactivates the immune response. In summary, the proposed model may serve as a novel computational platform for simulating the formation of MM and evaluating its treatment response to multiple drugs.
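The ODE/ABM coupling can be caricatured in a few lines: each agent integrates a private ODE for an intracellular growth signal, while agent-level rules (division, immune kill) act on that state. This toy is far simpler than the paper's HABM; every rate, threshold, and rule below is an assumption chosen only to show the coupling pattern.

```python
import random

def simulate(drug, steps=200, dt=0.1, seed=3):
    """Toy hybrid model: per-cell ODE  ds/dt = 1 - 2*drug*s  (Euler-stepped),
    division when s crosses a threshold, and a random immune-kill rule."""
    rng = random.Random(seed)
    cells = [0.0 for _ in range(10)]          # intracellular signal per cell
    for _ in range(steps):
        survivors = []
        for s in cells:
            s += dt * (1.0 - 2.0 * drug * s)  # Euler step of the intracellular ODE
            if s > 1.5:                       # division threshold (assumed)
                survivors.append(0.0)         # daughter cell starts fresh
                s = 0.0
            if rng.random() < 0.01:           # immune kill (assumed rate)
                continue
            survivors.append(s)
        cells = survivors
        if not cells:
            break
    return len(cells)

untreated = simulate(drug=0.0)
treated = simulate(drug=1.0)
```

With the drug present the ODE equilibrates below the division threshold, so the population can only shrink; without it the population grows despite immune kills, mimicking (very loosely) the treated-versus-untreated contrast the HABM explores.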

  6. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-01-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when reconstructing the climate dynamics. This paper presents a new technique that extracts the driving force of a time series using the Slow Feature Analysis (SFA) approach and then introduces this driving force into a predictive model to forecast the non-stationary series. In essence, the main idea of the technique is to treat the driving forces as state variables and incorporate them into the prediction model. To test the method, experiments using a modified logistic time series and winter ozone data from Arosa, Switzerland, were conducted. The results showed improved and effective prediction skill.
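In the linear two-signal case, SFA reduces to a small generalized eigenproblem: find the weights w that minimize the variance of the time difference of y = w·x relative to the variance of y. A minimal sketch on a synthetic slow/fast mixture (not the paper's data or its full SFA implementation):

```python
import math, random

def slow_feature_2d(x1, x2):
    """Linear SFA for two signals: solve det(D - lambda*C) = 0, where C is the
    signal covariance and D the covariance of the time differences, and take
    the eigenvector of the smaller (slowest) eigenvalue."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [a - m1 for a in x1]
    x2 = [a - m2 for a in x2]
    d1 = [b - a for a, b in zip(x1, x1[1:])]
    d2 = [b - a for a, b in zip(x2, x2[1:])]
    def cov(u, v):
        return sum(a * b for a, b in zip(u, v)) / len(u)
    C0, C1, C2 = cov(x1, x1), cov(x1, x2), cov(x2, x2)
    D0, D1, D2 = cov(d1, d1), cov(d1, d2), cov(d2, d2)
    # det(D - lambda*C) = 0 expands to a*l^2 - b*l + c = 0
    a = C0 * C2 - C1 * C1
    b = C0 * D2 + C2 * D0 - 2 * C1 * D1
    c = D0 * D2 - D1 * D1
    lam = (b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # smaller root = slowest
    w = (D1 - lam * C1, -(D0 - lam * C0))                # null vector of (D - lam*C)
    return [w[0] * u + w[1] * v for u, v in zip(x1, x2)]

random.seed(6)
slow = [math.sin(2 * math.pi * t / 200.0) for t in range(1000)]  # hidden driver
fast = [random.gauss(0.0, 1.0) for _ in range(1000)]             # fast noise
x1 = [s + 0.5 * f for s, f in zip(slow, fast)]                   # observed mixtures
x2 = [s - 0.5 * f for s, f in zip(slow, fast)]
y = slow_feature_2d(x1, x2)
```

The extracted feature y recovers the hidden slow driver (up to sign and scale); in the paper's scheme, such a recovered driving force is then fed into the predictive model as an extra state variable.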

  7. Aqueous and Tissue Residue-Based Interspecies Correlation Estimation Models Provide Conservative Hazard Estimates for Aromatic Compounds

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...

  8. Chikungunya Virus: In Vitro Response to Combination Therapy With Ribavirin and Interferon Alfa 2a.

    PubMed

    Gallegos, Karen M; Drusano, George L; D Argenio, David Z; Brown, Ashley N

    2016-10-15

    We evaluated the antiviral activities of ribavirin (RBV) and interferon (IFN) alfa as monotherapy and combination therapy against chikungunya virus (CHIKV). Vero cells were infected with CHIKV in the presence of RBV and/or IFN alfa, and viral production was quantified by plaque assay. A mathematical model was fit to the data to identify drug interactions for effect. We ran simulations using the best-fit model parameters to predict the antiviral activity associated with clinically relevant regimens of RBV and IFN alfa as combination therapy. The model predictions were validated using the hollow fiber infection model (HFIM) system. RBV and IFN alfa were effective against CHIKV as monotherapy at supraphysiological concentrations. However, RBV and IFN alfa were highly synergistic for antiviral effect when administered as combination therapy. Simulations with our mathematical model predicted that a standard clinical regimen of RBV plus IFN alfa would inhibit CHIKV burden by 2.5 log10 following 24 hours of treatment. In the HFIM system, RBV plus IFN alfa at clinical exposures resulted in a 2.1-log10 decrease in the CHIKV burden following 24 hours of therapy. These findings validate the prediction made by the mathematical model. These studies illustrate the promise of RBV plus IFN alfa as a potential therapeutic strategy for the treatment of CHIKV infections. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved.
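Synergy calls of this kind compare the observed combination effect with a null model of independent drug action. The paper fits a full interaction model to the dose-response surface; the sketch below uses the simpler Bliss independence criterion, with made-up inhibition fractions (not the paper's data):

```python
def bliss_expected(fa, fb):
    """Expected combined fractional inhibition under Bliss independence:
    the two drugs act independently, so E = fa + fb - fa*fb."""
    return fa + fb - fa * fb

# assumed single-agent inhibition fractions (illustrative values only)
f_rbv, f_ifn = 0.30, 0.40
f_obs = 0.90                       # assumed observed combination inhibition
f_exp = bliss_expected(f_rbv, f_ifn)
synergy = f_obs > f_exp            # greater-than-expected effect -> synergy
```

Here the expected independent effect is 0.58, so an observed 0.90 inhibition would be scored as synergistic, mirroring the qualitative conclusion the paper reaches with its fitted interaction model.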

  9. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
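The SGB-DT ensemble is more elaborate than can be shown here (many energy and sequence features, stochastic subsampling, deep trees), but the core boosting idea is simple: repeatedly fit a weak learner to the current residuals and add it, shrunk by a learning rate, to the ensemble. A minimal squared-loss sketch with one-feature decision stumps (all data and hyperparameters are illustrative):

```python
def fit_stump(x, y):
    """Least-squares regression stump on one feature: best single split."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for cut in range(1, len(x)):
        thr = (x[order[cut - 1]] + x[order[cut]]) / 2.0
        left = [y[i] for i in order[:cut]]
        right = [y[i] for i in order[cut:]]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((v - lm) ** 2 for v in left)
               + sum((v - rm) ** 2 for v in right))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda v, thr=thr, lm=lm, rm=rm: lm if v < thr else rm

def boost(x, y, rounds=50, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    stumps, resid = [], y[:]
    for _ in range(rounds):
        s = fit_stump(x, resid)
        stumps.append(s)
        resid = [r - lr * s(v) for r, v in zip(resid, x)]
    return lambda v: sum(lr * s(v) for s in stumps)

x = [i / 10.0 for i in range(20)]
y = [(0.0 if v < 1.0 else 1.0) + 0.1 * v for v in x]   # step plus gentle trend
model = boost(x, y)
```

The ensemble drives the training error down geometrically, capturing both the sharp step and the slow trend, which is the same mechanism that lets SGB-DT blend heterogeneous structural and sequence features.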

  10. Modelling PK/QT relationships from Phase I dose-escalation trials for drug combinations and developing quantitative risk assessments of clinically relevant QT prolongations.

    PubMed

    Sinclair, Karen; Kinable, Els; Grosch, Kai; Wang, Jixian

    2016-05-01

    In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug-drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials for a combination of two drugs. The simulation results show that invaluable information on QT effects at therapeutic dose combinations can be gained by the proposed approach. Early detection of dose combinations with substantial QT prolongation is evaluated effectively through the confidence intervals (CIs) of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose-escalation process, this sensitivity and the limited sample size should be considered when providing support to the decision-making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
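The probability that QT prolongation exceeds a clinical threshold can be estimated by Monte Carlo simulation over the fitted dose-concentration and concentration-QT models. The sketch below assumes log-normal exposure variability and linear concentration-QT slopes for the two drugs; none of these numbers come from the paper.

```python
import random

def p_qt_exceeds(dose_a, dose_b, n=20000, seed=4):
    """Monte Carlo probability that peak QT prolongation exceeds 10 ms.
    Assumed illustrative models: peak concentration scales with dose times a
    log-normal variability term; delta-QT is linear in each concentration."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ca = dose_a * rng.lognormvariate(0.0, 0.3)      # peak conc., drug A
        cb = dose_b * rng.lognormvariate(0.0, 0.3)      # peak conc., drug B
        dqt = 2.0 * ca + 3.0 * cb + rng.gauss(0.0, 2.0)  # delta-QT in ms
        if dqt > 10.0:
            hits += 1
    return hits / n

low = p_qt_exceeds(1.0, 1.0)    # lower dose combination
high = p_qt_exceeds(2.0, 2.0)   # escalated dose combination
```

Sweeping such exceedance probabilities across candidate dose combinations is what allows early flagging of combinations with a clinically significant QT risk.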

  11. An Integrative Model of Physiological Traits Can be Used to Predict Obstructive Sleep Apnea and Response to Non Positive Airway Pressure Therapy.

    PubMed

    Owens, Robert L; Edwards, Bradley A; Eckert, Danny J; Jordan, Amy S; Sands, Scott A; Malhotra, Atul; White, David P; Loring, Stephen H; Butler, James P; Wellman, Andrew

    2015-06-01

    Both anatomical and nonanatomical traits are important in obstructive sleep apnea (OSA) pathogenesis. We have previously described a model combining these traits, but have not determined its diagnostic accuracy to predict OSA. A valid model, and knowledge of the published effect sizes of trait manipulation, would also allow us to predict the number of patients with OSA who might be effectively treated without using positive airway pressure (PAP). Fifty-seven subjects with and without OSA underwent standard clinical and research sleep studies to measure OSA severity and the physiological traits important for OSA pathogenesis, respectively. The traits were incorporated into a physiological model to predict OSA. The model validity was determined by comparing the model prediction of OSA to the clinical diagnosis of OSA. The effect of various trait manipulations was then simulated to predict the proportion of patients treated by each intervention. The model had good sensitivity (80%) and specificity (100%) for predicting OSA. A single intervention on one trait would be predicted to treat OSA in approximately one quarter of all patients. Combination therapy with two interventions was predicted to treat OSA in ∼50% of patients. An integrative model of physiological traits can be used to predict population-wide and individual responses to non-PAP therapy. Many patients with OSA would be expected to be treated based on known trait manipulations, making a strong case for the importance of non-anatomical traits in OSA pathogenesis and the effectiveness of non-PAP therapies. © 2015 Associated Professional Sleep Societies, LLC.
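The diagnostic-accuracy figures quoted above (80% sensitivity, 100% specificity) follow from a standard confusion-matrix calculation, sketched here with made-up counts chosen only to reproduce those rates:

```python
def sens_spec(predicted, actual):
    """Sensitivity and specificity of a binary (OSA / no-OSA) prediction."""
    tp = sum(p and a for p, a in zip(predicted, actual))            # true positives
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))  # true negatives
    fn = sum((not p) and a for p, a in zip(predicted, actual))      # missed cases
    fp = sum(p and (not a) for p, a in zip(predicted, actual))      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# illustrative cohort: 10 patients with OSA, 5 without; the model
# flags 8 of the 10 cases and none of the controls
actual = [1] * 10 + [0] * 5
predicted = [1] * 8 + [0] * 2 + [0] * 5
sens, spec = sens_spec(predicted, actual)
```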

  12. Drug response in a genetically engineered mouse model of multiple myeloma is predictive of clinical efficacy

    PubMed Central

    Chesi, Marta; Matthews, Geoffrey M.; Garbitt, Victoria M.; Palmer, Stephen E.; Shortt, Jake; Lefebure, Marcus; Stewart, A. Keith; Johnstone, Ricky W.

    2012-01-01

    The attrition rate for anticancer drugs entering clinical trials is unacceptably high. For multiple myeloma (MM), we postulate that this is because of preclinical models that overemphasize the antiproliferative activity of drugs, and clinical trials performed in refractory end-stage patients. We validate the Vk*MYC transgenic mouse as a faithful model to predict single-agent drug activity in MM with a positive predictive value of 67% (4 of 6) for clinical activity, and a negative predictive value of 86% (6 of 7) for clinical inactivity. We identify 4 novel agents that should be prioritized for evaluation in clinical trials. Transplantation of Vk*MYC tumor cells into congenic mice selected for a more aggressive disease that models end-stage drug-resistant MM and responds only to combinations of drugs with single-agent activity in untreated Vk*MYC MM. We predict that combinations of standard agents, histone deacetylase inhibitors, bromodomain inhibitors, and hypoxia-activated prodrugs will demonstrate efficacy in the treatment of relapsed MM. PMID:22451422

  13. Prediction of passenger ride quality in a multifactor environment

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Leatherwood, J. D.

    1976-01-01

    A model under development permits the understanding and prediction of passenger discomfort in a multifactor environment, with particular emphasis on combined noise and vibration. The model has general applicability to diverse transportation systems and provides a means of developing ride quality design criteria, as well as a diagnostic tool for identifying the vibration and/or noise stimuli causing discomfort. Presented are: (1) a review of the basic theoretical and mathematical computations associated with the model; (2) a discussion of methodological and criteria investigations for both the vertical and roll axes of vibration; (3) a description of within-axis masking of discomfort responses for the vertical axis, thereby allowing prediction of the total discomfort due to any random vertical vibration; (4) a discussion of initial data on between-axis masking; and (5) a discussion of a study directed toward extending the vibration model to the more general case of predicting ride quality in combined noise and vibration environments.

  14. Preoperative Electrocardiogram Score for Predicting New-Onset Postoperative Atrial Fibrillation in Patients Undergoing Cardiac Surgery.

    PubMed

    Gu, Jiwei; Andreasen, Jan J; Melgaard, Jacob; Lundbye-Christensen, Søren; Hansen, John; Schmidt, Erik B; Thorsteinsson, Kristinn; Graff, Claus

    2017-02-01

    To investigate if electrocardiogram (ECG) markers from routine preoperative ECGs can be used in combination with clinical data to predict new-onset postoperative atrial fibrillation (POAF) following cardiac surgery. Retrospective observational case-control study. Single-center university hospital. One hundred consecutive adult patients (50 POAF, 50 without POAF) who underwent coronary artery bypass grafting, valve surgery, or combinations thereof. Retrospective review of medical records and registration of POAF. Clinical data and demographics were retrieved from the Western Denmark Heart Registry and patient records. Paper tracings of preoperative ECGs were collected from patient records, and ECG measurements were read by two independent readers blinded to outcome. A subset of four clinical variables (age, gender, body mass index, and type of surgery) was selected to form a multivariate clinical prediction model for POAF, and five ECG variables (QRS duration, PR interval, P-wave duration, left atrial enlargement, and left ventricular hypertrophy) were used in a multivariate ECG model. Adding ECG variables to the clinical prediction model significantly improved the area under the receiver operating characteristic curve from 0.54 to 0.67 (with cross-validation). The best predictive model for POAF was a combined clinical and ECG model with the following four variables: age, PR interval, QRS duration, and left atrial enlargement. ECG markers obtained from a routine preoperative ECG may be helpful in predicting new-onset POAF in patients undergoing cardiac surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
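The AUC comparison above (0.54 clinical-only versus 0.67 combined) rests on the rank interpretation of the ROC area: the probability that a randomly chosen POAF case scores higher than a randomly chosen control. A sketch with made-up risk scores chosen so that adding ECG information improves the ranking:

```python
def auc(scores, labels):
    """ROC area via the Mann-Whitney rank statistic: the fraction of
    (case, control) pairs the score orders correctly (ties count 1/2)."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                  # POAF cases vs controls (toy data)
clinical = [0.6, 0.4, 0.5, 0.5, 0.4, 0.3]    # clinical-only risk scores (assumed)
combined = [0.8, 0.6, 0.7, 0.4, 0.3, 0.2]    # clinical + ECG risk scores (assumed)
```

On these toy scores the clinical model misorders some case-control pairs while the combined model separates the groups perfectly, the same qualitative effect the study reports at smaller magnitude.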

  15. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

  16. Adaptive rival penalized competitive learning and combined linear predictor model for financial forecast and investment.

    PubMed

    Cheung, Y M; Leung, W M; Xu, L

    1997-01-01

    We propose a prediction model, the Rival Penalized Competitive Learning (RPCL) and Combined Linear Predictor (CLP) method, which involves a set of local linear predictors such that a prediction is made by combining the outputs of the activated predictors through a gating network (Xu et al., 1994). Furthermore, we present an improved variant, named Adaptive RPCL-CLP, that includes an adaptive learning mechanism as well as a data pre- and post-processing scheme. We compare them with some existing models by demonstrating their performance on two real-world financial time series--a China stock price and an exchange-rate series of US Dollar (USD) versus Deutschmark (DEM). Experiments have shown that Adaptive RPCL-CLP not only outperforms the other approaches, with the smallest prediction error and training costs, but also brings in considerably high profits in the trading simulation of the foreign exchange market.

  17. Advances in modeling soil erosion after disturbance on rangelands

    USDA-ARS?s Scientific Manuscript database

    Research has been undertaken to develop process based models that predict soil erosion rate after disturbance on rangelands. In these models soil detachment is predicted as a combination of multiple erosion processes, rain splash and thin sheet flow (splash and sheet) detachment and concentrated flo...

  18. Integrated Model of Multiple Kernel Learning and Differential Evolution for EUR/USD Trading

    PubMed Central

    Deng, Shangkun; Sakurai, Akito

    2014-01-01

    Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), which is also combined with the MKL prediction as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources, and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called moving average convergence divergence (MACD). Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits. PMID:25097891

  19. Predicting the safety and efficacy of buffer therapy to raise tumour pHe: an integrative modelling study.

    PubMed

    Martin, N K; Robey, I F; Gaffney, E A; Gillies, R J; Gatenby, R A; Maini, P K

    2012-03-27

    Clinical positron emission tomography imaging has demonstrated that the vast majority of human cancers exhibit significantly increased glucose metabolism when compared with adjacent normal tissue, resulting in an acidic tumour microenvironment. Recent studies demonstrated that reducing this acidity through systemic buffers significantly inhibits the development and growth of metastases in mouse xenografts. We apply and extend a previously developed mathematical model of blood and tumour buffering to examine the impact of oral administration of bicarbonate buffer in mice, and the potential impact in humans. We recapitulate the experimentally observed tumour pHe effect of buffer therapy, testing a model prediction in vivo in mice. We parameterise the model for humans to determine the translational safety and efficacy, and predict patient subgroups who could have an enhanced treatment response, as well as the most promising combination or alternative buffer therapies. The model predicts a previously unseen, potentially dangerous elevation in blood pHe resulting from bicarbonate therapy in mice, which is confirmed by our in vivo experiments. Simulations predict limited efficacy of bicarbonate, especially in humans with more aggressive cancers. We predict buffer therapy would be most effectual: in elderly patients or individuals with renal impairments; in combination with proton production inhibitors (such as dichloroacetate), renal glomerular filtration rate inhibitors (such as non-steroidal anti-inflammatory drugs and angiotensin-converting enzyme inhibitors), or with an alternative buffer reagent possessing an optimal pK of 7.1-7.2. Our mathematical model confirms bicarbonate acts as an effective agent to raise tumour pHe, but potentially induces metabolic alkalosis at the high doses necessary for tumour pHe normalisation. We predict use in elderly patients or in combination with proton production inhibitors or buffers with a pK of 7.1-7.2 is most promising.
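The blood-pH side of such buffering models is anchored by the Henderson-Hasselbalch relation for the bicarbonate system. A minimal sketch using typical textbook arterial values (not the paper's parameterization, which also tracks renal and respiratory compensation) showing how a bicarbonate load with unchanged ventilation pushes blood pH toward alkalosis:

```python
import math

def blood_ph(hco3_mM, pco2_mmHg, pK=6.1, s=0.03):
    """Henderson-Hasselbalch equation for the bicarbonate buffer system:
    pH = pK + log10([HCO3-] / (s * pCO2)), with s the CO2 solubility
    (mM per mmHg) and pK the apparent carbonic-acid constant."""
    return pK + math.log10(hco3_mM / (s * pco2_mmHg))

normal = blood_ph(24.0, 40.0)   # typical arterial values -> pH ~ 7.40
loaded = blood_ph(35.0, 40.0)   # bicarbonate load, ventilation unchanged
```

The elevated bicarbonate alone raises the computed pH well above the normal range, which is the mechanism behind the model's predicted risk of metabolic alkalosis at high doses.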

  20. Human and server docking prediction for CAPRI round 30-35 using LZerD with combined scoring functions.

    PubMed

    Peterson, Lenna X; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke

    2017-03-01

    We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriate soft representation of the protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which adds an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues' spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, that is, whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, findings on scoring functions are expected to be universally applicable to other docking methods. Proteins 2017; 85:513-527. © 2016 Wiley Periodicals, Inc.

  1. Predicting Rib Fracture Risk With Whole-Body Finite Element Models: Development and Preliminary Evaluation of a Probabilistic Analytical Framework

    PubMed Central

    Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria

    2012-01-01

    This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
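Combining local fracture probabilities into whole-ribcage risk is a Poisson-binomial calculation: P(any fracture) has a closed form, and tail probabilities such as P(at least 3 fractures), used here as a crude proxy for an AIS3+ criterion, can be estimated by Monte Carlo. The per-rib probabilities below are illustrative values, not the THUMS results:

```python
import random

def p_at_least_k(p_local, k, n=100000, seed=5):
    """Monte Carlo estimate of P(at least k rib fractures) given independent
    per-rib fracture probabilities (a Poisson-binomial tail probability)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if sum(rng.random() < p for p in p_local) >= k:
            hits += 1
    return hits / n

# assumed per-rib, strain-based fracture probabilities (illustrative only)
p_local = [0.05, 0.10, 0.20, 0.20, 0.10, 0.05]

p_any = 1.0
for p in p_local:
    p_any *= (1.0 - p)
p_any = 1.0 - p_any                     # P(at least one fracture), closed form
p_severe = p_at_least_k(p_local, k=3)   # proxy for an AIS3+ severity criterion
```

The severity tail is far smaller than the any-fracture probability, which is why injury-severity predictions are much more sensitive to the individual strain-based probabilities than binary fracture predictions are.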

  2. Stable Isotope Ratio and Elemental Profile Combined with Support Vector Machine for Provenance Discrimination of Oolong Tea (Wuyi-Rock Tea)

    PubMed Central

    Lou, Yun-xiao; Fu, Xian-shu; Yu, Xiao-ping; Zhang, Ya-fen

    2017-01-01

    This paper focused on an effective method to discriminate the geographical origin of Wuyi-Rock tea by stable isotope ratio (SIR) and metallic element profiling (MEP) combined with support vector machine (SVM) analysis. Wuyi-Rock tea (n = 99) collected from nine producing areas and non-Wuyi-Rock tea (n = 33) from eleven nonproducing areas were analysed for SIR and MEP by established methods. The SVM model based on the coupled data produced the best prediction accuracy (0.9773). This shows that instrumental methods combined with a classification model can provide an effective and stable tool for provenance discrimination. Moreover, every feature variable in the stable isotope and metallic element data was ranked by its contribution to the model. The results show that δ2H, δ18O, Cs, Cu, Ca, and Rb contents are significant indicators for provenance discrimination, and that not all of the metallic elements improve the prediction accuracy of the SVM model. PMID:28473941

  3. BiPPred: Combined sequence- and structure-based prediction of peptide binding to the Hsp70 chaperone BiP.

    PubMed

    Schneider, Markus; Rosam, Mathias; Glaser, Manuel; Patronov, Atanas; Shah, Harpreet; Back, Katrin Christiane; Daake, Marina Angelika; Buchner, Johannes; Antes, Iris

    2016-10-01

    Substrate binding to Hsp70 chaperones is involved in many biological processes, and the identification of potential substrates is important for a comprehensive understanding of these events. We present a multi-scale pipeline for an accurate, yet efficient prediction of peptides binding to the Hsp70 chaperone BiP by combining sequence-based prediction with molecular docking and MMPBSA calculations. First, we measured the binding of 15mer peptides from known substrate proteins of BiP by peptide array (PA) experiments and performed an accuracy assessment of the PA data by fluorescence anisotropy studies. Several sequence-based prediction models were fitted using this and other peptide binding data. A structure-based position-specific scoring matrix (SB-PSSM) derived solely from structural modeling data forms the core of all models. The matrix elements are based on a combination of binding energy estimations, molecular dynamics simulations, and analysis of the BiP binding site, which led to new insights into the peptide binding specificities of the chaperone. Using this SB-PSSM, peptide binders could be predicted with high selectivity even without training of the model on experimental data. Additional training further increased the prediction accuracies. Subsequent molecular docking (DynaDock) and MMGBSA/MMPBSA-based binding affinity estimations for predicted binders allowed the identification of the correct binding mode of the peptides as well as the calculation of nearly quantitative binding affinities. The general concept behind the developed multi-scale pipeline can readily be applied to other protein-peptide complexes with linearly bound peptides, for which sufficient experimental binding data for the training of classical sequence-based prediction models is not available. Proteins 2016; 84:1390-1407. © 2016 Wiley Periodicals, Inc.

  4. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
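The "combined input" of the abstract amounts to feature concatenation, and model quality is reported as RMSE in log S units; a minimal sketch (the descriptor values below are invented placeholders, not actual theoretical energies or CDK descriptors):

```python
import math

def combine_descriptors(theory_terms, cdk_descriptors):
    """Concatenate theoretical energy terms and cheminformatics
    descriptors into a single feature vector for a downstream model."""
    return list(theory_terms) + list(cdk_descriptors)

def rmse(predicted, observed):
    """Root mean squared error, e.g. in log S units."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

features = combine_descriptors([-3.2, 1.7], [0.41, 2, 118.9])
print(features)  # five combined features
print(rmse([1.0, 2.0], [0.0, 0.0]))
```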

  5. Intelligent sensing sensory quality of Chinese rice wine using near infrared spectroscopy and nonlinear tools

    NASA Astrophysics Data System (ADS)

    Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen

    2016-02-01

    This work reports the application of near infrared (NIR) spectroscopy, as an alternative to a human sensory panel, for estimating Chinese rice wine quality; specifically, for predicting the overall sensory scores assigned by a trained sensory panel. A back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, termed BP-AdaBoost, was proposed as a novel nonlinear modeling approach. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. The models were optimized by cross validation, and the performance of each final model was evaluated by the correlation coefficient (Rp) and the root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost outperformed the other models; the best Si-BP-AdaBoost model achieved Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine.
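The interval-selection step can be illustrated with a toy stand-in: score each candidate spectral interval by how well a simple univariate calibration fits the sensory scores, and keep the best one. Ordinary least squares on the interval mean replaces Si-PLS here purely for illustration, and the spectra and scores are synthetic:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def interval_rmse(spectra, scores, lo, hi):
    """Calibration error when predicting scores from one interval's mean."""
    means = [sum(s[lo:hi]) / (hi - lo) for s in spectra]
    a, b = fit_line(means, scores)
    sq = [(y - (a + b * x)) ** 2 for x, y in zip(means, scores)]
    return (sum(sq) / len(sq)) ** 0.5

# Synthetic spectra: channels 0-2 track the sensory score, channel 3 is noise.
spectra = [[1, 1, 1, 5], [2, 2, 2, 1], [3, 3, 3, 4], [4, 4, 4, 2]]
scores = [1, 2, 3, 4]

candidates = [(0, 3), (3, 4)]
best = min(candidates, key=lambda iv: interval_rmse(spectra, scores, *iv))
print(best)  # (0, 3): the informative interval wins
```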

  6. Combined mechanical loading of composite tubes

    NASA Technical Reports Server (NTRS)

    Derstine, Mark S.; Pindera, Marek-Jerzy; Bowles, David E.

    1988-01-01

    An analytical/experimental investigation was performed to study the effect of material nonlinearities on the response of composite tubes subjected to combined axial and torsional loading. The effect of residual stresses on subsequent mechanical response was included in the investigation. Experiments were performed on P75/934 graphite-epoxy tubes with a stacking sequence of (15/0/ + or - 10/0/ -15), using pure torsion and combined axial/torsional loading. In the presence of residual stresses, the analytical model predicted a reduction in the initial shear modulus. Experimentally, coupling between axial loading and shear strain was observed in laminated tubes under combined loading. The phenomenon was predicted by the nonlinear analytical model. The experimentally observed linear limit of the global shear response was found to correspond to the analytically predicted first ply failure. Further, the failure of the tubes was found to be path dependent above a critical load level.

  7. Interaction Analysis of Longevity Interventions Using Survival Curves.

    PubMed

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-06

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise.

  8. Interaction Analysis of Longevity Interventions Using Survival Curves

    PubMed Central

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G.; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-01

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise. PMID:29316622

  9. Predicting damage in concrete due to expansive aggregates : modeling to enable sustainable material design.

    DOT National Transportation Integrated Search

    2012-04-01

    A poroelastic model is developed that can predict stress and strain distributions and, thus, ostensibly damage likelihood in concrete under freezing conditions caused by aggregates with undesirable combinations of geometry and constitutive proper...

  10. A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises

    PubMed Central

    Marquis-Favre, Catherine; Morel, Julien

    2015-01-01

    Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances. PMID:26197326

  11. External validation of EPIWIN biodegradation models.

    PubMed

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation, and an expert survey model for estimating primary and ultimate biodegradation. Experimental biodegradation data for 110 newly notified substances were compared with the estimates of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to readily biodegradable substances. In view of the high environmental concern over persistent chemicals, and because not-readily biodegradable chemicals greatly outnumber readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-ready biodegradability. However, the highest overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.

  12. Extended evaluation on the ES-D3 cell differentiation assay combined with the BeWo transport model, to predict relative developmental toxicity of triazole compounds.

    PubMed

    Li, Hequn; Flick, Burkhard; Rietjens, Ivonne M C M; Louisse, Jochem; Schneider, Steffen; van Ravenzwaay, Bennard

    2016-05-01

    The mouse embryonic stem D3 (ES-D3) cell differentiation assay is based on the morphometric measurement of cardiomyocyte differentiation and is a promising tool to detect developmental toxicity of compounds. The BeWo transport model, consisting of BeWo b30 cells grown on transwell inserts and mimicking the placental barrier, is useful to determine relative placental transport velocities of compounds. We have previously demonstrated the usefulness of the ES-D3 cell differentiation assay in combination with the in vitro BeWo transport model to predict the relative in vivo developmental toxicity potencies of a set of reference azole compounds. To further evaluate this combined in vitro toxicokinetic and toxicodynamic approach, we combined ES-D3 cell differentiation data of six novel triazoles with relative transport rates obtained from the BeWo model and compared the obtained ranking to the developmental toxicity ranking as derived from in vivo data. The data show that the combined in vitro approach provided a correct prediction for in vivo developmental toxicity, whereas the ES-D3 cell differentiation assay as stand-alone did not. In conclusion, we have validated the combined in vitro approach for developmental toxicity, which we have previously developed with a set of reference azoles, for a set of six novel triazoles. We suggest that this combined model, which takes both toxicodynamic and toxicokinetic aspects into account, should be further validated for other chemical classes of developmental toxicants.

  13. Seasonality and Trend Forecasting of Tuberculosis Prevalence Data in Eastern Cape, South Africa, Using a Hybrid Model.

    PubMed

    Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin

    2016-07-26

    Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. Tuberculosis, as a chronic and highly infectious disease, is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and the hybrid SARIMA-neural network auto-regression (SARIMA-NNAR) model of TB incidence, and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used in analysing and predicting the TB data from 2010 to 2015. Performance measures of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to compare the predictive performance of the models. Although both models could predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09, respectively, lower than the corresponding SARIMA values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly increasing seasonal trend in TB incidence compared with the single model. The combined model indicated better TB incidence forecasting with a lower AICc. The model also indicates the need for resolute interventions to reduce infectious disease transmission, given co-infection with HIV and other concomitant diseases, especially at festival peak periods.
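The hybrid idea behind such models — one component captures seasonality, a second component is fitted to its residuals — can be sketched with toy stand-ins: a seasonal mean replaces SARIMA and a lag-1 autoregression replaces the NNAR network. The series below is synthetic:

```python
def seasonal_forecast(series, period):
    """Mean of each seasonal position (toy stand-in for SARIMA)."""
    return [sum(series[i::period]) / len(series[i::period]) for i in range(period)]

def ar1_coef(resid):
    """Lag-1 autoregression through the origin (toy stand-in for NNAR)."""
    num = sum(r1 * r0 for r0, r1 in zip(resid, resid[1:]))
    den = sum(r0 * r0 for r0 in resid[:-1])
    return num / den

def hybrid_next(series, period):
    """Seasonal component plus AR(1)-corrected residual for the next step."""
    seas = seasonal_forecast(series, period)
    resid = [y - seas[i % period] for i, y in enumerate(series)]
    phi = ar1_coef(resid)
    return seas[len(series) % period] + phi * resid[-1]

# Synthetic period-2 series with decaying residuals on top of the pattern.
series = [11.0, 20.5, 10.25, 20.125, 10.0625, 20.03125]
print(hybrid_next(series, 2))
```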

  14. Target and Tissue Selectivity Prediction by Integrated Mechanistic Pharmacokinetic-Target Binding and Quantitative Structure Activity Modeling.

    PubMed

    Vlot, Anna H C; de Witte, Wilhelmus E A; Danhof, Meindert; van der Graaf, Piet H; van Westen, Gerard J P; de Lange, Elizabeth C M

    2017-12-04

    Selectivity is an important attribute of effective and safe drugs, and prediction of in vivo target and tissue selectivity would likely improve drug development success rates. However, a lack of understanding of the underlying (pharmacological) mechanisms and availability of directly applicable predictive methods complicates the prediction of selectivity. We explore the value of combining physiologically based pharmacokinetic (PBPK) modeling with quantitative structure-activity relationship (QSAR) modeling to predict the influence of the target dissociation constant (KD) and the target dissociation rate constant on target and tissue selectivity. The KD values of CB1 ligands in the ChEMBL database are predicted by QSAR random forest (RF) modeling for the CB1 receptor and known off-targets (TRPV1, mGlu5, 5-HT1a). Of these CB1 ligands, rimonabant, CP-55940, and Δ8-tetrahydrocannabinol, one of the active ingredients of cannabis, were selected for simulations of target occupancy for CB1, TRPV1, mGlu5, and 5-HT1a in three brain regions, to illustrate the principles of the combined PBPK-QSAR modeling. Our combined PBPK and target binding modeling demonstrated that the optimal values of the KD and koff for target and tissue selectivity were dependent on target concentration and tissue distribution kinetics. Interestingly, if the target concentration is high and the perfusion of the target site is low, the optimal KD value is often not the lowest KD value, suggesting that optimization towards high drug-target affinity can decrease the benefit-risk ratio. The presented integrative structure-pharmacokinetic-pharmacodynamic modeling provides an improved understanding of tissue and target selectivity.
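The equilibrium target-occupancy relation underlying such simulations can be stated in one line; the concentrations below are arbitrary illustrative numbers:

```python
def equilibrium_occupancy(ligand_conc, kd):
    """Fractional target occupancy at equilibrium: L / (L + KD).
    KD itself is the ratio koff / kon of the binding rate constants."""
    return ligand_conc / (ligand_conc + kd)

# Occupancy is 50% when the free ligand concentration equals KD,
# and rises toward 1 as concentration increasingly exceeds KD.
print(equilibrium_occupancy(1.0, 1.0))  # 0.5
print(equilibrium_occupancy(9.0, 1.0))  # 0.9
```

Out of equilibrium (e.g., fast tissue washout), occupancy additionally depends on koff, which is why the abstract treats KD and the dissociation rate constant separately.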

  15. The brain, self and society: a social-neuroscience model of predictive processing.

    PubMed

    Kelly, Michael P; Kriznik, Natasha M; Kinmonth, Ann Louise; Fletcher, Paul C

    2018-05-10

    This paper presents a hypothesis about how social interactions shape and influence predictive processing in the brain. The paper integrates concepts from neuroscience and sociology where a gulf presently exists between the ways that each describe the same phenomenon - how the social world is engaged with by thinking humans. We combine the concepts of predictive processing models (also called predictive coding models in the neuroscience literature) with ideal types, typifications and social practice - concepts from the sociological literature. This generates a unified hypothetical framework integrating the social world and hypothesised brain processes. The hypothesis combines aspects of neuroscience and psychology with social theory to show how social behaviors may be "mapped" onto brain processes. It outlines a conceptual framework that connects the two disciplines and that may enable creative dialogue and potential future research.

  16. Doppler ultrasonography combined with transient elastography improves the non-invasive assessment of fibrosis in patients with chronic liver diseases.

    PubMed

    Alempijevic, Tamara; Zec, Simon; Nikolic, Vladimir; Veljkovic, Aleksandar; Stojanovic, Zoran; Matovic, Vera; Milosavljevic, Tomica

    2017-01-31

    Accurate clinical assessment of liver fibrosis is essential, and the aim of our study was to compare and combine hemodynamic Doppler ultrasonography, liver stiffness measured by transient elastography, and non-invasive serum biomarkers with the degree of fibrosis confirmed by liver biopsy, and thereby to determine the value of combining non-invasive methods in the prediction of significant liver fibrosis. We included 102 patients with chronic liver disease of various etiologies. Each patient was evaluated using Doppler ultrasonography measurements of the velocity and flow pattern at the portal trunk, hepatic and splenic arteries, serum fibrosis biomarkers, and transient elastography. These parameters were then input into a multilayer perceptron artificial neural network with two hidden layers, and used to create models for predicting significant fibrosis. According to the METAVIR score, clinically significant fibrosis (≥F2) was detected in 57.8% of patients. A model based only on Doppler parameters (hepatic artery diameter, hepatic artery systolic and diastolic velocity, splenic artery systolic velocity and splenic artery Resistance Index) predicted significant liver fibrosis with a sensitivity and specificity of 75.0% and 60.0%, respectively. The addition of unrelated non-invasive tests improved the diagnostic accuracy of the Doppler examination. The best model for prediction of significant fibrosis was obtained by combining Doppler parameters, non-invasive markers (APRI, ASPRI, and FIB-4) and transient elastography, with a sensitivity and specificity of 88.9% and 100%, respectively. Doppler parameters alone predict the presence of ≥F2 fibrosis with fair accuracy. Better prediction rates are achieved by combining Doppler variables with non-invasive markers and liver stiffness measured by transient elastography.
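The sensitivity and specificity figures reported above come from simple confusion-matrix counts; a minimal sketch with invented labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    Labels: 1 = significant fibrosis (>=F2), 0 = no significant fibrosis."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: 3 biopsy-positive and 2 biopsy-negative patients.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
print(sens, spec)  # 2/3 and 1/2
```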

  17. Tracking children's mental states while solving algebra equations.

    PubMed

    Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M

    2012-11-01

    Behavioral and functional magnetic resonance imaging (fMRI) data were combined to infer the mental states of students as they interacted with an intelligent tutoring system. Sixteen children interacted with a computer tutor for solving linear equations over a six-day period (days 0-5), with days 1 and 5 occurring in an fMRI scanner. Hidden Markov model algorithms combined a model of student behavior with multi-voxel imaging pattern data to predict the mental states of students. We separately assessed the algorithms' ability to predict which step in a problem-solving sequence was performed and whether the step was performed correctly. For day 1, the data patterns of other students were used to predict the mental states of a target student. These predictions were improved on day 5 by adding information about the target student's behavioral and imaging data from day 1. Successful tracking of mental states depended on using the combination of a behavioral model and multi-voxel pattern analysis, illustrating the effectiveness of an integrated approach to tracking the cognition of individuals in real time as they perform complex tasks. Copyright © 2011 Wiley Periodicals, Inc.
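The hidden-Markov machinery for decoding a most likely state sequence can be sketched with the classic Viterbi algorithm; the two states, observation symbols, and probabilities below are invented toys, not the study's model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence (log domain)."""
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this time step.
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s] * emit_p[s][o]), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ["on_task", "off_task"]
start = {"on_task": 0.6, "off_task": 0.4}
trans = {"on_task": {"on_task": 0.7, "off_task": 0.3},
         "off_task": {"on_task": 0.4, "off_task": 0.6}}
emit = {"on_task": {"fast": 0.8, "slow": 0.2},
        "off_task": {"fast": 0.3, "slow": 0.7}}

print(viterbi(["fast", "fast", "slow"], states, start, trans, emit))
# -> ['on_task', 'on_task', 'off_task']
```

In the study, the emission model would be the multi-voxel pattern classifier rather than a small symbol table.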

  18. Experimental design and modelling approach to evaluate efficacy of β-lactam/β-lactamase inhibitor combinations.

    PubMed

    Sy, S K B; Derendorf, H

    2017-07-29

    A β-lactamase inhibitor (BLI) confers susceptibility of β-lactamase-expressing multidrug resistant (MDR) organisms to the partnering β-lactam (BL). We discuss experimental design and modelling strategies for two-drug combinations, using the ceftazidime-avibactam and aztreonam-avibactam combinations as examples. The information comes from several publications on avibactam in vitro time-kill studies and corresponding pharmacodynamic models. The experimental design to optimally gather crucial information from constant-concentration time-kill studies is to use an agile matrix of two-drug concentration combinations that covers 0.25- to 4-fold BL minimum inhibitory concentration (MIC) relative to the BLI concentrations to be tested against the particular isolate. This shifting agile design can save substantial costs and resources without sacrificing crucial information needed for model development. The complex synergistic BL/BLI interaction is quantitatively explored using a semi-mechanistic pharmacokinetic-pharmacodynamic (PK/PD) mathematical model that accounts for antimicrobial activities in the combination, bacteria-mediated BL degradation and inhibition of BL degradation by the BLI. A predictive mathematical formulation for the two-drug killing effects preserves the correlation between the model-derived EC50 of the BL and the BL MIC. The predictive value of the PK/PD model is evaluated against external data that were not used for model development, including but not limited to in vitro hollow fibre and in vivo murine infection models. As a framework for translational predictions, the goal of this modelling strategy is to significantly decrease decision-making time by running clinical trial simulations with an MIC-substituted EC50 function for isolates of comparable susceptibility, through the established correlation between BL MIC and EC50 values. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
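The drug-effect term at the heart of semi-mechanistic PK/PD kill models is typically a sigmoidal Emax (Hill) function; a minimal sketch, noting that the MIC-substitution step described above amounts to replacing `ec50` with a value derived from the isolate's MIC (all parameter values here are illustrative):

```python
def hill_effect(conc, emax, ec50, h=1.0):
    """Sigmoidal Emax (Hill) drug-effect term:
    effect = Emax * C^h / (EC50^h + C^h)."""
    return emax * conc ** h / (ec50 ** h + conc ** h)

# At C = EC50 the effect is half-maximal by construction.
print(hill_effect(4.0, emax=2.0, ec50=4.0))  # 1.0
```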

  19. Improving Environmental Model Calibration and Prediction

    DTIC Science & Technology

    2011-01-18

    First, we have continued to develop tools for efficient global optimization of environmental models. Our algorithms are hybrid algorithms that combine evolutionary strategies ... toward practical hybrid optimization tools for environmental models.

  20. Discovering Anti-platelet Drug Combinations with an Integrated Model of Activator-Inhibitor Relationships, Activator-Activator Synergies and Inhibitor-Inhibitor Synergies

    PubMed Central

    Lombardi, Federica; Golla, Kalyan; Fitzpatrick, Darren J.; Casey, Fergal P.; Moran, Niamh; Shields, Denis C.

    2015-01-01

    Identifying effective therapeutic drug combinations that modulate complex signaling pathways in platelets is central to the advancement of effective anti-thrombotic therapies. However, there is no systems model of the platelet that predicts responses to different inhibitor combinations. We developed an approach which goes beyond current inhibitor-inhibitor combination screening to efficiently consider other signaling aspects that may give insights into the behaviour of the platelet as a system. We investigated combinations of platelet inhibitors and activators. We evaluated three distinct strands of information, namely: activator-inhibitor combination screens (testing a panel of inhibitors against a panel of activators); inhibitor-inhibitor synergy screens; and activator-activator synergy screens. We demonstrated how these analyses may be efficiently performed, both experimentally and computationally, to identify particular combinations of most interest. Robust tests of activator-activator synergy and of inhibitor-inhibitor synergy required combinations to show significant excesses over the double doses of each component. Modeling identified multiple effects of an inhibitor of the P2Y12 ADP receptor, and complementarity between inhibitor-inhibitor synergy effects and activator-inhibitor combination effects. This approach accelerates the mapping of combination effects of compounds to develop combinations that may be therapeutically beneficial. We integrated the three information sources into a unified model that predicted the benefits of a triple drug combination targeting ADP, thromboxane and thrombin signaling. PMID:25875950
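The "excess over double doses" criterion for synergy described above can be sketched directly; the response values below are invented:

```python
def double_dose_excess(resp_combo, resp_double_a, resp_double_b):
    """Excess of the A+B combination response over the stricter of the
    two double-dose controls (2A and 2B); a synergy call requires this
    excess to be positive (and statistically significant in practice)."""
    return resp_combo - max(resp_double_a, resp_double_b)

# Invented responses: combination beats both double-dose controls.
print(double_dose_excess(0.75, 0.5, 0.25))  # 0.25
```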

  1. A Diagnostic Calculator for Detecting Glaucoma on the Basis of Retinal Nerve Fiber Layer, Optic Disc, and Retinal Ganglion Cell Analysis by Optical Coherence Tomography.

    PubMed

    Larrosa, José Manuel; Moreno-Montañés, Javier; Martinez-de-la-Casa, José María; Polo, Vicente; Velázquez-Villoria, Álvaro; Berrozpe, Clara; García-Granero, Marta

    2015-10-01

    The purpose of this study was to develop and validate a multivariate predictive model to detect glaucoma by using a combination of retinal nerve fiber layer (RNFL), retinal ganglion cell-inner plexiform (GCIPL), and optic disc parameters measured using spectral-domain optical coherence tomography (OCT). Five hundred eyes from 500 participants and 187 eyes of another 187 participants were included in the study and validation groups, respectively. Patients with glaucoma were classified in five groups based on visual field damage. Sensitivity and specificity of all glaucoma OCT parameters were analyzed. Receiver operating characteristic curves (ROC) and areas under the ROC (AUC) were compared. Three predictive multivariate models (quantitative, qualitative, and combined) that used a combination of the best OCT parameters were constructed. A diagnostic calculator was created using the combined multivariate model. The best AUC parameters were: inferior RNFL, average RNFL, vertical cup/disc ratio, minimal GCIPL, and inferior-temporal GCIPL. Comparisons among the parameters did not show that the GCIPL parameters were better than those of the RNFL in early and advanced glaucoma. The highest AUC was in the combined predictive model (0.937; 95% confidence interval, 0.911-0.957) and was significantly (P = 0.0001) higher than the other isolated parameters considered in early and advanced glaucoma. The validation group displayed similar results to those of the study group. Best GCIPL, RNFL, and optic disc parameters showed a similar ability to detect glaucoma. The combined predictive formula improved the glaucoma detection compared to the best isolated parameters evaluated. The diagnostic calculator obtained good classification from participants in both the study and validation groups.
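AUC comparisons like those above can be computed with the rank (Mann-Whitney) formulation of the area under the ROC curve; a minimal sketch with invented scores:

```python
def auc(scores_glaucoma, scores_healthy):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a randomly chosen glaucoma eye scores above a randomly chosen
    healthy eye, counting ties as half."""
    pairs = [(g, h) for g in scores_glaucoma for h in scores_healthy]
    wins = sum(g > h for g, h in pairs) + 0.5 * sum(g == h for g, h in pairs)
    return wins / len(pairs)

print(auc([0.9, 0.8, 0.7], [0.4, 0.3]))  # perfect separation: 1.0
print(auc([0.9, 0.2], [0.4, 0.3]))       # chance-level: 0.5
```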

  2. ADMET Evaluation in Drug Discovery. 16. Predicting hERG Blockers by Combining Multiple Pharmacophores and Machine Learning Approaches.

    PubMed

    Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun

    2016-08-01

    Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.

  3. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    NASA Astrophysics Data System (ADS)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provide little additional benefit.
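A minimal sketch of the model-averaging idea: weight each candidate model by its cross-validation skill and combine the forecasts. The softmax-of-negative-error weighting below is a simple illustrative choice, not the Bayesian model averaging scheme used in the study, and the error and forecast values are invented:

```python
import math

def bma_weights(cv_errors):
    """Weights from cross-validation errors: lower error -> higher weight
    (softmax of negative error; illustrative, not the study's scheme)."""
    m = min(cv_errors)
    w = [math.exp(-(e - m)) for e in cv_errors]
    s = sum(w)
    return [x / s for x in w]

def averaged_forecast(forecasts, weights):
    """Weighted combination of the candidate-model forecasts."""
    return sum(f * w for f, w in zip(forecasts, weights))

weights = bma_weights([1.0, 2.0, 4.0])
print(weights)  # best (lowest-error) model receives the largest weight
print(averaged_forecast([100.0, 120.0, 90.0], weights))
```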

  4. [Research on Kalman interpolation prediction model based on micro-region PM2.5 concentration].

    PubMed

    Wang, Wei; Zheng, Bin; Chen, Binlin; An, Yaoming; Jiang, Xiaoming; Li, Zhangyong

    2018-02-01

    In recent years, particulate pollution, especially PM2.5, has become increasingly serious and has attracted worldwide attention. In this paper, a Kalman prediction model combined with cubic spline interpolation is proposed and applied to predict the PM2.5 concentration in the micro-regional campus environment and to produce an interpolated map simulating the spatial distribution of PM2.5. The experimental data come from the environmental information monitoring system set up by our laboratory. The predicted and actual PM2.5 concentrations were compared using the Wilcoxon signed-rank test; the two-tailed asymptotic significance probability was 0.527, much greater than the significance level α = 0.05. The mean absolute error (MAE) of the Kalman prediction model was 1.8 μg/m³, the mean relative error (MRE) was 6%, and the correlation coefficient R was 0.87. Thus, the Kalman prediction model predicts the PM2.5 concentration better than the back propagation (BP) and support vector machine (SVM) predictions. In addition, combining the Kalman prediction model with the spline interpolation method allows the spatial distribution and local pollution characteristics of PM2.5 to be simulated.
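
A minimal scalar Kalman filter of the kind such a model builds on can be sketched as follows, assuming a random-walk state model and hypothetical noise variances q and r (the paper's actual state model and parameters are not reproduced here).

```python
def kalman_1d(observations, q=1.0, r=4.0):
    """Scalar Kalman filter with a random-walk state model.
    q: assumed process-noise variance, r: assumed measurement-noise variance."""
    x, p = observations[0], 1.0       # initial state estimate and variance
    estimates = [x]
    for z in observations[1:]:
        p = p + q                     # predict: random walk adds process noise
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical hourly PM2.5 readings (μg/m³) from one monitoring node
pm25 = [35.0, 38.0, 40.0, 37.0, 42.0, 45.0]
est = kalman_1d(pm25)
```

The paper pairs such per-node predictions with cubic spline interpolation across monitoring sites to draw the spatial PM2.5 distribution map.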

  5. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and of estimating and predicting those parameters under varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. The two blur parameters for each observed edge profile are estimated with a brute-force search for the parameters that produce the global minimum error. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side and light-side edge brightness following a certain global trend, which is similar across varying CODs. The proposed edge model is compared with a one-blur-parameter edge model in terms of the root mean squared error of fitting each model to each observed edge profile. The comparison results suggest that the proposed edge model outperforms the one-blur-parameter model in most cases where edges have varying brightness combinations.

  6. Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana

    2017-03-01

    Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would benefit most from optimized treatment, thereby minimizing the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers into a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessments of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis of radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available, and these were sequestered in a test set. The CT-HN lesions were automatically segmented using our level-set-based method. Morphological, texture and molecular features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual biomarker domains are useful and that the integrated multi-domain approach is the most promising for tumor progression prediction.

  7. Modeling residential lawn fertilization practices: integrating high resolution remote sensing with socioeconomic data.

    PubMed

    Zhou, Weiqi; Troy, Austin; Grove, Morgan

    2008-05-01

    This article investigates how remotely sensed lawn characteristics, such as parcel lawn area and parcel lawn greenness, combined with household characteristics, can be used to predict household lawn fertilization practices on private residential lands. This study involves two watersheds, Glyndon and Baisman's Run, in Baltimore County, Maryland, USA. Parcel lawn area and lawn greenness were derived from high-resolution aerial imagery using an object-oriented classification approach. Four indicators of household characteristics, including lot size, square footage of the house, housing value, and housing age were obtained from a property database. Residential lawn care survey data combined with remotely sensed parcel lawn area and greenness data were used to estimate two measures of household lawn fertilization practices, household annual fertilizer nitrogen application amount (N_yr) and household annual fertilizer nitrogen application rate (N_ha_yr). Using multiple regression with multi-model inferential procedures, we found that a combination of parcel lawn area and parcel lawn greenness best predicts N_yr, whereas a combination of parcel lawn greenness and lot size best predicts variation in N_ha_yr. Our analyses show that household fertilization practices can be effectively predicted by remotely sensed lawn indices and household characteristics. This has significant implications for urban watershed managers and modelers.

  8. Drought Prediction Site Specific and Regional up to Three Years in Advance

    NASA Astrophysics Data System (ADS)

    Suhler, G.; O'Brien, D. P.

    2002-12-01

    Dynamic Predictables has developed proprietary software that analyzes past climate data and predicts future climatic behavior. The programs combine a regional thermodynamic model with a unique predictive algorithm to achieve a high degree of prediction accuracy up to 36 months ahead. The thermodynamic model was developed initially to explain the results of a study on global circulation models done at SUNY-Stony Brook by S. Hameed, R.G. Currie, and H. LaGrone (Int. Jour. Climatology, 15, pp. 852-871, 1995). The authors pointed out that on time scales of 2-70 months the spectrum of sea level pressure is dominated by the harmonics and subharmonics of the seasonal cycle and their combination tones. These oscillations are fundamental to an understanding of climatic variations on sub-regional to continental scales, and their oscillatory nature allows them to be used as broad-based climate predictors. In addition, they can be subtracted from the data to yield residuals, which are then analyzed to determine predictable components. The program combines the thermodynamic model results (the primary predictive model) with those from the residual data (the secondary model) to estimate the future behavior of the climatic variable. Spatial resolution is site-specific or regionally aggregated, given weather observation records of appropriate length (45 years or more of monthly data) and reasonable quality. Most climate analysis has been based on monthly time-step data, but time scales on the order of days can be used. Oregon Climate Division 1 (Coastal) precipitation provides an example relating DynaPred's method to observed conditions in the early 2000s. The prediction's leading dynamic factors are the strong seasonal cycle in the primary model combined with high secondary-model contributions from the Earth's Chandler wobble (near 15 months) and what has been called the Quasi-Triennial Oscillation (QTO, near 36 months) in equatorial regions. Examples of regional-aggregate and site-specific predictions previously made blind forward and publicly available (AASC Annual Meetings 1998-2002) will be shown. Certain climate dynamics features relevant to extrema prediction, specifically drought prediction, will then be discussed. Time steps presented will be monthly. Climate variables examined are mean temperature and accumulated precipitation. NINO3 SST, interior continental, and marine/continental transition area examples will be shown. http://www.dynamicpredictables.com

  9. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

    PM2.5 is the main factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed qualitatively based on mathematical models and simulation. A comprehensive forecasting model (CFM) was then developed based on combination forecasting ideas: the Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were each used to predict the PM2.5 concentration time series, and their results were combined using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was then quantitatively forecasted with the CFM. The results were compared with those of the three single models, and PM2.5 concentrations for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the individual prediction methods and had better applicability, offering a new prediction approach for air quality forecasting. PMID:26110332
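
One common form of entropy-based weighting can be sketched as below. This is a sketch under assumed conventions (weights derived from each model's relative-error series, with more weight going to the error series that diverges most from uniform); the error values and forecasts are invented.

```python
import math

def entropy_weights(errors):
    """errors[i][j]: |relative error| of model j at time i.
    Each model's error series is normalized into a distribution, its
    entropy computed, and weights taken proportional to (1 - entropy)."""
    n, m = len(errors), len(errors[0])
    divergences = []
    for j in range(m):
        col = [errors[i][j] for i in range(n)]
        s = sum(col)
        p = [c / s for c in col]
        h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1.0 - h)          # degree of divergence
    total = sum(divergences)
    return [d / total for d in divergences]

def combine(forecasts, weights):
    """Weighted combination of the single-model forecasts."""
    return sum(w * f for w, f in zip(weights, forecasts))

# Hypothetical relative errors of three models (e.g. ARIMA, ANN, ESM)
errors = [[0.10, 0.30, 0.20],
          [0.20, 0.10, 0.30],
          [0.15, 0.20, 0.25]]
w = entropy_weights(errors)
pred = combine([52.0, 55.0, 60.0], w)        # next-day PM2.5, μg/m³
```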

  10. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.

    PubMed

    Liu, Dong-jun; Li, Li

    2015-06-23

    PM2.5 is the main factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed qualitatively based on mathematical models and simulation. A comprehensive forecasting model (CFM) was then developed based on combination forecasting ideas: the Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were each used to predict the PM2.5 concentration time series, and their results were combined using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was then quantitatively forecasted with the CFM. The results were compared with those of the three single models, and PM2.5 concentrations for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the individual prediction methods and had better applicability, offering a new prediction approach for air quality forecasting.

  11. Logical-rule models of classification response times: a synthesis of mental-architecture, random-walk, and decision-bound approaches.

    PubMed

    Fific, Mario; Little, Daniel R; Nosofsky, Robert M

    2010-04-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli along a set of component dimensions. Those independent decisions are then combined via logical rules to determine the overall categorization response. The time course of the independent decisions is modeled via random-walk processes operating along individual dimensions. Alternative mental architectures are used as mechanisms for combining the independent decisions to implement the logical rules. We derive fundamental qualitative contrasts for distinguishing among the predictions of the rule models and major alternative models of classification RT. We also use the models to predict detailed RT-distribution data associated with individual stimuli in tasks of speeded perceptual classification. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  12. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  13. Numerical investigation of the flow in axial water turbines and marine propellers with scale-resolving simulations

    NASA Astrophysics Data System (ADS)

    Morgut, Mitja; Jošt, Dragica; Nobile, Enrico; Škerlavaj, Aljaž

    2015-11-01

    The accurate prediction of the performance of axial water turbines and naval propellers is a challenging task of great practical relevance. In this paper a numerical prediction strategy, based on the combination of a trusted CFD solver and a calibrated mass transfer model, is applied to the turbulent flow in axial turbines and around a model-scale naval propeller, under non-cavitating and cavitating conditions. Selected results for axial water turbines and a marine propeller are presented, highlighting the advantages, in terms of accuracy and fidelity, of Scale-Resolving Simulations (SRS), such as SAS (Scale Adaptive Simulation) and Zonal LES (ZLES), over standard RANS approaches. Efficiency prediction for a Kaplan and a bulb turbine was significantly improved by use of the SAS SST model in combination with ZLES in the draft tube. The size of the cavitation cavity and the sigma break curve for the Kaplan turbine were successfully predicted with the SAS model in combination with a robust high-resolution scheme, while for mass transfer the Zwart model with calibrated constants was used. The results obtained for a marine propeller in non-uniform inflow, under cavitating conditions, compare well with available experimental measurements, and prove that a mass transfer model previously calibrated for RANS (Reynolds-Averaged Navier-Stokes) can also be successfully applied within SRS approaches.

  14. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra.

    PubMed

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-03-13

    Synchronous fluorescence spectra combined with multivariate analysis were used to predict the flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining a clustering concept with partial least squares (PLS) to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were applied to group the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), interval PLS (iPLS) and full-spectrum PLS models were investigated and the results compared. The results showed that CL-PLS gave the best flavonoid predictions from synchronous fluorescence spectra.

  15. Modeling of transient heat pipe operation

    NASA Technical Reports Server (NTRS)

    Colwell, G. T.; Hartley, J. G.

    1986-01-01

    Mathematical models and associated solution procedures which can be used to design heat-pipe-cooled structures for hypersonic vehicles are being developed. The models should also be able to predict off-design performance for a variety of operating conditions. It is expected that the resulting models can be used to predict the startup behavior of liquid metal heat pipes in reentry vehicles, hypersonic aircraft, and space nuclear reactors. Work to date on numerical solutions of the governing differential equations for the outer shell and for the combined capillary structure and working fluid is summarized. Finite element formulations using implicit, explicit, and combined methods were examined.

  16. Evaluation of Inelastic Constitutive Models for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1983-01-01

    The influence of inelastic material models on computed stress-strain states, and therefore on predicted lives, was studied for thermomechanically loaded structures. Nonlinear structural analyses were performed on a fatigue specimen subjected to thermal cycling in fluidized beds and on a mechanically load-cycled benchmark notch specimen. Four incremental plasticity-creep models (isotropic, kinematic, combined isotropic-kinematic, and combined plus transient creep) were exercised. Of the plasticity models, kinematic hardening gave results most consistent with experimental observations. Life predictions using the computed strain histories at the critical location with a Strainrange Partitioning approach considerably overpredicted the crack initiation life of the thermal fatigue specimen.

  17. A Combined High and Low Cycle Fatigue Model for Life Prediction of Turbine Blades

    PubMed Central

    Yue, Peng; Yu, Zheng-Yong; Wang, Qingyuan

    2017-01-01

    Combined high and low cycle fatigue (CCF) generally induces the failure of aircraft gas turbine attachments. Based on the aero-engine load spectrum, accurate assessment of fatigue damage due to the interaction of high cycle fatigue (HCF), resulting from high-frequency vibrations, and low cycle fatigue (LCF), from ground-air-ground engine cycles, is of critical importance for ensuring the structural integrity of engine components such as turbine blades. In this paper, the influence of combined damage accumulation on the expected CCF life is investigated for turbine blades. The CCF behavior of a turbine blade is usually studied by testing with four load-controlled parameters: high cycle stress amplitude and frequency, and low cycle stress amplitude and frequency. Accordingly, a new damage accumulation model is proposed based on Miner’s rule that accounts for the coupled damage due to HCF-LCF interaction by introducing the four load parameters. Five experimental datasets of turbine blade alloys and turbine blades were used to validate the model and to compare it with the Miner, Manson-Halford, and Trufyakov-Kovalchuk models. Results show that the proposed model provides more accurate predictions than the others, with lower mean and standard deviation of the model prediction errors. PMID:28773064
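
The Miner's-rule baseline that such models extend can be sketched as simple linear damage summation; the cycle counts and fatigue lives below are hypothetical numbers, not data from the paper.

```python
def miner_damage(blocks):
    """blocks: list of (applied_cycles, cycles_to_failure) per load level.
    Linear (Miner's-rule) accumulation: failure is predicted when the
    summed damage fraction reaches 1."""
    return sum(n / N for n, N in blocks)

# One LCF ground-air-ground cycle plus HCF vibration cycles per flight
damage_per_flight = miner_damage([(1, 20000),        # LCF: 1 cycle, life 2e4
                                  (1.0e5, 1.0e9)])   # HCF: 1e5 cycles, life 1e9
flights_to_failure = 1.0 / damage_per_flight
```

The proposed model keeps this summation but modifies the damage increments using the four load parameters (HCF/LCF stress amplitudes and frequencies) to capture interaction damage that plain linear accumulation misses.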

  18. A Combined High and Low Cycle Fatigue Model for Life Prediction of Turbine Blades.

    PubMed

    Zhu, Shun-Peng; Yue, Peng; Yu, Zheng-Yong; Wang, Qingyuan

    2017-06-26

    Combined high and low cycle fatigue (CCF) generally induces the failure of aircraft gas turbine attachments. Based on the aero-engine load spectrum, accurate assessment of fatigue damage due to the interaction of high cycle fatigue (HCF), resulting from high-frequency vibrations, and low cycle fatigue (LCF), from ground-air-ground engine cycles, is of critical importance for ensuring the structural integrity of engine components such as turbine blades. In this paper, the influence of combined damage accumulation on the expected CCF life is investigated for turbine blades. The CCF behavior of a turbine blade is usually studied by testing with four load-controlled parameters: high cycle stress amplitude and frequency, and low cycle stress amplitude and frequency. Accordingly, a new damage accumulation model is proposed based on Miner's rule that accounts for the coupled damage due to HCF-LCF interaction by introducing the four load parameters. Five experimental datasets of turbine blade alloys and turbine blades were used to validate the model and to compare it with the Miner, Manson-Halford, and Trufyakov-Kovalchuk models. Results show that the proposed model provides more accurate predictions than the others, with lower mean and standard deviation of the model prediction errors.

  19. A model combining age, equivalent uniform dose and IL-8 may predict radiation esophagitis in patients with non-small cell lung cancer.

    PubMed

    Wang, Shulian; Campbell, Jeff; Stenmark, Matthew H; Stanton, Paul; Zhao, Jing; Matuszak, Martha M; Ten Haken, Randall K; Kong, Feng-Ming

    2018-03-01

    To study whether cytokine markers may improve the predictive accuracy of radiation esophagitis (RE) in non-small cell lung cancer (NSCLC) patients. A total of 129 patients with stage I-III NSCLC treated with radiotherapy (RT) in prospective studies were included. Thirty inflammatory cytokines were measured in platelet-poor plasma samples. Logistic regression was performed to evaluate the risk factors of RE. Stepwise Akaike information criterion (AIC) and likelihood ratio tests were used to assess model predictions. Forty-nine of 129 patients (38.0%) developed grade ≥2 RE. Univariate analysis showed that age, stage, concurrent chemotherapy, and eight dosimetric parameters were significantly associated with grade ≥2 RE (p < 0.05). IL-4, IL-5, IL-8, IL-13, IL-15, IL-1α, TGFα and eotaxin were also associated with grade ≥2 RE (p < 0.1). Age, esophagus generalized equivalent uniform dose (EUD), and baseline IL-8 were independently associated with grade ≥2 RE. The combination of these three factors had significantly higher predictive power than any single factor alone, and adding IL-8 to the toxicity model significantly improved RE predictive accuracy (p = 0.019). Combining baseline IL-8 level, age and esophagus EUD may predict RE more accurately. Refinement of this model with larger sample sizes and validation against multicenter databases are warranted. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Prediction of hot regions in protein-protein interaction by combining density-based incremental clustering with feature-based classification.

    PubMed

    Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan

    2015-06-01

    Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.

  2. Prediction of UT1-UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Kosek, Wiesław

    2008-02-01

    This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, and (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR, taking into account the axial component of AAM, are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
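
The LS + AR baseline scheme can be sketched as below on a toy series: only a linear trend, an annual oscillation, and an AR(1) residual model are included for brevity (the paper's LS model also carries semiannual, 9.3-year and 18.6-year terms, and the MAR variant replaces the scalar AR with a multivariate model driven by AAM χ3).

```python
import numpy as np

# Synthetic daily series: linear trend + annual oscillation + noise
rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
y = 0.01 * t + np.sin(2 * np.pi * t / 365.25) + 0.1 * rng.standard_normal(200)

# 1) Least-squares fit of trend and annual oscillation
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 365.25),
                     np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# 2) AR(1) model of the LS residuals (lag-1 least-squares estimate)
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

# 3) Predict k days ahead: extrapolate the LS model and damp the
#    last residual by phi**k
def predict(k):
    tk = t[-1] + k
    basis = [1.0, tk, np.sin(2 * np.pi * tk / 365.25),
             np.cos(2 * np.pi * tk / 365.25)]
    return np.dot(coef, basis) + resid[-1] * phi ** k

p10 = predict(10)
```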

  3. Data integration of structured and unstructured sources for assigning clinical codes to patient stays

    PubMed Central

    Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim

    2016-01-01

    Objective Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
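
The late-data-integration idea can be sketched as follows: each source's base model outputs a probability that a code applies, and a meta-learner combines those outputs. An ordinary least-squares meta-learner stands in here for whatever learner the authors used, and all probabilities and labels are invented.

```python
import numpy as np

# Base-model predictions on a held-out set
# (rows: patient stays; columns: structured-data model, text model)
P = np.array([[0.9, 0.2],
              [0.8, 0.7],
              [0.1, 0.3],
              [0.2, 0.9]])
y = np.array([1.0, 1.0, 0.0, 1.0])   # true code assignments (1 = code applies)

# Meta-learner: linear combination with intercept, fit by least squares
X = np.column_stack([P, np.ones(len(P))])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def meta_predict(p_structured, p_text):
    """Combine the two base-model probabilities into one score."""
    return w[0] * p_structured + w[1] * p_text + w[2]
```

In practice a classifier (e.g. logistic regression) would be used so the combined score stays in [0, 1], but the stacking structure is the same.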

  4. Chemical combination effects predict connectivity in biological systems

    PubMed Central

    Lehár, Joseph; Zimmermann, Grant R; Krueger, Andrew S; Molnar, Raymond A; Ledell, Jebediah T; Heilbut, Adrian M; Short, Glenn F; Giusti, Leanne C; Nolan, Garry P; Magid, Omar A; Lee, Margaret S; Borisy, Alexis A; Stockwell, Brent R; Keith, Curtis T

    2007-01-01

    Efforts to construct therapeutically useful models of biological systems require large and diverse sets of data on functional connections between their components. Here we show that cellular responses to combinations of chemicals reveal how their biological targets are connected. Simulations of pathways with pairs of inhibitors at varying doses predict distinct response surface shapes that are reproduced in a yeast experiment, with further support from a larger screen using human tumour cells. The response morphology yields detailed connectivity constraints between nearby targets, and synergy profiles across many combinations show relatedness between targets in the whole network. Constraints from chemical combinations complement genetic studies, because they probe different cellular components and can be applied to disease models that are not amenable to mutagenesis. Chemical probes also offer increased flexibility, as they can be continuously dosed, temporally controlled, and readily combined. After extending this initial study to cover a wider range of combination effects and pathway topologies, chemical combinations may be used to refine network models or to identify novel targets. This response surface methodology may even apply to non-biological systems where responses to targeted perturbations can be measured. PMID:17332758

  5. Simulation analysis of adaptive cruise prediction control

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Cui, Sheng Min

    2017-09-01

    Predictive control is suitable for multi-variable, multi-constraint system control. In order to discuss the effect of predictive control on vehicle longitudinal motion, this paper establishes the expected spacing model by combining variable pitch spacing with a safety distance strategy. Model predictive control theory and an optimization method based on quadratic programming are used to obtain and track the best expected acceleration trajectory quickly. Simulation models are established, including predictive and adaptive fuzzy control. Simulation results show that predictive control can realize the basic function of the system while ensuring safety. The application of the predictive and fuzzy adaptive algorithms under cruise conditions indicates that the predictive control effect is better.
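
    A minimal kinematic sketch of the spacing strategy described above, assuming a constant time-headway gap for the variable (speed-dependent) spacing and a one-step proportional surrogate in place of the paper's quadratic-programming MPC. All gains and limits are illustrative:

```python
def desired_gap(v_follow, d0=5.0, t_h=1.5):
    """Speed-dependent expected spacing: a fixed safety distance d0
    (metres) plus a constant time-headway term t_h * v. The constants
    are illustrative, not from the paper."""
    return d0 + t_h * v_follow

def accel_command(gap, v_lead, v_follow, kp=0.2, kv=0.5, a_max=2.0):
    """One-step surrogate for the MPC law: penalize the spacing error
    and the relative speed, then clip to actuator limits."""
    err = gap - desired_gap(v_follow)
    a = kp * err + kv * (v_lead - v_follow)
    return max(-a_max, min(a_max, a))
```

    A real MPC would minimize this kind of tracking cost over a prediction horizon subject to the acceleration constraints, rather than one step at a time.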

  6. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT Histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT Histogram could be constructed using the observed flows while allowing the use of the multi-model predictions.
The under-dispersion has been largely corrected for short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes simply from adding a member or from value added by the multi-model member itself.
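
    The PIT Histogram assessment described above rates each observation by its normalized rank within the sorted ensemble; a minimal sketch:

```python
def pit_value(obs, ensemble):
    """Probability integral transform of one observation within an
    ensemble forecast: its normalized rank among the sorted members."""
    rank = sum(1 for member in sorted(ensemble) if member <= obs)
    return rank / (len(ensemble) + 1)

# One observed flow volume against a hypothetical 9-member ensemble:
pit = pit_value(4.0, [1, 2, 3, 5, 6, 7, 8, 9, 10])
```

    Collected over many forecast dates, a flat histogram of these values indicates a reliable ensemble; the U shape characteristic of under-dispersion means observations fall too often outside the bulk of the members.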

  7. Combining Modeling and Gaming for Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  8. Adding Recognition Discriminability Index to the Delayed Recall Is Useful to Predict Conversion from Mild Cognitive Impairment to Alzheimer's Disease in the Alzheimer's Disease Neuroimaging Initiative.

    PubMed

    Russo, María J; Campos, Jorge; Vázquez, Silvia; Sevlever, Gustavo; Allegri, Ricardo F

    2017-01-01

    Background: Ongoing research is focusing on the identification of those individuals with mild cognitive impairment (MCI) who are most likely to convert to Alzheimer's disease (AD). We investigated whether recognition memory tasks, in combination with a delayed recall measure of episodic memory and CSF biomarkers, can predict MCI to AD conversion at 24-month follow-up. Methods: A total of 397 amnestic-MCI subjects from the Alzheimer's Disease Neuroimaging Initiative were included. Logistic regression modeling was done to assess the predictive value of all RAVLT measures, risk factors such as age, sex, education, APOE genotype, and CSF biomarkers for progression to AD. Estimating adjusted odds ratios was used to determine which variables would produce an optimal predictive model, and whether adding tests of interaction between the RAVLT Delayed Recall and recognition measures (traditional score and d-prime) would improve prediction of the conversion from a-MCI to AD. Results: 112 (28.2%) subjects developed dementia and 285 (71.8%) subjects did not. Of all the included variables, CSF Aβ1-42 levels, RAVLT Delayed Recall, and the combination of RAVLT Delayed Recall and d-prime were predictive of progression to AD (χ2 = 38.23, df = 14, p < 0.001). Conclusions: The combination of RAVLT Delayed Recall and d-prime measures may be a predictor of conversion from MCI to AD in the ADNI cohort, especially in combination with amyloid biomarkers. A predictive model to help identify individuals at risk for dementia should include not only traditional episodic memory measures (delayed recall or recognition), but also additional variables (d-prime) that allow the homogenization of the assessment procedures in the diagnosis of MCI.

  9. Adding Recognition Discriminability Index to the Delayed Recall Is Useful to Predict Conversion from Mild Cognitive Impairment to Alzheimer's Disease in the Alzheimer's Disease Neuroimaging Initiative

    PubMed Central

    Russo, María J.; Campos, Jorge; Vázquez, Silvia; Sevlever, Gustavo; Allegri, Ricardo F.; Weiner, Michael W.

    2017-01-01

    Background: Ongoing research is focusing on the identification of those individuals with mild cognitive impairment (MCI) who are most likely to convert to Alzheimer's disease (AD). We investigated whether recognition memory tasks, in combination with a delayed recall measure of episodic memory and CSF biomarkers, can predict MCI to AD conversion at 24-month follow-up. Methods: A total of 397 amnestic-MCI subjects from the Alzheimer's Disease Neuroimaging Initiative were included. Logistic regression modeling was done to assess the predictive value of all RAVLT measures, risk factors such as age, sex, education, APOE genotype, and CSF biomarkers for progression to AD. Estimating adjusted odds ratios was used to determine which variables would produce an optimal predictive model, and whether adding tests of interaction between the RAVLT Delayed Recall and recognition measures (traditional score and d-prime) would improve prediction of the conversion from a-MCI to AD. Results: 112 (28.2%) subjects developed dementia and 285 (71.8%) subjects did not. Of all the included variables, CSF Aβ1-42 levels, RAVLT Delayed Recall, and the combination of RAVLT Delayed Recall and d-prime were predictive of progression to AD (χ2 = 38.23, df = 14, p < 0.001). Conclusions: The combination of RAVLT Delayed Recall and d-prime measures may be a predictor of conversion from MCI to AD in the ADNI cohort, especially in combination with amyloid biomarkers. A predictive model to help identify individuals at risk for dementia should include not only traditional episodic memory measures (delayed recall or recognition), but also additional variables (d-prime) that allow the homogenization of the assessment procedures in the diagnosis of MCI. PMID:28344552

  10. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors

    DTIC Science & Technology

    2015-07-15

    Long-term effects on cancer survivors’ quality of life of physical training versus physical training combined with cognitive-behavioral therapy ...

  11. Single drug biomarker prediction for ER- breast cancer outcome from chemotherapy.

    PubMed

    Chen, Yong-Zi; Kim, Youngchul; Soliman, Hatem H; Ying, GuoGuang; Lee, Jae K

    2018-06-01

    ER-negative breast cancer includes the most aggressive subtypes of breast cancer, such as triple negative (TN) breast cancer. Because these patients are excluded from the hormonal and targeted therapies used effectively for other subtypes of breast cancer, standard chemotherapy is one of their primary treatment options. However, as ER- patients have shown highly heterogeneous responses to different chemotherapies, it has been difficult to select the most beneficial chemotherapy treatments for them. In this study, we have simultaneously developed single drug biomarker models for four standard chemotherapy agents: paclitaxel (T), 5-fluorouracil (F), doxorubicin (A) and cyclophosphamide (C) to predict responses and survival of ER- breast cancer patients treated with combination chemotherapies. We then flexibly combined these individual drug biomarkers for predicting patient outcomes of two independent cohorts of ER- breast cancer patients who were treated with different drug combinations of neoadjuvant chemotherapy. These individual and combined drug biomarker models significantly predicted chemotherapy response for 197 ER- patients in the Hatzis cohort (AUC = 0.637, P = 0.002) and 69 ER- patients in the Hess cohort (AUC = 0.635, P = 0.056). The prediction was also significant for the TN subgroup of both cohorts (AUC = 0.60, 0.72; P = 0.043, 0.009). In survival analysis, our predicted responder patients showed significantly improved survival, with a >17 months longer median PFS than the predicted non-responder patients, for both the ER- and TN subgroups (log-rank test P-value = 0.018 and 0.044). This flexible prediction capability based on single drug biomarkers may even allow us to select new drug combinations most beneficial to individual patients with ER- breast cancer. © 2018 The authors.

  12. Prediction and measurement of thermally induced cambial tissue necrosis in tree stems

    Treesearch

    Joshua L. Jones; Brent W. Webb; Bret W. Butler; Matthew B. Dickinson; Daniel Jimenez; James Reardon; Anthony S. Bova

    2006-01-01

    A model for fire-induced heating in tree stems is linked to a recently reported model for tissue necrosis. The combined model produces cambial tissue necrosis predictions in a tree stem as a function of heating rate, heating time, tree species, and stem diameter. Model accuracy is evaluated by comparison with experimental measurements in two hardwood and two softwood...

  13. Spatial working memory capacity predicts bias in estimates of location.

    PubMed

    Crawford, L Elizabeth; Landy, David; Salthouse, Timothy A

    2016-09-01

    Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on working memory capacity limits and spatial memory bias to generate the prediction that those with lower working memory capacity will show greater bias in memory of the location of a single item. Reanalyzing data from a large study of cognitive aging, we find support for this prediction. Fitting separate models to individuals' data revealed a surprising variety of strategies. Some were consistent with Bayesian models of spatial category use; however, roughly half of participants biased estimates outward in a way not predicted by current models, and others seemed to combine these strategies. These analyses highlight the importance of studying individuals when developing general models of cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
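
    The Bayesian category-use account mentioned above combines a noisy memory trace with a spatial prior. A textbook Gaussian version (not the authors' fitted model) looks like this; note that a noisier trace, as hypothesized for lower working memory capacity, pulls the estimate further toward the category prototype:

```python
def bayesian_location_estimate(trace, prototype, trace_var, prior_var):
    """Precision-weighted combination of a noisy memory trace with a
    Gaussian category prior centered on a prototype location. A larger
    trace_var (a less reliable memory) shifts the reported location
    further toward the prototype, i.e. produces more bias."""
    w = prior_var / (prior_var + trace_var)  # weight on the trace
    return w * trace + (1 - w) * prototype
```

    The outward-biasing strategy observed in roughly half of participants is precisely what this shrinkage-toward-prototype model cannot produce, which is the paper's point about individual variation.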

  14. Spatial Working Memory Capacity Predicts Bias in Estimates of Location

    PubMed Central

    Crawford, L. Elizabeth; Landy, David H.; Salthouse, Timothy A.

    2016-01-01

    Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intra-individual stability and inter-individual variation in these patterns of bias. In the current work we align recent empirical and theoretical work on working memory capacity limits and spatial memory bias to generate the prediction that those with lower working memory capacity will show greater bias in memory of the location of a single item. Reanalyzing data from a large study of cognitive aging, we find support for this prediction. Fitting separate models to individuals’ data revealed a surprising variety of strategies. Some were consistent with Bayesian models of spatial category use; however, roughly half of participants biased estimates outward in a way not predicted by current models, and others seemed to combine these strategies. These analyses highlight the importance of studying individuals when developing general models of cognition. PMID:26900708

  15. Does Parsonnet scoring model predict mortality following adult cardiac surgery in India?

    PubMed

    Srilata, Moningi; Padhy, Narmada; Padmaja, Durga; Gopinath, Ramachandran

    2015-01-01

    To validate the Parsonnet scoring model for predicting mortality following adult cardiac surgery in the Indian scenario, a total of 889 consecutive patients undergoing adult cardiac surgery between January 2010 and April 2011 were included in the study. The Parsonnet score was determined for each patient and its predictive ability for in-hospital mortality was evaluated. Validation of the Parsonnet score was performed for the total data and separately for the sub-groups coronary artery bypass grafting (CABG), valve surgery and combined procedures (CABG with valve surgery). Model calibration was assessed using the Hosmer-Lemeshow goodness-of-fit test, and discrimination using receiver operating characteristic (ROC) analysis. Independent predictors of mortality were assessed from the variables used in the Parsonnet score by multivariate regression analysis. The overall mortality was 6.3% (56 patients): 7.1% (34 patients) for CABG, 4.3% (16 patients) for valve surgery and 16.2% (6 patients) for combined procedures. The P value for the Hosmer-Lemeshow statistic was <0.05 for the total data and also within the sub-groups, suggesting that the predicted outcome using the Parsonnet score did not match the observed outcome. The area under the ROC curve for the total data was 0.699 (95% confidence interval 0.62-0.77); when tested separately, it was 0.73 (0.64-0.81) for CABG, 0.79 (0.63-0.92) for valve surgery (good discriminatory ability) and only 0.55 (0.26-0.83) for combined procedures. The independent predictors of mortality determined for the total data were low ejection fraction (odds ratio [OR] 1.7), preoperative intra-aortic balloon pump (OR 10.7), combined procedures (OR 5.1), dialysis dependency (OR 23.4), and re-operation (OR 9.4). The Parsonnet score yielded good predictive value for valve surgeries, moderate predictive value for the total data and for CABG, and poor predictive value for combined procedures.
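
    The discrimination measure used above, the area under the ROC curve, equals the probability that a randomly chosen case outscores a randomly chosen non-case. A minimal rank-based sketch (the scores below are hypothetical, not from the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive case outscores a
    randomly chosen negative case, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Parsonnet scores for patients who died vs. survived;
# perfect separation gives 1.0, chance-level discrimination gives 0.5.
example = auc([30, 25], [10, 5, 25])
```

    On this scale, the 0.79 reported for valve surgery indicates good discrimination, while the 0.55 for combined procedures is barely above chance.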

  16. Intelligent sensing sensory quality of Chinese rice wine using near infrared spectroscopy and nonlinear tools.

    PubMed

    Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen

    2016-02-05

    The approach presented herein reports the application of near infrared (NIR) spectroscopy, benchmarked against a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to predict the overall sensory scores assigned by the trained sensory panel. Back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, termed BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with the other models; the best Si-BP-AdaBoost model achieved Rp=0.9180 and RMSEP=2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Designing and benchmarking the MULTICOM protein structure prediction system

    PubMed Central

    2013-01-01

    Background Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819

  18. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    PubMed Central

    Zhang, Hongpo

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369

  19. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
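
    The winning co-activation model above combines reciprocal reaction times linearly. With unit weights (an illustrative simplification; the study fits the weights to data), the prediction for a multi-feature search is:

```python
def coactive_rt(single_feature_rts, weights=None):
    """Linear co-activation prediction for a multi-feature search:
    reciprocals of the single-feature reaction times (search rates)
    add linearly, and the combined RT is the reciprocal of the
    summed rate."""
    if weights is None:
        weights = [1.0] * len(single_feature_rts)
    rate = sum(w / rt for w, rt in zip(weights, single_feature_rts))
    return 1.0 / rate
```

    Because the rates add, the predicted multi-feature search is always faster than any of its single-feature components, a signature that distinguishes co-activation from race models based on the raw reaction times.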

  20. A predictive pharmacokinetic-pharmacodynamic model of tumor growth kinetics in xenograft mice after administration of anticancer agents given in combination.

    PubMed

    Terranova, Nadia; Germani, Massimiliano; Del Bene, Francesca; Magni, Paolo

    2013-08-01

    In clinical oncology, combination treatments are widely used and increasingly preferred over single drug administrations. Better characterization of the interaction between drug effects and the selection of synergistic combinations represent an open challenge in the drug development process. To this aim, preclinical studies are routinely performed, even if they are only qualitatively analyzed owing to the lack of generally applicable mathematical models. This paper presents a new pharmacokinetic-pharmacodynamic model that, starting from the well-known single agent Simeoni TGI model, is able to describe tumor growth in xenograft mice after the co-administration of two anticancer agents. Due to the drug action, tumor cells are divided into two groups: damaged and undamaged. The damaging rate has two terms proportional to the drug concentrations (as in the single drug administration model) and one interaction term proportional to their product. Six of the eight pharmacodynamic parameters assume the same values as in the corresponding single drug models. Only one parameter summarizes the interaction, and it can be used to compute two important indexes that clearly score the synergistic/antagonistic interaction among drug effects. The model was successfully applied to four new compounds co-administered with four drugs already available on the market, for the treatment of three different tumor cell lines. It also provided reliable predictions of different combination regimens in which the same drugs were administered at different doses/schedules. A good quantitative measurement of the intensity and nature of the interaction between drug effects, as well as the capability to correctly predict new combination arms, suggest the use of this generally applicable model for supporting optimal experimental design and the prioritization of different therapies.
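
    The combined damaging rate described above (two single-drug terms plus one interaction term proportional to the product of the concentrations) can be sketched as follows. The parameter values are hypothetical, and reading a positive interaction parameter as synergy is the usual convention, stated here as an assumption rather than taken from the paper:

```python
def damaging_rate(c1, c2, k1, k2, gamma):
    """Per-cell damaging rate under co-administration: two terms
    proportional to the single drug concentrations plus one
    interaction term proportional to their product. Under the usual
    convention, gamma > 0 raises the rate above additivity (synergy)
    and gamma < 0 lowers it (antagonism)."""
    return k1 * c1 + k2 * c2 + gamma * c1 * c2
```

    Because only gamma is new relative to the two single-agent models, fitting a combination arm reduces to estimating this one interaction parameter.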

  1. Prediction of human pharmacokinetics using physiologically based modeling: a retrospective analysis of 26 clinically tested drugs.

    PubMed

    De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Nijsen, Marjoleen J; Mackie, Claire E; Gilissen, Ron A H J

    2007-10-01

    The aim of this study was to evaluate different physiologically based modeling strategies for the prediction of human pharmacokinetics. Plasma profiles after intravenous and oral dosing were simulated for 26 clinically tested drugs. Two mechanism-based predictions of human tissue-to-plasma partitioning (P(tp)) from physicochemical input (method Vd1) were evaluated for their ability to describe human volume of distribution at steady state (V(ss)). This method was compared with a strategy that combined predicted and experimentally determined in vivo rat P(tp) data (method Vd2). The best V(ss) predictions were obtained using method Vd2, provided that the rat P(tp) input was corrected for interspecies differences in plasma protein binding (84% within 2-fold). V(ss) predictions from physicochemical input alone were poor (32% within 2-fold). Total body clearance (CL) was predicted as the sum of scaled rat renal clearance and hepatic clearance projected from in vitro metabolism data. The best CL predictions were obtained by disregarding both blood and microsomal or hepatocyte binding (method CL2, 74% within 2-fold), whereas strong bias was seen using both blood and microsomal or hepatocyte binding (method CL1, 53% within 2-fold). The physiologically based pharmacokinetic (PBPK) model that combined methods Vd2 and CL2 yielded the most accurate predictions of in vivo terminal half-life (69% within 2-fold). The Gastroplus advanced compartmental absorption and transit model was used to construct an absorption-disposition model and provided accurate predictions of the area under the plasma concentration-time profile, oral apparent volume of distribution, and maximum plasma concentration after oral dosing, with 74%, 70%, and 65% within 2-fold, respectively. This evaluation demonstrates that PBPK models can lead to reasonable predictions of human pharmacokinetics.
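
    The V(ss) and P(tp) relations underlying method Vd2 can be sketched with standard PBPK bookkeeping. The tissue volumes and unbound fractions below are hypothetical, and the binding correction mirrors the interspecies plasma-protein-binding adjustment described above:

```python
def vss(v_plasma, tissue_volumes, ptp):
    """Steady-state volume of distribution: plasma volume plus each
    tissue volume scaled by its tissue-to-plasma partition
    coefficient (Ptp)."""
    return v_plasma + sum(v * k for v, k in zip(tissue_volumes, ptp))

def correct_ptp_for_binding(ptp_rat, fu_human, fu_rat):
    """Interspecies correction of rat Ptp values by the ratio of
    unbound fractions in plasma, in the spirit of method Vd2."""
    return [k * fu_human / fu_rat for k in ptp_rat]

# Hypothetical two-tissue example (volumes in litres, Ptp unitless):
ptp_h = correct_ptp_for_binding([2.0, 5.0], fu_human=0.1, fu_rat=0.2)
v = vss(3.0, [10.0, 2.0], ptp_h)
```

    A whole-body model would sum over all major tissues with physiologic volumes; the point of the sketch is that Vss follows directly once the Ptp values are fixed, which is why the quality of the Ptp inputs dominated the Vss results above.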

  2. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined separately with supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) was used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining samples (100 negative and 200 positive, 300 in total) were used as the modeling set, which was then subdivided into calibration (50 negative and 100 positive, 150 in total) and prediction sets (50 negative and 100 positive, 150 in total) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, leaving a total of 264 smoothing modes for screening. Pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to the model effect. Based on all divisions of the calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed with the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved.
For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%, respectively. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.

  3. Improved Prediction of Blood-Brain Barrier Permeability Through Machine Learning with Combined Use of Molecular Property-Based Descriptors and Fingerprints.

    PubMed

    Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo

    2018-03-21

    Blood-brain barrier (BBB) permeability of a compound determines whether the compound can effectively enter the brain. It is an essential property which must be accounted for in drug discovery with a target in the brain. Several computational methods have been used to predict BBB permeability. In particular, support vector machine (SVM), a kernel-based machine learning method, has been widely used in this field. For SVM training and prediction, the compounds are characterized by molecular descriptors. Some SVM models were based on the use of molecular property-based descriptors (including 1D, 2D, and 3D descriptors) or fragment-based descriptors (known as the fingerprints of a molecule). The selection of descriptors is critical for the performance of an SVM model. In this study, we aimed to develop a generally applicable new SVM model by combining all of the features of the molecular property-based descriptors and fingerprints to improve the accuracy of BBB permeability prediction. The results indicate that our SVM model has improved accuracy compared to the currently available models for BBB permeability prediction.

  4. Physiologically-Based Pharmacokinetic Modeling of Macitentan: Prediction of Drug-Drug Interactions.

    PubMed

    de Kanter, Ruben; Sidharta, Patricia N; Delahaye, Stéphane; Gnerre, Carmela; Segrestaa, Jerome; Buchmann, Stephan; Kohl, Christopher; Treiber, Alexander

    2016-03-01

    Macitentan is a novel dual endothelin receptor antagonist for the treatment of pulmonary arterial hypertension (PAH). It is metabolized by cytochrome P450 (CYP) enzymes, mainly CYP3A4, to its active metabolite ACT-132577. A physiologically-based pharmacokinetic (PBPK) model was developed by combining observations from clinical studies and physicochemical parameters as well as absorption, distribution, metabolism and excretion parameters determined in vitro. The model predicted the observed pharmacokinetics of macitentan and its active metabolite ACT-132577 after single and multiple dosing. It performed well in recovering the observed effect of the CYP3A4 inhibitors ketoconazole and cyclosporine, and the CYP3A4 inducer rifampicin, as well as in predicting interactions with S-warfarin and sildenafil. The model was robust enough to allow prospective predictions of macitentan-drug combinations not studied, including an alternative dosing regimen of ketoconazole and nine other CYP3A4-interacting drugs. Among these were the HIV drugs ritonavir and saquinavir, which were included because HIV infection is a known risk factor for the development of PAH. This example of the application of PBPK modeling to predict drug-drug interactions was used to support the labeling of macitentan (Opsumit).

  5. Estimation and prediction under local volatility jump-diffusion model

    NASA Astrophysics Data System (ADS)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and risk hedging with options, option values are evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
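
    As a minimal sketch of the jump-diffusion ingredient (not the authors' three-stage calibration), the following Monte Carlo pricer values a European call under Merton's jump-diffusion with compensated drift; parameters are illustrative, not fitted to KOSPI 200 data.

```python
import math
import numpy as np

def merton_call_mc(s0, k, r, sigma, t, lam, mu_j, sig_j,
                   n_paths=200_000, seed=0):
    """Monte Carlo price of a European call under Merton jump-diffusion.
    Jump count ~ Poisson(lam*t); log jump sizes ~ Normal(mu_j, sig_j^2).
    The drift is compensated so the discounted price is a martingale."""
    rng = np.random.default_rng(seed)
    kbar = math.exp(mu_j + 0.5 * sig_j**2) - 1.0   # expected jump size - 1
    n_jumps = rng.poisson(lam * t, n_paths)
    jump_sum = rng.normal(mu_j * n_jumps, sig_j * np.sqrt(n_jumps))
    z = rng.normal(size=n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2 - lam * kbar) * t
                     + sigma * math.sqrt(t) * z + jump_sum)
    return math.exp(-r * t) * np.maximum(st - k, 0.0).mean()

# With lam=0 this reduces to Black-Scholes; adding jumps raises the price
# of the convex payoff.
price_no_jumps = merton_call_mc(100, 100, 0.05, 0.2, 1.0,
                                lam=0.0, mu_j=0.0, sig_j=0.0)
price_jumps = merton_call_mc(100, 100, 0.05, 0.2, 1.0,
                             lam=0.5, mu_j=-0.1, sig_j=0.3)
```

    A local-volatility variant would replace the constant sigma with a calibrated surface sigma(S, t), which is the combination step the paper's method addresses.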

  6. The application of molecular modelling in the safety assessment of chemicals: A case study on ligand-dependent PPARγ dysregulation.

    PubMed

    Al Sharif, Merilin; Tsakovska, Ivanka; Pajeva, Ilza; Alov, Petko; Fioravanzo, Elena; Bassan, Arianna; Kovarich, Simona; Yang, Chihae; Mostrag-Szlichtyng, Aleksandra; Vitcheva, Vessela; Worth, Andrew P; Richarz, Andrea-N; Cronin, Mark T D

    2017-12-01

    The aim of this paper was to provide a proof of concept demonstrating that molecular modelling methodologies can be employed as a part of an integrated strategy to support toxicity prediction consistent with the mode of action/adverse outcome pathway (MoA/AOP) framework. To illustrate the role of molecular modelling in predictive toxicology, a case study was undertaken in which molecular modelling methodologies were employed to predict the activation of the peroxisome proliferator-activated nuclear receptor γ (PPARγ) as a potential molecular initiating event (MIE) for liver steatosis. A stepwise procedure combining different in silico approaches (virtual screening based on docking and pharmacophore filtering, and molecular field analysis) was developed to screen for PPARγ full agonists and to predict their transactivation activity (EC50). The performance metrics of the classification model to predict PPARγ full agonists were balanced accuracy = 81%, sensitivity = 85%, and specificity = 76%. The 3D QSAR model developed to predict EC50 of PPARγ full agonists had the following statistical parameters: q²cv = 0.610, Nopt = 7, SEPcv = 0.505, r²pr = 0.552. To support the linkage of PPARγ agonism predictions to prosteatotic potential, molecular modelling was combined with independently performed mechanistic mining of available in vivo toxicity data followed by ToxPrint chemotypes analysis. The approaches investigated demonstrated a potential to predict the MIE, to facilitate the process of MoA/AOP elaboration, to increase the scientific confidence in AOP, and to become a basis for 3D chemotype development. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. In vivo serial MRI-based models and statistical methods to quantify sensitivity and specificity of mechanical predictors for carotid plaque rupture: location and beyond.

    PubMed

    Wu, Zheyang; Yang, Chun; Tang, Dalin

    2011-06-01

    It has been hypothesized that mechanical risk factors may be used to predict future atherosclerotic plaque rupture. Truly predictive methods for plaque rupture and methods to identify the best predictor(s) from all the candidates are lacking in the literature. A novel combination of computational and statistical models based on serial magnetic resonance imaging (MRI) was introduced to quantify sensitivity and specificity of mechanical predictors to identify the best candidate for plaque rupture site prediction. Serial in vivo MRI data of carotid plaque from one patient was acquired with follow-up scan showing ulceration. 3D computational fluid-structure interaction (FSI) models using both baseline and follow-up data were constructed and plaque wall stress (PWS) and strain (PWSn) and flow maximum shear stress (FSS) were extracted from all 600 matched nodal points (100 points per matched slice, baseline matching follow-up) on the lumen surface for analysis. Each of the 600 points was marked "ulcer" or "nonulcer" using follow-up scan. Predictive statistical models for each of the seven combinations of PWS, PWSn, and FSS were trained using the follow-up data and applied to the baseline data to assess their sensitivity and specificity using the 600 data points for ulcer predictions. Sensitivity of prediction is defined as the proportion of the true positive outcomes that are predicted to be positive. Specificity of prediction is defined as the proportion of the true negative outcomes that are correctly predicted to be negative. Using probability 0.3 as a threshold to infer ulcer occurrence at the prediction stage, the combination of PWS and PWSn provided the best predictive accuracy with (sensitivity, specificity) = (0.97, 0.958). Sensitivity and specificity given by PWS, PWSn, and FSS individually were (0.788, 0.968), (0.515, 0.968), and (0.758, 0.928), respectively. 
The proposed computational-statistical process provides a novel method and a framework to assess the sensitivity and specificity of various risk indicators and offers the potential to identify the optimized predictor for plaque rupture using serial MRI with follow-up scan showing ulceration as the gold standard for method validation. While serial MRI data with actual rupture are hard to acquire, this single-case study suggests that combination of multiple predictors may provide potential improvement to existing plaque assessment schemes. With large-scale patient studies, this predictive modeling process may provide more solid ground for rupture predictor selection strategies and methods for image-based plaque vulnerability assessment.
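
    The sensitivity and specificity definitions used above can be computed directly; a small illustration with hypothetical labels and probabilities, using the paper's threshold of 0.3 to infer ulcer occurrence.

```python
def sensitivity_specificity(y_true, prob, threshold=0.3):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    inferring a positive ("ulcer") label when prob >= threshold."""
    pred = [p >= threshold for p in prob]
    tp = sum(1 for y, p in zip(y_true, pred) if y == 1 and p)
    fn = sum(1 for y, p in zip(y_true, pred) if y == 1 and not p)
    tn = sum(1 for y, p in zip(y_true, pred) if y == 0 and not p)
    fp = sum(1 for y, p in zip(y_true, pred) if y == 0 and p)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy labels and predicted probabilities (the study used
# 600 matched nodal points):
y = [1, 1, 1, 0, 0, 0, 0, 0]
p = [0.9, 0.4, 0.2, 0.1, 0.05, 0.35, 0.0, 0.25]
sens, spec = sensitivity_specificity(y, p, threshold=0.3)
```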

  8. Climate suitability and human influences combined explain the range expansion of an invasive horticultural plant

    Treesearch

    Carolyn M. Beans; Francis F. Kilkenny; Laura F. Galloway

    2012-01-01

    Ecological niche models are commonly used to identify regions at risk of species invasions. Relying on climate alone may limit a model's success when additional variables contribute to invasion. While a climate-based model may predict the future spread of an invasive plant, we hypothesized that a model that combined climate with human influences would most...

  9. Interaction Between Domperidone and Ketoconazole: Toward Prediction of Consequent QTc Prolongation Using Purely In Vitro Information

    PubMed Central

    Mishra, H; Polak, S; Jamei, M; Rostami-Hodjegan, A

    2014-01-01

    We aimed to investigate the application of combined mechanistic pharmacokinetic (PK) and pharmacodynamic (PD) modeling and simulation in predicting the domperidone (DOM) triggered pseudo-electrocardiogram modification in the presence of a CYP3A inhibitor, ketoconazole (KETO), using in vitro–in vivo extrapolation. In vitro metabolic and inhibitory data were incorporated into physiologically based pharmacokinetic (PBPK) models within Simcyp to simulate the time course of plasma DOM and KETO concentrations when administered alone or in combination (DOM+KETO). Simulated DOM concentrations in plasma were used to predict changes in gender-specific QTcF (Fridericia correction) intervals within the Cardiac Safety Simulator platform, taking into consideration DOM, KETO, and DOM+KETO triggered inhibition of multiple ionic currents in the population. The combination of in vitro–in vivo extrapolation, PBPK, and systems pharmacology of electric currents in the heart was able to predict the direction and magnitude of PK and PD changes under coadministration of the two drugs, although some disparities were detected. PMID:25116274

  10. Predicting the cumulative risk of death during hospitalization by modeling weekend, weekday and diurnal mortality risks.

    PubMed

    Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William

    2014-05-21

    Current prognostic models factor in patient- and disease-specific variables but do not consider cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia were used for training, and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and/or times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); and (3) disease-specific models (Charlson co-morbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' where mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79, respectively). The combined model, which included time-varying risk, however yielded an average AUC of 0.92. This model performed best for stays up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also with the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models and in estimating the risk of continued hospitalization.
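
    Model combination and AUC evaluation of the kind described above can be sketched as follows; the scores and the simple averaging rule are hypothetical stand-ins for the paper's combined model, and the AUC is computed via the Mann-Whitney statistic.

```python
def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability a random positive outranks a random negative
    (ties count one half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-patient risks from three sub-models (background,
# time-varying exposure, disease-specific), combined by averaging:
background = [0.2, 0.3, 0.1, 0.4]
time_risk = [0.6, 0.2, 0.1, 0.7]
disease = [0.5, 0.4, 0.2, 0.9]
combined = [(a + b + c) / 3 for a, b, c in zip(background, time_risk, disease)]
died = [1, 0, 0, 1]
score = auc(died, combined)
```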

  11. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics Published by John Wiley & Sons Ltd.
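
    A minimal sketch of the mixture predictive density (not the fitted EMOS model): the weighted combination of a zero-truncated normal and a log-normal, with hypothetical parameters. In EMOS the parameters would be linked to the ensemble forecasts and estimated by minimizing a proper scoring rule such as the CRPS.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tn_pdf(x, mu, sigma):
    """Normal(mu, sigma) truncated to [0, inf), renormalized."""
    if x < 0:
        return 0.0
    return norm_pdf((x - mu) / sigma) / (sigma * (1.0 - norm_cdf(-mu / sigma)))

def ln_pdf(x, m, s):
    """Log-normal density: log X ~ Normal(m, s)."""
    if x <= 0:
        return 0.0
    return norm_pdf((math.log(x) - m) / s) / (x * s)

def mixture_pdf(x, w, mu, sigma, m, s):
    """EMOS-style predictive density: w*TN + (1-w)*LN, with 0 <= w <= 1."""
    return w * tn_pdf(x, mu, sigma) + (1.0 - w) * ln_pdf(x, m, s)

# Hypothetical mixture parameters for a wind speed forecast (m/s):
w, mu, sigma, m, s = 0.6, 5.0, 2.0, 1.5, 0.4
```

    Because both components place mass only on non-negative speeds, the mixture remains a valid density for any weight in [0, 1].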

  12. Experimental validation of boundary element methods for noise prediction

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Oswald, Fred B.

    1992-01-01

    Experimental validation of methods to predict radiated noise is presented. A combined finite element and boundary element model was used to predict the vibration and noise of a rectangular box excited by a mechanical shaker. The predicted noise was compared to sound power measured by the acoustic intensity method. Inaccuracies in the finite element model shifted the resonance frequencies by about 5 percent. The predicted and measured sound power levels agree within about 2.5 dB. In a second experiment, measured vibration data was used with a boundary element model to predict noise radiation from the top of an operating gearbox. The predicted and measured sound power for the gearbox agree within about 3 dB.

  13. Predicting the Inflow Distortion Tone Noise of the NASA Glenn Advanced Noise Control Fan with a Combined Quadrupole-Dipole Model

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2012-01-01

    A combined quadrupole-dipole model of fan inflow distortion tone noise has been extended to calculate tone sound power levels generated by obstructions arranged in circumferentially asymmetric locations upstream of a rotor. Trends in calculated sound power level agreed well with measurements from tests conducted in 2007 in the NASA Glenn Advanced Noise Control Fan. Calculated values of sound power levels radiated upstream were demonstrated to be sensitive to the accuracy of the modeled wakes from the cylindrical rods that were placed upstream of the fan to distort the inflow. Results indicate a continued need to obtain accurate aerodynamic predictions and measurements at the fan inlet plane as engineers work towards developing fan inflow distortion tone noise prediction tools.

  14. Dynamic-landscape metapopulation models predict complex response of wildlife populations to climate and landscape change

    Treesearch

    Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh

    2017-01-01

    The increasing need to predict how climate change will impact wildlife species has exposed limitations in how well current approaches model important biological processes at scales at which those processes interact with climate. We used a comprehensive approach that combined recent advances in landscape and population modeling into dynamic-landscape metapopulation...

  15. Multiplexed Predictive Control of a Large Commercial Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.

    2008-01-01

    Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
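
    The sequential-update idea can be illustrated on a toy unconstrained quadratic cost (ignoring the constraints and engine dynamics of the real QP): updating one "actuator" at a time, cyclically, converges to the same optimum as the simultaneous solve when the Hessian is positive definite. All numbers are hypothetical.

```python
import numpy as np

# Toy analogue of multiplexed MPC: minimize J(u) = 0.5*u'Hu + f'u by
# exact single-coordinate updates, cycling through the actuators,
# instead of solving for all of them at once.
H = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])   # positive definite Hessian
f = np.array([-1.0, 2.0, -0.5])

u_star = -np.linalg.solve(H, f)   # simultaneous ("conventional MPC") solution

u = np.zeros(3)                   # multiplexed: one actuator per update
for sweep in range(50):
    for i in range(3):
        # exact minimization of J over u[i] with the others held fixed
        u[i] = -(f[i] + H[i] @ u - H[i, i] * u[i]) / H[i, i]
```

    Each inner update is a scalar problem, which is the source of the computational savings; for a positive definite H this cyclic scheme is Gauss-Seidel on the optimality conditions and converges to u_star.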

  16. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    PubMed

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Brain regions engaged by part- and whole-task performance in a video game: a model-based test of the decomposition hypothesis.

    PubMed

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Anderson, Abraham R; Poole, Ben; Qin, Yulin

    2011-12-01

    Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for the two part-games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and for activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model's predictions were confirmed about how tonic and phasic activation in different regions would vary with condition. These results support the Decomposition Hypothesis: that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed the highest tonic activity. This individual-difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits.

  18. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    NASA Astrophysics Data System (ADS)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoids content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining the clustering concept with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were carried out to cluster the full spectra into several clusters; a sub-PLS regression model was then developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection partial least squares (VIP-PLS), selectivity ratio partial least squares (SR-PLS), and interval partial least squares (iPLS) models, as well as the full-spectrum PLS model, were investigated and the results were compared. The results showed that CL-PLS gave the best flavonoids prediction using synchronous fluorescence spectra.

  19. PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.

    PubMed

    Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude

    2015-04-27

    Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.

  20. Using FTIR spectroscopy to model alkaline pretreatment and enzymatic saccharification of six lignocellulosic biomasses.

    PubMed

    Sills, Deborah L; Gossett, James M

    2012-04-01

    Fourier transform infrared, attenuated total reflectance (FTIR-ATR) spectroscopy, combined with partial least squares (PLS) regression, accurately predicted solubilization of plant cell wall constituents and NaOH consumption through pretreatment, and overall sugar production from combined pretreatment and enzymatic hydrolysis. PLS regression models were constructed by correlating FTIR spectra of six raw biomasses (two switchgrass cultivars, big bluestem grass, a low-impact, high-diversity mixture of prairie biomasses, mixed hardwood, and corn stover), plus alkali loading in pretreatment, to nine dependent variables: glucose, xylose, lignin, and total solids solubilized in pretreatment; NaOH consumed in pretreatment; and overall glucose and xylose conversions and yields from combined pretreatment and enzymatic hydrolysis. PLS models predicted the dependent variables with the following values of the coefficient of determination for cross-validation (Q²): 0.86 for glucose, 0.90 for xylose, 0.79 for lignin, and 0.85 for total solids solubilized in pretreatment; 0.83 for alkali consumption; 0.93 for glucose conversion, 0.94 for xylose conversion, and 0.88 for glucose and xylose yields. The sugar yield models are noteworthy for their ability to predict overall saccharification through combined pretreatment and enzymatic hydrolysis per mass of dry untreated solids without a priori knowledge of the composition of the solids. All wavenumbers with significant variable-importance-in-projection (VIP) scores have been attributed to chemical features of lignocellulose, demonstrating that the models were based on real chemical information. These models suggest that PLS regression can be applied to FTIR-ATR spectra of raw biomasses to rapidly predict effects of pretreatment on solids and on subsequent enzymatic hydrolysis. Copyright © 2011 Wiley Periodicals, Inc.
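
    The cross-validated Q² reported above can be illustrated with a leave-one-out computation; ordinary least squares stands in for the PLS models here, and the data are synthetic.

```python
import numpy as np

def q2_loocv(X, y):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / TSS for an
    ordinary least-squares model (a stand-in for PLS in this sketch)."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2      # held-out squared error
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / tss

# Synthetic spectra-like predictors with a known linear relationship:
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)
q2 = q2_loocv(X, y)
```

    Unlike the ordinary R², Q² penalizes overfitting because every prediction is made on a sample the model never saw.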

  1. Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.

    PubMed

    Kim, Kyunghan; Guo, Zhixiong

    2007-05-01

    A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. The initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo-steady state within 1 ns for the considered tissues. The single-pulse result is then utilized to obtain the temperature response to pulse-train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction is compared with the traditional parabolic heat diffusion model. It is found that the maximum local temperatures are larger in the hyperbolic prediction than in the parabolic prediction. In the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, thermal waves fade away and the predictions of the hyperbolic and parabolic models are consistent.
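
    A minimal 1D sketch of solving a hyperbolic (Cattaneo-type) conduction system with MacCormack's predictor-corrector scheme, assuming a periodic domain and illustrative (non-tissue) parameters; the paper's tissue geometry, radiation source, and error-term correction are not reproduced.

```python
import numpy as np

# Cattaneo system (finite thermal wave speed), illustrative units:
#   C T_t + q_x = 0
#   tau q_t + q + k T_x = 0
C, k, tau = 1.0, 1.0, 0.2
nx, L, nsteps = 200, 1.0, 200
dx = L / nx
c_wave = (k / (C * tau)) ** 0.5          # finite thermal wave speed
dt = 0.4 * dx / c_wave                   # CFL-limited time step

x = np.linspace(0.0, L, nx, endpoint=False)
T = np.exp(-((x - 0.5) / 0.05) ** 2)     # initial temperature pulse
q = np.zeros(nx)
mass0 = T.sum() * dx                     # conserved: d/dt integral(T) = 0

lam = dt / dx
for _ in range(nsteps):
    # predictor: forward differences
    Tp = T - lam * (np.roll(q, -1) - q) / C
    qp = q - lam * (k / tau) * (np.roll(T, -1) - T) - dt * q / tau
    # corrector: backward differences on the predicted state
    T = 0.5 * (T + Tp - lam * (qp - np.roll(qp, 1)) / C)
    q = 0.5 * (q + qp - lam * (k / tau) * (Tp - np.roll(Tp, 1)) - dt * qp / tau)
```

    In this system the pulse splits into two damped waves traveling at speed sqrt(k/(C*tau)); letting tau go to zero recovers the parabolic diffusion limit in which the wave character disappears.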

  2. [Study on artificial neural network combined with multispectral remote sensing imagery for forest site evaluation].

    PubMed

    Gong, Yin-Xi; He, Cheng; Yan, Fei; Feng, Zhong-Ke; Cao, Meng-Lei; Gao, Yuan; Miao, Jie; Zhao, Jin-Long

    2013-10-01

    Multispectral remote sensing data contain rich site information that is not fully used by the classic site quality evaluation system, which merely adopts artificial ground survey data. In order to establish a more effective site quality evaluation system, a neural network model that combined remote sensing spectral factors with site factors and site index relations was established and used to study sublot site quality evaluation at the Wangyedian Forest Farm in Chifeng City, Inner Mongolia. Based on an improved back propagation artificial neural network (BPANN), this model combined multispectral remote sensing data with sublot survey data, taking larch as an example. Through sensitivity analysis on the training data set, weak or irrelevant factors were excluded, the size of the neural network was simplified, and the efficiency of network training was improved. The optimal site index prediction model had an accuracy of up to 95.36%, which was 9.83% higher than that of the neural network model based on classic sublot survey data alone; that is, the larch site index prediction model built from multispectral remote sensing and sublot survey data had the highest predictive accuracy. The results fully indicate the effectiveness and superiority of this method.

  3. SPATIAL PREDICTION USING COMBINED SOURCES OF DATA

    EPA Science Inventory

    For improved environmental decision-making, it is important to develop new models for spatial prediction that accurately characterize important spatial and temporal patterns of air pollution. As the U .S. Environmental Protection Agency begins to use spatial prediction in the reg...

  4. Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model

    NASA Astrophysics Data System (ADS)

    Liu, Q. B.; Wang, Q. J.; Lei, M. F.

    2015-09-01

    It is known that the accuracies of medium- and long-term prediction of changes of length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decrease gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction, and it is therefore used to forecast the LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.
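
    A conventional AR fit and recursive multi-step forecast (the baseline the LSAR model improves on) can be sketched as follows; the series is synthetic and the leap-step variant is not implemented here.

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by ordinary least squares; coef[j] is the
    coefficient on lag j+1."""
    y = series[p:]
    X = np.column_stack(
        [series[p - j - 1:len(series) - j - 1] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_ar(series, coef, horizon):
    """Recursive multi-step forecast, feeding predictions back in."""
    hist = list(series[-len(coef):])     # oldest first
    out = []
    for _ in range(horizon):
        nxt = sum(c * v for c, v in zip(coef, reversed(hist)))
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return out

# Synthetic AR(1) series with true coefficient 0.8:
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(499):
    x.append(0.8 * x[-1] + 0.1 * rng.normal())
x = np.array(x)
coef = fit_ar(x, p=1)
fc = forecast_ar(x, coef, horizon=10)
```

    The recursion is why multi-step AR accuracy degrades with horizon: each forecast is built on earlier forecasts, compounding the error, which is the behavior the leap-step formulation targets.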

  5. Predicting and understanding law-making with word vectors and an ensemble model.

    PubMed

    Nay, John J

    2017-01-01

    Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a high-dimensional, semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill's sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important variables predicting enactment.

  6. Predicting and understanding law-making with word vectors and an ensemble model

    PubMed Central

    Nay, John J.

    2017-01-01

    Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a high-dimensional, semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill’s sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important variables predicting enactment. PMID:28489868

  7. Development of the AFRL Aircrew Perfomance and Protection Data Bank

    DTIC Science & Technology

    2007-12-01

    Growth model and statistical model of hypobaric chamber simulations. It offers a quick and readily accessible online DCS risk assessment tool for...are used for the DCS prediction instead of the original model. ADRAC is based on more than 20 years of hypobaric chamber studies using human...prediction based on the combined Bubble Growth model and statistical model of hypobaric chamber simulations was integrated into the Data Bank. It

  8. Electrochemical carbon dioxide concentrator: Math model

    NASA Technical Reports Server (NTRS)

    Marshall, R. D.; Schubert, F. H.; Carlson, J. N.

    1973-01-01

    A steady state computer simulation model of an Electrochemical Depolarized Carbon Dioxide Concentrator (EDC) has been developed. The mathematical model combines EDC heat and mass balance equations with empirical correlations derived from experimental data to describe EDC performance as a function of the operating parameters involved. The model is capable of accurately predicting performance over EDC operating ranges. Model simulation results agree with the experimental data obtained over the prediction range.

  9. Predicting the safety and efficacy of buffer therapy to raise tumour pHe: an integrative modelling study

    PubMed Central

    Martin, N K; Robey, I F; Gaffney, E A; Gillies, R J; Gatenby, R A; Maini, P K

    2012-01-01

    Background: Clinical positron emission tomography imaging has demonstrated that the vast majority of human cancers exhibit significantly increased glucose metabolism compared with adjacent normal tissue, resulting in an acidic tumour microenvironment. Recent studies demonstrated that reducing this acidity with systemic buffers significantly inhibits the development and growth of metastases in mouse xenografts. Methods: We apply and extend a previously developed mathematical model of blood and tumour buffering to examine the impact of oral administration of bicarbonate buffer in mice, and the potential impact in humans. We recapitulate the experimentally observed tumour pHe effect of buffer therapy, testing a model prediction in vivo in mice. We parameterise the model to humans to determine the translational safety and efficacy, and predict patient subgroups who could have an enhanced treatment response, as well as the most promising combination or alternative buffer therapies. Results: The model predicts a previously unseen, potentially dangerous elevation in blood pHe resulting from bicarbonate therapy in mice, which is confirmed by our in vivo experiments. Simulations predict limited efficacy of bicarbonate, especially in humans with more aggressive cancers. We predict buffer therapy would be most effective: in elderly patients or individuals with renal impairments; in combination with proton production inhibitors (such as dichloroacetate) or renal glomerular filtration rate inhibitors (such as non-steroidal anti-inflammatory drugs and angiotensin-converting enzyme inhibitors); or with an alternative buffer reagent possessing an optimal pK of 7.1–7.2. Conclusion: Our mathematical model confirms that bicarbonate acts as an effective agent to raise tumour pHe, but potentially induces metabolic alkalosis at the high doses necessary for tumour pHe normalisation. We predict that use in elderly patients, or in combination with proton production inhibitors or buffers with a pK of 7.1–7.2, is most promising. PMID:22382688

  10. Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.

    PubMed

    Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam

    2015-06-22

    A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.

  11. Genomic models with genotype × environment interaction for predicting hybrid performance: an application in maize hybrids.

    PubMed

    Acosta-Pech, Rocío; Crossa, José; de Los Campos, Gustavo; Teyssèdre, Simon; Claustres, Bruno; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino

    2017-07-01

    A new genomic model that incorporates genotype × environment interaction gave increased prediction accuracy of untested hybrid response for traits such as percent starch content, percent dry matter content and silage yield of maize hybrids. The prediction of hybrid performance (HP) is very important in agricultural breeding programs. In plant breeding, multi-environment trials play an important role in the selection of important traits, such as stability across environments, grain yield and pest resistance. Environmental conditions modulate gene expression causing genotype × environment interaction (G × E), such that the estimated genetic correlations of the performance of individual lines across environments summarize the joint action of genes and environmental conditions. This article proposes a genomic statistical model that incorporates G × E for general and specific combining ability for predicting the performance of hybrids in environments. The proposed model can also be applied to any other hybrid species with distinct parental pools. In this study, we evaluated the predictive ability of two HP prediction models using a cross-validation approach applied in extensive maize hybrid data, comprising 2724 hybrids derived from 507 dent lines and 24 flint lines, which were evaluated for three traits in 58 environments over 12 years; analyses were performed for each year. On average, genomic models that include the interaction of general and specific combining ability with environments have greater predictive ability than genomic models without interaction with environments (ranging from 12 to 22%, depending on the trait). We concluded that including G × E in the prediction of untested maize hybrids increases the accuracy of genomic models.

  12. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture and Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Material

    DTIC Science & Technology

    2000-10-01

    Purpose: To combine clinical, serum, pathologic and computer derived information into an artificial neural network to develop/validate a model to...Development of an artificial neural network (year 02). Prospective validation of this model (projected year 03). All models will be tested and

  13. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture & Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Material

    DTIC Science & Technology

    1999-10-01

    The purpose of this report is to combine clinical, serum, pathological and computer derived information into an artificial neural network to develop...01). Development of an artificial neural network model (year 02). Prospective validation of this model (projected year 03). All models will be tested

  14. Predicting the Future as Bayesian Inference: People Combine Prior Knowledge with Observations when Estimating Duration and Extent

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Tenenbaum, Joshua B.

    2011-01-01

    Predicting the future is a basic problem that people have to solve every day and a component of planning, decision making, memory, and causal reasoning. In this article, we present 5 experiments testing a Bayesian model of predicting the duration or extent of phenomena from their current state. This Bayesian model indicates how people should…

  15. Combining turbulent kinetic energy and Haines Index predictions for fire-weather assessments

    Treesearch

    Warren E. Heilman; Xindi Bian

    2007-01-01

    The 24- to 72-hour fire-weather predictions for different regions of the United States are now readily available from the regional Fire Consortia for Advanced Modeling of Meteorology and Smoke (FCAMMS) that were established as part of the U.S. National Fire Plan. These predictions are based on daily real-time MM5 model simulations of atmospheric conditions and fire-...

  16. Modeling breath-enhanced jet nebulizers to estimate pulmonary drug deposition.

    PubMed

    Wee, Wallace B; Leung, Kitty; Coates, Allan L

    2013-12-01

    Predictable delivery of aerosol medication for a given patient and drug-device combination is crucial, both for therapeutic effect and to avoid toxicity. The gold standard for measuring pulmonary drug deposition (PDD) is gamma scintigraphy. However, these techniques expose patients to radiation, are complicated, and are relevant for only one patient and drug-device combination, making them less available. Alternatively, in vitro experiments have been used as a surrogate to estimate in vivo performance, but this is time-consuming and has few "in vitro to in vivo" correlations for therapeutics delivered by inhalation. An alternative method for determining inhaled mass and PDD is proposed, based on deriving and validating a mathematical model for the individual breathing patterns of normal subjects and drug-device operating parameters. This model was evaluated for patients with cystic fibrosis (CF). This study comprises three stages: mathematical model derivation, in vitro testing, and in vivo validation. The model was derived from an idealized patient's respiration cycle and the steady-state operating characteristics of a drug-device combination. The model was tested under in vitro dynamic conditions that varied tidal volume, inspiration-to-expiration time, and breaths per minute. This approach was then extended to incorporate additional physiological parameters (dead space, aerodynamic particle size distribution) and validated against in vivo nuclear medicine data in predicting PDD in both normal subjects and those with CF. The model shows strong agreement with in vitro testing. In vivo testing with normal subjects yielded good agreement, but less agreement for patients with chronic obstructive lung disease and bronchiectasis from CF. The mathematical model was successful in accommodating a wide range of breathing patterns and drug-device combinations.
Furthermore, the model has demonstrated its effectiveness in predicting the amount of aerosol delivered to "normal" subjects. However, challenges remain in predicting deposition in obstructive lung disease.

  17. The prediction of human skin responses by using the combined in vitro fluorescein leakage/Alamar Blue (resazurin) assay.

    PubMed

    Clothier, Richard; Starzec, Gemma; Pradel, Lionel; Baxter, Victoria; Jones, Melanie; Cox, Helen; Noble, Linda

    2002-01-01

    A range of cosmetics formulations with human patch-test data were supplied in a coded form, for the examination of the use of a combined in vitro permeability barrier assay and cell viability assay to generate, and then test, a prediction model for assessing potential human skin patch-test results. The target cells employed were of the Madin Darby canine kidney cell line, which establish tight junctions and adherens junctions able to restrict the permeability of sodium fluorescein across the barrier of the confluent cell layer. The prediction model for interpretation of the in vitro assay results included initial effects and the recovery profile over 72 hours. A set of the hand-wash, surfactant-based formulations were tested to generate the prediction model, and then six others were evaluated. The model system was then also evaluated with powder laundry detergents and hand moisturisers: their effects were predicted by the in vitro test system. The model was under-predictive for two of the ten hand-wash products. It was over-predictive for the moisturisers (two out of six) and eight out of ten laundry powders. However, the in vivo human patch test data were variable, and 19 of the 26 predictions were correct or within 0.5 on the 0-4.0 scale used for the in vivo scores, i.e. within the same variable range reported for the repeat-test hand-wash in vivo data.

  18. An improved null model for assessing the net effects of multiple stressors on communities.

    PubMed

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. 
Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
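    The contrast between the species-level (compositional) null and a community-property null can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the function names are invented, and the zero floor on each species' predicted biomass (abundances cannot go negative) is an assumption about how species-level predictions are aggregated.

    ```python
    def compositional_null(control, stress_a, stress_b):
        """Species-level additive null: predict each species' biomass as
        control + (effect of A) + (effect of B), floored at zero, then
        aggregate to a community-level biomass by summing over species."""
        total = 0.0
        for sp, c in control.items():
            da = stress_a.get(sp, 0.0) - c  # single-stressor effect of A
            db = stress_b.get(sp, 0.0) - c  # single-stressor effect of B
            total += max(0.0, c + da + db)  # biomass cannot be negative
        return total

    def community_property_null(control, stress_a, stress_b):
        """Standard additive null applied directly to the community
        property: total control biomass plus each stressor's net change."""
        tc = sum(control.values())
        return tc + (sum(stress_a.values()) - tc) + (sum(stress_b.values()) - tc)
    ```

    The two nulls diverge exactly when stressors drive some species toward extinction: the zero floor applies per species before aggregation in the compositional null, so opposing species responses are not allowed to cancel the way they do in the community-property null.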

  19. Predicting nucleic acid binding interfaces from structural models of proteins

    PubMed Central

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2011-01-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to the relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from the surfaces of protein structural models obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models predict the real nucleic acid binding interfaces better than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. PMID:22086767

  20. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2018-01-01

    A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed well enough to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models for a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
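    The simplest instance of the coefficient-updating category is "recalibration-in-the-large": keep the original logistic model's coefficients fixed and refit only the intercept on the new population, so that the mean predicted risk matches the observed event rate. A minimal sketch (function names are assumed for illustration, not taken from the review), solving for the intercept shift by Newton's method:

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def recalibrate_intercept(linear_predictors, outcomes, iters=50):
        """Return the intercept shift `delta` that, added to each original
        linear predictor, makes the sum of predicted risks equal the observed
        number of events (maximum-likelihood fit of the intercept alone)."""
        delta = 0.0
        target = sum(outcomes)  # observed event count in the new data
        for _ in range(iters):
            probs = [sigmoid(lp + delta) for lp in linear_predictors]
            grad = sum(probs) - target                 # score for delta
            hess = sum(p * (1.0 - p) for p in probs)   # observed information
            delta -= grad / hess                       # Newton step
        return delta
    ```

    More aggressive strategies in the same family also rescale the slope (logistic recalibration) or refit individual coefficients, trading bias against the amount of new data required.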

  1. Incorporating Retention Time to Refine Models Predicting Thermal Regimes of Stream Networks Across New England

    EPA Science Inventory

    Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data s...

  2. Quantitative predictions of bioconversion of aspen by dilute acid and SPORL pretreatments using a unified combined hydrolysis factor (CHF)

    Treesearch

    W. Zhu; Carl J. Houtman; J.Y. Zhu; Roland Gleisner; K.F. Chen

    2012-01-01

    A combined hydrolysis factor (CHF) was developed to predict xylan hydrolysis during pretreatments of native aspen (Populus tremuloides) wood chips. A natural extension of previously developed kinetic models allowed us to account for the effect of catalysts by dilute acid and two sulfite pretreatments at different pH values....

  3. Association between different combination of measures for obesity and new-onset gallstone disease.

    PubMed

    Liu, Tong; Wang, Wanchao; Ji, Yannan; Wang, Yiming; Liu, Xining; Cao, Liying; Liu, Siqing

    2018-01-01

    Body mass index (BMI) is an index of general obesity. Waist circumference (WC) is a measure of body-fat distribution and is commonly used to estimate abdominal obesity. An important trait of general and abdominal obesity is their propensity to coexist, so a single measure of obesity cannot precisely identify persons at risk for GSD. This study aimed to compare the predictive values of various combinations of measures of obesity (BMI, WC, waist-to-hip ratio) for new-onset GSD. We prospectively studied these predictive values in a cohort of 88,947 participants who were free of prior gallstone disease; demographic characteristics and biochemical parameters were recorded. Among the 88,947 participants, 4,329 were identified as having GSD during 713,345 person-years of follow-up. Higher BMI, WC and waist-to-hip ratio (WHtR) were significantly associated with higher risks of GSD in both genders, even after adjustment for potential confounders. In males, the hazard ratios for the highest versus lowest BMI, WC and WHtR were 1.63 (1.47-1.79), 1.53 (1.40-1.68) and 1.44 (1.31-1.58), respectively. In females, the corresponding hazard ratios were 2.11 (1.79-2.49), 1.85 (1.55-2.22) and 1.84 (1.55-2.19), respectively. In the male group, the combination of BMI+WC improved the predictive ability of the model more clearly than other combinations after adding them to the multivariate model in turn, while for females the best predictive combination was BMI+WHtR. Elevated BMI, WC and WHtR were independent risk factors for new-onset GSD in both sexes after additional adjustment for potential confounders. In males, the combination of BMI+WC seemed to be the most predictive model for evaluating the effect of obesity on new-onset GSD, while the best combination in females was BMI+WHtR.

  4. Coupling a Mesoscale Numerical Weather Prediction Model with Large-Eddy Simulation for Realistic Wind Plant Aerodynamics Simulations (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draxl, C.; Churchfield, M.; Mirocha, J.

    Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow [specifically the Weather Research and Forecasting model] with a microscale LES model (OpenFOAM) that can predict microscale turbulence and wake losses.

  5. Modeling forest biomass and growth: Coupling long-term inventory and LiDAR data

    Treesearch

    Chad Babcock; Andrew O. Finley; Bruce D. Cook; Aaron Weiskittel; Christopher W. Woodall

    2016-01-01

    Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB...

  6. Combining joint models for biomedical event extraction

    PubMed Central

    2012-01-01

    Background We explore techniques for performing model combination between the UMass and Stanford biomedical event extraction systems. Both sub-components address event extraction as a structured prediction problem, and use dual decomposition (UMass) and parsing algorithms (Stanford) to find the best scoring event structure. Our primary focus is on stacking, where the predictions from the Stanford system are used as features in the UMass system. For comparison, we look at simpler model combination techniques such as intersection and union, which require only the outputs from each system and combine them directly. Results First, we find that stacking substantially improves performance while intersection and union provide no significant benefits. Second, we investigate the graph properties of event structures and their impact on the combination of our systems. Finally, we trace the origins of events proposed by the stacked model to determine the role each system plays in different components of the output. We learn that, while stacking can propose novel event structures not seen in either base model, these events have extremely low precision. Removing these novel events improves our already state-of-the-art F1 to 56.6% on the test set of Genia (Task 1). Overall, the combined system formed via stacking ("FAUST") performed well in the BioNLP 2011 shared task. The FAUST system obtained 1st place in three out of four tasks: 1st place in Genia Task 1 (56.0% F1) and Task 2 (53.9%), 2nd place in the Epigenetics and Post-translational Modifications track (35.0%), and 1st place in the Infectious Diseases track (55.6%). Conclusion We present a state-of-the-art event extraction system that relies on the strengths of structured prediction and model combination through stacking. Akin to results on other tasks, stacking outperforms intersection and union and leads to very strong results.
The utility of model combination hinges on complementary views of the data, and we show that our sub-systems capture different graph properties of event structures. Finally, by removing low precision novel events, we show that performance from stacking can be further improved. PMID:22759463
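    The intersection and union baselines need only each system's output event sets. A minimal sketch, assuming events are represented as hashable tuples (a simplification of full event structures; the function name and tuple encoding are illustrative, not the FAUST implementation):

    ```python
    def combine_predictions(preds_a, preds_b, mode):
        """Direct output-level combination of two event extraction systems.

        'intersection' keeps only events both systems propose (favoring
        precision); 'union' keeps events proposed by either (favoring
        recall). Unlike stacking, no retraining is required."""
        a, b = set(preds_a), set(preds_b)
        if mode == "intersection":
            return a & b
        if mode == "union":
            return a | b
        raise ValueError("mode must be 'intersection' or 'union'")
    ```

    The abstract's finding that neither baseline helps, while stacking does, reflects that set operations cannot exploit one system's predictions as soft evidence inside the other's inference.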

  7. Action Unit Models of Facial Expression of Emotion in the Presence of Speech

    PubMed Central

    Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues for the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction in the presence of speech from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561

  8. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
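    The LPML described above can be estimated directly from posterior samples using the standard harmonic-mean estimator of each site's CPO. A sketch assuming the per-site likelihoods have already been evaluated at each posterior draw (the function name and input layout are illustrative):

    ```python
    import math

    def lpml_from_likelihoods(site_likelihoods):
        """Estimate the log pseudo-marginal likelihood (LPML).

        site_likelihoods[i][s] is the likelihood of site i under posterior
        draw s. Each CPO_i is estimated by the harmonic mean of these
        likelihoods, and LPML = sum_i log(CPO_i)."""
        lpml = 0.0
        for liks in site_likelihoods:
            inv_mean = sum(1.0 / l for l in liks) / len(liks)  # E[1/L_i]
            lpml += math.log(1.0 / inv_mean)                   # log CPO_i
        return lpml
    ```

    This is what makes CPO a cheap cross-validation surrogate: only a single posterior sample is needed, with no refitting, although the harmonic-mean estimator can be unstable when some draws assign a site very low likelihood.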

  9. Using demography and movement behavior to predict range expansion of the southern sea otter.

    USGS Publications Warehouse

    Tinker, M.T.; Doak, D.F.; Estes, J.A.

    2008-01-01

    In addition to forecasting population growth, basic demographic data combined with movement data provide a means for predicting rates of range expansion. Quantitative models of range expansion have rarely been applied to large vertebrates, although such tools could be useful for restoration and management of many threatened but recovering populations. Using the southern sea otter (Enhydra lutris nereis) as a case study, we utilized integro-difference equations in combination with a stage-structured projection matrix that incorporated spatial variation in dispersal and demography to make forecasts of population recovery and range recolonization. In addition to these basic predictions, we emphasize how to make these modeling predictions useful in a management context through the inclusion of parameter uncertainty and sensitivity analysis. Our models resulted in hind-cast (1989–2003) predictions of net population growth and range expansion that closely matched observed patterns. We next made projections of future range expansion and population growth, incorporating uncertainty in all model parameters, and explored the sensitivity of model predictions to variation in spatially explicit survival and dispersal rates. The predicted rate of southward range expansion (median = 5.2 km/yr) was sensitive to both dispersal and survival rates; elasticity analysis indicated that changes in adult survival would have the greatest potential effect on the rate of range expansion, while perturbation analysis showed that variation in subadult dispersal contributed most to variance in model predictions. Variation in survival and dispersal of females at the south end of the range contributed most of the variance in predicted southward range expansion. Our approach provides guidance for the acquisition of further data and a means of forecasting the consequence of specific management actions. Similar methods could aid in the management of other recovering populations.

  10. Novel prediction model of renal function after nephrectomy from automated renal volumetry with preoperative multidetector computed tomography (MDCT).

    PubMed

    Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo

    2015-10-01

    A predictive model of postoperative renal function may impact the planning of nephrectomy. Our aims were to develop a novel predictive model combining clinical indices with computer volumetry, measuring the preserved renal cortex volume (RCV) on multidetector computed tomography (MDCT), and to prospectively validate the model's performance. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, including a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis to develop the novel model for predicting postoperative eGFR, with a backward elimination method. The predictive model was externally validated and its performance was compared with that of previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % RPV alteration, and % RCV alteration (p < 0.01). The variables significantly correlated with % eGFR alteration were % RCV preservation (r = 0.58, p < 0.01) and % RPV preservation (r = 0.54, p < 0.01). We developed our regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). A strong correlation was seen between postoperative eGFR and the estimates of the model (r = 0.83; p < 0.001). In the external validation cohort (n = 21), our model outperformed previously reported models. Combining MDCT renal volumetry with clinical indices might yield an important tool for predicting postoperative renal function.
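    The reported regression can be applied directly. A sketch (function name is illustrative), assuming %RCV preservation is entered as a proportion between 0 and 1, since the coefficient of 52.92 is only plausible on that scale:

    ```python
    def predict_postop_egfr(age, body_surface_area, preop_egfr, rcv_preservation):
        """Postoperative eGFR from the regression reported in the abstract.

        age in years, body_surface_area in m^2, preop_egfr in
        mL/min/1.73 m^2, rcv_preservation as a fraction (0-1) of the
        renal cortex volume preserved after nephrectomy."""
        return (57.87
                - 0.55 * age
                - 15.01 * body_surface_area
                + 0.30 * preop_egfr
                + 52.92 * rcv_preservation)
    ```

    For example, a 60-year-old with a body surface area of 1.7 m^2, preoperative eGFR of 80, and 70% of cortex volume preserved would be predicted to retain an eGFR of roughly 60.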

  11. Ant colony optimization algorithm for interpretable Bayesian classifiers combination: application to medical predictions.

    PubMed

    Bouktif, Salah; Hanna, Eileen Marie; Zaki, Nazar; Abu Khousa, Eman

    2014-01-01

    Prediction and classification techniques have been well studied by machine learning researchers and developed for several real-world problems. However, the level of acceptance and success of prediction models is still below expectation due to difficulties such as the low performance of prediction models when they are applied in different environments. Such a problem has been addressed by many researchers, mainly from the machine learning community. A second problem, principally raised by model users in different communities, such as managers, economists, engineers, biologists, and medical practitioners, is the prediction models' interpretability: the ability of a model to explain its predictions and exhibit the causal relationships between the inputs and the outputs. In the case of classification, a successful way to alleviate low performance is to use ensemble classifiers, an intuitive strategy that fosters collaboration between different classifiers towards better performance than any individual classifier. Unfortunately, ensemble classifier methods do not take into account the interpretability of the final classification outcome; they even worsen the original interpretability of the individual classifiers. In this paper we propose a novel implementation of a classifier combination approach that not only promotes the overall performance but also preserves the interpretability of the resulting model. We propose a solution based on Ant Colony Optimization and tailored to the case of Bayesian classifiers. We validate our proposed solution with case studies from the medical domain, namely heart disease and cardiotocography-based predictions, problems where interpretability is critical to making appropriate clinical decisions. The datasets, prediction models, and software tool together with supplementary materials are available at http://faculty.uaeu.ac.ae/salahb/ACO4BC.htm.

  12. Final Technical Report: Increasing Prediction Accuracy.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  13. Predictive modeling of structured electronic health records for adverse drug event detection.

    PubMed

    Zhao, Jing; Henriksson, Aron; Asker, Lars; Boström, Henrik

    2015-01-01

    The digitization of healthcare data, resulting from the increasingly widespread adoption of electronic health records, has greatly facilitated its analysis by computational methods and thereby enabled large-scale secondary use thereof. This can be exploited to support public health activities such as pharmacovigilance, wherein the safety of drugs is monitored to inform regulatory decisions about sustained use. To that end, electronic health records have emerged as a potentially valuable data source, providing access to longitudinal observations of patient treatment and drug use. A nascent line of research concerns predictive modeling of healthcare data for the automatic detection of adverse drug events, which presents its own set of challenges: it is not yet clear how to represent the heterogeneous data types in a manner conducive to learning high-performing machine learning models. Datasets from an electronic health record database are used for learning predictive models with the purpose of detecting adverse drug events. The use and representation of two data types, as well as their combination, are studied: clinical codes, describing prescribed drugs and assigned diagnoses, and measurements. Feature selection is conducted on the various types of data to reduce dimensionality and sparsity, while allowing for an in-depth feature analysis of the usefulness of each data type and representation. Within each data type, combining multiple representations yields better predictive performance compared to using any single representation. The use of clinical codes for adverse drug event detection significantly outperforms the use of measurements; however, there is no significant difference over datasets between using only clinical codes and their combination with measurements. For certain adverse drug events, the combination does, however, outperform using only clinical codes. 
Feature selection leads to increased predictive performance for both data types, in isolation and combined. We have demonstrated how machine learning can be applied to electronic health records for the purpose of detecting adverse drug events and proposed solutions to some of the challenges this presents, including how to represent the various data types. Overall, clinical codes are more useful than measurements and, in specific cases, it is beneficial to combine the two.

  15. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular approach in hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram, so its efficacy depends on the embedding parameters: embedding dimension, time lag, and number of nearest neighbors. Optimal estimation of these parameters is thus critical to the application of the local model. However, they are conventionally estimated separately, using Average Mutual Information (AMI) and False Nearest Neighbors (FNN), which may lead to local optima and thus limits prediction accuracy. To address this limitation, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters, and compares it with another global optimization approach, the Genetic Algorithm (GA). The proposed hybrid methods are examined on daily and monthly streamflow time series. The results show that global optimization enables the local model to provide more accurate predictions than local optimization, and that the LM combined with SA has the advantage in computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
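    A minimal sketch of the local model's core, phase-space reconstruction by time-delay embedding followed by a nearest-neighbour forecast, assuming one-step-ahead prediction and squared Euclidean distance (the parameter values here are illustrative, not the SA- or GA-optimized ones):

    ```python
    def delay_embed(series, dim, lag):
        """Reconstruct the phase space: the point at time t is
        (x[t], x[t-lag], ..., x[t-(dim-1)*lag])."""
        start = (dim - 1) * lag
        return [tuple(series[t - k * lag] for k in range(dim))
                for t in range(start, len(series))]

    def local_model_predict(series, dim, lag, n_neighbors):
        """One-step-ahead forecast: average the successors of the embedded
        points nearest to the current state."""
        start = (dim - 1) * lag
        points = delay_embed(series, dim, lag)
        current = points[-1]
        dist2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
        # every embedded point except the last has a known successor
        ranked = sorted(range(len(points) - 1),
                        key=lambda i: dist2(points[i], current))
        successors = [series[start + i + 1] for i in ranked[:n_neighbors]]
        return sum(successors) / len(successors)
    ```

    In the paper's scheme, dim, lag and n_neighbors would be chosen jointly by SA (or GA) rather than separately by AMI and FNN.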

  16. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    PubMed

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.

  17. Impact of data assimilation on ocean current forecasts in the Angola Basin

    NASA Astrophysics Data System (ADS)

    Phillipson, Luke; Toumi, Ralf

    2017-06-01

    The ocean current predictability in the data-limited Angola Basin was investigated using the Regional Ocean Modelling System (ROMS) with four-dimensional variational data assimilation. Six experiments were undertaken, comprising a baseline case of the assimilation of salinity/temperature profiles and satellite sea surface temperature, with the subsequent addition of altimetry, OSCAR (satellite-derived sea surface currents), drifters, altimetry and drifters combined, and OSCAR and drifters combined. The addition of drifters significantly improves Lagrangian predictability in comparison to the baseline case as well as to the addition of either altimetry or OSCAR. OSCAR assimilation improves Lagrangian predictability only as much as altimetry assimilation does. On average, the assimilation of either altimetry or OSCAR with drifter velocities does not significantly improve Lagrangian predictability compared to drifter assimilation alone, even degrading predictability in some cases. When the forecast current speed is large, it is more likely that the combination improves trajectory forecasts. Conversely, when the currents are weaker, it is more likely that the combination degrades the trajectory forecast.

  18. A new predictive dynamic model describing the effect of the ambient temperature and the convective heat transfer coefficient on bacterial growth.

    PubMed

    Ben Yaghlene, H; Leguerinel, I; Hamdi, M; Mafart, P

    2009-07-31

    In this study, predictive microbiology and food engineering were combined in order to develop a new analytical model predicting bacterial growth under dynamic temperature conditions. The proposed model associates a simplified primary bacterial growth model without lag, the secondary Ratkowsky "square root" model and a simplified two-parameter heat transfer model for an infinite slab. The model takes into consideration the product thickness, its thermal properties, the ambient air temperature, the convective heat transfer coefficient and the growth parameters of the microorganism of concern. For the validation of the overall model, five different combinations of ambient air temperature (ranging from 8 °C to 12 °C), product thickness (ranging from 1 cm to 6 cm) and convective heat transfer coefficient (ranging from 8 W/(m² K) to 60 W/(m² K)) were tested during a cooling procedure. Moreover, three different ambient air temperature scenarios assuming alternated cooling and heating stages, drawn from real refrigerated food processes, were tested. General agreement between predicted and observed bacterial growth was obtained, and less than 5% of the experimental data fell outside the 95% confidence bands estimated by the bootstrap percentile method at all tested conditions. Accordingly, the overall model was successfully validated for isothermal and dynamic refrigeration cycles, allowing for dynamic temperature changes at the centre and at the surface of the product. The major impact of the convective heat transfer coefficient and the product thickness on bacterial growth during product cooling was demonstrated. For instance, the time needed for the same level of bacterial growth to be reached at the product's half thickness was estimated at 5 and 16.5 h at low and high convection levels, respectively. Moreover, simulation results demonstrated that the bacterial growth predicted at the ambient air temperature cannot be assumed to be equivalent to the bacterial growth occurring at the product's surface or centre when convective heat transfer is taken into account. Our results indicate that combining food engineering and predictive microbiology models is an interesting approach providing very useful tools for food safety and process optimisation.
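    The secondary model named in the abstract has a standard closed form, sqrt(mu_max) = b (T − Tmin), and a lag-free primary model can be stepped through a dynamic temperature profile. The coefficients below are placeholders for illustration, not the paper's fitted values:

    ```python
    import math

    def ratkowsky_mu(temp_c, b, t_min):
        """Ratkowsky "square root" model: sqrt(mu_max) = b * (T - Tmin).
        Returns the specific growth rate (per hour); zero at or below Tmin."""
        if temp_c <= t_min:
            return 0.0
        return (b * (temp_c - t_min)) ** 2

    def log10_count(temps_c, dt_h, b, t_min, log10_n0):
        """Lag-free exponential primary model under a dynamic temperature
        profile: d(log10 N)/dt = mu(T) / ln(10), integrated stepwise."""
        n = log10_n0
        for temp in temps_c:
            n += ratkowsky_mu(temp, b, t_min) * dt_h / math.log(10)
        return n
    ```

    Feeding in the simulated centre- and surface-temperature histories from the heat transfer model would reproduce the kind of position-dependent growth comparison described above.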

  19. Development of a globally applicable model for near real-time prediction of seismically induced landslides

    USGS Publications Warehouse

    Nowicki, M. Anna; Wald, David J.; Hamburger, Michael W.; Hearne, Mike; Thompson, Eric M.

    2014-01-01

    Substantial effort has been invested to understand where seismically induced landslides may occur in the future, as they are a costly and frequently fatal threat in mountainous regions. The goal of this work is to develop a statistical model for estimating the spatial distribution of landslides in near real-time around the globe for use in conjunction with the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system. This model uses standardized outputs of ground shaking from the USGS ShakeMap Atlas 2.0 to develop an empirical landslide probability model, combining shaking estimates with broadly available landslide susceptibility proxies, i.e., topographic slope, surface geology, and climate parameters. We focus on four earthquakes for which digitally mapped landslide inventories and well-constrained ShakeMaps are available. The resulting database is used to build a predictive model of the probability of landslide occurrence. The landslide database includes the Guatemala (1976), Northridge (1994), Chi-Chi (1999), and Wenchuan (2008) earthquakes. Performance of the regression model is assessed using statistical goodness-of-fit metrics and a qualitative review to determine which combination of the proxies provides both the optimum prediction of landslide-affected areas and minimizes the false alarms in non-landslide zones. Combined with near real-time ShakeMaps, these models can be used to make generalized predictions of whether or not landslides are likely to occur (and if so, where) for earthquakes around the globe, and eventually to inform loss estimates within the framework of the PAGER system.
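    As a hedged illustration of what such an empirical probability model can look like, here is a logistic combination of a shaking measure with susceptibility proxies; the functional form and coefficients are hypothetical, not the fitted USGS model:

    ```python
    import math

    def landslide_probability(pga_g, slope_deg, wetness_index, coefs):
        """Hypothetical logistic model: P = 1 / (1 + exp(-z)),
        z = b0 + b1 * ln(PGA) + b2 * slope + b3 * wetness."""
        b0, b1, b2, b3 = coefs
        z = b0 + b1 * math.log(pga_g) + b2 * slope_deg + b3 * wetness_index
        return 1.0 / (1.0 + math.exp(-z))
    ```

    With positive shaking and slope coefficients, the probability rises monotonically with ground motion and terrain steepness, which is the qualitative behavior the proxies are meant to capture.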

  20. Simulating physiological interactions in a hybrid system of mathematical models.

    PubMed

    Kretschmer, Jörn; Haunsberger, Thomas; Drost, Erick; Koch, Edmund; Möller, Knut

    2014-12-01

    Mathematical models can be deployed to simulate physiological processes of the human organism. Using these simulations, a patient's reaction to changes in the therapy regime can be predicted, and based on these predictions, medical decision support systems (MDSS) can help in optimizing medical therapy. An MDSS designed to support mechanical ventilation in critically ill patients should not only consider respiratory mechanics but also other systems of the human organism such as gas exchange or blood circulation. A specially designed framework allows combining three model families (respiratory mechanics, cardiovascular dynamics and gas exchange) to predict the outcome of a therapy setting. Elements of the three model families are dynamically combined to form a complex model system with interacting submodels. Tests revealed that complex model combinations are not computationally feasible. In most patients, cardiovascular physiology could be simulated by simplified models, decreasing computational costs. Thus, a simplified cardiovascular model that is able to reproduce basic physiological behavior is introduced. This model consists purely of difference equations and does not require special algorithms to be solved numerically. It is based on a beat-to-beat model which has been extended to react to the intrathoracic pressure levels present during mechanical ventilation; this reaction has been tuned to mimic the behavior of a complex 19-compartment model. Tests revealed that the model closely reproduces the general system behavior of the 19-compartment model. Blood pressures were calculated with a maximum deviation of 1.8% in systolic pressure and 3.5% in diastolic pressure, leading to a simulation error of 0.3% in cardiac output. The gas exchange submodel, being reactive to changes in cardiac output, showed a resulting deviation of less than 0.1%. Therefore, the proposed model is usable in combinations where the cardiovascular simulation does not have to be detailed. Computing costs decreased dramatically, by a factor of 186 compared to a model combination employing the 19-compartment model.

  1. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    NASA Astrophysics Data System (ADS)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence and, in particular, greatly improves the resolution of the signal's low-frequency components, so it can improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%; when the prediction length exceeds 30 days, the accuracy improves noticeably, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
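    Schematically, the leap-step variant differs from an ordinary AR model in how the regressors are sampled. The sketch below reflects our reading of that scheme (leap = 1 recovers standard AR design pairs; it is an illustration, not the paper's estimator):

    ```python
    def ar_design(series, order, leap=1):
        """Build (regressors, target) pairs for an AR(order) model.
        With leap=1 this is the ordinary AR scheme; with leap=k the
        regressors are sampled k steps apart (the leap-step scheme),
        widening the effective memory for low-frequency components."""
        span = order * leap
        return [([series[t - j * leap] for j in range(1, order + 1)], series[t])
                for t in range(span, len(series))]
    ```

    The coefficients would then be fitted to these pairs by least squares, exactly as for a conventional AR model.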

  2. Robust current control-based generalized predictive control with sliding mode disturbance compensation for PMSM drives.

    PubMed

    Liu, Xudong; Zhang, Chenghui; Li, Ke; Zhang, Qi

    2017-11-01

    This paper addresses the current control of permanent magnet synchronous motors (PMSM) for electric drives with model uncertainties and disturbances. A generalized predictive current control method combined with sliding mode disturbance compensation is proposed to satisfy the requirements of fast response and strong robustness. Firstly, according to generalized predictive control (GPC) theory based on the continuous-time model, a predictive current control method is presented without considering the disturbance, which is convenient to realize in a digital controller. In practice, it is difficult to derive the exact motor model and parameters. Thus, a sliding mode disturbance compensation controller is studied to improve the adaptiveness and robustness of the control system. The designed controller combines the merits of both predictive control and sliding mode control, and its parameters are easy to adjust. Lastly, the proposed controller is tested on an interior PMSM by simulation and experiment, and the results indicate that it has good performance in both current tracking and disturbance rejection. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Analysis and forecast of railway coal transportation volume based on BP neural network combined forecasting model

    NASA Astrophysics Data System (ADS)

    Xu, Yongbin; Xie, Haihong; Wu, Liuyi

    2018-05-01

    Coal accounts for about 50% of total railway freight volume. The coal industry is vulnerable to the economic situation and national policies, and coal transportation volume fluctuates significantly under the new economic normal. Grasping the overall development trend of the railway coal transportation market therefore has important reference and guidance value for railway and coal industry decision-making. By analyzing economic indicators and policy implications, this paper expounds the trend of coal transportation volume, and further combines the economic indicators highly correlated with coal transportation volume with a traditional traffic prediction model to establish a combined forecasting model based on a back-propagation neural network. The error of the prediction results is tested, showing that the method achieves higher accuracy and is practically applicable.

  4. Limited improvement of incorporating primary circulating prostate cells with the CAPRA score to predict biochemical failure-free outcome of radical prostatectomy for prostate cancer.

    PubMed

    Murray, Nigel P; Aedo, Socrates; Fuentealba, Cynthia; Jacob, Omar; Reyes, Eduardo; Novoa, Camilo; Orellana, Sebastian; Orellana, Nelson

    2016-10-01

    To establish a prediction model for early biochemical failure based on the Cancer of the Prostate Risk Assessment (CAPRA) score, the presence or absence of primary circulating prostate cells (CPCs), and the number of primary CPCs (nCPC) per 8 ml blood sample detected before surgery. A prospective single-center study of men who underwent radical prostatectomy as monotherapy for prostate cancer. Clinical-pathological findings were used to calculate the CAPRA score. Before surgery, blood was taken for CPC detection; mononuclear cells were obtained using differential gel centrifugation, and CPCs were identified using immunocytochemistry. A CPC was defined as a cell expressing prostate-specific antigen and P504S, and the presence or absence of CPCs and the number of cells detected per 8 ml blood sample were registered. Patients were followed up for up to 5 years; biochemical failure was defined as a prostate-specific antigen > 0.2 ng/ml. The validity of the CAPRA score was calibrated using partial validation, and fractional polynomial Cox proportional hazards regression was used to build 3 models, which underwent decision curve analysis to determine their predictive value with respect to biochemical failure. A total of 267 men participated, mean age 65.80 years, and after 5 years of follow-up the biochemical failure-free survival was 67.42%. The model using the CAPRA score showed a hazard ratio (HR) of 5.76 between low- and high-risk groups; the model using CPC showed a HR of 26.84 between positive and negative groups; and the combined model showed a HR of 4.16 for CAPRA score and 19.93 for CPC. Using the continuous variable nCPC, there was no improvement in the predictive value of the model compared with the model using a positive-negative result of CPC detection. The combined CAPRA-nCPC model showed improved predictive performance for biochemical failure on Harrell's C concordance test and a net benefit on decision curve analysis in comparison with either model used separately. Although the combined CAPRA-nCPC model improves the prediction of biochemical failure in patients undergoing radical prostatectomy for prostate cancer, the improvement is minimal. The use of the presence or absence of primary CPCs alone did not predict aggressive disease or biochemical failure. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. DeepSynergy: predicting anti-cancer drug synergy with Deep Learning

    PubMed Central

    Preuer, Kristina; Lewis, Richard P I; Hochreiter, Sepp; Bender, Andreas; Bulusu, Krishna C; Klambauer, Günter

    2018-01-01

    Motivation: While drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of the combinatorial space. However, computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Deep Learning has recently had an impact in many research areas by achieving new state-of-the-art model performance, but it has not yet been applied to drug synergy prediction, which is the approach we present here, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies. Results: DeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods, with an improvement of 7.2% over the second-best method at the prediction of novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and predicted values of DeepSynergy was 0.73. Applying DeepSynergy to classification of these novel drug combinations resulted in a high predictive performance, with an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations. Availability and implementation: DeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy. 
Contact klambauer@bioinf.jku.at Supplementary information Supplementary data are available at Bioinformatics online. PMID:29253077

  6. Modeling the eco-physiology of the purple mauve stinger, Pelagia noctiluca using Dynamic Energy Budget theory

    NASA Astrophysics Data System (ADS)

    Augustine, Starrlight; Rosa, Sara; Kooijman, Sebastiaan A. L. M.; Carlotti, François; Poggiale, Jean-Christophe

    2014-11-01

    Parameters for the standard Dynamic Energy Budget (DEB) model were estimated for the purple mauve stinger, Pelagia noctiluca, using literature data. Overall, the model predictions are in good agreement with data covering the full life-cycle. The parameter set we obtain suggests that P. noctiluca is well adapted to survive long periods of starvation, since the predicted maximum reserve capacity is extremely high. Moreover, we predict that the reproductive output of larger individuals is relatively insensitive to changes in food level, whereas wet mass and length are sensitive. Furthermore, the parameters imply that even if food were scarce (ingestion at only 14% of the maximum for a given size), an individual would still mature and be able to reproduce. We present detailed model predictions for embryo development and discuss the developmental energetics of the species, such as the fact that the metabolism of ephyrae accelerates for several days after birth. Finally, we explore a number of concrete testable model predictions which will help to guide future research. The application of DEB theory to the collected data allowed us to conclude that P. noctiluca combines maximizing allocation to reproduction with rather extreme capabilities to survive starvation. The combination of these properties might explain why P. noctiluca is a rapidly growing concern to fisheries and tourism.

  7. Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions.

    PubMed

    Fox, Naomi J; Marion, Glenn; Davidson, Ross S; White, Piran C L; Hutchings, Michael R

    2012-03-06

    Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long-term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry, and long-term predictive models need the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed.

  8. A fuzzy set preference model for market share analysis

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables in either consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).

  9. A MIXTURE OF SEVEN ANTIANDROGENIC COMPOUNDS ELICITS ADDITIVE EFFECTS ON THE MALE RAT REPRODUCTIVE TRACT THAT CORRESPOND TO MODELED PREDICTIONS

    EPA Science Inventory

    The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...

  10. Application of a Curriculum Hierarchy Evaluation (CHE) Model to Sequentially Arranged Tasks.

    ERIC Educational Resources Information Center

    O'Malley, J. Michael

    A curriculum hierarchy evaluation (CHE) model was developed by combining a transfer paradigm with an aptitude-treatment-task interaction (ATTI) paradigm. Positive transfer was predicted between sequentially arranged tasks, and a programed or nonprogramed treatment was predicted to interact with aptitude and with tasks. Eighteen four and five…

  11. Using Machine Learning Models to Predict Functional Use (Interagency Alternatives Assessment Workshop)

    EPA Science Inventory

    This presentation will outline how data was collected on how chemicals are used in products, models were built using this data to then predict how chemicals can be used in products, and, finally, how combining this information with Tox21 in vitro assays can be used to rapidly scre...

  12. Combining Satellite Measurements and Numerical Flood Prediction Models to Save Lives and Property from Flooding

    NASA Astrophysics Data System (ADS)

    Saleh, F.; Garambois, P. A.; Biancamaria, S.

    2017-12-01

    Floods are among the major natural threats to human societies across all continents. Consequences of floods are more dramatic in highly populated areas, with losses of human life and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities, and growing population and economic assets in urban watersheds. Despite advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite observations has become an absolute priority for producing accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of Surface Water and Ocean Topography (SWOT) satellite data from a flood prediction perspective. The near-real-time methodology is based on combining satellite data from a simulator that mimics the future SWOT data, numerical models, high-resolution elevation data, and real-time local measurements in the New York/New Jersey area.

  13. Identifying a predictive model for response to atypical antipsychotic monotherapy treatment in south Indian schizophrenia patients.

    PubMed

    Gupta, Meenal; Moily, Nagaraj S; Kaur, Harpreet; Jajodia, Ajay; Jain, Sanjeev; Kukreti, Ritushree

    2013-08-01

    Atypical antipsychotic (AAP) drugs are the preferred choice of treatment for schizophrenia patients. Patients who do not show a favorable response to AAP monotherapy are subjected to prolonged trial-and-error treatment with AAP multitherapy, typical antipsychotics, or a combination of both. Prior identification of patients' likely response to drugs can therefore be an important step in providing efficacious and safe treatment. We thus attempted to elucidate a genetic signature that could predict patients' response to AAP monotherapy. Our logistic regression analyses indicated a 76% probability that patients carrying a combination of four SNPs will not show a favorable response to AAP therapy. The robustness of this prediction model was assessed using a repeated 10-fold cross-validation method, and the results across the n-fold cross-validations (mean accuracy=71.91%; 95%CI=71.47-72.35) suggest high accuracy and reliability of the prediction model. Further validation of these results in large sample sets is likely to establish their clinical applicability. Copyright © 2013 Elsevier Inc. All rights reserved.
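
    The repeated 10-fold cross-validation used above to assess model robustness can be sketched with scikit-learn. The SNP genotypes, coefficients, and sample size below are synthetic stand-ins, not the study's data or its four-SNP signature:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical data: four SNP genotypes coded 0/1/2 per patient
X = rng.integers(0, 3, size=(n, 4)).astype(float)
# Binary response label loosely linked to the SNP combination (illustrative only)
logits = X @ np.array([0.8, -0.6, 0.5, -0.4]) - 0.3
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression()
# 10-fold CV repeated 5 times, stratified to preserve class balance per fold
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy = {scores.mean():.2%} "
      f"(95% CI ± {1.96 * scores.std() / len(scores) ** 0.5:.2%})")
```

Repeating the folds, as in the abstract, tightens the confidence interval on the accuracy estimate relative to a single 10-fold pass.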

  14. Sugar and acid content of Citrus prediction modeling using FT-IR fingerprinting in combination with multivariate statistical analysis.

    PubMed

    Song, Seung Yeob; Lee, Young Koung; Kim, In-Jung

    2016-01-01

    A high-throughput screening system for Citrus lines with higher sugar and acid contents was established using Fourier transform infrared (FT-IR) spectroscopy in combination with multivariate analysis. FT-IR spectra confirmed typical spectral differences between the frequency regions of 950-1100 cm⁻¹, 1300-1500 cm⁻¹, and 1500-1700 cm⁻¹. Principal component analysis (PCA) and subsequent partial least squares-discriminant analysis (PLS-DA) were able to discriminate five Citrus lines into three separate clusters corresponding to their taxonomic relationships. Quantitative predictive modeling of the sugar and acid contents of Citrus fruits was established using partial least squares regression algorithms on the FT-IR spectra. The regression coefficients (R²) between predicted and estimated sugar and acid content values were 0.99. These results demonstrate that, by using FT-IR spectra and applying quantitative prediction modeling to Citrus sugar and acid contents, superior Citrus lines can be detected early with greater accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Comparison of in silico models for prediction of mutagenicity.

    PubMed

    Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas

    2013-01-01

    Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite; Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) Predictor; Derek; Toxicity Estimation Software Tool (T.E.S.T.); TOxicity Prediction by Komputer Assisted Technology (TOPKAT); Toxtree; CAESAR; and SARpy (SAR in python). In general, the results showed a high level of performance. To obtain a realistic estimate of predictive ability, the results for chemicals inside and outside the training set of each model were considered. The effect of applicability domain tools (when available) on prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.

  16. Predicting risk of trace element pollution from municipal roads using site-specific soil samples and remotely sensed data.

    PubMed

    Reeves, Mari Kathryn; Perdue, Margaret; Munk, Lee Ann; Hagedorn, Birgit

    2018-07-15

    Studies of environmental processes exhibit spatial variation within data sets. The ability to derive predictions of risk from field data is a critical path forward in understanding the data and applying the information to land and resource management. Thanks to recent advances in predictive modeling, open-source software, and computing, the power to do this is within grasp. This article provides an example of how we predicted relative trace element pollution risk from roads across a region by combining site-specific trace element data in soils with regional land cover and planning information in a predictive model framework. On the Kenai Peninsula of Alaska, we sampled 36 sites (191 soil samples) adjacent to roads for trace elements. We then combined this site-specific data with freely available land cover and urban planning data to derive a predictive model of landscape-scale environmental risk. We used six different model algorithms to analyze the dataset, comparing them in terms of their predictive abilities and the variables identified as important. Based on comparable predictive abilities (mean R² from 30 to 35% and mean root mean square error from 65 to 68%), we averaged all six model outputs to predict relative levels of trace element deposition in soils, given the road surface, traffic volume, sample distance from the road, land cover category, and impervious surface percentage. Mapped predictions of environmental risk from toxic trace element pollution can show land managers and transportation planners where to prioritize road renewal or maintenance by each road segment's relative environmental and human health risk. Published by Elsevier B.V.
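
    Averaging the outputs of several fitted models, as done above, can be sketched with scikit-learn. The three algorithms, predictor variables, and data below are synthetic stand-ins, not the six models or the Kenai Peninsula dataset:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for predictors such as traffic volume,
# distance from the road, and impervious surface percentage
X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [Ridge(),
          RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0)]
# Fit each model and stack its test-set predictions as one column
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
ensemble = preds.mean(axis=1)  # unweighted average across models
r2 = r2_score(y_te, ensemble)
print(f"ensemble R^2 = {r2:.2f}")
```

The unweighted mean is the simplest combination rule; skill-weighted averages (as in the multi-model combination literature) replace `preds.mean(axis=1)` with a weighted sum.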

  17. Genomic Prediction with Pedigree and Genotype × Environment Interaction in Spring Wheat Grown in South and West Asia, North Africa, and Mexico.

    PubMed

    Sukumaran, Sivakumar; Crossa, Jose; Jarquin, Diego; Lopes, Marta; Reynolds, Matthew P

    2017-02-09

    Developing genomic selection (GS) models is an important step in applying GS to accelerate the rate of genetic gain in grain yield in plant breeding. In this study, seven genomic prediction models under two cross-validation (CV) scenarios were tested on 287 advanced elite spring wheat lines phenotyped for grain yield (GY), thousand-grain weight (GW), grain number (GN), and thermal time for flowering (TTF) in 18 international environments (year-location combinations) in major wheat-producing countries in 2010 and 2011. Prediction models with genomic and pedigree information included main effects and interaction with environments. Two random CV schemes were applied to predict a subset of lines that were not observed in any of the 18 environments (CV1), and a subset of lines that were not observed in a set of the environments, but were observed in other environments (CV2). Genomic prediction models, including genotype × environment (G×E) interaction, had the highest average prediction ability under the CV1 scenario for GY (0.31), GN (0.32), GW (0.45), and TTF (0.27). For CV2, the average prediction ability of the model including the interaction terms was generally high for GY (0.38), GN (0.43), GW (0.63), and TTF (0.53). Wheat lines in site-year combinations in Mexico and India had relatively high prediction ability for GY and GW. Results indicated that prediction ability of lines not observed in certain environments could be relatively high for genomic selection when predicting G×E interaction in multi-environment trials. Copyright © 2017 Sukumaran et al.
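
    The difference between the CV1 and CV2 schemes above can be illustrated with a toy multi-environment table. The line names, environments, and yield values are invented for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
lines = [f"L{i:03d}" for i in range(20)]
envs = [f"E{j}" for j in range(4)]
# Hypothetical multi-environment trial: one yield record per line x environment
df = pd.DataFrame([(l, e, rng.normal()) for l in lines for e in envs],
                  columns=["line", "env", "yield"])

# CV1: held-out lines are unobserved in every environment
test_lines = set(rng.choice(lines, size=4, replace=False))
cv1_train = df[~df["line"].isin(test_lines)]
cv1_test = df[df["line"].isin(test_lines)]

# CV2: held-out cells are line x environment combinations, so the same
# lines remain observed in other environments of the training set
mask = rng.random(len(df)) < 0.25
cv2_train, cv2_test = df[~mask], df[mask]
print(len(cv1_test), len(cv2_test))
```

CV2 is the easier prediction problem because information on each test line leaks in from its records in other environments, which is consistent with the higher CV2 prediction abilities reported above.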

  18. Prediction of in vivo developmental toxicity by combination of Hand1-Luc embryonic stem cell test and metabolic stability test with clarification of metabolically inapplicable candidates.

    PubMed

    Nagahori, Hirohisa; Suzuki, Noriyuki; Le Coz, Florian; Omori, Takashi; Saito, Koichi

    2016-09-30

    The Hand1-Luc Embryonic Stem Cell Test (Hand1-Luc EST) is a promising alternative method for the evaluation of developmental toxicity. However, problems with predictivity have remained, related to solubility, the metabolic system, and the prediction model. We therefore assessed the usefulness of a rat liver S9 metabolic stability test using LC-MS/MS to develop a new prediction model. A total of 71 chemicals were analyzed by measuring cytotoxicity and differentiation toxicity, and highly reproducible (CV=20%) results were obtained. The first prediction model was developed by discriminant analysis performed on the full Hand1-Luc EST dataset, and 66.2% of the chemicals were correctly classified by the cross-validated classification. A second model was developed with additional descriptors obtained from the metabolic stability test to calculate hepatic availability, and an accuracy of 83.3% was obtained with an applicability domain of 50.7% (=36/71) after exclusion of 22 metabolically inapplicable candidates, which potentially have a metabolic activation property. A step-wise prediction scheme combining the Hand1-Luc EST and the metabolic stability test was therefore proposed. The current results provide a promising in vitro test method for accurately predicting in vivo developmental toxicity. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Systematic interpolation method predicts protein chromatographic elution with salt gradients, pH gradients and combined salt/pH gradients.

    PubMed

    Creasy, Arch; Barker, Gregory; Carta, Giorgio

    2017-03-01

    A methodology is presented to predict protein elution behavior from an ion exchange column using individual or combined pH and salt gradients based on high-throughput batch isotherm data. The buffer compositions are first optimized to generate linear pH gradients from pH 5.5 to 7 with defined concentrations of sodium chloride. Next, high-throughput batch isotherm data are collected for a monoclonal antibody on the cation exchange resin POROS XS over a range of protein concentrations, salt concentrations, and solution pH. Finally, a previously developed empirical interpolation (EI) method is extended to describe protein binding as a function of protein concentration, salt concentration, and solution pH without using an explicit isotherm model. The interpolated isotherm data are then used with a lumped kinetic model to predict the protein elution behavior. Experimental results obtained for laboratory-scale columns show excellent agreement with the predicted elution curves for both individual and combined pH and salt gradients at protein loads up to 45 mg/mL of column. Numerical studies show that the model predictions are robust as long as the isotherm data cover the range of mobile phase compositions where the protein actually elutes from the column. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
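
    The core idea, replacing an explicit isotherm model with interpolation over measured batch data, can be sketched as a grid lookup. The salt/pH grid, the binding surface, and the query point below are invented for illustration; the published EI method is considerably more elaborate:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical batch isotherm grid: bound protein q (mg/mL resin)
# measured as a function of salt concentration (mM) and solution pH
salt = np.linspace(0.0, 300.0, 7)
pH = np.linspace(5.5, 7.0, 4)
S, P = np.meshgrid(salt, pH, indexing="ij")
q = 50.0 * np.exp(-S / 100.0) * (7.5 - P)  # illustrative surface, not real data

# EI-style lookup: binding at any mobile-phase composition comes from
# interpolating the data, with no isotherm equation assumed
interp = RegularGridInterpolator((salt, pH), q)
val = float(interp([[120.0, 6.2]])[0])
print(f"q(120 mM, pH 6.2) ~ {val:.1f} mg/mL")
```

In a column simulation this lookup would be evaluated at every axial position and time step inside the lumped kinetic model, so its validity is limited to the range of compositions covered by the batch data, matching the robustness caveat above.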

  20. An iterative fullwave simulation approach to multiple scattering in media with randomly distributed microbubbles

    NASA Astrophysics Data System (ADS)

    Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie

    2017-05-01

    Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in an infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches describe multiple scattering and nonlinear bubble dynamics only separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium, and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 10⁶ microbubbles ml⁻¹, and 1% and 2% bubble concentrations by volume. The backscattering predicted by our modeling approach was verified experimentally with water tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through-transmission and predicted by the independent scattering approximation.

  1. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial distress is the early stage before bankruptcy, and bankruptcies caused by financial distress can be seen in the financial statements of a company. The ability to predict financial distress has become an important research topic because it can provide an early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds prediction models of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on the hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability, and model stability than the other models.
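
    A comparison of this kind can be sketched with scikit-learn. The data are synthetic stand-ins for financial ratios, and the forward sequential selector stands in for the stepwise procedure (an assumption, not the authors' exact implementation):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical financial-ratio features and a distress/healthy label
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

svm = make_pipeline(StandardScaler(), SVC())
# Stepwise-style (forward sequential) variable selection feeding the SVM
stepwise_svm = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(SVC(), n_features_to_select=4),
    SVC(),
)
results = {}
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("SVM", svm),
                    ("Stepwise-SVM", stepwise_svm)]:
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f}")
```

Wrapping the selector and classifier in one pipeline keeps the variable selection inside each cross-validation fold, which avoids the optimistic bias of selecting features on the full dataset first.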

  2. Human discomfort response to noise combined with vertical vibration

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.

    1979-01-01

    An experimental investigation was conducted (1) to determine the effects of combined environmental noise and vertical vibration upon human subjective discomfort response, (2) to develop a model for the prediction of passenger discomfort response to the combined environment, and (3) to develop a set of noise-vibration curves for use as criteria in ride quality design. Subjects were exposed to parametric combinations of noise and vibration through the use of a realistic laboratory simulator. Results indicated that accurate prediction of passenger ride comfort requires knowledge of both the level and frequency content of the noise and vibration components of a ride environment, as well as knowledge of the interactive effects of combined noise and vibration. A design tool in the form of an empirical model of passenger discomfort response to combined noise and vertical vibration was developed and illustrated by several computational examples. Finally, a set of noise-vibration criteria curves was generated to illustrate the fundamental design trade-off possible between passenger discomfort and the noise-vibration levels that produce the discomfort.

  3. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for the determination of kinetic parameters in predictive microbiology. The algorithm is incorporated into user-friendly graphical user interfaces (GUIs) to create a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and help them properly select the initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis, directly constructing tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent variables. The software was tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
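
    The one-step global fit idea, a single parameter vector fitted simultaneously to all isothermal curves instead of fitting each curve and then regressing the rates, can be sketched with SciPy. The logistic-style primary model, Ratkowsky-type secondary model, and all parameter values below are illustrative assumptions, not IPMP-Global Fit's actual model set:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
temps = np.array([10.0, 20.0, 30.0])  # hypothetical isothermal conditions (deg C)
t = np.linspace(0.0, 48.0, 13)        # sampling times (h)

def mu_max(T, b, T_min):
    """Secondary (Ratkowsky square-root type) model for the max growth rate."""
    return (b * (T - T_min)) ** 2

def growth(t, y0, ymax, mu):
    """Primary logistic-style growth curve in log10 counts."""
    return ymax - np.log10(1.0 + (10.0 ** (ymax - y0) - 1.0) * np.exp(-mu * t))

# Simulated noisy data from known "true" parameters (for illustration)
true = dict(b=0.04, T_min=2.0, y0=3.0, ymax=9.0)
data = {T: growth(t, true["y0"], true["ymax"], mu_max(T, true["b"], true["T_min"]))
           + rng.normal(scale=0.05, size=t.size)
        for T in temps}

def residuals(p):
    b, T_min, y0, ymax = p
    # One-step global fit: every curve shares the same parameter vector
    return np.concatenate([growth(t, y0, ymax, mu_max(T, b, T_min)) - data[T]
                           for T in temps])

fit = least_squares(residuals, x0=[0.05, 0.0, 2.5, 8.5],
                    bounds=([0.0, -10.0, 0.0, 5.0], [0.2, 8.0, 5.0, 12.0]))
print("estimated [b, T_min, y0, ymax]:", np.round(fit.x, 2))
```

Minimizing one global residual vector propagates information between temperatures, which is why the one-step approach tends to give tighter kinetic parameter estimates than two-step fitting.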

  4. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS), and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect, and combined models are developed with lumped and distributed input data. Model performance was then evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows at both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better owing to reduced noise in the data, the techniques and their training approach, appropriate selection of network architecture and required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to larger variations and a smaller number of observed values.

  5. Comparison of modeled backscatter with SAR data at P-band

    NASA Technical Reports Server (NTRS)

    Wang, Yong; Davis, Frank W.; Melack, John M.

    1992-01-01

    In recent years several analytical models were developed to predict microwave scattering by trees and forest canopies. These models contribute to the understanding of radar backscatter over forested regions to the extent that they capture the basic interactions between microwave radiation and tree canopies, understories, and ground layers as functions of incidence angle, wavelength, and polarization. The Santa Barbara microwave backscatter model for woodland (i.e., with discontinuous tree canopies) combines a single-tree backscatter model and a gap probability model. Comparison of model predictions with synthetic aperture radar (SAR) data at L-band (lambda = 0.235 m) is promising, but much work is still needed to test the validity of model predictions at other wavelengths. Here, the validity of the model predictions at P-band (lambda = 0.68 m) was tested for woodland stands at our Mt. Shasta test site.

  6. Downscaler Model for predicting daily air pollution

    EPA Pesticide Factsheets

    This model combines daily ozone and particulate matter monitoring and modeling data from across the U.S. to provide improved fine-scale estimates of air quality in communities and other specific locales.

  7. RNA 3D Structure Modeling by Combination of Template-Based Method ModeRNA, Template-Free Folding with SimRNA, and Refinement with QRNAS.

    PubMed

    Piatkowski, Pawel; Kasprzak, Joanna M; Kumar, Deepak; Magnus, Marcin; Chojnowski, Grzegorz; Bujnicki, Janusz M

    2016-01-01

    RNA encompasses an essential part of all known forms of life. The functions of many RNA molecules are dependent on their ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is laborious and challenging, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that either utilize information derived from known structures of other RNA molecules (by way of template-based modeling) or attempt to simulate the physical process of RNA structure formation (by way of template-free modeling). All computational methods suffer from various limitations that make theoretical models less reliable than high-resolution experimentally determined structures. This chapter provides a protocol for computational modeling of RNA 3D structure that overcomes major limitations by combining two complementary approaches: template-based modeling that is capable of predicting global architectures based on similarity to other molecules but often fails to predict local unique features, and template-free modeling that can predict the local folding, but is limited to modeling the structure of relatively small molecules. Here, we combine the use of a template-based method ModeRNA with a template-free method SimRNA. ModeRNA requires a sequence alignment of the target RNA sequence to be modeled with a template of the known structure; it generates a model that predicts the structure of a conserved core and provides a starting point for modeling of variable regions. SimRNA can be used to fold small RNAs (<80 nt) without any additional structural information, and to refold parts of models for larger RNAs that have a correctly modeled core. ModeRNA can be either downloaded, compiled and run locally or run through a web interface at http://genesilico.pl/modernaserver/. SimRNA is currently available to download for local use as a precompiled software package at http://genesilico.pl/software/stand-alone/simrna and as a web server at http://genesilico.pl/SimRNAweb. For model optimization we use QRNAS, available at http://genesilico.pl/qrnas.

  8. Performance of combined fragmentation and retention prediction for the identification of organic micropollutants by LC-HRMS.

    PubMed

    Hu, Meng; Müller, Erik; Schymanski, Emma L; Ruttkies, Christoph; Schulze, Tobias; Brack, Werner; Krauss, Martin

    2018-03-01

    In nontarget screening, structure elucidation of small molecules from high resolution mass spectrometry (HRMS) data is challenging, particularly the selection of the most likely candidate structure among the many retrieved from compound databases. Several fragmentation and retention prediction methods have been developed to improve this candidate selection. In order to evaluate their performance, we compared two in silico fragmenters (MetFrag and CFM-ID) and two retention time prediction models (based on the chromatographic hydrophobicity index (CHI) and on log D). A set of 78 known organic micropollutants was analyzed by liquid chromatography coupled to an LTQ Orbitrap HRMS with electrospray ionization (ESI) in positive and negative mode, using two fragmentation techniques with different collision energies. Both fragmenters (MetFrag and CFM-ID) performed well for most compounds, ranking the correct candidate structure on average within the top 25% of candidates for ESI+ mode and within the top 22 to 37% for ESI- mode. The rank of the correct candidate structure slightly improved when MetFrag and CFM-ID were combined. For unknown compounds detected in both ESI+ and ESI-, positive mode mass spectra were generally better for further structure elucidation. Both retention prediction models performed reasonably well for the more hydrophobic compounds but not for early-eluting hydrophilic substances. The log D prediction showed a better accuracy than the CHI model. Although the two fragmentation prediction methods are more diagnostic and sensitive for candidate selection, the inclusion of retention prediction, by calculating a consensus score with optimized weighting, can improve the ranking of correct candidates as compared to the individual methods. Graphical abstract: Consensus workflow for combining fragmentation and retention prediction in LC-HRMS-based micropollutant identification.
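
    The weighted consensus scoring described in the last sentence can be sketched in a few lines. The candidate names, per-tool scores, and weights below are all invented for illustration; the paper optimizes the weights rather than fixing them:

```python
import numpy as np

# Hypothetical per-candidate scores for one unknown: two fragmenters and
# a retention-time score, each already normalized to [0, 1]
candidates = ["cand_a", "cand_b", "cand_c", "cand_d"]
metfrag = np.array([0.9, 0.4, 0.7, 0.2])
cfmid = np.array([0.8, 0.5, 0.9, 0.1])
retention = np.array([0.3, 0.9, 0.6, 0.4])

# Illustrative weights; in practice these would be tuned on known compounds
weights = {"metfrag": 0.4, "cfmid": 0.4, "retention": 0.2}
consensus = (weights["metfrag"] * metfrag
             + weights["cfmid"] * cfmid
             + weights["retention"] * retention)
# Rank candidates by descending consensus score
ranked = [c for _, c in sorted(zip(-consensus, candidates))]
print(ranked)
```

Here the retention score only nudges the ordering, mirroring the finding that retention prediction is less diagnostic than fragmentation but still improves the final ranking.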

  9. Modeling the effects of anthropogenic habitat change on savanna snake invasions into African rainforest.

    PubMed

    Freedman, Adam H; Buermann, Wolfgang; Lebreton, Matthew; Chirio, Laurent; Smith, Thomas B

    2009-02-01

    We used a species-distribution modeling approach, ground-based climate data sets, and newly available remote-sensing data on vegetation from the MODIS and Quick Scatterometer sensors to investigate the combined effects of human-caused habitat alterations and climate on potential invasions of rainforest by 3 savanna snake species in Cameroon, Central Africa: the night adder (Causus maculatus), olympic lined snake (Dromophis lineatus), and African house snake (Lamprophis fuliginosus). Models with contemporary climate variables and localities from native savanna habitats showed that the current climate in undisturbed rainforest was unsuitable for any of the snake species due to high precipitation. Limited availability of thermally suitable nest sites and mismatches between important life-history events and prey availability are a likely explanation for the predicted exclusion from undisturbed rainforest. Models with only MODIS-derived vegetation variables and savanna localities predicted invasion in disturbed areas within the rainforest zone, which suggests that human removal of forest cover creates suitable microhabitats that facilitate invasions into rainforest. Models with a combination of contemporary climate, MODIS- and Quick Scatterometer-derived vegetation variables, and forest and savanna localities predicted extensive invasion into rainforest caused by rainforest loss. In contrast, a projection of the present-day species-climate envelope on future climate suggested a reduction in invasion potential within the rainforest zone as a consequence of predicted increases in precipitation. These results emphasize that the combined responses of deforestation and climate change will likely be complex in tropical rainforest systems.

  10. The Impact of Model and Rainfall Forcing Errors on Characterizing Soil Moisture Uncertainty in Land Surface Modeling

    NASA Technical Reports Server (NTRS)

    Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.

    2013-01-01

    The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
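
    The rank histogram diagnostic used above can be computed in a few lines. The ensemble here is synthetic and drawn from the same distribution as the observations, so the histogram should come out roughly flat; an under-dispersive ensemble like the one described in the abstract would instead pile counts into the two end bins (the U-shape):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_ens = 500, 9
# Hypothetical soil moisture ensemble (n_obs cases, n_ens members) and truths
ensemble = rng.normal(size=(n_obs, n_ens))
obs = rng.normal(size=n_obs)

# Rank of each observation within its sorted ensemble: 0..n_ens
ranks = (ensemble < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_ens + 1)
print(hist)
```

Each of the `n_ens + 1` bins should receive about `n_obs / (n_ens + 1)` counts when the ensemble spread is statistically consistent with the truth.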

  11. A Fatigue Life Prediction Model of Welded Joints under Combined Cyclic Loading

    NASA Astrophysics Data System (ADS)

    Goes, Keurrie C.; Camarao, Arnaldo F.; Pereira, Marcos Venicius S.; Ferreira Batalha, Gilmar

    2011-01-01

    A practical and robust methodology is developed to evaluate the fatigue life of seam-welded joints subjected to combined cyclic loading. The fatigue analysis was conducted in a virtual environment. The FE stress results from each loading were imported into the fatigue code FE-Fatigue and combined to perform the fatigue life prediction using the S-N (stress versus life) method. The measurement or modelling of the residual stresses resulting from the welding process is not part of this work; however, thermal and metallurgical effects, such as distortions and residual stresses, were considered indirectly through fatigue curve corrections in the samples investigated. A tube-plate specimen was subjected to combined cyclic loading (bending and torsion) with constant amplitude. The virtual durability analysis was calibrated against these laboratory tests and against design codes such as BS7608 and Eurocode 3. The feasibility and application of the proposed numerical-experimental methodology and its contributions to technical development are discussed. Major challenges associated with this modelling and proposals for improvement are finally presented.
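    The S-N workflow for combined loading can be sketched in a few lines: form an equivalent stress amplitude from the bending and torsion components, then invert a stress-life curve for cycles to failure. The von Mises combination and the Basquin-form constants below are generic textbook assumptions, not the weld-class curves of BS7608 or Eurocode 3.

```python
import math

def equivalent_amplitude(bending_amp, torsion_amp):
    """Von Mises equivalent stress amplitude (MPa) for in-phase
    combined bending and torsion."""
    return math.sqrt(bending_amp**2 + 3.0 * torsion_amp**2)

def basquin_life(stress_amp, sigma_f=900.0, b=-0.1):
    """Cycles to failure from Basquin's relation
    sigma_a = sigma_f * (2N)^b  ->  N = 0.5 * (sigma_a / sigma_f)^(1/b).
    sigma_f (fatigue strength coefficient, MPa) and the exponent b
    are illustrative values only."""
    return 0.5 * (stress_amp / sigma_f) ** (1.0 / b)

# Hypothetical load case for a tube-plate specimen
s_eq = equivalent_amplitude(bending_amp=120.0, torsion_amp=60.0)
n_cycles = basquin_life(s_eq)
```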

  12. Discovering novel phenotypes with automatically inferred dynamic models: a partial melanocyte conversion in Xenopus

    NASA Astrophysics Data System (ADS)

    Lobo, Daniel; Lobikin, Maria; Levin, Michael

    2017-01-01

    Progress in regenerative medicine requires reverse-engineering cellular control networks to infer perturbations with desired systems-level outcomes. Such dynamic models allow phenotypic predictions for novel perturbations to be rapidly assessed in silico. Here, we analyzed a Xenopus model of conversion of melanocytes to a metastatic-like phenotype only previously observed in an all-or-none manner. Prior in vivo genetic and pharmacological experiments showed that individual animals either fully convert or remain normal, at some characteristic frequency after a given perturbation. We developed a Machine Learning method which inferred a model explaining this complex, stochastic all-or-none dataset. We then used this model to ask how a new phenotype could be generated: animals in which only some of the melanocytes converted. Systematically performing in silico perturbations, the model predicted that a combination of altanserin (5HTR2 inhibitor), reserpine (VMAT inhibitor), and VP16-XlCreb1 (constitutively active CREB) would break the all-or-none concordance. Remarkably, applying the predicted combination of three reagents in vivo revealed precisely the expected novel outcome, resulting in partial conversion of melanocytes within individuals. This work demonstrates the capability of automated analysis of dynamic models of signaling networks to discover novel phenotypes and predictively identify specific manipulations that can reach them.
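    The "systematically performing in silico perturbations" step amounts to an exhaustive search over reagent combinations against a calibrated predictive model. The sketch below shows that search pattern only; the model stub and reagent names are illustrative stand-ins, not the inferred Xenopus signaling network.

```python
from itertools import combinations

def predict_outcome(perturbations):
    """Toy stand-in for the inferred dynamic model: maps a set of
    applied perturbations to a predicted phenotype. Hardcoded here
    purely to illustrate the search loop."""
    p = set(perturbations)
    if {"altanserin", "reserpine", "VP16-XlCreb1"} <= p:
        return "partial"        # breaks the all-or-none concordance
    if p:
        return "all-or-none"
    return "normal"

reagents = ["altanserin", "reserpine", "VP16-XlCreb1", "nicotine"]
# Exhaustive in silico screen over all non-empty reagent combinations
hits = [combo
        for r in range(1, len(reagents) + 1)
        for combo in combinations(reagents, r)
        if predict_outcome(combo) == "partial"]
```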

  13. Predicting mortality rates: Comparison of an administrative predictive model (hospital standardized mortality ratio) with a physiological predictive model (Acute Physiology and Chronic Health Evaluation IV)--A cross-sectional study.

    PubMed

    Toua, Rene Elaine; de Kock, Jacques Erasmus; Welzel, Tyson

    2016-02-01

    Direct comparison of mortality rates has limited value because most deaths are due to the disease process, and predicting the risk of death accurately remains a challenge. A cross-sectional study compared the expected mortality rate calculated with an administrative model (the hospital standardized mortality ratio) to that calculated with a physiological model, Acute Physiology and Chronic Health Evaluation IV. The combined cohort and stratified samples (<0.1, 0.1-0.5, or >0.5 predicted mortality) were considered. A total of 47,982 patients were scored from 1 July 2013 to 30 June 2014, and 46,061 records were included in the analysis. A moderate correlation was shown for the combined cohort (Pearson correlation index, 0.618; 95% confidence interval [CI], 0.380-0.779; R² = 0.38). A very good correlation was shown for the <0.1 stratum (Pearson correlation index, 0.884; R² = 0.78; 95% CI, 0.79-0.937) and a moderate correlation for the 0.1-0.5 stratum (Pearson correlation index, 0.782; R² = 0.61; 95% CI, 0.623-0.879). There was no significant positive correlation for the >0.5 stratum (Pearson correlation index, 0.087; R² = 0.007; 95% CI, -0.23 to 0.387). Below 0.1 predicted mortality the models are interchangeable, but, despite a moderate correlation, the hospital standardized mortality ratio cannot be used to predict mortality above 0.1. Copyright © 2015 Elsevier Inc. All rights reserved.
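    The study's central statistic, a Pearson correlation with a 95% confidence interval between two models' predicted mortality rates, can be reproduced in outline. The Fisher-z interval below is a standard construction; the data are synthetic placeholders, not the study cohort.

```python
import math
import random

def pearson_with_ci(x, y, z_crit=1.96):
    """Pearson r with a 95% Fisher-z confidence interval."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher transform
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
    return r, (lo, hi)

# Synthetic correlated predictions from two hypothetical models
random.seed(1)
admin = [random.random() for _ in range(200)]
physio = [a + random.gauss(0.0, 0.3) for a in admin]
r, (lo, hi) = pearson_with_ci(admin, physio)
```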

  14. Combination of searches for WW, WZ, and ZZ resonances in pp collisions at √s = 8 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Aben, R.; Abolins, M.; Abouzeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. 
L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Basye, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bevan, A. J.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Biesuz, N. V.; Biglietti, M.; Bilbao de Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bjergaard, D. M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blanco, J. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. 
S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozic, I.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Bruschi, M.; Bruscino, N.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabrera Urbán, S.; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. 
R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerda Alberich, L.; Cerio, B. C.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Cogan, J. G.; Colasurdo, L.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cúth, J.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D'Auria, S.; D'Onofrio, M.; da Cunha Sargedas de Sousa, M. 
J.; da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; de, K.; de Asmundis, R.; de Benedetti, A.; de Castro, S.; de Cecco, S.; de Groot, N.; de Jong, P.; de la Torre, H.; de Lorenzi, F.; de Pedis, D.; de Salvo, A.; de Sanctis, U.; de Santo, A.; de Vivie de Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; Della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; di Ciaccio, A.; di Ciaccio, L.; di Domenico, A.; di Donato, C.; di Girolamo, A.; di Girolamo, B.; di Mattia, A.; di Micco, B.; di Nardo, R.; di Simone, A.; di Sipio, R.; di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; Do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. 
C.; Edson, W.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Fernandez Perez, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, G.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Flowerdew, M. J.; Formica, A.; Forti, A.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; French, S. T.; Fressard-Batraneanu, S. M.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Garay Walls, F. 
M.; Garberson, F.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Goddard, J. R.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Pinto Firmino da Costa, J.; Gonella, L.; González de La Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Gutierrez Ortiz, N. 
G.; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Hall, D.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, L.; Hejbal, J.; Helary, L.; Hellman, S.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Hengler, C.; Henkelmann, S.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Hernández Jiménez, Y.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohlfeld, M.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, Q.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. 
A.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Iturbe Ponce, J. M.; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, Y.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khalil-Zada, F.; Khandanyan, H.; Khanov, A.; Kharlamov, A. G.; Khoo, T. 
J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. 
R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lambourne, L.; Lammers, S.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; Leblanc, M.; Lecompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Leroy, C.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, A.; Leyko, A. M.; Leyton, M.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Looper, K. A.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lubatti, H. 
J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; MacDonald, C. M.; Maček, B.; Machado Miguens, J.; Macina, D.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mann, A.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mantoani, M.; Mapelli, L.; March, L.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; Martin Dit Latour, B.; Martinez, M.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; McFayden, J. A.; McHedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Mellado Garcia, B. R.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. 
S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer Zu Theenhausen, H.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morton, A.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murillo Quijada, J. A.; Murray, W. J.; Musheghyan, H.; Musto, E.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. 
B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nuti, F.; O'Grady, F.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero Y Garzon, G.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paganis, E.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Pan, Y. B.; Panagiotopoulou, E. St.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passaggio, S.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Penc, O.; Peng, C.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. 
D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrucci, F.; Pettersson, N. E.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pina, J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Pizio, C.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pozo Astigarraga, M. E.; Pralavorio, P.; Pranko, A.; Prasad, S.; Prell, S.; Price, D.; Price, L. E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Robertson, S. 
H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Roe, S.; Røhne, O.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Ryzhov, A.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Saddique, A.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Salazar Loyola, J. E.; Saleem, M.; Salek, D.; Sales de Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitt, S.; Schmitz, S.; Schneider, B.; Schnellbach, Y. 
J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Scifo, E.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. 
Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosa, D.; Sosebee, M.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Spearman, W. R.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; St. Denis, R. D.; Stabile, A.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Succurro, A.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, F. E.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. 
P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thun, R. P.; Tibbetts, M. J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Ueda, I.; Ueno, R.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Vallecorsa, S.; Valls Ferrer, J. A.; van den Wollenberg, W.; van der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloce, L. 
M.; Veloso, F.; Velz, T.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vivarelli, I.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Wharton, A. M.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. 
M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamada, M.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Zur Nedden, M.; Zurzolo, G.; Zwalinski, L.; Atlas Collaboration

    2016-04-01

    The ATLAS experiment at the CERN Large Hadron Collider has performed searches for new, heavy bosons decaying to WW, WZ and ZZ final states in multiple decay channels using 20.3 fb⁻¹ of pp collision data at √s = 8 TeV. In the current study, the results of these searches are combined to provide a more stringent test of models predicting heavy resonances with couplings to vector bosons. Direct searches for a charged diboson resonance decaying to WZ in the ℓνℓ′ℓ′ (ℓ = μ, e), ℓℓqq̄, ℓνqq̄ and fully hadronic final states are combined, and upper limits on the rate of production times branching ratio to WZ bosons are compared with predictions of an extended gauge model with a heavy W′ boson. In addition, direct searches for a neutral diboson resonance decaying to WW and ZZ in the ℓℓqq̄, ℓνqq̄, and fully hadronic final states are combined, and upper limits on the rate of production times branching ratio to WW and ZZ bosons are compared with predictions for a heavy, spin-2 graviton in an extended Randall-Sundrum model in which the Standard Model fields are allowed to propagate in the bulk of the extra dimension.

  15. Combination of searches for WW, WZ, and ZZ resonances in pp collisions at √s = 8 TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.

    2016-02-11

    In this study, the ATLAS experiment at the CERN Large Hadron Collider has performed searches for new, heavy bosons decaying to WW, WZ, and ZZ final states in multiple decay channels using 20.3 fb⁻¹ of pp collision data at √s = 8 TeV. The results of these searches are combined to provide a more stringent test of models predicting heavy resonances with couplings to vector bosons. Direct searches for a charged diboson resonance decaying to WZ in the ℓνℓ′ℓ′ (ℓ = μ, e), ℓℓqq̄, ℓνqq̄ and fully hadronic final states are combined, and upper limits on the rate of production times branching ratio to WZ bosons are compared with predictions of an extended gauge model with a heavy W′ boson. Also, direct searches for a neutral diboson resonance decaying to WW and ZZ in the ℓℓqq̄, ℓνqq̄, and fully hadronic final states are combined, and upper limits on the rate of production times branching ratio to WW and ZZ bosons are compared with predictions for a heavy, spin-2 graviton in an extended Randall-Sundrum model in which the Standard Model fields are allowed to propagate in the bulk of the extra dimension.

  16. Clinical responses to ERK inhibition in BRAFV600E-mutant colorectal cancer predicted using a computational model.

    PubMed

    Kirouac, Daniel C; Schaefer, Gabriele; Chan, Jocelyn; Merchant, Mark; Orr, Christine; Huang, Shih-Min A; Moffat, John; Liu, Lichuan; Gadkar, Kapil; Ramanujan, Saroja

    2017-01-01

    Approximately 10% of colorectal cancers harbor BRAF V600E mutations, which constitutively activate the MAPK signaling pathway. We sought to determine whether ERK inhibitor (GDC-0994)-containing regimens may be of clinical benefit to these patients based on data from in vitro (cell line) and in vivo (cell- and patient-derived xenograft) studies of cetuximab (EGFR), vemurafenib (BRAF), cobimetinib (MEK), and GDC-0994 (ERK) combinations. Preclinical data were used to develop a mechanism-based computational model linking cell surface receptor (EGFR) activation, the MAPK signaling pathway, and tumor growth. Clinical predictions of anti-tumor activity were enabled by the use of tumor response data from three Phase 1 clinical trials testing combinations of EGFR, BRAF, and MEK inhibitors. Simulated responses to GDC-0994 monotherapy (overall response rate = 17%) accurately predicted results from a Phase 1 clinical trial regarding the number of responding patients (2/18) and the distribution of tumor size changes ("waterfall plot"). Prospective simulations were then used to evaluate potential drug combinations and predictive biomarkers for increasing responsiveness to MEK/ERK inhibitors in these patients.
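
    The paper's mechanism-based model is far richer than can be shown here, but its core logic (an inhibitor reduces MAPK pathway activity, and tumor volume grows at a rate proportional to the remaining activity) can be sketched as follows. All parameter values below are hypothetical illustrations, not fitted values from the study.

    ```python
    def pathway_activity(drug_conc, ic50):
        """Simple inhibitory Emax term (Hill coefficient 1): fraction of MAPK
        pathway activity remaining at a given ERK-inhibitor concentration."""
        return 1.0 / (1.0 + drug_conc / ic50)

    def tumor_volume(days, growth_rate, activity, dt=0.1, v0=1.0):
        """Forward-Euler integration of dV/dt = k * a * V: tumor volume grows
        at a rate scaled by remaining pathway activity."""
        v = v0
        for _ in range(int(days / dt)):
            v += dt * growth_rate * activity * v
        return v

    # Hypothetical 28-day simulation with and without an ERK inhibitor
    untreated = tumor_volume(28.0, 0.05, pathway_activity(0.0, ic50=1.0))
    treated = tumor_volume(28.0, 0.05, pathway_activity(4.0, ic50=1.0))  # 80% inhibition
    ```

    A real pharmacodynamic model would also couple receptor activation, drug pharmacokinetics, and feedback within the pathway; this sketch only illustrates the growth-inhibition link.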

  17. Modeling plant interspecific interactions from experiments with perennial crop mixtures to predict optimal combinations.

    PubMed

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-12-01

    The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures using a generalized Lotka-Volterra model, i.e., a model in which the interspecific interactions are not restricted to pure competition. We estimated the model parameters (carrying capacities and interaction coefficients) from the observed biomass of monocultures and bicultures, respectively, measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating combinations of more than two species and comparing them with the experimental polyculture data. Overall, the theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
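
    As a rough illustration of the approach (with hypothetical parameters, not the paper's fitted values), a generalized Lotka-Volterra system can be integrated numerically from monoculture carrying capacities and pairwise interaction coefficients, allowing off-diagonal coefficients of either sign:

    ```python
    import numpy as np

    def simulate_glv(x0, r, K, A, dt=0.01, steps=20000):
        """Forward-Euler integration of a generalized Lotka-Volterra system:
        dx_i/dt = r_i x_i (1 - (x_i + sum_j A_ij x_j) / K_i).
        Off-diagonal A_ij may be negative (facilitation) or positive
        (competition), so interactions are not restricted to competition."""
        x = np.array(x0, dtype=float)
        r, K, A = np.asarray(r, float), np.asarray(K, float), np.asarray(A, float)
        for _ in range(steps):
            pressure = x + A @ x                    # intra- plus interspecific load
            x = np.maximum(x + dt * r * x * (1.0 - pressure / K), 0.0)
        return x

    # Hypothetical two-species example: K from monocultures, A from bicultures
    K = np.array([10.0, 8.0])
    A = np.array([[0.0, -0.2],    # species 2 mildly facilitates species 1
                  [0.5,  0.0]])   # species 1 competes with species 2
    mix = simulate_glv([1.0, 1.0], r=[1.0, 1.0], K=K, A=A)
    ```

    Simulating every sown and unsown combination this way and ranking equilibrium biomass is how the most productive mixtures could then be identified.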

  18. Simulation of Semi-Solid Material Mechanical Behavior Using a Combined Discrete/Finite Element Method

    NASA Astrophysics Data System (ADS)

    Sistaninia, M.; Phillion, A. B.; Drezet, J.-M.; Rappaz, M.

    2011-01-01

    As a necessary step toward the quantitative prediction of hot tearing defects, a three-dimensional stress-strain simulation based on a combined finite element (FE)/discrete element method (DEM) has been developed that is capable of predicting the mechanical behavior of semisolid metallic alloys during solidification. The solidification model used for generating the initial solid-liquid structure is based on a Voronoi tessellation of randomly distributed nucleation centers and a solute diffusion model for each element of this tessellation. At a given fraction of solid, the deformation is then simulated with the solid grains being modeled using an elastoviscoplastic constitutive law, whereas the remaining liquid layers at grain boundaries are approximated by flexible connectors, each consisting of a spring element and a damper element acting in parallel. The model predictions have been validated against Al-Cu alloy experimental data from the literature. The results show that a combined FE/DEM approach is able to express the overall mechanical behavior of semisolid alloys at the macroscale based on the morphology of the grain structure. For the first time, the localization of strain in the intergranular regions is taken into account. Thus, this approach constitutes an indispensable step toward the development of a comprehensive model of hot tearing.
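
    A spring and damper acting in parallel is a Kelvin-Voigt element, whose stress response is simply the sum of the two contributions. The values below are illustrative, not calibrated to the Al-Cu data:

    ```python
    def connector_stress(strain, strain_rate, E, eta):
        """Kelvin-Voigt element (spring E and damper eta in parallel):
        sigma = E * strain + eta * strain_rate. This is the idealization
        used for the intergranular liquid-film connectors."""
        return E * strain + eta * strain_rate

    # Hypothetical numbers: 1% strain at a slow strain rate
    sigma = connector_stress(strain=0.01, strain_rate=1e-3, E=2.0e9, eta=1.0e6)
    ```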

  19. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model that takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of the SCB into consideration. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of the SCB; then, based on the characteristics of the fitting residuals, a time-series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; finally, the results from the two models are combined to obtain the final SCB prediction values. Prediction tests using precise SCB data from the IGS (International GNSS Service) show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method overcomes the insufficiency of the ARIMA model in model identification and order determination.
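
    The two-stage idea can be sketched with numpy on a synthetic clock-bias series. Here an AR(1) fit stands in for the full ARIMA step, and every numeric value is hypothetical, chosen only to mimic a trend plus a daily periodic term plus correlated noise:

    ```python
    import numpy as np

    def fit_trend_periodic(t, y, period):
        """Least-squares fit of a quadratic polynomial plus one periodic term:
        y(t) ~ a0 + a1*t + a2*t^2 + b1*sin(2*pi*t/period) + b2*cos(2*pi*t/period)."""
        X = np.column_stack([np.ones_like(t), t, t**2,
                             np.sin(2 * np.pi * t / period),
                             np.cos(2 * np.pi * t / period)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef, X @ coef

    # Synthetic SCB series (seconds vs. hours): trend + 24 h cycle + AR(1) noise
    rng = np.random.default_rng(1)
    t = np.arange(0.0, 48.0, 0.25)
    noise = np.zeros_like(t)
    for i in range(1, len(t)):
        noise[i] = 0.8 * noise[i - 1] + rng.normal(0.0, 1e-9)
    y = 1e-6 + 2e-7 * t + 1e-9 * t**2 + 5e-8 * np.sin(2 * np.pi * t / 24.0) + noise

    coef, fitted = fit_trend_periodic(t, y, period=24.0)
    resid = y - fitted
    # Stage 2: AR(1) coefficient of the residuals (stand-in for ARIMA)
    phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    # Combined one-step prediction: deterministic part plus residual forecast
    next_pred = fitted[-1] + phi * resid[-1]
    ```

    In the paper the residual model is a properly identified ARIMA process; the AR(1) term here only shows how the deterministic and stochastic predictions are combined.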

  20. Predicting nucleic acid binding interfaces from structural models of proteins.

    PubMed

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from the surfaces of protein structural models obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models predict the real nucleic acid binding interfaces better than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.
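
    One simple way to combine per-model patches, shown here as a sketch rather than the paper's exact procedure, is a majority vote over surface residues: a residue belongs to the consensus patch if most models place it inside their largest positive patch. The masks below are hypothetical.

    ```python
    import numpy as np

    def consensus_patch(patch_masks, min_votes=None):
        """Combine binary interface patches predicted on an ensemble of
        structural models by majority vote over surface residues."""
        patch_masks = np.asarray(patch_masks)
        votes = patch_masks.sum(axis=0)
        if min_votes is None:
            min_votes = patch_masks.shape[0] // 2 + 1   # strict majority
        return votes >= min_votes

    # Three hypothetical models, 8 surface residues (1 = in largest patch)
    masks = np.array([[1, 1, 1, 0, 0, 0, 1, 0],
                      [1, 1, 0, 0, 0, 1, 1, 0],
                      [1, 0, 1, 0, 0, 0, 1, 1]])
    combined = consensus_patch(masks)
    ```

    Residues predicted by only one model drop out, which is one plausible reason an ensemble patch tracks the true interface better than any single model's patch.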

  1. Predicting Salt Permeability Coefficients in Highly Swollen, Highly Charged Ion Exchange Membranes.

    PubMed

    Kamcev, Jovan; Paul, Donald R; Manning, Gerald S; Freeman, Benny D

    2017-02-01

    This study presents a framework for predicting salt permeability coefficients in ion exchange membranes in contact with an aqueous salt solution. The model, based on the solution-diffusion mechanism, was tested using experimental salt permeability data for a series of commercial ion exchange membranes. Equilibrium salt partition coefficients were calculated using a thermodynamic framework (i.e., Donnan theory), incorporating Manning's counterion condensation theory to calculate ion activity coefficients in the membrane phase and the Pitzer model to calculate ion activity coefficients in the solution phase. The model predicted NaCl partition coefficients in a cation exchange membrane and two anion exchange membranes, as well as MgCl2 partition coefficients in a cation exchange membrane, remarkably well at higher external salt concentrations (>0.1 M) and reasonably well at lower external salt concentrations (<0.1 M) with no adjustable parameters. Membrane ion diffusion coefficients were calculated using a combination of the Mackie and Meares model, which assumes ion diffusion in water-swollen polymers is affected by a tortuosity factor, and a model developed by Manning to account for electrostatic effects. Agreement between experimental and predicted salt diffusion coefficients was good with no adjustable parameters. Calculated salt partition and diffusion coefficients were combined within the framework of the solution-diffusion model to predict salt permeability coefficients. Agreement between model and experimental data was remarkably good. Additionally, a simplified version of the model was used to elucidate connections between membrane structure (e.g., fixed charge group concentration) and salt transport properties.
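
    The final combination step is simple arithmetic: in the solution-diffusion model the salt permeability is the product of the partition coefficient and the membrane-phase diffusivity, and the Mackie-Meares tortuosity correction scales the free-solution diffusivity by (φ/(2−φ))², with φ the water volume fraction. The numbers below are illustrative only (the electrostatic Manning correction is omitted):

    ```python
    def mackie_meares_factor(phi_w):
        """Mackie-Meares tortuosity correction for a water-swollen polymer:
        D_membrane / D_solution = (phi / (2 - phi))**2."""
        return (phi_w / (2.0 - phi_w)) ** 2

    def salt_permeability(K_s, D_mem):
        """Solution-diffusion model: P = partition coeff * membrane diffusivity."""
        return K_s * D_mem

    # Hypothetical values: approximate NaCl diffusivity in water, a 40% water
    # fraction membrane, and a partition coefficient of 0.05
    D_water = 1.6e-5                                  # cm^2/s
    D_mem = D_water * mackie_meares_factor(0.4)       # tortuosity-corrected
    P_s = salt_permeability(0.05, D_mem)              # cm^2/s
    ```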

  2. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques have been explored over the last few decades in attempts to improve the spectroscopic prediction of soil organic carbon (SOC), so devising a novel, more powerful, and more accurate predictive approach has become a challenging task. One possibility, following ensemble learning theory, is to combine several individual predictions into a single final one. As this approach performs best when combining inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms thereof) at each wavelength and 2) absorption feature parameters. Accordingly, we applied four calibration techniques, two per type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to the individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that selects the best solution among all possibilities. The approach was tested on soil samples taken from the surface horizon at four sites differing in the prevailing soil units. By employing the ensemble predictive model, the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. In general, the ensemble model reduced the largest deviations of predicted vs. observed values relative to the individual predictions, so the correlation cloud became thinner, as desired.
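
    A minimal sketch of the weighted-average ensemble idea: brute-force search over convex weights, keeping the combination with the lowest RMSE against observations. The abstract does not specify the automated procedure, so the grid search below is an assumption, and the toy predictions are invented (one model biased high, one biased low):

    ```python
    import numpy as np
    from itertools import product

    def best_ensemble_weights(preds, observed, grid_step=0.1):
        """Grid-search convex weights (non-negative, summing to one) for a
        weighted-average ensemble, minimizing RMSE against observations."""
        preds = np.asarray(preds)
        best_w, best_rmse = None, np.inf
        levels = np.arange(0.0, 1.0 + 1e-9, grid_step)
        for w in product(levels, repeat=preds.shape[0]):
            if abs(sum(w) - 1.0) > 1e-9:
                continue                          # keep only convex combinations
            combined = np.asarray(w) @ preds
            rmse = np.sqrt(np.mean((combined - observed) ** 2))
            if rmse < best_rmse:
                best_w, best_rmse = np.asarray(w), rmse
        return best_w, best_rmse

    preds = np.array([[1.2, 2.2, 3.2],            # model A, biased high
                      [0.8, 1.8, 2.8]])           # model B, biased low
    observed = np.array([1.0, 2.0, 3.0])
    w, rmse = best_ensemble_weights(preds, observed)
    ```

    With opposite biases the best weights split evenly and the ensemble error falls below either individual model's, which mirrors the deviation-reduction effect reported in the abstract.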

  3. Cross-trial prediction of treatment outcome in depression: a machine learning approach.

    PubMed

    Chekroud, Adam Mourad; Zotti, Ryan Joseph; Shehzad, Zarrar; Gueorguieva, Ralitza; Johnson, Marcia K; Trivedi, Madhukar H; Cannon, Tyrone D; Krystal, John Harrison; Corlett, Philip Robert

    2016-03-01

    Antidepressant treatment efficacy is low, but might be improved by matching patients to interventions. At present, clinicians have no empirically validated mechanisms to assess whether a patient with depression will respond to a specific antidepressant. We aimed to develop an algorithm to assess whether patients will achieve symptomatic remission from a 12-week course of citalopram. We used patient-reported data from patients with depression (n=4041, with 1949 completers) from level 1 of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D; ClinicalTrials.gov, number NCT00021528) to identify variables that were most predictive of treatment outcome, and used these variables to train a machine-learning model to predict clinical remission. We externally validated the model in the escitalopram treatment group (n=151) of an independent clinical trial (Combining Medications to Enhance Depression Outcomes [COMED]; ClinicalTrials.gov, number NCT00590863). We identified 25 variables that were most predictive of treatment outcome from 164 patient-reportable variables, and used these to train the model. The model was internally cross-validated, and predicted outcomes in the STAR*D cohort with accuracy significantly above chance (64·6% [SD 3·2]; p<0·0001). The model was externally validated in the escitalopram treatment group (n=151) of COMED (accuracy 59·6%, p=0·043). The model also performed significantly above chance in a combined escitalopram-bupropion treatment group in COMED (n=134; accuracy 59·7%, p=0·023), but not in a combined venlafaxine-mirtazapine group (n=140; accuracy 51·4%, p=0·53), suggesting specificity of the model to underlying mechanisms. Building statistical models by mining existing clinical trial data can enable prospective identification of patients who are likely to respond to a specific antidepressant. Yale University. Copyright © 2016 Elsevier Ltd. All rights reserved.
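
    The train-then-externally-validate workflow can be sketched with a deliberately tiny stand-in model: a single-feature threshold classifier fit on one (synthetic) cohort and scored on a second cohort it never saw. This is only the validation pattern, not the paper's 25-variable model, and all data below are invented:

    ```python
    import numpy as np

    def train_threshold(X, y):
        """Exhaustively pick the (feature, cut-point) pair with the best
        training accuracy: a minimal stand-in for a real ML model."""
        best_j, best_t, best_acc = 0, 0.0, -1.0
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                acc = np.mean((X[:, j] >= t) == y)
                if acc > best_acc:
                    best_j, best_t, best_acc = j, t, acc
        return best_j, best_t

    def predict_threshold(model, X):
        j, t = model
        return X[:, j] >= t

    # Hypothetical training cohort: column 0 informative, column 1 noise
    X_train = np.array([[1.0, 5.0], [2.0, 1.0], [3.0, 4.0],
                        [4.0, 2.0], [5.0, 6.0], [6.0, 3.0]])
    y_train = np.array([False, False, False, True, True, True])
    model = train_threshold(X_train, y_train)

    # "External validation": a separate cohort never seen during training
    X_ext = np.array([[3.5, 9.0], [4.5, 0.0]])
    y_ext = np.array([False, True])
    ext_acc = np.mean(predict_threshold(model, X_ext) == y_ext)
    ```

    The key design point mirrored here is that the external cohort plays no role in feature or threshold selection, so its accuracy is an honest estimate of generalization.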

  4. A strategy to improve the identification reliability of the chemical constituents by high-resolution mass spectrometry-based isomer structure prediction combined with a quantitative structure retention relationship analysis: Phthalide compounds in Chuanxiong as a test case.

    PubMed

    Zhang, Qingqing; Huo, Mengqi; Zhang, Yanling; Qiao, Yanjiang; Gao, Xiaoyan

    2018-06-01

    High-resolution mass spectrometry (HRMS) provides a powerful tool for the rapid analysis and identification of compounds in herbs. However, the diversity and large differences in the content of the chemical constituents in herbal medicines, especially isomerisms, are a great challenge for mass spectrometry-based structural identification. In the current study, a new strategy for the structural characterization of potential new phthalide compounds was proposed by isomer structure predictions combined with a quantitative structure-retention relationship (QSRR) analysis using phthalide compounds in Chuanxiong as an example. This strategy consists of three steps. First, the structures of phthalide compounds were reasonably predicted on the basis of the structure features and MS/MS fragmentation patterns: (1) the collected raw HRMS data were preliminarily screened by an in-house database; (2) the MS/MS fragmentation patterns of the analogous compounds were summarized; (3) the reported phthalide compounds were identified, and the structures of the isomers were reasonably predicted. Second, the QSRR model was established and verified using representative phthalide compound standards. Finally, the retention times of the predicted isomers were calculated by the QSRR model, and the structures of these peaks were rationally characterized by matching retention times of the detected chromatographic peaks and the predicted isomers. A multiple linear regression QSRR model in which 6 physicochemical variables were screened was built using 23 phthalide standards. The retention times of the phthalide isomers in Chuanxiong were well predicted by the QSRR model combined with reasonable structure predictions (R 2 =0.955). A total of 81 peaks were detected from Chuanxiong and assigned to reasonable structures, and 26 potential new phthalide compounds were structurally characterized. 
This strategy can improve the identification efficiency and reliability of homologues in complex materials. Copyright © 2018 Elsevier B.V. All rights reserved.
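The multiple-linear-regression QSRR step described above can be sketched in a few lines. The descriptor matrix, coefficient values, and noise level below are invented for illustration; only the counts (23 standards, 6 physicochemical variables) come from the abstract:

```python
import numpy as np

# Hypothetical descriptor matrix: rows = phthalide standards,
# columns = screened physicochemical variables (e.g. logP, polar surface area, ...).
rng = np.random.default_rng(0)
n_standards, n_descriptors = 23, 6
X = rng.normal(size=(n_standards, n_descriptors))
true_coef = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.3])          # assumed
t_R = 10.0 + X @ true_coef + rng.normal(scale=0.1, size=n_standards)  # retention times (min)

# Fit the multiple linear regression QSRR by ordinary least squares.
A = np.column_stack([np.ones(n_standards), X])   # prepend intercept column
coef, *_ = np.linalg.lstsq(A, t_R, rcond=None)

# Predict the retention time of a "new" isomer from its descriptors alone,
# which is then matched against detected chromatographic peaks.
x_new = rng.normal(size=n_descriptors)
t_pred = coef[0] + x_new @ coef[1:]

# Goodness of fit on the calibration set (the paper reports R² = 0.955).
resid = t_R - A @ coef
r2 = 1.0 - (resid @ resid) / ((t_R - t_R.mean()) @ (t_R - t_R.mean()))
```

The model's role in the workflow is only the last step: once candidate isomer structures exist, their predicted retention times are compared with observed peaks.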

  5. High-resolution spatiotemporal mapping of PM2.5 concentrations at Mainland China using a combined BME-GWR technique

    NASA Astrophysics Data System (ADS)

    Xiao, Lu; Lang, Yichao; Christakos, George

    2018-01-01

With rapid economic development, industrialization and urbanization, ambient PM2.5 has become a major air pollutant linked to respiratory, heart and lung diseases. In China, PM2.5 pollution constitutes an extreme environmental and social problem of widespread public concern. In this work we estimate ground-level PM2.5 from satellite-derived aerosol optical depth (AOD), topography data, meteorological data, and pollutant emissions using an integrative technique. In particular, Geographically Weighted Regression (GWR) analysis was combined with Bayesian Maximum Entropy (BME) theory to assess the spatiotemporal characteristics of PM2.5 exposure in a large region of China and generate informative PM2.5 space-time predictions (estimates). It was found that, due to its integrative character, the combined BME-GWR method offers certain improvements in the space-time prediction of PM2.5 concentrations over China compared to previous techniques. The combined BME-GWR technique generated realistic maps of the space-time PM2.5 distribution, and its performance was superior to that of seven previous studies of satellite-derived PM2.5 concentrations in China in terms of prediction accuracy. The purely spatial GWR model can only be used at a fixed time, whereas the integrative BME-GWR approach accounts for cross space-time dependencies and can predict PM2.5 concentrations in the composite space-time domain. The 10-fold cross-validation results of BME-GWR modeling (R² = 0.883, RMSE = 11.39 μg/m³) demonstrated a high level of space-time PM2.5 prediction (estimation) accuracy over China, revealing a definite trend of severe PM2.5 levels from the northern coast toward inland China (Nov 2015-Feb 2016). Future work should focus on incorporating higher-resolution AOD data and related air pollutants, and on developing better satellite-based models for space-time PM2.5 prediction.
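The GWR half of the combined technique amounts to a locally weighted least-squares fit at each prediction location. A minimal sketch on an entirely synthetic AOD-PM2.5 relationship follows; the Gaussian kernel, the spatially varying slope, and all constants are assumptions, not values from the study:

```python
import numpy as np

def gwr_fit_at(u, coords, X, y, bandwidth):
    """Weighted least-squares fit at location u with a Gaussian distance kernel,
    the core step of Geographically Weighted Regression (GWR)."""
    d = np.linalg.norm(coords - u, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
    A = np.column_stack([np.ones(len(y)), X])         # local intercept + predictors
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # local coefficients

# Toy data: a PM2.5-like response driven by an AOD-like predictor whose
# coefficient grows smoothly from west to east (all numbers synthetic).
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
aod = rng.uniform(0.1, 1.0, size=200)
local_slope = 20.0 + 2.0 * coords[:, 0]               # spatially varying effect
pm25 = 5.0 + local_slope * aod + rng.normal(scale=0.5, size=200)

beta_west = gwr_fit_at(np.array([1.0, 5.0]), coords, aod[:, None], pm25, bandwidth=2.0)
beta_east = gwr_fit_at(np.array([9.0, 5.0]), coords, aod[:, None], pm25, bandwidth=2.0)
```

The fitted AOD coefficient differs between the two locations, which is exactly what a single global regression cannot express; BME then layers space-time dependence on top of such local estimates.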

  6. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    PubMed Central

    Summerfield, Christopher; Egner, Tobias

    2016-01-01

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? 
By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936

  7. GeneSilico protein structure prediction meta-server.

    PubMed

    Kurowski, Michal A; Bujnicki, Janusz M

    2003-07-01

    Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta.

  8. GeneSilico protein structure prediction meta-server

    PubMed Central

    Kurowski, Michal A.; Bujnicki, Janusz M.

    2003-01-01

    Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta. PMID:12824313

  9. Contrail Tracking and ARM Data Product Development

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Russell, James, III

    2005-01-01

A contrail tracking system was developed to help in the assessment of the effect of commercial jet contrails on the Earth's radiative budget. The tracking system was built by combining meteorological data from the Rapid Update Cycle (RUC) numerical weather prediction model with commercial air traffic flight track data and satellite imagery. A statistical contrail-forecasting model was created using a combination of surface-based contrail observations and numerical weather analyses and forecasts. This model allows predictions of widespread contrail occurrences for contrail research on either a real-time basis or for long-term time scales. Satellite-derived cirrus cloud properties in polluted and unpolluted regions were compared to determine the impact of air traffic on cirrus.

  10. The combining of multiple hemispheric resources in learning-disabled and skilled readers' recall of words: a test of three information-processing models.

    PubMed

    Swanson, H L

    1987-01-01

Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions of word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
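The three candidate models can be written directly as rules predicting bilateral (simultaneous) recall from unilateral recall probabilities; the probability values below are hypothetical, and the exact quantitative forms are a common textbook reading of the three model names, not equations quoted from the paper:

```python
def additive(p_left, p_right):
    """Additive model: hemispheric resources sum (capped at certainty)."""
    return min(1.0, p_left + p_right)

def independence(p_left, p_right):
    """Independence model: recall succeeds if either hemisphere succeeds."""
    return 1.0 - (1.0 - p_left) * (1.0 - p_right)

def maximum_rule(p_left, p_right):
    """Maximum rule: bilateral recall is no better than the best hemisphere,
    the pattern the study found for both ability groups."""
    return max(p_left, p_right)

# Hypothetical unilateral recall probabilities for the two ears/hemispheres.
pL, pR = 0.55, 0.40
predictions = {
    "additive": additive(pL, pR),
    "independence": independence(pL, pR),
    "maximum rule": maximum_rule(pL, pR),
}
```

Comparing observed dichotic recall against these three predictions is what lets the data discriminate among the models: only the maximum rule predicts no bilateral advantage.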

  11. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-07-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
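The driving-force extraction step can be sketched as linear slow feature analysis: whiten the observations, then take the direction whose time derivative has the smallest variance. The two-signal toy data below (a slow sinusoidal "driving force" mixed with a fast oscillation) are an assumption for illustration, not the paper's logistic or ozone data:

```python
import numpy as np

# Two observed mixtures of a slow driving force and fast dynamics.
t = np.linspace(0, 100, 5000)
slow = np.sin(2 * np.pi * 0.01 * t)        # slowly varying driving force
fast = np.sin(2 * np.pi * 1.0 * t)         # fast dynamics
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])

# Linear SFA, step 1: center and whiten the observations.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
Z = Xc @ (evecs / np.sqrt(evals))          # whitened signals (unit covariance)

# Step 2: among unit directions in whitened space, pick the one whose
# time derivative has minimal variance -- the slowest feature.
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
devals, devecs = np.linalg.eigh(dcov)      # eigh returns ascending eigenvalues
slow_feature = Z @ devecs[:, 0]            # smallest derivative-variance direction
```

The recovered slow feature tracks the underlying driving force (up to sign and scale), which is what the paper then feeds into the predictive model as an extra state variable.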

  12. Finite element based model predictive control for active vibration suppression of a one-link flexible manipulator.

    PubMed

    Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan

    2014-09-01

    This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators. In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with using a standard predictive control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Action perception as hypothesis testing.

    PubMed

    Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni

    2017-04-01

We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Li, Jinquan

    2018-01-01

The viscosity of binary Sn-based lead-free solder alloys was calculated by combining a prediction model with the Miedema model. A viscosity factor was proposed, and the relationship between viscosity and surface tension was analyzed as well. The results show that the viscosities of Sn-based lead-free solders obtained from the prediction model are in excellent agreement with reported values. The viscosity factor is determined by three physical parameters: atomic volume, electronic density, and electronegativity. In addition, an apparent correlation between the surface tension and viscosity of binary Sn-based Pb-free solders was obtained based on the prediction model.

  15. Domestic estimated breeding values and genomic enhanced breeding values of bulls in comparison with their foreign genomic enhanced breeding values.

    PubMed

    Přibyl, J; Bauer, J; Čermák, V; Pešek, P; Přibylová, J; Šplíchal, J; Vostrá-Vydrová, H; Vostrý, L; Zavadilová, L

    2015-10-01

    Estimated breeding values (EBVs) and genomic enhanced breeding values (GEBVs) for milk production of young genotyped Holstein bulls were predicted using a conventional BLUP - Animal Model, a method fitting regression coefficients for loci (RRBLUP), a method utilizing the realized genomic relationship matrix (GBLUP), by a single-step procedure (ssGBLUP) and by a one-step blending procedure. Information sources for prediction were the nation-wide database of domestic Czech production records in the first lactation combined with deregressed proofs (DRP) from Interbull files (August 2013) and domestic test-day (TD) records for the first three lactations. Data from 2627 genotyped bulls were used, of which 2189 were already proven under domestic conditions. Analyses were run that used Interbull values for genotyped bulls only or that used Interbull values for all available sires. Resultant predictions were compared with GEBV of 96 young foreign bulls evaluated abroad and whose proofs were from Interbull method GMACE (August 2013) on the Czech scale. Correlations of predictions with GMACE values of foreign bulls ranged from 0.33 to 0.75. Combining domestic data with Interbull EBVs improved prediction of both EBV and GEBV. Predictions by Animal Model (traditional EBV) using only domestic first lactation records and GMACE values were correlated by only 0.33. Combining the nation-wide domestic database with all available DRP for genotyped and un-genotyped sires from Interbull resulted in an EBV correlation of 0.60, compared with 0.47 when only Interbull data were used. In all cases, GEBVs had higher correlations than traditional EBVs, and the highest correlations were for predictions from the ssGBLUP procedure using combined data (0.75), or with all available DRP from Interbull records only (one-step blending approach, 0.69). 
The ssGBLUP predictions using the first three domestic lactation records in the TD model were correlated with GMACE predictions by 0.69, 0.64 and 0.61 for milk yield, protein yield and fat yield, respectively.
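Of the methods compared above, RRBLUP is the simplest to sketch: it is equivalent to ridge regression of phenotypes on all marker genotypes jointly, with every marker effect shrunk toward zero. Everything below is simulated (marker count, effect sizes, and the shrinkage parameter are all assumptions), purely to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bulls, n_markers = 400, 1000

# Simulated SNP genotypes coded 0/1/2 and a polygenic trait.
Z = rng.integers(0, 3, size=(n_bulls, n_markers)).astype(float)
Z -= Z.mean(axis=0)                                  # center marker columns
true_effects = rng.normal(scale=0.05, size=n_markers)
y = Z @ true_effects + rng.normal(scale=1.0, size=n_bulls)

# RRBLUP: ridge regression fitting all marker effects simultaneously.
lam = 10.0                                           # shrinkage parameter (assumed)
effects_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(n_markers), Z.T @ y)

# Genomic breeding values for young genotyped animals without phenotypes
# (rough centering by the population mean genotype of 1).
Z_new = rng.integers(0, 3, size=(50, n_markers)).astype(float) - 1.0
gebv = Z_new @ effects_hat
```

GBLUP reaches the same predictions through a realized genomic relationship matrix instead of explicit marker effects, and ssGBLUP extends that relationship matrix to include ungenotyped pedigree animals in one step.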

  16. In silico platform for xenobiotics ADME-T pharmacological properties modeling and prediction. Part II: The body in a Hilbertian space.

    PubMed

    Jacob, Alexandre; Pratuangdejkul, Jaturong; Buffet, Sébastien; Launay, Jean-Marie; Manivet, Philippe

    2009-04-01

    We have broken old surviving dogmas and concepts used in computational chemistry and created an efficient in silico ADME-T pharmacological properties modeling and prediction toolbox for any xenobiotic. With the help of an innovative and pragmatic approach combining various in silico techniques, like molecular modeling, quantum chemistry and in-house developed algorithms, the interactions between drugs and those enzymes, transporters and receptors involved in their biotransformation can be studied. ADME-T pharmacological parameters can then be predicted after in vitro and in vivo validations of in silico models.

  17. Rapid Method Development in Hydrophilic Interaction Liquid Chromatography for Pharmaceutical Analysis Using a Combination of Quantitative Structure-Retention Relationships and Design of Experiments.

    PubMed

    Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A

    2017-02-07

A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R² > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate the HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions with an RMSEP value of 5.83%. The QSRR-DoE computed retention time of pharmaceutical test analytes and subsequently calculated separation selectivity was used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.

  18. PREOPERATIVE MRI IMPROVES PREDICTION OF EXTENSIVE OCCULT AXILLARY LYMPH NODE METASTASES IN BREAST CANCER PATIENTS WITH A POSITIVE SENTINEL LYMPH NODE BIOPSY

    PubMed Central

    Loiselle, Christopher; Eby, Peter R.; Kim, Janice N.; Calhoun, Kristine E.; Allison, Kimberly H.; Gadi, Vijayakrishna K.; Peacock, Sue; Storer, Barry; Mankoff, David A.; Partridge, Savannah C.; Lehman, Constance D.

    2014-01-01

Rationale and Objectives To test the ability of quantitative measures from preoperative Dynamic Contrast Enhanced MRI (DCE-MRI) to predict, independently and/or with the Katz pathologic nomogram, which breast cancer patients with a positive sentinel lymph node biopsy will have ≥ 4 positive axillary lymph nodes upon completion axillary dissection. Methods and Materials A retrospective review was conducted to identify clinically node-negative invasive breast cancer patients who underwent preoperative DCE-MRI, followed by sentinel node biopsy with positive findings and complete axillary dissection (6/2005 – 1/2010). Clinical/pathologic factors, primary lesion size and quantitative DCE-MRI kinetics were collected from clinical records and prospective databases. DCE-MRI parameters with univariate significance (p < 0.05) to predict ≥ 4 positive axillary nodes were modeled with stepwise regression and compared to the Katz nomogram alone and to a combined MRI-Katz nomogram model. Results Ninety-eight patients with 99 positive sentinel biopsies met study criteria. Stepwise regression identified DCE-MRI total persistent enhancement and volume adjusted peak enhancement as significant predictors of ≥ 4 metastatic nodes. Receiver operating characteristic (ROC) curves demonstrated an area under the curve (AUC) of 0.78 for the Katz nomogram, 0.79 for the DCE-MRI multivariate model, and 0.87 for the combined MRI-Katz model. The combined model was significantly more predictive than the Katz nomogram alone (p = 0.003). Conclusion Integration of DCE-MRI primary lesion kinetics significantly improved the accuracy of the Katz pathologic nomogram in predicting the presence of metastases in ≥ 4 nodes. DCE-MRI may help identify sentinel node-positive patients requiring further locoregional therapy. PMID:24331270

  19. A combined ultrasound and clinical scoring model for the prediction of peripartum complications in pregnancies complicated by placenta previa.

    PubMed

    Yoon, So-Yeon; You, Ji Yeon; Choi, Suk-Joo; Oh, Soo-Young; Kim, Jong-Hwa; Roh, Cheong-Rae

    2014-09-01

To generate a combined ultrasound and clinical model predictive of peripartum complications in pregnancies complicated by placenta previa. This study included 110 singleton pregnant women with placenta previa delivered by cesarean section (CS) from July 2011 to November 2013. We prospectively collected ultrasound and clinical data before CS and observed the occurrence of blood transfusion, uterine artery embolization and cesarean hysterectomy. We formulated a scoring model including type of previa (0: partialis, 2: totalis), lacunae (0: none, 1: 1-3, 2: 4-6, 3: whole), uteroplacental hypervascularity (0: normal, 1: moderate, 2: severe), multiparity (0: no, 1: yes), history of CS (0: none, 1: once, 2: ≥ twice) and history of placenta previa (0: no, 1: yes) to predict the risk of peripartum complications. In our study population, the risks of perioperative transfusion, uterine artery embolization, and cesarean hysterectomy were 26.4, 1.8 and 6.4%, respectively. The type of previa, lacunae, uteroplacental hypervascularity, parity, history of CS, and history of placenta previa were associated with complications in univariable analysis. However, no factor was independently predictive of any complication in exact logistic regression analysis. Using the scoring model, we found that the total score significantly correlated with perioperative transfusion, cesarean hysterectomy and composite complication (p<0.0001, Cochran-Armitage test). Notably, all patients with a total score ≥7 needed cesarean hysterectomy. When the total score was ≥6, three fourths of patients needed blood transfusion. This combined scoring model may provide useful information for the prediction of peripartum complications in women with placenta previa. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
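The scoring model is fully specified in the abstract, so it can be written down directly as a sum of component scores; only the example patient values are invented:

```python
def previa_risk_score(previa_totalis, lacunae_grade, hypervascularity,
                      multiparity, prior_cs, prior_previa):
    """Total score from the combined ultrasound/clinical scoring model.

    previa_totalis:   False = partialis (0 points), True = totalis (2 points)
    lacunae_grade:    0 = none, 1 = 1-3, 2 = 4-6, 3 = whole placenta
    hypervascularity: 0 = normal, 1 = moderate, 2 = severe
    multiparity:      0 = no, 1 = yes
    prior_cs:         0 = none, 1 = once, 2 = twice or more
    prior_previa:     0 = no, 1 = yes
    """
    return ((2 if previa_totalis else 0) + lacunae_grade + hypervascularity
            + multiparity + prior_cs + prior_previa)

# Hypothetical maximum-risk patient: every component at its worst grade.
score = previa_risk_score(True, 3, 2, 1, 2, 1)
# Per the abstract: score >= 7 -> all such patients needed cesarean hysterectomy;
#                   score >= 6 -> three-fourths needed blood transfusion.
```

Note that although no single factor was independently predictive in the regression analysis, the summed score still stratified risk, which is the point of combining them.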

  20. Does taking endurance into account improve the prediction of weaning outcome in mechanically ventilated children?

    PubMed Central

    Noizet, Odile; Leclerc, Francis; Sadik, Ahmed; Grandbastien, Bruno; Riou, Yvon; Dorkenoo, Aimée; Fourier, Catherine; Cremer, Robin; Leteurtre, Stephane

    2005-01-01

    Introduction We conducted the present study to determine whether a combination of the mechanical ventilation weaning predictors proposed by the collective Task Force of the American College of Chest Physicians (TF) and weaning endurance indices enhance prediction of weaning success. Method Conducted in a tertiary paediatric intensive care unit at a university hospital, this prospective study included 54 children receiving mechanical ventilation (≥6 hours) who underwent 57 episodes of weaning. We calculated the indices proposed by the TF (spontaneous respiratory rate, paediatric rapid shallow breathing, rapid shallow breathing occlusion pressure [ROP] and maximal inspiratory pressure during an occlusion test [Pimax]) and weaning endurance indices (pressure-time index, tension-time index obtained from P0.1 [TTI1] and from airway pressure [TTI2]) during spontaneous breathing. Performances of each TF index and combinations of them were calculated, and the best single index and combination were identified. Weaning endurance parameters (TTI1 and TTI2) were calculated and the best index was determined using a logistic regression model. Regression coefficients were estimated using the maximum likelihood ratio (LR) method. Hosmer–Lemeshow test was used to estimate goodness-of-fit of the model. An equation was constructed to predict weaning success. Finally, we calculated the performances of combinations of best TF indices and best endurance index. Results The best single TF index was ROP, the best TF combination was represented by the expression (0.66 × ROP) + (0.34 × Pimax), and the best endurance index was the TTI2, although their performance was poor. The best model resulting from the combination of these indices was defined by the following expression: (0.6 × ROP) – (0.1 × Pimax) + (0.5 × TTI2). This integrated index was a good weaning predictor (P < 0.01), with a LR+ of 6.4 and LR+/LR- ratio of 12.5. 
However, at a threshold value <1.3 it was only predictive of weaning success (LR- = 0.5). Conclusion The proposed combined index, incorporating endurance, was of modest value in predicting weaning outcome. This is the first report of the value of endurance parameters in predicting weaning success in children. Currently, clinical judgement associated with spontaneous breathing trials apparently remains superior. PMID:16356229
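The combined index and its decision threshold are given explicitly in the abstract; the measurement values below are hypothetical (units as used in the study):

```python
def combined_weaning_index(rop, pimax, tti2):
    """Combined index from the abstract: (0.6 x ROP) - (0.1 x Pimax) + (0.5 x TTI2).

    rop   -- rapid shallow breathing occlusion pressure
    pimax -- maximal inspiratory pressure during an occlusion test
    tti2  -- tension-time index obtained from airway pressure
    """
    return 0.6 * rop - 0.1 * pimax + 0.5 * tti2

# Hypothetical measurements for one weaning episode.
index = combined_weaning_index(rop=1.5, pimax=5.0, tti2=0.8)
predicts_success = index < 1.3   # abstract: values below 1.3 predicted weaning success
```

As the conclusion notes, the index only predicts success below the threshold; values above it were not similarly informative about failure.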

  1. A Critical Assessment of Combined Ligand-based and Structure-based Approaches to hERG Channel Blocker Modeling

    PubMed Central

    Du-Cuny, Lei; Chen, Lu; Zhang, Shuxing

    2014-01-01

Blockade of the hERG channel prolongs the duration of the cardiac action potential and is a common reason for drug failure in preclinical safety trials. Therefore, it is of great importance to develop robust in silico tools to predict potential hERG blockers in the early stages of drug discovery and development. Herein we describe comprehensive approaches to assess the discrimination of hERG-active and -inactive compounds by combining QSAR modeling, pharmacophore analysis, and molecular docking. Our consensus models demonstrated high predictive capacity and improved enrichment, and they could correctly classify 91.8% of 147 hERG blockers from 351 inactives. To further enhance our modeling effort, hERG homology models were constructed and molecular docking studies were conducted, resulting in high correlations (R²=0.81) between predicted and experimental binding affinities. We expect our unique models can be applied to efficient screening for hERG blockade, and our extensive understanding of hERG-inhibitor interactions will facilitate the rational design of drugs devoid of hERG channel activity and hence with reduced cardiac toxicities. PMID:21902220

  2. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
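The two reference models can be sketched for a binary mixture, assuming a Hill-type single-metal concentration-response (the response shape and EC50 values here are assumptions; the paper fits its own concentration-response curves to free metal ion activities):

```python
def hill_effect(c, ec50, h=1.0):
    """Assumed single-metal concentration-response (fractional growth inhibition)."""
    return c**h / (ec50**h + c**h)

def independent_action(concs, ec50s, h=1.0):
    """IA: component effects combine like independent probabilities of response."""
    prod = 1.0
    for c, e in zip(concs, ec50s):
        prod *= 1.0 - hill_effect(c, e, h)
    return 1.0 - prod

def concentration_addition(concs, ec50s, h=1.0):
    """CA: solve for the effect y satisfying sum_i c_i / EC_y,i = 1 by bisection,
    where EC_y,i is the concentration of metal i alone producing effect y."""
    def toxic_units(y):
        # Invert the Hill curve: EC_y = ec50 * (y / (1 - y))**(1/h)
        return sum(c / (e * (y / (1.0 - y)) ** (1.0 / h))
                   for c, e in zip(concs, ec50s))
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:   # mixture exceeds one toxic unit -> effect is higher
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two metals, each dosed at half its own EC50 (hypothetical activities).
ca = concentration_addition([0.5, 0.5], [1.0, 1.0])
ia = independent_action([0.5, 0.5], [1.0, 1.0])
```

For this symmetric case CA predicts exactly the half-maximal effect (two half toxic units add to one), while IA predicts a different value; diverging predictions like these are what the factorial mixture data adjudicate between.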

  3. Combining Frequency Doubling Technology Perimetry and Scanning Laser Polarimetry for Glaucoma Detection.

    PubMed

    Mwanza, Jean-Claude; Warren, Joshua L; Hochberg, Jessica T; Budenz, Donald L; Chang, Robert T; Ramulu, Pradeep Y

    2015-01-01

    To determine the ability of frequency doubling technology (FDT) and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. One hundred ten normal and 114 glaucomatous subjects were tested with FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, number of missed points of FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike's information criterion (AIC), and prediction confidence interval lengths. For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT×NAP-FDT (AIC: 88.28, AUC: 0.959, sensitivity: 94.6%, specificity: 89.5%) outperformed the best single-variable model provided by GDx-NFI (AIC: 120.88, AUC: 0.914, sensitivity: 87.8%, specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and interaction GDx-TSNIT×NAP-FDT consistently provided better discriminating abilities for detecting early, moderate, and severe glaucoma than the best single-variable models. The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT×NAP-FDT provides the best glaucoma prediction compared with all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared with using GDx or FDT alone.
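The winning multivariable model has the form of a logistic regression on GDx-TSNIT, NAP-FDT, and their product (the interaction term). A sketch on synthetic data follows; the variable distributions, the underlying labeling rule, and all coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Synthetic stand-ins for the two instruments (all numbers illustrative):
tsnit = rng.normal(55.0, 8.0, n)              # GDx-TSNIT-like structural measure
nap_fdt = rng.poisson(3.0, n).astype(float)   # FDT abnormal-point count

# Design matrix with the interaction term, standardized for stable training.
feats = np.column_stack([tsnit, nap_fdt, tsnit * nap_fdt])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
X = np.column_stack([np.ones(n), feats])

# Labels from an assumed underlying rule (low TSNIT plus many abnormal points).
w_true = np.array([-0.5, -2.0, 1.5, 1.0])
y = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Fit the multivariable logistic model by gradient descent.
w = np.zeros(4)
for _ in range(3000):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / n

accuracy = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
```

The interaction column is what lets the combined model express "structural damage matters more when the functional test is also abnormal", which a main-effects-only model cannot capture.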

  4. A systems approach to college drinking: development of a deterministic model for testing alcohol control policies.

    PubMed

    Scribner, Richard; Ackleh, Azmy S; Fitzpatrick, Ben G; Jacquez, Geoffrey; Thibodeaux, Jeremy J; Rommel, Robert; Simonsen, Neal

    2009-09-01

    The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by "wetness" and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately "dry" campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately "wet" campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses.
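The deterministic compartmental model can be sketched as a five-state system of flows between drinking styles; the transition rates below are illustrative stand-ins, not the paper's fitted parameters:

```python
import numpy as np

# Drinking-style compartments, in order:
# abstainer, light, moderate, problem, heavy episodic.
labels = ["abstainer", "light", "moderate", "problem", "heavy"]

# Hypothetical transition-rate matrix: R[i, j] is the per-unit-time rate of
# students moving from style i to style j (all values invented).
R = np.array([
    [0.00, 0.10, 0.02, 0.00, 0.01],
    [0.05, 0.00, 0.10, 0.02, 0.03],
    [0.02, 0.08, 0.00, 0.05, 0.05],
    [0.01, 0.02, 0.06, 0.00, 0.04],
    [0.01, 0.03, 0.05, 0.05, 0.00],
])

def step(x, R, dt=0.1):
    """One Euler step of dx_i/dt = sum_j (R[j,i] x_j - R[i,j] x_i)."""
    inflow = R.T @ x
    outflow = R.sum(axis=1) * x
    return x + dt * (inflow - outflow)

x = np.array([300.0, 300.0, 200.0, 100.0, 100.0])   # 1000 students
for _ in range(500):                                 # integrate toward steady state
    x = step(x, R)
```

A simulated intervention amounts to changing entries of R (e.g. raising the rates out of the heavy-episodic compartment) and comparing the resulting steady-state distribution; total enrollment is conserved by construction because every outflow is another compartment's inflow.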

  5. A Systems Approach to College Drinking: Development of a Deterministic Model for Testing Alcohol Control Policies*

    PubMed Central

    Scribner, Richard; Ackleh, Azmy S.; Fitzpatrick, Ben G.; Jacquez, Geoffrey; Thibodeaux, Jeremy J.; Rommel, Robert; Simonsen, Neal

    2009-01-01

    Objective: The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. Method: A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. Results: First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by “wetness” and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately “dry” campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately “wet” campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). Conclusions: A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. 
The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses. PMID:19737506

  6. An articulated predictive model for fluid-free artificial basilar membrane as broadband frequency sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz; Banerjee, Sourav

    2018-02-01

In this article, an extremely versatile predictive model for the newly developed Basilar meta-Membrane (BM2) sensor is reported, with variable engineering parameters that contribute to its frequency-selection capabilities. The predictive model reported herein advances existing methods by incorporating versatile and nonhomogeneous (e.g., functionally graded) model parameters that not only exploit the possibility of creating complex combinations of broadband frequency sensors but also explain unique, previously unexplained physical phenomena that prevail in the BM2, e.g., tailgating waves. In recent years, a few notable attempts were made to fabricate artificial basilar membranes mimicking the mechanics of the human cochlea within a very short range of frequencies, and a few models were proposed to explain the operation of these sensors. We fundamentally question this "fabrication to explanation" approach and instead propose a model-driven predictive design process for designing any BM2 as a broadband sensor. Inspired by the physics of the basilar membrane, a frequency-domain predictive model is proposed in which both the material and geometrical parameters can be varied arbitrarily. Broadband frequency sensing is applicable in many fields of science, engineering, and technology, such as sensors for chemical, biological, and acoustic applications. With the proposed model, which is three times faster than its FEM counterpart, it is possible to alter the attributes of the selected length of the designed sensor using complex combinations of model parameters, based on target frequency applications. Finally, the tailgating wave peaks in artificial basilar membranes that prevail in previously reported experimental studies are also explained using the proposed model.

  7. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.
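The core of any such model update is calibrating model parameters against test data. A minimal sketch, not the paper's MDO procedure: here a single hypothetical stiffness parameter is updated by least squares so that predicted displacements match measured ones, and the residual improvement is checked. All numbers are invented.

```python
import numpy as np

# Minimal sketch of parameter updating from static test data (all numbers
# hypothetical). The "model" predicts displacement d = load / k; we update
# the stiffness k by least squares against measured displacements.

loads = np.array([10.0, 20.0, 30.0, 40.0])
measured = np.array([0.21, 0.39, 0.62, 0.80])    # measured displacements

k_nominal = 40.0                                  # pre-test model value
pred_before = loads / k_nominal

# Linear least squares for the compliance c = 1/k:  measured ~ c * loads
c, *_ = np.linalg.lstsq(loads[:, None], measured, rcond=None)
k_updated = 1.0 / c[0]
pred_after = loads * c[0]

rms_before = np.sqrt(np.mean((measured - pred_before) ** 2))
rms_after = np.sqrt(np.mean((measured - pred_after) ** 2))
```

The statistical treatment in the paper goes further, attaching probabilities to the updated values; the sketch shows only the deterministic fit.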

  8. Modeling the Afferent Dynamics of the Baroreflex Control System

    PubMed Central

    Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.

    2013-01-01

In this study we develop a modeling framework for predicting baroreceptor firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the baroreceptor (BR) nerve endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input and predicts receptor deformation as an output. Finally, the neural model takes receptor deformation as an input and predicts the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the arterial wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework, in combination with sensitivity analysis and parameter estimation, can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
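The integrate-and-fire behaviour described above, firing that ceases entirely when the stimulus falls below threshold, can be sketched in a few lines. This is a generic leaky integrate-and-fire toy with invented units and parameters, not the authors' neural submodel.

```python
# Minimal leaky integrate-and-fire sketch (hypothetical units): the neuron
# fires while a step stimulus is suprathreshold and ceases entirely when the
# stimulus drops below threshold, as in post-excitatory depression.

def simulate(drive, tau=10.0, v_th=1.0, dt=0.1, t_end=200.0):
    """Euler simulation; returns spike times. 'drive' maps time -> input."""
    v, spikes, t = 0.0, [], 0.0
    while t < t_end:
        v += dt * (-v + drive(t)) / tau
        if v >= v_th:
            spikes.append(t)
            v = 0.0            # reset after a spike
        t += dt
    return spikes

# Square stimulus: strong for t < 100, weak (subthreshold) afterwards.
stim = lambda t: 1.5 if t < 100.0 else 0.5
spikes = simulate(stim)
early = [s for s in spikes if s < 100.0]
late = [s for s in spikes if s >= 100.0]
```

With the weak drive (0.5) below threshold (1.0), the membrane state decays toward 0.5 and no further spikes occur, so the firing rate drops to zero rather than merely decreasing.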

  9. A generalized procedure for the prediction of multicomponent adsorption equilibria

    DOE PAGES

    Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas

    2015-04-07

Prediction of multicomponent adsorption equilibria has been investigated for several decades. While there are theories available to predict the adsorption behavior of ideal mixtures, there are few purely predictive theories to account for nonidealities in real systems. Most models available for dealing with nonidealities contain interaction parameters that must be obtained through correlation with binary-mixture data. However, as the number of components in a system grows, the number of parameters needed to be obtained increases exponentially. Here, a generalized procedure is proposed, as an extension of the predictive real adsorbed solution theory, for determining the parameters of any activity model, for any number of components, without correlation. This procedure is then combined with the adsorbed solution theory to predict the adsorption behavior of mixtures. As this method can be applied to any isotherm model and any activity model, it is referred to as the generalized predictive adsorbed solution theory.
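The adsorbed solution theory mentioned above can be illustrated in its classic ideal form (IAST), which is the baseline the generalized predictive theory extends. This sketch solves the binary IAST equations for two hypothetical Langmuir components; it is not the authors' generalized real-adsorbed-solution procedure, and all isotherm parameters are invented.

```python
import math

# Sketch of classic ideal adsorbed solution theory (IAST) for a binary
# mixture of Langmuir components. For Langmuir, the reduced spreading
# pressure is pi(p) = qm * ln(1 + K p), which inverts analytically.

def p0_langmuir(pi, qm, K):
    """Pure-component pressure giving reduced spreading pressure pi."""
    return (math.exp(pi / qm) - 1.0) / K

def iast_binary(P, y1, iso1, iso2):
    """Solve sum_i y_i P / p0_i(pi) = 1 for pi by bisection; return x1."""
    y2 = 1.0 - y1
    f = lambda pi: (y1 * P / p0_langmuir(pi, *iso1)
                    + y2 * P / p0_langmuir(pi, *iso2) - 1.0)
    lo, hi = 1e-9, 1e3          # f is monotone decreasing on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    pi = 0.5 * (lo + hi)
    return y1 * P / p0_langmuir(pi, *iso1)   # adsorbed-phase mole fraction x1

# Component 1 (qm=2.0, K=5.0) adsorbs more strongly than component 2.
x1 = iast_binary(P=1.0, y1=0.5, iso1=(2.0, 5.0), iso2=(1.5, 1.0))
```

Because the equal-fugacity condition is solved at a common spreading pressure, the more strongly adsorbed component is enriched in the adsorbed phase (x1 > y1 here).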

  10. Kalman filter to update forest cover estimates

    Treesearch

    Raymond L. Czaplewski

    1990-01-01

    The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
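The update step described above, adjusting a model prediction with newer but noisier monitoring data, reduces to a simple variance-weighted combination in the scalar case. The cover percentages and variances below are invented for illustration.

```python
# Minimal scalar Kalman update sketch: combine a model prediction of forest
# cover with a newer, noisier monitoring estimate. Numbers are hypothetical.

def kalman_update(x_pred, P_pred, z, R):
    """Fuse prediction (variance P_pred) with measurement z (variance R)."""
    K = P_pred / (P_pred + R)           # Kalman gain
    x_post = x_pred + K * (z - x_pred)  # updated estimate
    P_post = (1.0 - K) * P_pred         # updated (smaller) variance
    return x_post, P_post

# Model predicts 62% cover (variance 9); monitoring says 58% (variance 16).
x, P = kalman_update(62.0, 9.0, 58.0, 16.0)
```

The posterior variance is always smaller than both input variances, which is why an expensive inventory can be kept current with cheap, imprecise monitoring data.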

  11. Simulated Annealing Based Hybrid Forecast for Improving Daily Municipal Solid Waste Generation Prediction

    PubMed Central

    Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei

    2014-01-01

A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weigh a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results, and to degrade less in longer predictions, than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
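The weighting idea can be sketched as a simulated-annealing search over combination weights. This toy uses three synthetic "models" with constructed errors (two with opposite biases, one noisy) rather than the paper's chaotic/ANN/PLS-SVM forecasts, and the SA schedule is arbitrary.

```python
import math, random

# Sketch of a simulated-annealing search for forecast-combination weights
# (toy data standing in for the three individual forecast models).

random.seed(0)
truth = [math.sin(0.3 * t) * 10 + 50 for t in range(60)]
m1 = [v + 0.9 * math.sin(0.7 * t) for t, v in enumerate(truth)]
m2 = [v - 0.9 * math.sin(0.7 * t) for t, v in enumerate(truth)]
m3 = [v + random.gauss(0, 0.3) for v in truth]
models = [m1, m2, m3]

def mse(w):
    err = 0.0
    for t, v in enumerate(truth):
        combo = sum(wi * m[t] for wi, m in zip(w, models))
        err += (combo - v) ** 2
    return err / len(truth)

w = [1 / 3] * 3
best_w, best_e, e, T = list(w), mse(w), mse(w), 1.0
for _ in range(2000):
    cand = [max(1e-9, wi + random.gauss(0, 0.05)) for wi in w]
    s = sum(cand)
    cand = [wi / s for wi in cand]             # keep weights on the simplex
    ce = mse(cand)
    # Metropolis acceptance: always accept improvements, sometimes accept
    # worse candidates early on while the temperature T is high.
    if ce < e or random.random() < math.exp((e - ce) / T):
        w, e = cand, ce
        if ce < best_e:
            best_w, best_e = cand, ce
    T *= 0.995                                  # cooling schedule
```

Because the two biased models have opposite errors, the combined forecast beats every individual model, which mirrors the paper's finding that the hybrid degrades less than its members.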

  12. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary algorithm (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance is obtained by combining all features.
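The multi-objective comparison can be made concrete with the sensitivity/specificity pairs reported in the abstract: a feature combination is Pareto-optimal if no other combination is at least as good on both objectives and strictly better on one. The dominance check below is generic, not NSGA-II itself.

```python
# Pareto screening of the reported (sensitivity, specificity) pairs.
results = {
    "PET":             (41.76, 80.00),
    "clinical":        (58.33, 92.50),
    "CT":              (50.00, 90.00),
    "PET+clinical":    (50.00, 97.50),
    "PET+CT":          (41.67, 92.50),
    "CT+clinical":     (41.67, 97.50),
    "PET+CT+clinical": (58.33, 97.50),
}

def dominates(a, b):
    """a dominates b if it is >= on both objectives and not identical."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

front = [k for k, v in results.items()
         if not any(dominates(w, v) for w in results.values())]
```

On these numbers the all-features combination is the only non-dominated point, consistent with the conclusion that combining all features performs best.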

  13. Comparison of different two-pathway models for describing the combined effect of DO and nitrite on the nitrous oxide production by ammonia-oxidizing bacteria.

    PubMed

    Lang, Longqi; Pocquet, Mathieu; Ni, Bing-Jie; Yuan, Zhiguo; Spérandio, Mathieu

    2017-02-01

The aim of this work is to compare the capability of two recently proposed two-pathway models for predicting nitrous oxide (N2O) production by ammonia-oxidizing bacteria (AOB) over varying ranges of dissolved oxygen (DO) and nitrite. The first model includes the electron carriers, whereas the second model is based on direct coupling of electron donors and acceptors. Simulations are confronted with extensive sets of experiments (43 batches) from different studies with three different microbial systems. Despite their different mathematical structures, both models could describe, equally well, the combined effect of DO and nitrite on the N2O production rate and emission factor. The model-predicted contributions of the nitrifier denitrification pathway and the hydroxylamine pathway also matched well with the available isotopic measurements. Based on sensitivity analysis, calibration procedures are described and discussed to facilitate the future use of these models.

  14. Leatherbacks swimming in silico: modeling and verifying their momentum and heat balance using computational fluid dynamics.

    PubMed

    Dudley, Peter N; Bonazza, Riccardo; Jones, T Todd; Wyneken, Jeanette; Porter, Warren P

    2014-01-01

As global temperatures increase throughout the coming decades, species ranges will shift. New combinations of abiotic conditions will make predicting these range shifts difficult. Biophysical mechanistic niche modeling places bounds on an animal's niche by analyzing the animal's physical interactions with the environment, and it is flexible enough to accommodate these new combinations of abiotic conditions. However, this approach is difficult to implement for aquatic species because of complex interactions among thrust, metabolic rate, and heat transfer. We use contemporary computational fluid dynamics techniques to overcome these difficulties. We model the complex 3D motion of swimming neonate and juvenile leatherback sea turtles to find power and heat transfer rates during the stroke. We combine the results from these simulations with a numerical model to accurately predict the core temperature of a swimming leatherback. These results are the first steps in developing a highly accurate mechanistic niche model, which can assist paleontologists in understanding biogeographic shifts as well as aid contemporary species managers in anticipating potential range shifts over the coming decades.

  15. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed, and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets, which is better than that of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783
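The per-target evaluation metric above is a plain Pearson correlation between predicted quality and true GDT-score. A minimal sketch with toy numbers (not CASP7 data):

```python
import math

# Per-target evaluation sketch: Pearson correlation between predicted
# model quality and the true GDT score (toy numbers).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [0.42, 0.55, 0.61, 0.70, 0.35]
gdt_score = [0.40, 0.50, 0.65, 0.72, 0.30]
r = pearson(predicted, gdt_score)
```

A well-calibrated quality predictor ranks models in nearly the same order as the true scores, giving r close to 1 per target.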

  16. Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge

    PubMed Central

    2014-01-01

Background Combining different sources of knowledge to build improved structure activity relationship models is not easy owing to the variety of knowledge formats and the absence of a common framework for interoperating between learning techniques. Most current approaches address this problem by using consensus models that operate at the prediction level. We explore the possibility of directly combining these sources at the knowledge level, with the aim of harvesting potentially increased synergy at an earlier stage. Our goal is to design a general methodology to facilitate knowledge discovery and produce accurate and interpretable models. Results To combine models at the knowledge level, we propose to decouple the learning phase from the knowledge application phase using a pivot representation (lingua franca) based on the concept of hypothesis. A hypothesis is a simple and interpretable knowledge unit. Regardless of its origin, knowledge is broken down into a collection of hypotheses. These hypotheses are subsequently organised into a hierarchical network. This unification makes it possible to combine different sources of knowledge within a common formalised framework. The approach allows us to create a synergistic system between different forms of knowledge, and new algorithms can be applied to leverage this unified model. This first article focuses on the general principle of the Self Organising Hypothesis Network (SOHN) approach in the context of binary classification problems, along with an illustrative application to the prediction of mutagenicity. Conclusion It is possible to represent knowledge in the unified form of a hypothesis network, allowing interpretable predictions with performance comparable to mainstream machine learning techniques. This new approach offers the potential to combine knowledge from different sources into a common framework in which high-level reasoning and meta-learning can be applied; these latter perspectives will be explored in future work.
PMID:24959206

  17. All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.

    PubMed

    Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne

    2015-04-28

    Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.

  18. Combined Molecular Dynamics Simulation-Molecular-Thermodynamic Theory Framework for Predicting Surface Tensions.

    PubMed

    Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben

    2017-08-22

    A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [ Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992 , 8 , 2680 ]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.

  19. Building a Better Applicant Pool--A Case Study of the Use of Predictive Modeling and Market Segmentation to Build and Enroll Better Pools of Students

    ERIC Educational Resources Information Center

    Herridge, Bart; Heil, Robert

    2003-01-01

    Predictive modeling has been a popular topic in higher education for the last few years. This case study shows an example of an effective use of modeling combined with market segmentation to strategically divide large, unmanageable prospect and inquiry pools and convert them into applicants, and eventually, enrolled students. (Contains 6 tables.)

  20. Combining PubMed knowledge and EHR data to develop a weighted bayesian network for pancreatic cancer prediction.

    PubMed

    Zhao, Di; Weng, Chunhua

    2011-10-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Combining PubMed Knowledge and EHR Data to Develop a Weighted Bayesian Network for Pancreatic Cancer Prediction

    PubMed Central

    Zhao, Di; Weng, Chunhua

    2011-01-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. PMID:21642013
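The weighting idea described above, scaling each risk factor's contribution by a literature-derived weight, can be sketched as a weighted naive-Bayes score. This is a simplified stand-in for the weighted BNI: the three factors, their conditional probabilities, and their weights below are all invented for illustration.

```python
import math

# Minimal sketch of a literature-weighted Bayesian score: each risk
# factor's log-likelihood contribution is scaled by a normalized,
# PubMed-derived weight. All probabilities and weights are hypothetical.

# P(factor present | cancer), P(factor present | no cancer), weight
factors = {
    "smoking":      (0.60, 0.30, 1.0),
    "diabetes":     (0.40, 0.15, 0.8),
    "chronic_panc": (0.20, 0.02, 0.9),
}
prior_cancer = 0.01

def weighted_score(present):
    """Weighted log-odds of cancer given the set of present risk factors."""
    score = math.log(prior_cancer / (1.0 - prior_cancer))
    for name, (p_c, p_nc, w) in factors.items():
        if name in present:
            score += w * math.log(p_c / p_nc)
        else:
            score += w * math.log((1.0 - p_c) / (1.0 - p_nc))
    return score

high = weighted_score({"smoking", "diabetes", "chronic_panc"})
low = weighted_score(set())
```

Setting every weight to 1 recovers the conventional (unweighted) naive-Bayes score, which is the baseline the paper's weighted model outperformed.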

  2. Computational predictive models for P-glycoprotein inhibition of in-house chalcone derivatives and drug-bank compounds.

    PubMed

    Ngo, Trieu-Du; Tran, Thanh-Dao; Le, Minh-Tri; Thai, Khac-Minh

    2016-11-01

The human P-glycoprotein (P-gp) efflux pump is of great interest to medicinal chemists because of its important role in multidrug resistance (MDR). Because of the high polyspecificity as well as the unavailability of high-resolution X-ray crystal structures of this transmembrane protein, ligand-based and structure-based approaches (machine learning, homology modeling, and molecular docking) were combined in this study. In the ligand-based approach, individual two-dimensional quantitative structure-activity relationship models were developed using different machine learning algorithms and subsequently combined into an Ensemble model, which showed good performance on both the diverse training set and the validation sets. The applicability domain and the prediction quality of the developed models were also judged using state-of-the-art methods and tools. In our structure-based approach, the P-gp structure and its binding region were predicted for a docking study to determine possible interactions between the ligands and the receptor. Based on these in silico tools, hit compounds for reversing MDR were discovered from the in-house and DrugBank databases through virtual screening using the prediction models and molecular docking, in an attempt to restore cancer cell sensitivity to cytotoxic drugs.
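Combining individual QSAR models into an ensemble often reduces to voting or averaging their predictions. A minimal sketch, not the paper's Ensemble model: the three "models" here are invented threshold rules on hypothetical descriptors, standing in for the trained machine-learning classifiers.

```python
# Majority-vote ensemble sketch: three toy classifiers standing in for the
# individual 2D-QSAR models. Descriptors and cutoffs are hypothetical.

def model_a(x):  # rule on molecular weight (invented cutoff)
    return 1 if x["mw"] > 400 else 0

def model_b(x):  # rule on logP (invented cutoff)
    return 1 if x["logp"] > 3.0 else 0

def model_c(x):  # rule on aromatic ring count (invented cutoff)
    return 1 if x["rings"] >= 3 else 0

def ensemble(x):
    votes = model_a(x) + model_b(x) + model_c(x)
    return 1 if votes >= 2 else 0   # majority vote: predicted P-gp inhibitor

compound = {"mw": 450.0, "logp": 4.2, "rings": 2}
label = ensemble(compound)
```

An ensemble of diverse models tends to be more robust than any single member because individual errors are outvoted, which is the usual motivation for building such consensus classifiers for virtual screening.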

  3. Beyond Atomic Sizes and Hume-Rothery Rules: Understanding and Predicting High-Entropy Alloys

    DOE PAGES

    Troparevsky, M. Claudia; Morris, James R.; Daene, Markus; ...

    2015-09-03

High-entropy alloys constitute a new class of materials that provide an excellent combination of strength, ductility, thermal stability, and oxidation resistance. Although they have attracted extensive attention due to their potential applications, little is known about why these compounds are stable or how to predict which combination of elements will form a single phase. Here, we present a review of the latest research done on these alloys, focusing on the theoretical models devised during the last decade. We discuss semiempirical methods based on the Hume-Rothery rules and stability criteria based on enthalpies of mixing and size mismatch. To provide insights into the electronic and magnetic properties of high-entropy alloys, we show the results of first-principles calculations of the electronic structure of the disordered solid-solution phase based on both the Korringa-Kohn-Rostoker coherent potential approximation and large supercell models of example face-centered cubic and body-centered cubic systems. Furthermore, we discuss in detail a model based on enthalpy considerations that can predict which elemental combinations are most likely to form a single-phase high-entropy alloy. The enthalpies are evaluated via first-principles high-throughput density functional theory calculations of the energies of formation of binary compounds, and therefore the model requires no experimental or empirically derived input. Finally, the model correctly accounts for the specific combinations of metallic elements that are known to form single-phase alloys while rejecting similar combinations that have been tried and shown not to be single phase.

  4. Privacy-Preserving Predictive Modeling: Harmonization of Contextual Embeddings From Different Sources.

    PubMed

    Huang, Yingxiang; Lee, Junghye; Wang, Shuang; Sun, Jimeng; Liu, Hongfang; Jiang, Xiaoqian

    2018-05-16

    Data sharing has been a big challenge in biomedical informatics because of privacy concerns. Contextual embedding models have demonstrated a very strong representative capability to describe medical concepts (and their context), and they have shown promise as an alternative way to support deep-learning applications without the need to disclose original data. However, contextual embedding models acquired from individual hospitals cannot be directly combined because their embedding spaces are different, and naive pooling renders combined embeddings useless. The aim of this study was to present a novel approach to address these issues and to promote sharing representation without sharing data. Without sacrificing privacy, we also aimed to build a global model from representations learned from local private data and synchronize information from multiple sources. We propose a methodology that harmonizes different local contextual embeddings into a global model. We used Word2Vec to generate contextual embeddings from each source and Procrustes to fuse different vector models into one common space by using a list of corresponding pairs as anchor points. We performed prediction analysis with harmonized embeddings. We used sequential medical events extracted from the Medical Information Mart for Intensive Care III database to evaluate the proposed methodology in predicting the next likely diagnosis of a new patient using either structured data or unstructured data. Under different experimental scenarios, we confirmed that the global model built from harmonized local models achieves a more accurate prediction than local models and global models built from naive pooling. Such aggregation of local models using our unique harmonization can serve as the proxy for a global model, combining information from a wide range of institutions and information sources. 
It allows information unique to a certain hospital to become available to other sites, increasing the fluidity of information flow in health care. ©Yingxiang Huang, Junghye Lee, Shuang Wang, Jimeng Sun, Hongfang Liu, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 16.05.2018.
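The harmonization step, aligning one site's embedding space to another using anchor-point pairs, is an orthogonal Procrustes problem solved with an SVD. The sketch below constructs the "local" embeddings as a rotated copy of the "global" ones, so alignment should recover them almost exactly; the dimensions and anchor counts are arbitrary.

```python
import numpy as np

# Orthogonal Procrustes sketch of the harmonization step: align one
# embedding space to another using a list of anchor-point pairs.

rng = np.random.default_rng(0)
anchors_global = rng.normal(size=(20, 5))        # anchor vectors, site A
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))     # unknown orthogonal map
anchors_local = anchors_global @ Q               # same anchors, site B space

# Solve min_R ||anchors_local @ R - anchors_global||_F over orthogonal R:
# with M = B^T A and SVD M = U S V^T, the optimum is R = U V^T.
U, _, Vt = np.linalg.svd(anchors_local.T @ anchors_global)
R = U @ Vt
aligned = anchors_local @ R
err = np.linalg.norm(aligned - anchors_global)
```

In practice the two embedding spaces are only approximately related, so the residual is nonzero and the recovered map serves as the bridge that lets non-anchor vectors from one site be interpreted in the other's space.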

  5. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

Many implementations of model-based approaches for toroidal plasmas have shown better control performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential methods for estimating the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
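The receding-horizon idea behind model predictive control can be sketched on a scalar linear system. This is a generic unconstrained MPC toy with invented parameters, not the identified EXTRAP T2R model: at each step the controller minimizes a quadratic cost over a finite horizon and applies only the first input.

```python
import numpy as np

# Minimal receding-horizon MPC sketch on an unstable scalar system
# x[k+1] = a*x[k] + b*u[k] (parameters invented): minimize
# sum(x^2) + r*sum(u^2) over a horizon H, apply the first input, re-plan.

a, b, r, H = 1.2, 1.0, 0.1, 10

# Stack the predictions over the horizon: x = F*x0 + G*u.
F = np.array([a ** (k + 1) for k in range(H)])
G = np.zeros((H, H))
for i in range(H):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

x = 1.0
for _ in range(20):
    # Unconstrained quadratic optimum: u* = -(G'G + rI)^{-1} G' F x
    u = -np.linalg.solve(G.T @ G + r * np.eye(H), G.T @ F) * x
    x = a * x + b * u[0]            # apply the first move, then re-plan
```

Even though the open-loop system is unstable (a > 1), the receding-horizon law drives the state to zero, which is the essence of using MPC to suppress unstable resistive wall modes.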

  6. Automatic speech recognition using a predictive echo state network classifier.

    PubMed

    Skowronski, Mark D; Harris, John G

    2007-04-01

    We have combined an echo state network (ESN) with a competitive state machine framework to create a classification engine called the predictive ESN classifier. We derive the expressions for training the predictive ESN classifier and show that, in noisy speech classification experiments, the model was significantly more noise robust than a hidden Markov model, by 8 ± 1 dB signal-to-noise ratio. The simple training algorithm and noise robustness of the predictive ESN classifier make it an attractive classification engine for automatic speech recognition.
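    An echo state network keeps a fixed random recurrent reservoir and trains only a linear readout on the reservoir states. A minimal reservoir update is sketched below; the sizes, weight ranges, and spectral-radius scaling are illustrative, not the authors' configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_reservoir, n_input = 100, 12   # illustrative sizes

    # Fixed random weights; only a linear readout on the states would be trained.
    W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_input))
    W = rng.normal(size=(n_reservoir, n_reservoir))
    # Scale the recurrent matrix to spectral radius 0.9 (echo state property).
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))

    def reservoir_states(inputs):
        """Run the basic ESN update x[t] = tanh(W_in u[t] + W x[t-1])."""
        x = np.zeros(n_reservoir)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W @ x)
            states.append(x)
        return np.array(states)

    frames = rng.normal(size=(20, n_input))   # e.g. 20 speech feature frames
    states = reservoir_states(frames)
    print(states.shape)                        # (20, 100)
    ```

    Because only the readout is trained (ordinary linear regression on the collected states), training is fast and non-iterative, which is the "simple training algorithm" the abstract refers to.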

  7. Combined visual and motor evoked potentials predict multiple sclerosis disability after 20 years.

    PubMed

    Schlaeger, Regina; Schindler, Christian; Grize, Leticia; Dellas, Sophie; Radue, Ernst W; Kappos, Ludwig; Fuhr, Peter

    2014-09-01

    The development of predictors of multiple sclerosis (MS) disability is difficult due to the complex interplay of pathophysiological and adaptive processes. The purpose of this study was to investigate whether combined evoked potential (EP) measures allow prediction of MS disability after 20 years. We examined 28 patients with clinically definite MS according to Poser's criteria with Expanded Disability Status Scale (EDSS) scores, combined visual and motor EPs at entry (T0), 6 (T1), 12 (T2) and 24 (T3) months, and a cranial magnetic resonance imaging (MRI) scan at T0 and T2. EDSS testing was repeated at year 14 (T4) and year 20 (T5). Spearman rank correlation was used. We performed a multivariable regression analysis to examine predictive relationships of the sum of z-transformed EP latencies (s-EP_T0) and other baseline variables with EDSS_T5. We found that s-EP_T0 correlated with EDSS_T5 (rho=0.72, p<0.0001) and ΔEDSS_T5-T0 (rho=0.50, p=0.006). Backward selection resulted in the prediction model: E(EDSS_T5) = 3.91 - 2.22×therapy + 0.079×age + 0.057×s-EP_T0 (Model 1, R^2=0.58), with therapy as a binary variable (1=any disease-modifying therapy between T3 and T5, 0=no therapy). Neither EDSS_T0 nor T2-lesion or gadolinium (Gd)-enhancing lesion quantities at T0 improved prediction of EDSS_T5. The area under the receiver operating characteristic (ROC) curve was 0.89 for Model 1. These results further support a role for combined EP measures as predictors of long-term disability in MS. © The Author(s) 2014.
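    The backward-selected model quoted in the abstract can be written directly as a function. The coefficients are those reported for Model 1; the patient values in the example are hypothetical:

    ```python
    def predicted_edss_t5(therapy, age, s_ep_t0):
        """Model 1 from the abstract:
        E(EDSS_T5) = 3.91 - 2.22*therapy + 0.079*age + 0.057*s-EP_T0 (R^2 = 0.58).

        therapy: 1 if any disease-modifying therapy between T3 and T5, else 0.
        s_ep_t0: sum of z-transformed EP latencies at study entry.
        """
        return 3.91 - 2.22 * therapy + 0.079 * age + 0.057 * s_ep_t0

    # Hypothetical patient: on therapy, age 40, s-EP_T0 = 10
    print(round(predicted_edss_t5(1, 40, 10.0), 2))  # 5.42
    ```

    The therapy coefficient of -2.22 is the largest single term, so in this model treatment status shifts the predicted 20-year EDSS by more than two points.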

  8. Pharmacokinetics and Drug Interactions Determine Optimum Combination Strategies in Computational Models of Cancer Evolution.

    PubMed

    Chakrabarti, Shaon; Michor, Franziska

    2017-07-15

    The identification of optimal drug administration schedules to battle the emergence of resistance is a major challenge in cancer research. The existence of a multitude of resistance mechanisms necessitates administering drugs in combination, significantly complicating the endeavor of predicting the evolutionary dynamics of cancers and optimal intervention strategies. A thorough understanding of the important determinants of cancer evolution under combination therapies is therefore crucial for correctly predicting treatment outcomes. Here we developed the first computational strategy to explore pharmacokinetic and drug interaction effects in evolutionary models of cancer progression, a crucial step towards making clinically relevant predictions. We found that incorporating these phenomena into our multiscale stochastic modeling framework significantly changes the optimum drug administration schedules identified, often predicting nonintuitive strategies for combination therapies. We applied our approach to an ongoing phase Ib clinical trial (TATTON) administering AZD9291 and selumetinib to EGFR-mutant lung cancer patients. Our results suggest that the schedules used in the three trial arms have almost identical efficacies, but slight modifications in the dosing frequencies of the two drugs can significantly increase tumor cell eradication. Interestingly, we also predict that drug concentrations lower than the maximum tolerated dose (MTD) are as efficacious, suggesting that lowering the total amount of drug administered could lower toxicities while not compromising on the effectiveness of the drugs. Our approach highlights the fact that quantitative knowledge of pharmacokinetic, drug interaction, and evolutionary processes is essential for identifying best intervention strategies. Our method is applicable to diverse cancer and treatment types and allows for a rational design of clinical trials. Cancer Res; 77(14); 3908-21. ©2017 American Association for Cancer Research.

  9. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    PubMed

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly used LUR model. The LUR/BME model had good performance characteristics, with R2 = 0.82 and a root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R2 increasing by 6%. The performance of LUR/BME is also better than that of ordinary kriging combined with BME (OK/BME). The LUR/BME model is the most accurate fine-spatial-scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.

  10. Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional prediction services for emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, combining RNN-LSTM (Long Short-Term Memory) dynamic prediction with an a priori information sequence generation model built from public-event information. In prediction tasks, the model is qualified for determining trends, and its accuracy is also validated. The model yields better performance and prediction results than the previous one. Using a priori information increases prediction accuracy; LSTM adapts better to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task as well as to other prediction tasks related to time sequences.
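    The LSTM cell at the heart of such a model can be written out in a few lines of NumPy. This is the standard gate formulation for a single step, with arbitrary dimensions and random weights purely for illustration; the paper's model additionally feeds in an a priori information sequence, which is not reproduced here:

    ```python
    import numpy as np

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
        Gate order in the stacked weights: input, forget, output, candidate."""
        H = h_prev.shape[0]
        z = W @ x + U @ h_prev + b
        sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
        i = sigmoid(z[:H])          # input gate
        f = sigmoid(z[H:2*H])       # forget gate
        o = sigmoid(z[2*H:3*H])     # output gate
        g = np.tanh(z[3*H:])        # candidate cell state
        c = f * c_prev + i * g      # new cell state
        h = o * np.tanh(c)          # new hidden state
        return h, c

    rng = np.random.default_rng(1)
    D, H = 6, 8                     # input and hidden sizes (illustrative)
    W = rng.normal(size=(4*H, D))
    U = rng.normal(size=(4*H, H))
    b = np.zeros(4*H)
    h = c = np.zeros(H)
    for x in rng.normal(size=(5, D)):   # run a short input sequence
        h, c = lstm_step(x, h, c, W, U, b)
    print(h.shape, np.all(np.abs(h) < 1.0))   # (8,) True -- h bounded by tanh*sigmoid
    ```

    The forget gate `f` is what lets the cell state carry information across long gaps in the sequence, which is why LSTM "can better adapt to the changes of time sequence" than a plain RNN.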

  11. Integrating auxiliary data and geophysical techniques for the estimation of soil clay content using CHAID algorithm

    NASA Astrophysics Data System (ADS)

    Abbaszadeh Afshar, Farideh; Ayoubi, Shamsollah; Besalatpour, Ali Asghar; Khademi, Hossein; Castrignano, Annamaria

    2016-03-01

    This study was conducted to estimate soil clay content at two depths using geophysical techniques (Ground Penetrating Radar, GPR, and Electromagnetic Induction, EMI) and ancillary variables (remote sensing and topographic data) in an arid region of southeastern Iran. GPR measurements were performed along ten transects of 100 m length with a line spacing of 10 m, and the EMI measurements were made every 10 m on the same transects in six sites. Ten soil cores were sampled randomly in each site, soil samples were taken from the depths of 0-20 and 20-40 cm, and the clay fraction of each of the sixty soil samples was measured in the laboratory. Clay content was predicted using three different sets of inputs, geophysical data, ancillary data, and a combination of both, fed to multiple linear regression (MLR) and the decision-tree-based Chi-Squared Automatic Interaction Detection (CHAID) algorithm. The results of the CHAID and MLR models with all combined data showed that geophysical data were the most important variables for the prediction of clay content at both depths in the study area. The proposed MLR model, using the combined data, could explain only 44% and 31% of the total variability of clay content in the 0-20 and 20-40 cm depths, respectively. The coefficient of determination (R2) values for clay content prediction using the constructed CHAID model with the combined data were 0.82 and 0.76 for the 0-20 and 20-40 cm depths, respectively. CHAID models therefore showed a greater potential in predicting soil clay content from geophysical and ancillary data, while traditional regression methods (i.e. the MLR models) did not perform as well. Overall, the results may encourage researchers to use georeferenced GPR and EMI data as ancillary variables together with the CHAID algorithm to improve the estimation of soil clay content.
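    CHAID selects splits with a chi-squared test of independence between a candidate predictor (after binning) and the target. The core test statistic is shown below; the category-merging logic and Bonferroni adjustment of full CHAID are omitted, and the example table is invented:

    ```python
    def chi_squared(table):
        """Pearson chi-squared statistic for a contingency table
        (rows: predictor categories, columns: target classes)."""
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        total = sum(row_totals)
        stat = 0.0
        for i, row in enumerate(table):
            for j, observed in enumerate(row):
                expected = row_totals[i] * col_totals[j] / total
                stat += (observed - expected) ** 2 / expected
        return stat

    # Independent table -> statistic 0; strong association -> large statistic
    print(chi_squared([[10, 10], [10, 10]]))   # 0.0
    print(chi_squared([[20, 0], [0, 20]]))     # 40.0
    ```

    At each node, CHAID splits on the predictor with the most significant statistic, which is how predictors like the GPR and EMI measurements end up ranked by importance.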

  12. Combining Early Coagulation and Inflammatory Status Improves Prediction of Mortality in Burned and Nonburned Trauma Patients

    DTIC Science & Technology

    2008-02-01

    clinician to distinguish between the effects of treatment and the effects of disease. Several different prediction models for multiple organ failure...treatment protocols and allow a clinician to distinguish the effect of treatment from effect of disease. In this study, our model predicted in...TNF produces a decrease in protein C activation by downregulating the expression of endothelial cell protein C receptor and thrombomodulin, both of

  13. Correlation of sensory bitterness in dairy protein hydrolysates: Comparison of prediction models built using sensory, chromatographic and electronic tongue data.

    PubMed

    Newman, J; Egan, T; Harbourne, N; O'Riordan, D; Jacquier, J C; O'Sullivan, M

    2014-08-01

    Sensory evaluation can be problematic for ingredients with a bitter taste during the research and development phase of new food products. In this study, 19 dairy protein hydrolysates (DPH) were analysed by an electronic tongue and characterized physicochemically; the data obtained from these methods were correlated with bitterness intensity as scored by a trained sensory panel, and each model was also assessed by its predictive capabilities. The physicochemical characteristics of the DPHs investigated were degree of hydrolysis (DH%) and data relating to peptide size and relative hydrophobicity from size exclusion chromatography (SEC) and reverse phase (RP) HPLC. Partial least squares (PLS) regression was used to construct the prediction models. All PLS regressions had good correlations (0.78 to 0.93), with the strongest being the combination of data obtained from SEC and RP HPLC. However, the PLS model with the strongest predictive power was based on the e-tongue, which had the PLS regression with the lowest root mean predicted residual error sum of squares (PRESS) in the study. The results show that the PLS models constructed with the e-tongue and with the combination of SEC and RP-HPLC have potential to be used for prediction of bitterness, thus reducing the reliance on sensory analysis of DPHs in future food research. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Fetal hemoglobin, α1-microglobulin and hemopexin are potential predictive first trimester biomarkers for preeclampsia.

    PubMed

    Anderson, Ulrik Dolberg; Gram, Magnus; Ranstam, Jonas; Thilaganathan, Basky; Kerström, Bo; Hansson, Stefan R

    2016-04-01

    Overproduction of cell-free fetal hemoglobin (HbF) in the preeclamptic placenta has recently been implicated as a new etiological factor of preeclampsia. In this study, maternal serum levels of HbF and the endogenous hemoglobin/heme scavenging systems were evaluated as predictive biomarkers for preeclampsia in combination with uterine artery Doppler ultrasound. This was a case-control study including 433 women in early pregnancy (mean 13.7 weeks of gestation), of whom 86 subsequently developed preeclampsia. The serum concentrations of HbF, total cell-free hemoglobin, hemopexin, haptoglobin and α1-microglobulin were measured in maternal serum. All patients were examined with uterine artery Doppler ultrasound. Logistic regression models were developed, which included the biomarkers, ultrasound indices, and maternal risk factors. There were significantly higher serum concentrations of HbF and α1-microglobulin and significantly lower serum concentrations of hemopexin in patients who later developed preeclampsia. The uterine artery Doppler ultrasound results showed significantly higher pulsatility index values in the preeclampsia group. The optimal prediction model was obtained by combining HbF, α1-microglobulin and hemopexin with the maternal characteristics parity, diabetes and pre-pregnancy hypertension. The optimal sensitivity for all preeclampsia was 60% at 95% specificity. Overproduction of placentally derived HbF and depletion of hemoglobin/heme scavenging mechanisms are involved in the pathogenesis of preeclampsia. The combination of HbF and α1-microglobulin and/or hemopexin may serve as a prediction model for preeclampsia in combination with maternal risk factors and/or uterine artery Doppler ultrasound. Copyright © 2016 International Society for the Study of Hypertension in Pregnancy. Published by Elsevier B.V. All rights reserved.

  15. Spatiotemporal prediction of fine particulate matter during the 2008 northern California wildfires using machine learning.

    PubMed

    Reid, Colleen E; Jerrett, Michael; Petersen, Maya L; Pfister, Gabriele G; Morefield, Philip E; Tager, Ira B; Raffuse, Sean M; Balmes, John R

    2015-03-17

    Estimating population exposure to particulate matter during wildfires can be difficult because of insufficient monitoring data to capture the spatiotemporal variability of smoke plumes. Chemical transport models (CTMs) and satellite retrievals provide spatiotemporal data that may be useful in predicting PM2.5 during wildfires. We estimated PM2.5 concentrations during the 2008 northern California wildfires using 10-fold cross-validation (CV) to select an optimal prediction model from a set of 11 statistical algorithms and 29 predictor variables. The variables included CTM output, three measures of satellite aerosol optical depth, distance to the nearest fires, meteorological data, and land use, traffic, spatial location, and temporal characteristics. The generalized boosting model (GBM) with 29 predictor variables had the lowest CV root mean squared error and a CV-R2 of 0.803. The most important predictor variable was the Geostationary Operational Environmental Satellite Aerosol/Smoke Product (GASP) Aerosol Optical Depth (AOD), followed by the CTM output and distance to the nearest fire cluster. Parsimonious models with various combinations of fewer variables also predicted PM2.5 well. Using machine learning algorithms to combine spatiotemporal data from satellites and CTMs can reliably predict PM2.5 concentrations during a major wildfire event.

  16. Development and validation of a simulation method, PeCHREM, for evaluating spatio-temporal concentration changes of paddy herbicides in rivers.

    PubMed

    Imaizumi, Yoshitaka; Suzuki, Noriyuki; Shiraishi, Fujio; Nakajima, Daisuke; Serizawa, Shigeko; Sakurai, Takeo; Shiraishi, Hiroaki

    2018-01-24

    In pesticide risk management in Japan, predicted environmental concentrations are estimated by a tiered approach, and the Ministry of the Environment also performs field surveys to confirm the maximum concentrations of pesticides with risk concerns. To contribute to more efficient and effective field surveys, we developed the Pesticide Chemicals High Resolution Estimation Method (PeCHREM) for estimating spatially and temporally variable emissions of various paddy herbicides from paddy fields to the environment. We used PeCHREM and the G-CIEMS multimedia environmental fate model to predict day-to-day environmental concentration changes of 25 herbicides throughout Japan. To validate the PeCHREM/G-CIEMS model, we also conducted a field survey, in which river waters were sampled at least once every two weeks at seven sites in six prefectures from April to July 2009. In 20 of 139 sampling site-herbicide combinations in which herbicides were detected in at least three samples, all observed concentrations differed from the corresponding prediction by less than one order of magnitude. We also compared peak concentrations and the dates on which the concentrations reached peak values (peak dates) between predictions and observations. The peak concentration differences between predictions and observations were less than one order of magnitude in 66% of the 166 sampling site-herbicide combinations in which herbicide was detected in river water. The observed and predicted peak dates differed by less than two weeks in 79% of these 166 combinations. These results confirm that the PeCHREM/G-CIEMS model can improve the efficiency and effectiveness of surveys by predicting the peak concentrations and peak dates of various herbicides.

  17. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that accounts for uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
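    In BMA, each ensemble member receives a weight reflecting its posterior model probability given the training observations, and the combined forecast is the weighted mean. A full implementation estimates weights and member variances by EM; the heavily simplified sketch below weights members by a Gaussian likelihood instead, and all forecasts and observations are invented:

    ```python
    import math

    def bma_weights(predictions, observations, sigma=1.0):
        """Very simplified BMA weighting: score each model by its Gaussian
        likelihood on training observations (a real implementation uses EM)."""
        likelihoods = []
        for preds in predictions:
            log_lik = sum(-(p - o) ** 2 / (2 * sigma ** 2)
                          for p, o in zip(preds, observations))
            likelihoods.append(math.exp(log_lik))
        total = sum(likelihoods)
        return [lik / total for lik in likelihoods]

    def bma_combine(predictions, weights):
        """Deterministic BMA forecast: weighted mean of member forecasts."""
        return [sum(w * preds[t] for w, preds in zip(weights, predictions))
                for t in range(len(predictions[0]))]

    obs = [2.0, 3.0, 4.0]
    models = [[2.1, 3.1, 4.1],    # close to observations -> large weight
              [4.0, 5.0, 6.0]]    # biased -> small weight
    w = bma_weights(models, obs)
    print(w[0] > 0.9)             # True: the accurate member dominates
    print(bma_combine(models, w))
    ```

    Because the weights are data-driven rather than equal, a biased member is automatically down-weighted, which is why the BMA probabilistic map in the record outperforms the equal-weights map.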

  18. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, an adaptive neuro-fuzzy inference system (ANFIS) and an artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level; the optimal combination was found to comprise the current sea level together with the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose, for all prediction intervals.
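    The triangular membership function that proved optimal for the ANFIS models has a simple closed form; a sketch with illustrative parameters (the normalized sea-level fuzzy set below is invented for demonstration):

    ```python
    def trimf(x, a, b, c):
        """Triangular membership function: 0 at a and c, 1 at the peak b
        (requires a <= b <= c)."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising edge
        return (c - x) / (c - b)       # falling edge

    # Membership of a normalized sea-level value in a fuzzy set peaking at 0.5
    print(trimf(0.5, 0.0, 0.5, 1.0))   # 1.0
    print(trimf(0.25, 0.0, 0.5, 1.0))  # 0.5
    ```

    An ANFIS layer evaluates several such functions per input to obtain fuzzy rule firing strengths, and training adjusts the parameters a, b, c along with the rule consequents.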

  19. Development of Standard Fuel Models in Boreal Forests of Northeast China through Calibration and Validation

    PubMed Central

    Cai, Longyan; He, Hong S.; Wu, Zhiwei; Lewis, Benard L.; Liang, Yu

    2014-01-01

    Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performances of these fuel models have not been tested against historical occurrences of wildfires. Consequently, the applicability of these models requires further investigation. Thus, this paper aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the fuel model parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. The two most sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined (uncalibrated) fuel models. FARSITE was shown to yield an unrealistic prediction of the historical fire. However, the calibrated fuel models significantly improved the capabilities of the fuel models to predict the actual fire, with an accuracy of 89%. Validation results also showed that the model can estimate the actual fires with an accuracy exceeding 56% by using the calibrated fuel models. Therefore, these fuel models can be efficiently used to calculate fire behaviors, which can be helpful in forest fire management. PMID:24714164

  20. Prediction of mutagenic toxicity by combination of Recursive Partitioning and Support Vector Machines.

    PubMed

    Liao, Quan; Yao, Jianhua; Yuan, Shengang

    2007-05-01

    Predicting toxicity is important and necessary because measuring toxicity is typically time-consuming and expensive. In this paper, the Recursive Partitioning (RP) method was used to select descriptors. RP and Support Vector Machines (SVM) were then each used to construct a structure-toxicity relationship model: an RP model and an SVM model, respectively. The performances of the two models differ. The prediction accuracies of the RP model are 80.2% for mutagenic compounds in MDL's toxicity database, 83.4% for compounds in CMC, and 84.9% for agrochemicals in an in-house database. Those of the SVM model are 81.4%, 87.0% and 87.3%, respectively.

  1. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    NASA Astrophysics Data System (ADS)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  2. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    PubMed

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex model. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test the 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability was <0.5 from the observed response. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco and Fixed C50 Hierarchy models (C50 being the alfentanil concentration required for 50% of patients to achieve the targeted response) performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc, with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γ_alf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying degrees of stimulation.

  3. Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.

    The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that model behavior in the case of combined instability is to predict a mixing width that is a linear combination of Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.

  4. Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing

    DOE PAGES

    Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.

    2018-01-10

    The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that model behavior in the case of combined instability is to predict a mixing width that is a linear combination of Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.

  5. Return to Work After Lumbar Microdiscectomy - Personalizing Approach Through Predictive Modeling.

    PubMed

    Papić, Monika; Brdar, Sanja; Papić, Vladimir; Lončar-Turukalo, Tatjana

    2016-01-01

    Lumbar disc herniation (LDH) is the most common disease among the working population requiring surgical intervention. This study aims to predict the return to work after operative treatment of LDH, based on an observational study including 153 patients. The classification problem was approached using decision trees (DT), support vector machines (SVM) and a multilayer perceptron (MLP) combined with the RELIEF algorithm for feature selection. MLP provided the best recall, 0.86, for the class of patients not returning to work, which, combined with the selected features, enables early identification of and personalized targeted interventions for subjects at risk of prolonged disability. The predictive modeling identified the most decisive risk factors for prolonged work absence: psychosocial factors, mobility of the spine, structural changes of the facet joints, and professional factors including standing, sitting and microclimate.

  6. Predictive Value of Upper Limb Muscles and Grasp Patterns on Functional Outcome in Cervical Spinal Cord Injury.

    PubMed

    Velstra, Inge-Marie; Bolliger, Marc; Krebs, Jörg; Rietman, Johan S; Curt, Armin

    2016-05-01

    To determine which single or combined upper limb muscles, as defined by the International Standards for the Neurological Classification of Spinal Cord Injury (ISNCSCI) upper extremity motor score (UEMS) and the Graded Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP), best predict upper limb function and independence in activities of daily living (ADLs), and to assess the predictive value of qualitative grasp movements (QlG) on upper limb function in individuals with acute tetraplegia. As part of a Europe-wide, prospective, longitudinal, multicenter study, ISNCSCI, GRASSP, and Spinal Cord Independence Measure (SCIM III) scores were recorded at 1 and 6 months after SCI. For prediction of upper limb function and ADLs, a logistic regression model and an unbiased recursive partitioning conditional inference tree (URP-CTREE) were used. Logistic regression and URP-CTREE revealed that a combination of ISNCSCI and GRASSP muscles (to a maximum of 4) demonstrated the best prediction (specificity and sensitivity ranged from 81.8% to 96.0%) of upper limb function and identified homogeneous outcome cohorts at 6 months. The URP-CTREE model with the QlG predictors for upper limb function showed similar results. Prediction of upper limb function can be achieved through a combination of defined, specific upper limb muscles assessed in the ISNCSCI and GRASSP. A combination of a limited number of proximal and distal muscles, along with an assessment of grasping movements, can be applied in clinical decision making for rehabilitation interventions and clinical trials. © The Author(s) 2015.

  7. Spatial Prediction of Coxiella burnetii Outbreak Exposure via Notified Case Counts in a Dose-Response Model.

    PubMed

    Brooke, Russell J; Kretzschmar, Mirjam E E; Hackert, Volker; Hoebe, Christian J P A; Teunis, Peter F M; Waller, Lance A

    2017-01-01

    We develop a novel approach to study an outbreak of Q fever in 2009 in the Netherlands by combining a human dose-response model with geostatistical prediction to relate the probability of infection and the associated probability of illness to an effective dose of Coxiella burnetii. The spatial distribution of the 220 notified cases in the at-risk population is translated into a smooth spatial field of dose. Based on these symptomatic cases, the dose-response model predicts a median of 611 asymptomatic infections (95% range: 410, 1,084) for the 220 reported symptomatic cases in the at-risk population; 2.78 (95% range: 1.86, 4.93) asymptomatic infections for each reported case. Low attack rates were observed during the outbreak (numerical range given in the full-text article). The estimated peak levels of exposure extend to the north-east from the point source, with an increasing proportion of asymptomatic infections further from the source. Our work combines established methodology from model-based geostatistics and dose-response modeling, allowing for a novel approach to study outbreaks. Unobserved infections and the spatially varying effective dose can be predicted using the flexible framework without assuming any underlying spatial structure of the outbreak process. Such predictions are important for targeting interventions during an outbreak, estimating future disease burden, and determining acceptable risk levels.
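    The asymptomatic-to-symptomatic accounting above can be illustrated with a toy calculation; the single-hit exponential dose-response form and the parameter values below are assumed conventions for illustration, not the authors' fitted model:

```python
import math

def p_infection(dose, r=0.01):
    # single-hit exponential dose-response model (assumed form)
    return 1.0 - math.exp(-r * dose)

def asymptomatic_per_case(p_inf, p_ill_given_inf):
    # unseen (asymptomatic) infections implied by each reported symptomatic case
    p_ill = p_inf * p_ill_given_inf
    return (p_inf - p_ill) / p_ill

# e.g. if roughly a quarter of infections become symptomatic, each reported
# case implies about (1 - 0.26) / 0.26 ~ 2.8 asymptomatic infections, the
# same scale as the 2.78 per-case figure quoted above
print(round(asymptomatic_per_case(p_infection(50.0), 0.26), 2))
```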

  8. Quantitative prediction of drug side effects based on drug-related features.

    PubMed

    Niu, Yanqing; Zhang, Wen

    2017-09-01

    Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
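    A minimal sketch of the scoring scheme described above, with made-up profiles, randomized weights (as in the paper's simulations) and per-feature predictions; the paper's actual features and models are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_drugs, n_effects = 5, 20
profiles = rng.integers(0, 2, size=(n_drugs, n_effects))  # presence/absence of side effects
weights = rng.uniform(0.1, 1.0, size=n_effects)           # empirical severity weights (randomized)
scores = profiles @ weights                               # quantitative score per drug

# average scoring ensemble: mean of predictions from three feature-specific models
# (here simulated as the true score plus independent noise)
pred_substructure = scores + rng.normal(scale=0.5, size=n_drugs)
pred_targets      = scores + rng.normal(scale=0.5, size=n_drugs)
pred_indications  = scores + rng.normal(scale=0.5, size=n_drugs)
ensemble = (pred_substructure + pred_targets + pred_indications) / 3.0
print(np.round(ensemble - scores, 2))
```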

  9. Ngram time series model to predict activity type and energy cost from wrist, hip and ankle accelerometers: implications of age

    PubMed Central

    Strath, Scott J; Kate, Rohit J; Keenan, Kevin G; Welch, Whitney A; Swartz, Ann M

    2016-01-01

    To develop and test time series single-site and multi-site placement models, we used processed accelerometer data from the wrist, hip and ankle to estimate energy cost and type of physical activity in adults. Ninety-nine subjects in three age groups (18–39, 40–64, 65+ years) performed 11 activities while wearing three triaxial accelerometers: one each on the non-dominant wrist, hip, and ankle. During each activity, net oxygen cost (METs) was assessed. The time series of accelerometer signals were represented in terms of uniformly discretized values called bins. A support vector machine was used for activity classification, with bins and every pair of bins used as features. Bagged decision tree regression was used for net metabolic cost prediction. To evaluate model performance we employed the jackknife leave-one-out cross-validation method. Single-accelerometer and multi-accelerometer site model estimates across and within age groups revealed similar accuracy, with a bias range of −0.03 to 0.01 METs, a bias percent of −0.8 to 0.3%, and an rMSE range of 0.81–1.04 METs. Multi-site accelerometer location models improved activity type classification over single-site location models, from a low of 69.3% to a maximum of 92.8% accuracy. For each accelerometer site location model, or combined site location model, percent classification accuracy decreased as a function of age group, or when young-age-group models were generalized to older age groups. Age-group-specific models on average performed better than models combining all age groups. The time series computation shows promising results for predicting energy cost and activity type. Differences in prediction across age groups, the lack of generalizability across age groups, and the better performance of age-group-specific models need to be considered as analytic calibration procedures to detect energy cost and activity type are further developed. PMID:26449155
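    Assuming a straightforward reading of the "bins" representation (uniform discretization of the signal, with single bins and adjacent bin pairs as features), a sketch might look like:

```python
import numpy as np
from collections import Counter

def bin_features(signal, k=4):
    # discretize each sample into one of k uniform bins over the signal's range
    lo, hi = signal.min(), signal.max()
    bins = np.clip(((signal - lo) / (hi - lo + 1e-12) * k).astype(int), 0, k - 1)
    unigrams = Counter(bins.tolist())                              # single-bin counts
    bigrams = Counter(zip(bins[:-1].tolist(), bins[1:].tolist()))  # adjacent bin pairs
    return unigrams, bigrams

rng = np.random.default_rng(2)
sig = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.normal(size=200)
uni, bi = bin_features(sig)
print(sum(uni.values()), sum(bi.values()))
```

    These count dictionaries would then be vectorized as input features for the classifier.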

  10. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    Computer code makes programs compatible so stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.

  11. Predicting site locations for biomass using facilities with Bayesian methods

    Treesearch

    Timothy M. Young; James H. Perdue; Xia Huang

    2017-01-01

    Logistic regression models combined with Bayesian inference were developed to predict locations and quantify factors that influence the siting of biomass-using facilities that use woody biomass in the Southeastern United States. Predictions were developed for two groups of mills, one representing larger capacity mills similar to pulp and paper mills (Group II...

  12. An automated decision-tree approach to predicting protein interaction hot spots.

    PubMed

    Darnell, Steven J; Page, David; Mitchell, Julie C

    2007-09-01

    Protein-protein interactions can be altered by mutating one or more "hot spots," the subset of residues that account for most of the interface's binding free energy. The identification of hot spots requires a significant experimental effort, highlighting the practical value of hot spot predictions. We present two knowledge-based models that improve the ability to predict hot spots: K-FADE uses shape specificity features calculated by the Fast Atomic Density Evaluation (FADE) program, and K-CON uses biochemical contact features. The combined K-FADE/CON (KFC) model displays better overall predictive accuracy than computational alanine scanning (Robetta-Ala). In addition, because these methods predict different subsets of known hot spots, a large and significant increase in accuracy is achieved by combining KFC and Robetta-Ala. The KFC analysis is applied to the calmodulin (CaM)/smooth muscle myosin light chain kinase (smMLCK) interface, and to the bone morphogenetic protein-2 (BMP-2)/BMP receptor-type I (BMPR-IA) interface. The results indicate a strong correlation between KFC hot spot predictions and mutations that significantly reduce the binding affinity of the interface. 2007 Wiley-Liss, Inc.
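    A toy illustration of why combining predictors that recover different hot-spot subsets raises overall accuracy, as reported above for KFC plus Robetta-Ala: the union of two methods' positive calls covers more true hot spots. The residue labels and calls below are fabricated:

```python
# fabricated ground truth and method calls for illustration only
true_hot = {"R12", "Y57", "W103", "K8"}
kfc_pred = {"R12", "Y57", "F90"}        # shape/contact-feature-based calls
robetta_pred = {"W103", "K8", "R12"}    # alanine-scanning-based calls

def recall(pred):
    # fraction of true hot spots recovered by a predictor
    return len(pred & true_hot) / len(true_hot)

combined = kfc_pred | robetta_pred      # union of the two methods' calls
print(recall(kfc_pred), recall(robetta_pred), recall(combined))
```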

  13. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  14. Toward Big Data Analytics: Review of Predictive Models in Management of Diabetes and Its Complications.

    PubMed

    Cichosz, Simon Lebech; Johansen, Mette Dencker; Hejlesen, Ole

    2015-10-14

    Diabetes is one of the top priorities in medical science and health care management, and an abundance of data and information is available on these patients. Whether data stem from statistical models or complex pattern recognition models, they may be fused into predictive models that combine patient information and prognostic outcome results. Such knowledge could be used in clinical decision support, disease surveillance, and public health management to improve patient care. Our aim was to review the literature and give an introduction to predictive models in screening for and the management of prevalent short- and long-term complications in diabetes. Predictive models have been developed for management of diabetes and its complications, and the number of publications on such models has been growing over the past decade. Often multiple logistic regression or a similar linear model is used for prediction model development, possibly owing to its transparent functionality. Ultimately, for prediction models to prove useful, they must demonstrate impact, namely, their use must generate better patient outcomes. Although extensive effort has been put into building these predictive models, there is a remarkable scarcity of impact studies. © 2015 Diabetes Technology Society.

  15. Use of multivariate linear regression and support vector regression to predict functional outcome after surgery for cervical spondylotic myelopathy.

    PubMed

    Hoffman, Haydn; Lee, Sunghoon I; Garst, Jordan H; Lu, Derek S; Li, Charles H; Nagasawa, Daniel T; Ghalehsari, Nima; Jahanforouz, Nima; Razaghy, Mehrdad; Espinal, Marie; Ghavamrezaii, Amir; Paak, Brian H; Wu, Irene; Sarrafzadeh, Majid; Lu, Daniel C

    2015-09-01

    This study introduces the use of multivariate linear regression (MLR) and support vector regression (SVR) models to predict postoperative outcomes in a cohort of patients who underwent surgery for cervical spondylotic myelopathy (CSM). Currently, predicting outcomes after surgery for CSM remains a challenge. We recruited patients who had a diagnosis of CSM and required decompressive surgery with or without fusion. Fine motor function was tested preoperatively and postoperatively with a handgrip-based tracking device that has been previously validated, yielding mean absolute accuracy (MAA) results for two tracking tasks (sinusoidal and step). All patients completed Oswestry disability index (ODI) and modified Japanese Orthopaedic Association questionnaires preoperatively and postoperatively. Preoperative data were utilized in MLR and SVR models to predict postoperative ODI. Predictions were compared to the actual ODI scores with the coefficient of determination (R²) and mean absolute difference (MAD). In total, 20 patients met the inclusion criteria and completed follow-up at least 3 months after surgery. With the MLR model, a combination of the preoperative ODI score, preoperative MAA (step function), and symptom duration yielded the best prediction of postoperative ODI (R² = 0.452; MAD = 0.0887; p = 1.17 × 10⁻³). With the SVR model, a combination of preoperative ODI score, preoperative MAA (sinusoidal function), and symptom duration yielded the best prediction of postoperative ODI (R² = 0.932; MAD = 0.0283; p = 5.73 × 10⁻¹²). The SVR model was more accurate than the MLR model. The SVR can be used preoperatively in risk/benefit analysis and the decision to operate. Copyright © 2015 Elsevier Ltd. All rights reserved.
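    The MLR-versus-SVR comparison can be sketched on synthetic data; the three predictors below merely stand in for preoperative ODI, MAA and symptom duration, and the mild nonlinearity is invented to show where SVR can outperform a linear fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 100
X = rng.uniform(size=(n, 3))  # stand-ins: [preop ODI, preop MAA, symptom duration]
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2] + rng.normal(scale=0.02, size=n)

half = n // 2
mlr = LinearRegression().fit(X[:half], y[:half])
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:half], y[:half])

results = {}
for name, model in [("MLR", mlr), ("SVR", svr)]:
    pred = model.predict(X[half:])
    # report R^2 and mean absolute difference (MAD), the paper's two metrics
    results[name] = (r2_score(y[half:], pred), float(np.mean(np.abs(pred - y[half:]))))
print(results)
```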

  16. Research on Nonlinear Time Series Forecasting of Time-Delay NN Embedded with Bayesian Regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Weijin; Xu, Yusheng; Xu, Yuhui; Wang, Jianmin

    Based on the idea of nonlinear prediction via phase space reconstruction, this paper presents a time-delay BP neural network model whose generalization capability is improved by Bayesian regularization. The model is applied to forecast the import and export trade of one industry. The results showed that the improved model has excellent generalization capability: it not only learned the historical curve, but efficiently predicted the trend of the business. Compared with common forecast evaluation, we conclude that nonlinear forecasting can not only focus on data combination and precision improvement, but can also vividly reflect the nonlinear characteristics of the forecasting system. In analyzing the forecasting precision of the model, we give a model judgment by calculating the nonlinear characteristic values of the combined series and the original series, proving that the forecasting model can reasonably 'catch' the dynamic characteristics of the nonlinear system that produced the original series.

  17. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    The third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation with the individual method models and the combined model (TMD and TME) based on the classic regressions of multiple linear and principal component analysis. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and Olze were employed to stage the TMD and TME, respectively. The data were divided to develop three respective models based on the two regressions of multiple linear and principal component analysis. The trained models were then validated on the test sample and the accuracy of age prediction was compared between the models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regressions of the combined model compared to the individual models. An overall decrease in RMSE was detected in the combined model compared to TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE except in males. Dental age is better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
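    A sketch of the individual-versus-combined comparison via adjusted R²; the TMD/TME stage values and ages below are simulated, not the Malay sample, and the stage-to-age relationship is invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_r2(model, X, y):
    # adjusted R^2 penalizes the extra predictor in the combined model
    n, p = X.shape
    r2 = model.score(X, y)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(4)
n = 200
tmd = rng.integers(1, 11, size=n).astype(float)  # simulated development stage
tme = rng.integers(0, 5, size=n).astype(float)   # simulated eruption stage
age = 14 + 0.7 * tmd + 0.5 * tme + rng.normal(scale=1.0, size=n)

m_tmd = LinearRegression().fit(tmd[:, None], age)
m_combined = LinearRegression().fit(np.column_stack([tmd, tme]), age)
a_tmd = adjusted_r2(m_tmd, tmd[:, None], age)
a_combined = adjusted_r2(m_combined, np.column_stack([tmd, tme]), age)
print(round(a_tmd, 3), round(a_combined, 3))
```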

  18. Geostatistical estimation of forest biomass in interior Alaska combining Landsat-derived tree cover, sampled airborne lidar and field observations

    NASA Astrophysics Data System (ADS)

    Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik

    2018-06-01

    The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area, a key advance to support diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.

  19. Combined Endoscopic/Sonographic-Based Risk Matrix Model for Predicting One-Year Risk of Surgery: A Prospective Observational Study of a Tertiary Center Severe/Refractory Crohn's Disease Cohort.

    PubMed

    Rispo, Antonio; Imperatore, Nicola; Testa, Anna; Bucci, Luigi; Luglio, Gaetano; De Palma, Giovanni Domenico; Rea, Matilde; Nardone, Olga Maria; Caporaso, Nicola; Castiglione, Fabiana

    2018-03-08

    In the management of Crohn's Disease (CD) patients, having a simple score combining clinical, endoscopic and imaging features to predict the risk of surgery could help to tailor treatment more effectively. Aims: to prospectively evaluate the one-year risk factors for surgery in refractory/severe CD and to generate a risk matrix for predicting the probability of surgery at one year. CD patients needing a disease re-assessment at our tertiary IBD centre underwent clinical, laboratory, endoscopy and bowel sonography (BS) examinations within one week. The optimal cut-off values in predicting surgery were identified using ROC curves for the Simple Endoscopic Score for CD (SES-CD), bowel wall thickness (BWT) at BS, and small bowel CD extension at BS. Binary logistic regression and Cox's regression were then carried out. Finally, the probabilities of surgery were calculated for selected baseline levels of covariates and the results were arranged in a prediction matrix. Of 100 CD patients, 30 underwent surgery within one year. SES-CD ≥ 9 (OR 15.3; p<0.001), BWT ≥ 7 mm (OR 15.8; p<0.001), small bowel CD extension at BS ≥ 33 cm (OR 8.23; p<0.001) and stricturing/penetrating behavior (OR 4.3; p<0.001) were the only independent factors predictive of surgery at one year based on binary logistic and Cox's regressions. Our matrix model combined these risk factors, and the probability of surgery ranged from 0.48% to 87.5% (sixteen combinations). Our risk matrix combining clinical, endoscopic and ultrasonographic findings can accurately predict the one-year risk of surgery in patients with severe/refractory CD requiring a disease re-evaluation. This tool could be of value in clinical practice, serving as the basis for a tailored management of CD patients.

  20. An enhanced Petri-net model to predict synergistic effects of pairwise drug combinations from gene microarray data.

    PubMed

    Jin, Guangxu; Zhao, Hong; Zhou, Xiaobo; Wong, Stephen T C

    2011-07-01

    Prediction of the synergistic effects of drug combinations has traditionally relied on phenotypic response data. However, such methods cannot be used to identify the molecular signaling mechanisms of synergistic drug combinations. In this article, we propose an enhanced Petri-net (EPN) model to recognize the synergistic effects of drug combinations from molecular response profiles, i.e. drug-treated microarray data. We addressed the downstream signaling network of the targets for the two individual drugs used in the pairwise combinations and applied EPN to the identified targeted signaling network. In EPN, drugs and signaling molecules are assigned to different types of places, while drug doses and molecular expressions are denoted by color tokens. The changes in molecular expression caused by drug treatments are simulated by two actions of EPN: firing and blasting. Firing transits the drug and molecule tokens from one node or place to another, and blasting reduces the number of molecule tokens by drug tokens in a molecule node. The goal of EPN is to mediate the state characterized by the control condition without any treatment to that of treatment and to depict the drug effects on molecules by the drug tokens. We applied EPN to our generated pairwise drug combination microarray data. The synergistic predictions using EPN are consistent with those predicted using phenotypic response data. The molecules responsible for the synergistic effects, with their associated feedback loops, display the mechanisms of synergism. The software, implemented in the Python 2.7 programming language, is available upon request: stwong@tmhs.org.

  1. Predicting FLDs Using a Multiscale Modeling Scheme

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  2. Combining inferred regulatory and reconstructed metabolic networks enhances phenotype prediction in yeast.

    PubMed

    Wang, Zhuo; Danziger, Samuel A; Heavner, Benjamin D; Ma, Shuyi; Smith, Jennifer J; Li, Song; Herricks, Thurston; Simeonidis, Evangelos; Baliga, Nitin S; Aitchison, John D; Price, Nathan D

    2017-05-01

    Gene regulatory and metabolic network models have been used successfully in many organisms, but inherent differences between them make networks difficult to integrate. Probabilistic Regulation Of Metabolism (PROM) provides a partial solution, but it does not incorporate network inference and underperforms in eukaryotes. We present an Integrated Deduced And Metabolism (IDREAM) method that combines statistically inferred Environment and Gene Regulatory Influence Network (EGRIN) models with the PROM framework to create enhanced metabolic-regulatory network models. We used IDREAM to predict phenotypes and genetic interactions between transcription factors and genes encoding metabolic activities in the eukaryote, Saccharomyces cerevisiae. IDREAM models contain many fewer interactions than PROM and yet produce significantly more accurate growth predictions. IDREAM consistently outperformed PROM using any of three popular yeast metabolic models and across three experimental growth conditions. Importantly, IDREAM's enhanced accuracy makes it possible to identify subtle synthetic growth defects. With experimental validation, these novel genetic interactions involving the pyruvate dehydrogenase complex suggested a new role for fatty acid-responsive factor Oaf1 in regulating acetyl-CoA production in glucose grown cells.

  3. Building and verifying a severity prediction model of acute pancreatitis (AP) based on BISAP, MEWS and routine test indexes.

    PubMed

    Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei

    2017-10-01

    To discuss the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), Modified Early Warning Score (MEWS), serum Ca2+ (similarly hereinafter), and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis, and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and conducted regression analyses on the relationships of BISAP score, RDW, MEWS, and serum Ca2+ with the severity of AP using single-factor logistic regression. The variables with statistical significance in the single-factor logistic regression were used in a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic (ROC) curve was constructed, and the significance of the multi- and single-factor prediction models in predicting the severity of AP was evaluated using the area under the ROC curve (AUC). The internal validity of the model was verified through bootstrapping. Among the 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). According to the single-factor logistic regression analysis, we found that BISAP, MEWS and serum Ca2+ are prediction indexes of the severity of AP (P-value<0.001), whereas RDW is not a prediction index of AP severity (P-value>0.05). The multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent prediction indexes of AP severity (P-value<0.001), and MEWS is not an independent prediction index of AP severity (P-value>0.05); BISAP is negatively related to serum Ca2+ (r=-0.330, P-value<0.001). The constructed model is as follows: ln(P/(1-P)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+. The predictive ability of each model for SAP follows the order: combined BISAP and serum Ca2+ prediction model > serum Ca2+ > BISAP. There is no statistically significant difference between the predictive abilities of BISAP and serum Ca2+ (P-value>0.05); however, the predictive ability of the newly built prediction model differs with remarkable statistical significance from that of BISAP and serum Ca2+ individually (P-value<0.01). Verification of the internal validity of the models by bootstrapping is favorable. BISAP and serum Ca2+ have high predictive value for the severity of AP. However, the model built by combining BISAP and serum Ca2+ is remarkably superior to BISAP and serum Ca2+ individually. Furthermore, this model is simple, practical and appropriate for clinical use. Copyright © 2016. Published by Elsevier Masson SAS.
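    Reading the published equation as a standard logistic log-odds (an interpretation, since the argument of ln() is elided in the abstract), the predicted probability of SAP follows by inverting the logit; the example inputs below are illustrative, not patient data:

```python
import math

def p_sap(bisap, ca_mmol_l):
    # assumed log-odds reading of the fitted combined model:
    # ln(P/(1-P)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+
    z = 7.306 + 1.151 * bisap - 4.516 * ca_mmol_l
    return 1.0 / (1.0 + math.exp(-z))

# probability rises with BISAP and falls with serum calcium,
# consistent with the signs of the fitted coefficients
print(round(p_sap(0, 2.4), 3), round(p_sap(3, 2.4), 3), round(p_sap(3, 1.8), 3))
```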

  4. Prediction of Continental-Scale Net Ecosystem Carbon Exchange by Combining MODIS and AmeriFlux Data

    NASA Astrophysics Data System (ADS)

    Xiao, J.; Zhuang, Q.

    2007-12-01

    There is growing interest in scaling up net ecosystem exchange (NEE) measured at eddy covariance flux towers to regional scales. Here we used remote sensing data from the MODIS instrument on board NASA's Terra satellite to extrapolate NEE measured at AmeriFlux sites to the continental scale. We combined MODIS data and NEE measurements from a number of AmeriFlux sites with a variety of vegetation types (e.g., forests, grasslands, shrublands, savannas, and croplands) to develop a predictive NEE model using a regression tree approach. The model was trained using 2000-2003 NEE measurements, and the performance of the model was evaluated using independent data over the period 2004-2006. We found that the model predicted NEE with reasonable accuracy at the continental scale. The R-squared values are 0.50 for all vegetation types combined and 0.72 for deciduous forests. We then applied the model to the conterminous U.S. and predicted NEE for each 500 m by 500 m cell over the period 2001-2006. Based on the wall-to-wall NEE estimates, we examined the spatial and temporal distributions of annual NEE and the interannual variability of annual NEE across the conterminous U.S. over the study period (2001-2006). Our scaling-up approach implicitly considered the effects of climate variability, land use/land cover change, disturbances, extreme climate events, and management practices, and thus our annual NEE estimates represent the net carbon fluxes between the terrestrial biosphere and the atmosphere in the conterminous U.S.
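    A minimal regression-tree sketch of the scaling-up idea, with invented MODIS-like predictors (a vegetation index and land surface temperature) in place of the real remote sensing inputs; the data and the NEE relationship are synthetic:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n_sites = 400
evi = rng.uniform(0.1, 0.7, size=n_sites)   # synthetic vegetation index
lst = rng.uniform(270, 310, size=n_sites)   # synthetic land surface temperature (K)
# invented relationship: greener sites take up more carbon (more negative NEE)
nee = -8.0 * evi + 0.05 * (lst - 290) + rng.normal(scale=0.5, size=n_sites)

tree = DecisionTreeRegressor(max_depth=5, random_state=0)
tree.fit(np.column_stack([evi, lst]), nee)

# "wall-to-wall" step in miniature: predict NEE on unobserved grid cells
grid = np.column_stack([np.full(3, 0.5), [280.0, 290.0, 300.0]])
print(np.round(tree.predict(grid), 2))
```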

  5. Rapid determination of biogenic amines in cooked beef using hyperspectral imaging with sparse representation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Lu, Anxiang; Ren, Dong; Wang, Jihua

    2017-11-01

This study explored the feasibility of rapid detection of biogenic amines (BAs) in cooked beef during storage using hyperspectral imaging combined with a sparse representation (SR) algorithm. The hyperspectral images of samples were collected in the two spectral ranges of 400-1000 nm and 1000-1800 nm separately. The dimensionality of the spectral data was reduced with the SR and principal component analysis (PCA) algorithms, and the reduced features were then combined with a least squares support vector machine (LS-SVM) to build SR-LS-SVM and PC-LS-SVM models for predicting BAs values in cooked beef. The results showed that the SR-LS-SVM model exhibited the best predictive ability, with a determination coefficient (RP2) of 0.943 and a root mean square error of prediction (RMSEP) of 1.206 in the 400-1000 nm range on the prediction set. The SR and PCA algorithms were further combined to establish the best SR-PC-LS-SVM model for BAs prediction, which had a high RP2 of 0.969 and a low RMSEP of 1.039 in the 400-1000 nm region. A visual map of the BAs was generated using the best SR-PC-LS-SVM model with imaging process algorithms, allowing the changes of BAs in cooked beef to be observed more intuitively. The study demonstrated that hyperspectral imaging combined with sparse representation can effectively detect BAs values in cooked beef during storage, and that the SR-PC-LS-SVM model has potential for rapid and accurate determination of freshness indexes in other meat and meat products.
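
    The dimensionality-reduction-plus-regression pipeline can be sketched on a synthetic spectral dataset; scikit-learn's PCA and epsilon-SVR stand in here (LS-SVM itself is not in scikit-learn), and the spectra, noise level, and target are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
bands = np.linspace(400, 1000, 250)
ba = rng.uniform(0.0, 20.0, 200)          # synthetic biogenic-amine level
# Synthetic "spectra": a broad feature whose depth scales with the BAs
# level, plus sensor noise (purely illustrative).
spectra = np.exp(-((bands - 700.0) / 80.0) ** 2)[None, :] * ba[:, None]
spectra += rng.normal(0.0, 0.05, spectra.shape)

# PCA compresses the highly collinear bands; SVR then regresses the
# target on the component scores.
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      SVR(kernel="linear", C=10.0))
model.fit(spectra[:150], ba[:150])
pred = model.predict(spectra[150:])
rmsep = float(np.sqrt(np.mean((pred - ba[150:]) ** 2)))
```

    The RMSEP computed on the held-out samples mirrors the prediction-set metric quoted in the abstract.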

  6. Gender dimorphic ACL strain in response to combined dynamic 3D knee joint loading: implications for ACL injury risk.

    PubMed

    Mizuno, Kiyonori; Andrish, Jack T; van den Bogert, Antonie J; McLean, Scott G

    2009-12-01

While gender-based differences in knee joint anatomies/laxities are well documented, the potential for them to precipitate gender-dimorphic ACL loading and resultant injury risk has not been considered. To this end, we generated gender-specific models of ACL strain as a function of an arbitrary six-degrees-of-freedom (6 DOF) knee joint load state via a combined cadaveric and analytical approach. Continuously varying joint forces and torques were applied to five male and five female cadaveric specimens and recorded along with synchronous knee flexion and ACL strain data. All data (approximately 10,000 samples) were submitted to specimen-specific regression analyses, affording ACL strain predictions as a function of the combined 6 DOF knee loads. Following individual model verifications, generalized gender-specific models were generated and subjected to 6 DOF external load scenarios consistent with both a clinical examination and a dynamic sports maneuver. The ensuing model-based strain predictions were subsequently examined for gender-based discrepancies. Male and female specimen-specific models predicted ACL strain within 0.51% +/- 0.10% and 0.52% +/- 0.07% of the measured data respectively, and explained more than 75% of the associated variance in each case. Predicted female ACL strains were also significantly larger than respective male values for both simulated 6 DOF load scenarios. Outcomes suggest that the female ACL will rupture in response to comparatively smaller external load applications. Future work must address the underlying anatomical/laxity contributions to knee joint mechanics and resultant ACL loading, ultimately affording prevention strategies that may cater to individual joint vulnerabilities.

  7. Patient-Customized Drug Combination Prediction and Testing for T-cell Prolymphocytic Leukemia Patients.

    PubMed

    He, Liye; Tang, Jing; Andersson, Emma I; Timonen, Sanna; Koschmieder, Steffen; Wennerberg, Krister; Mustjoki, Satu; Aittokallio, Tero

    2018-05-01

The molecular pathways that drive cancer progression and treatment resistance are highly redundant and variable between individual patients with the same cancer type. To tackle this complex rewiring of pathway cross-talk, personalized combination treatments targeting multiple cancer growth and survival pathways are required. Here we implemented a computational-experimental drug combination prediction and testing (DCPT) platform for efficient in silico prioritization and ex vivo testing in patient-derived samples to identify customized synergistic combinations for individual cancer patients. DCPT used drug-target interaction networks to traverse the massive combinatorial search spaces among 218 compounds (a total of 23,653 pairwise combinations) and identified cancer-selective synergies by using differential single-compound sensitivity profiles between patient cells and healthy controls, hence reducing the likelihood of toxic combination effects. A polypharmacology-based machine learning modeling and network visualization made use of baseline genomic and molecular profiles to guide patient-specific combination testing and clinical translation phases. Using T-cell prolymphocytic leukemia (T-PLL) as a first case study, we show how the DCPT platform successfully predicted distinct synergistic combinations for each of the three T-PLL patients, each presenting with different resistance patterns and synergy mechanisms. In total, 10 of 24 (42%) selective combination predictions were experimentally confirmed to show synergy in patient-derived samples ex vivo. The identified selective synergies among approved drugs, including tacrolimus and temsirolimus combined with BCL-2 inhibitor venetoclax, may offer novel drug repurposing opportunities for treating T-PLL. Significance: An integrated use of functional drug screening combined with genomic and molecular profiling enables patient-customized prediction and testing of drug combination synergies for T-PLL patients. 
Cancer Res; 78(9); 2407-18. ©2018 American Association for Cancer Research (AACR).
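
    Combination synergy of the kind counted above is usually scored against a null model of non-interaction; a minimal sketch using the Bliss independence reference (a generic and common choice, not necessarily the scoring used by the DCPT platform):

```python
def bliss_excess(effect_a: float, effect_b: float, effect_ab: float) -> float:
    """Observed combination effect minus the Bliss-expected effect.

    Effects are fractional inhibitions in [0, 1]. A positive excess
    indicates synergy, a negative excess antagonism.
    """
    # Bliss independence: probability that at least one of two
    # independently acting drugs inhibits the cell.
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_ab - expected

# Hypothetical single-agent and combination readouts: 30% and 40%
# inhibition alone, 70% together -> excess over the 58% expectation.
excess = bliss_excess(0.30, 0.40, 0.70)
```

    Thresholding such an excess score over a dose-response matrix is one standard way to call a pair "synergistic" before ex vivo confirmation.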

  8. Hybrid multiscale modeling and prediction of cancer cell behavior

    PubMed Central

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

Background: Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with pure biological means. Hence, hybrid modeling techniques have been proposed that combine the advantages of the continuum and the discrete methods to model multiscale problems. Methods: In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development with respect to the agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning based on SVR-NSGA-II method, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Results: Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Conclusion: Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper is the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset. PMID:28846712

  9. Hybrid multiscale modeling and prediction of cancer cell behavior.

    PubMed

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with pure biological means. Hence, hybrid modeling techniques have been proposed that combine the advantages of the continuum and the discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development with respect to the agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning based on SVR-NSGA-II method, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper is the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset.

  10. Prediction of Ground Vibration from Freight Trains

    NASA Astrophysics Data System (ADS)

    Jones, C. J. C.; Block, J. R.

    1996-05-01

    Heavy freight trains emit ground vibration with predominant frequency components in the range 4-30 Hz. If the amplitude is sufficient, this may be felt by lineside residents, giving rise to disturbance and concern over possible damage to their property. In order to establish the influence of parameters of the track and rolling stock and thereby enable the design of a low vibration railway, a theoretical model of both the generation and propagation of vibration is required. The vibration is generated as a combination of the effects of dynamic forces, due to the unevenness of the track, and the effects of the track deformation under successive axle loads. A prediction scheme, which combines these effects, has been produced. A vehicle model is used to predict the dynamic forces at the wheels. This includes the non-linear effects of friction damped suspensions. The loaded track profile is measured by using a track recording coach. The dynamic loading and the effects of the moving axles are combined in a track response model. The predicted track vibration is compared to measurements. The transfer functions from the track to a point in the ground can be calculated by using a coupled track and a three-dimensional layered ground model. The propagation effects of the ground layers are important but the computation of the transfer function from each sleeper, which would be required for a phase coherent summation of the vibration in the ground, would be prohibitive. A compromise summation is used and results are compared with measurements.

  11. Combining Traditional Cyber Security Audit Data with Psychosocial Data: Towards Predictive Modeling for Insider Threat Mitigation

    NASA Astrophysics Data System (ADS)

    Greitzer, Frank L.; Frincke, Deborah A.

The purpose of this chapter is to motivate the combination of traditional cyber security audit data with psychosocial data, to support a move from an insider threat detection stance to one that enables prediction of potential insider presence. Two distinctive aspects of the approach are the objective of predicting or anticipating potential risks and the use of organizational data in addition to cyber data to support the analysis. The chapter describes the challenges of this endeavor and reports on progress in defining a usable set of predictive indicators, developing a framework for integrating the analysis of organizational and cyber security data to yield predictions about possible insider exploits, and developing the knowledge base and reasoning capability of the system. We also outline the types of errors that one expects in a predictive system versus a detection system and discuss how those errors can affect the usefulness of the results.

  12. WEPPCAT: An Online tool for assessing and managing the potential impacts of climate change on sediment loading to streams using the Water Erosion Prediction Project (WEPP) Model

    EPA Science Inventory

    WEPPCAT is an on-line tool that provides a flexible capability for creating user-determined climate change scenarios for assessing the potential impacts of climate change on sediment loading to streams using the USDA’s Water Erosion Prediction Project (WEPP) Model. In combination...

  13. Predicting Survival From Large Echocardiography and Electronic Health Record Datasets: Optimization With Machine Learning.

    PubMed

    Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K

    2018-06-09

The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed using multivariate imputation by chained equations (MICE). We compared models versus each other and baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography. 
Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
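
    The nonlinear-versus-linear comparison at the heart of this study can be sketched on synthetic data in which an interaction term carries the signal; scikit-learn stands in for the study's pipeline and every variable below is invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic "cohort": 2000 patients, 5 measurements; the outcome depends
# on a nonlinear interaction between the first two features, which a
# linear model cannot represent directly.
X = rng.normal(size=(2000, 5))
p = 1.0 / (1.0 + np.exp(-(X[:, 0] * X[:, 1] + 0.5 * X[:, 2])))
y = rng.random(2000) < p
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

auc_lr = roc_auc_score(yte, LogisticRegression().fit(Xtr, ytr)
                       .predict_proba(Xte)[:, 1])
auc_rf = roc_auc_score(yte, RandomForestClassifier(random_state=0)
                       .fit(Xtr, ytr).predict_proba(Xte)[:, 1])
```

    On data like this the random forest's AUC exceeds the logistic model's, which is the qualitative pattern the abstract reports (with the caveat that the real study compares across many inputs and survival horizons).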

  14. Brain Regions Engaged by Part- and Whole-task Performance in a Video Game: A Model-based Test of the Decomposition Hypothesis

    PubMed Central

    Anderson, John R.; Bothell, Daniel; Fincham, Jon M.; Anderson, Abraham R.; Poole, Ben; Qin, Yulin

    2013-01-01

Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two part-games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model's predictions about how tonic and phasic activation in different regions would vary with condition were confirmed. These results support the Decomposition Hypothesis that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed highest tonic activity. This individual difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits. PMID:21557648

  15. Explanation of non-additive effects in mixtures of similar mode of action chemicals.

    PubMed

    Kamo, Masashi; Yokomizo, Hiroyuki

    2015-09-01

Many models have been developed to predict the combined effect of drugs and chemicals. Most models are classified into two additive models: independent action (IA) and concentration addition (CA). It is generally considered that if the modes of action of chemicals are similar, then the combined effect obeys CA; however, many empirical studies report nonlinear effects deviating from the predictions by CA. Such deviations are termed synergism and antagonism. Synergism, which leads to a stronger toxicity, requires more careful management, and hence it is important to understand how and which combinations of chemicals lead to synergism. In this paper, three types of chemical reactions are mathematically modeled and the cause of the nonlinear effects among chemicals with similar modes of action was investigated. Our results show that combined effects obey CA only when the modes of action are exactly the same. Contrary to existing knowledge, combined effects are generally nonlinear even if the modes of action of the chemicals are similar. Our results further show that the nonlinear effects vanish when the chemical concentrations are low, suggesting that the current management practice of assuming CA is rarely inappropriate because environmental concentrations of chemicals are generally low. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
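
    The two additive reference models can be written down directly; the sketch below assumes Hill-type single-chemical dose-response curves, an illustrative choice consistent with, but not taken from, the paper:

```python
import numpy as np

def hill(c, ec50, h=1.0):
    """Fractional effect of a single chemical (Hill dose-response)."""
    r = (np.asarray(c) / np.asarray(ec50)) ** h
    return r / (1.0 + r)

def predict_ca(conc, ec50, h=1.0):
    """Concentration addition: sum toxic units, apply the shared curve."""
    tu = np.sum(np.asarray(conc) / np.asarray(ec50))
    return tu ** h / (1.0 + tu ** h)

def predict_ia(conc, ec50, h=1.0):
    """Independent action: combine single-chemical effects as
    independent probabilities of response."""
    return 1.0 - np.prod(1.0 - hill(conc, ec50, h))

# Two chemicals, each dosed at its own EC50: CA predicts the effect of
# two toxic units (2/3 for h=1), IA predicts 1 - 0.5 * 0.5 = 0.75.
e_ca = predict_ca([1.0, 2.0], [1.0, 2.0])
e_ia = predict_ia([1.0, 2.0], [1.0, 2.0])
```

    A measured mixture effect above both references would be called synergistic, below both antagonistic; the paper's point is that deviations from CA arise even for similar (but not identical) modes of action.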

  16. A Feature Fusion Based Forecasting Model for Financial Time Series

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model outperforms the other two similar models in prediction. PMID:24971455

  17. Phosphoproteomic biomarkers predicting histologic nonalcoholic steatohepatitis and fibrosis.

    PubMed

    Younossi, Zobair M; Baranova, Ancha; Stepanova, Maria; Page, Sandra; Calvert, Valerie S; Afendy, Arian; Goodman, Zachary; Chandhoke, Vikas; Liotta, Lance; Petricoin, Emanuel

    2010-06-04

The progression of nonalcoholic fatty liver disease (NAFLD) has been linked to deregulated exchange of the endocrine signaling between adipose and liver tissue. Proteomic assays for the phosphorylation events that characterize the activated or deactivated state of the kinase-driven signaling cascades in visceral adipose tissue (VAT) could shed light on the pathogenesis of nonalcoholic steatohepatitis (NASH) and related fibrosis. Reverse-phase protein microarrays (RPMA) were used to develop biomarkers for NASH and fibrosis using VAT collected from 167 NAFLD patients (training cohort, N = 117; testing cohort, N = 50). Three types of models were developed for NASH and advanced fibrosis: clinical models, proteomics models, and combination models. NASH was predicted by a model that included measurements of two components of the insulin signaling pathway: AKT kinase and insulin receptor substrate 1 (IRS1). The models for fibrosis were less reliable whether predictions were based on phosphoproteomic, clinical, or combined data. The best performing model relied on levels of the phosphorylation of GSK3 as well as on two subunits of cyclic AMP regulated protein kinase A (PKA). Phosphoproteomics technology could potentially be used to provide pathogenic information about NASH and NASH-related fibrosis. This information can lead to a clinically relevant diagnostic/prognostic biomarker for NASH.

  18. A data-driven prediction method for fast-slow systems

    NASA Astrophysics Data System (ADS)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
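
    The SSA pre-filtering step can be sketched in a few lines of NumPy: embed the series in a trajectory matrix, truncate its SVD, and diagonal-average back, keeping only the leading (slow) components. This is a generic SSA sketch on a synthetic two-scale signal, not the EMR part of the method:

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Reconstruct a series from its leading SSA components."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix of lagged windows.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging (Hankelization) back to a 1-D series.
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return rec / counts

t = np.linspace(0, 8 * np.pi, 400)
slow = np.sin(t)                      # "slow" oscillation of interest
x = slow + 0.3 * np.sin(25 * t)       # plus a small fast component
filtered = ssa_reconstruct(x, window=60, n_components=2)
err = float(np.sqrt(np.mean((filtered - slow) ** 2)))
```

    A sinusoid occupies a pair of SSA components, so keeping the two leading components recovers the slow oscillation; the EMR model would then be fitted to such filtered components rather than to the raw series.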

  19. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
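
    The combined-ROC idea, ranking subjects by a predictive probability of disease computed from several biomarkers, can be sketched on synthetic data. Here a plain logistic model stands in for the paper's Bayesian multivariate random-effects model, and a perfect reference standard is assumed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1000
disease = rng.random(n) < 0.3
# Two synthetic biomarkers, each shifted upward in diseased subjects.
b1 = rng.normal(0.0, 1.0, n) + 1.0 * disease
b2 = rng.normal(0.0, 1.0, n) + 0.8 * disease
markers = np.column_stack([b1, b2])

auc1 = roc_auc_score(disease, b1)     # single-biomarker AUC
# "cAUC": rank subjects by the estimated probability of disease given
# both biomarkers, then compute the AUC of that combined score.
prob = LogisticRegression().fit(markers, disease).predict_proba(markers)[:, 1]
cauc = roc_auc_score(disease, prob)
```

    With two informative biomarkers the combined score dominates either one alone, which is the comparison the cAUC metric formalizes.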

  20. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed that incorporates the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
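
    The receding-horizon structure can be illustrated on a toy plant; this sketch solves the unconstrained least-squares analogue of the brief's constrained QP (no neural-network solver, and all numbers are illustrative):

```python
import numpy as np

# Double integrator (position, velocity) driven to a setpoint.
dt, N, lam = 0.1, 20, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
target = np.array([1.0, 0.0])

# Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j, k = 1..N.
phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B)[:, 0]
H = G.T @ G + lam * np.eye(N)        # Hessian of the quadratic cost
ref = np.tile(target, N)

def mpc_step(x):
    """First move of argmin_u sum_k ||x_k - target||^2 + lam ||u||^2."""
    u = np.linalg.solve(H, G.T @ (ref - phi @ x))
    return u[0]                      # apply it, then recede the horizon

x = np.array([0.0, 0.0])
for _ in range(150):
    x = A @ x + B[:, 0] * mpc_step(x)
```

    In the brief's setting the cost also carries hard input/state constraints, which turns this linear solve into a constrained QP at every step; that is the problem their primal-dual neural network is built to solve in real time.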

  1. Dataset on predictive compressive strength model for self-compacting concrete.

    PubMed

    Ofuyatan, O M; Edeki, S O

    2018-04-01

The determination of compressive strength is affected by many variables such as the water-cement (WC) ratio, the superplasticizer (SP), the aggregate combination, and the binder combination. In this dataset article, 7-, 28-, and 90-day compressive strength models are derived using statistical analysis. The response surface methodology is used to investigate the effect of the parameters (varying percentages of ash, cement, WC, and SP) on the hardened property of compressive strength at 7, 28 and 90 days. The levels of the independent parameters are determined based on preliminary experiments. The experimental values for compressive strength at 7, 28 and 90 days and modulus of elasticity under different treatment conditions are also discussed and presented. This dataset can effectively be used for modelling and prediction in concrete production settings.

  2. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques: an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used three advanced methods, namely, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, the results of their accuracy were validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.

  3. Can nutrient status of four woody plant species be predicted using field spectrometry?

    NASA Astrophysics Data System (ADS)

    Ferwerda, Jelle G.; Skidmore, Andrew K.

This paper demonstrates the potential of hyperspectral remote sensing to predict the chemical composition (i.e., nitrogen, phosphorous, calcium, potassium, sodium, and magnesium) of three tree species (i.e., willow, mopane and olive) and one shrub species (i.e., heather). Reflectance spectra, derivative spectra and continuum-removed spectra were compared in terms of predictive power. Results showed that the best predictions for nitrogen, phosphorous, and magnesium occur when using derivative spectra, and the best predictions for sodium, potassium, and calcium occur when using continuum-removed data. To test whether a general model for multiple species is also valid for individual species, a bootstrapping routine was applied. Prediction accuracies for the individual species were lower than prediction accuracies obtained for the combined dataset for all except one element/species combination, indicating that indices with high prediction accuracies at the landscape scale are less appropriate for detecting the chemical content of individual species.
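
    The two spectral transforms compared above can be sketched on a synthetic spectrum: the derivative spectrum via a finite-difference gradient, and continuum removal by dividing the reflectance by its upper convex hull (a standard definition of the continuum; the spectrum below is invented):

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull."""
    # Monotone-chain upper hull over (wavelength, reflectance) points.
    hull = [0]
    for i in range(1, len(wl)):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            cross = ((wl[b] - wl[a]) * (refl[i] - refl[a])
                     - (refl[b] - refl[a]) * (wl[i] - wl[a]))
            if cross >= 0:       # b lies on/below segment a-i: drop it
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(wl, wl[hull], refl[hull])
    return refl / continuum

wl = np.linspace(400, 2400, 500)
# Synthetic reflectance: rising baseline with one absorption feature.
refl = 0.5 + 0.0001 * (wl - 400) - 0.2 * np.exp(-((wl - 1500) / 60.0) ** 2)
deriv = np.gradient(refl, wl)    # first-derivative spectrum
cr = continuum_removed(wl, refl)
```

    Continuum-removed values lie in (0, 1] and emphasize absorption-feature depth, while derivative spectra suppress the baseline offset; those complementary properties are consistent with the element-dependent results reported above.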

  4. Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.

    PubMed

    Rader, Tobias; Adel, Youssef; Fastl, Hugo; Baumann, Uwe

    2015-01-01

The aim of this study is to simulate speech perception with combined electric-acoustic stimulation (EAS), verify the advantage of combined stimulation in normal-hearing (NH) subjects, and then compare it with cochlear implant (CI) and EAS user results from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model to study different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to assess predictions of subject performance without the need to assume any synergistic effects. Speech perception was tested using a closed-set matrix test (Oldenburg sentence test), and its speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Finally, previously proposed SDT models were applied to predict NH subject performance with EAS simulations. NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users showed generally better performance than NH subjects tested with simulations. In modulated noise, performance was comparable except for the EAS at cutoff frequency 500 Hz where NH subject performance was superior. 
The ASR system showed similar behavior to NH subjects despite a positive signal-to-noise ratio shift for both noise conditions, while demonstrating the synergistic effect for cutoff frequencies ≥300 Hz. One SDT model largely predicted the combined-stimulation results in continuous noise, while falling short of predicting performance observed in modulated noise. The presented simulation was able to demonstrate the combined-stimulation advantage for NH subjects as observed in EAS users. Only NH subjects tested with EAS simulations were able to take advantage of the gap listening effect, while CI and EAS user performance was consistently degraded in modulated noise compared with performance in continuous noise. The application of ASR systems seems feasible to assess the impact of different signal processing strategies on speech perception with CI and EAS simulations. In continuous noise, SDT models were largely able to predict the performance gain without assuming any synergistic effects, but model amendments are required to explain the gap listening effect in modulated noise.

  5. Computer models for predicting the probability of violating CO air quality standards : the model SIMCO.

    DOT National Transportation Integrated Search

    1982-01-01

    This report presents the user instructions and data requirements for SIMCO, a combined simulation and probability computer model developed to quantify and evaluate carbon monoxide in roadside environments. The model permits direct determinations of t...

  6. Predicting the kinetics of Listeria monocytogenes and Yersinia enterocolitica under dynamic growth/death-inducing conditions, in Italian style fresh sausage.

    PubMed

    Iannetti, Luigi; Salini, Romolo; Sperandii, Anna Franca; Santarelli, Gino Angelo; Neri, Diana; Di Marzio, Violeta; Romantini, Romina; Migliorati, Giacomo; Baranyi, József

    2017-01-02

Traditional Italian pork products can be consumed after variable drying periods, where the temporal decrease of water activity spans from optimal to inactivating values. This makes it necessary to A) consider the bias factor when applying culture-medium-based predictive models to sausage; B) apply the dynamic version (described by differential equations) of those models; C) combine growth and death models in a continuous way, including the highly uncertain growth/no growth range separating the two regions. This paper tests the applicability of published predictive models on the responses of Listeria monocytogenes and Yersinia enterocolitica to dynamic conditions in traditional Italian pork sausage, where the environment changes from growth-supporting to inhibitory conditions, so the growth and death models need to be combined. The effect of indigenous lactic acid bacteria was also taken into account in the predictions. Challenge tests were carried out using such sausages, inoculated separately with L. monocytogenes and Y. enterocolitica, stored for 480 h at 8, 12, 18 and 20°C. The pH was fairly constant, while the water activity changed dynamically. The effects of the environment on the specific growth and death rate of the studied organisms were predicted using previously published predictive models and parameters. Microbial kinetics in many products with a long shelf-life and dynamic internal environment could result in both growth and inactivation, making it difficult to estimate the bacterial concentration at the time of consumption by means of commonly available predictive software tools. Our prediction of the effect of the storage environment, where the water activity gradually decreases during a drying period, is designed to overcome these difficulties. The methodology can be used generally to predict and visualise bacterial kinetics under temporal variation of environments, which is vital when assessing the safety of many similar products.
Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Machine-learning prediction of cancer survival: a retrospective study using electronic administrative records and a cancer registry

    PubMed Central

    Gupta, Sunil; Tran, Truyen; Luo, Wei; Phung, Dinh; Kennedy, Richard Lee; Broad, Adam; Campbell, David; Kipp, David; Singh, Madhu; Khasraw, Mustafa; Matheson, Leigh; Ashley, David M; Venkatesh, Svetha

    2014-01-01

    Objectives Using the prediction of cancer outcome as a model, we have tested the hypothesis that through analysing routinely collected digital data contained in an electronic administrative record (EAR), using machine-learning techniques, we could enhance conventional methods in predicting clinical outcomes. Setting A regional cancer centre in Australia. Participants Disease-specific data from a purpose-built cancer registry (Evaluation of Cancer Outcomes (ECO)) from 869 patients were used to predict survival at 6, 12 and 24 months. The model was validated with data from a further 94 patients, and results compared to the assessment of five specialist oncologists. Machine-learning prediction using ECO data was compared with that using EAR and a model combining ECO and EAR data. Primary and secondary outcome measures Survival prediction accuracy in terms of the area under the receiver operating characteristic curve (AUC). Results The ECO model yielded AUCs of 0.87 (95% CI 0.848 to 0.890) at 6 months, 0.796 (95% CI 0.774 to 0.823) at 12 months and 0.764 (95% CI 0.737 to 0.789) at 24 months. Each was slightly better than the performance of the clinician panel. The model performed consistently across a range of cancers, including rare cancers. Combining ECO and EAR data yielded better prediction than the ECO-based model (AUCs ranging from 0.757 to 0.997 for 6 months, AUCs from 0.689 to 0.988 for 12 months and AUCs from 0.713 to 0.973 for 24 months). The best prediction was for genitourinary, head and neck, lung, skin, and upper gastrointestinal tumours. Conclusions Machine learning applied to information from a disease-specific (cancer) database and the EAR can be used to predict clinical outcomes. Importantly, the approach described made use of digital data that is already routinely collected but underexploited by clinical health systems. PMID:24643167
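The AUCs reported above summarize how well each model ranks patients by risk at a fixed horizon. As a minimal illustration (hypothetical labels and risk scores, not the study's data), the AUC can be computed directly from its rank interpretation:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 6-month outcome labels (1 = event) and model risk scores.
labels = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]
print(auc(scores, labels))  # 1.0: every positive outranks every negative
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the point here is only what the reported 0.87/0.796/0.764 figures measure.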

  8. Machine-learning prediction of cancer survival: a retrospective study using electronic administrative records and a cancer registry.

    PubMed

    Gupta, Sunil; Tran, Truyen; Luo, Wei; Phung, Dinh; Kennedy, Richard Lee; Broad, Adam; Campbell, David; Kipp, David; Singh, Madhu; Khasraw, Mustafa; Matheson, Leigh; Ashley, David M; Venkatesh, Svetha

    2014-03-17

    Using the prediction of cancer outcome as a model, we have tested the hypothesis that through analysing routinely collected digital data contained in an electronic administrative record (EAR), using machine-learning techniques, we could enhance conventional methods in predicting clinical outcomes. A regional cancer centre in Australia. Disease-specific data from a purpose-built cancer registry (Evaluation of Cancer Outcomes (ECO)) from 869 patients were used to predict survival at 6, 12 and 24 months. The model was validated with data from a further 94 patients, and results compared to the assessment of five specialist oncologists. Machine-learning prediction using ECO data was compared with that using EAR and a model combining ECO and EAR data. Survival prediction accuracy in terms of the area under the receiver operating characteristic curve (AUC). The ECO model yielded AUCs of 0.87 (95% CI 0.848 to 0.890) at 6 months, 0.796 (95% CI 0.774 to 0.823) at 12 months and 0.764 (95% CI 0.737 to 0.789) at 24 months. Each was slightly better than the performance of the clinician panel. The model performed consistently across a range of cancers, including rare cancers. Combining ECO and EAR data yielded better prediction than the ECO-based model (AUCs ranging from 0.757 to 0.997 for 6 months, AUCs from 0.689 to 0.988 for 12 months and AUCs from 0.713 to 0.973 for 24 months). The best prediction was for genitourinary, head and neck, lung, skin, and upper gastrointestinal tumours. Machine learning applied to information from a disease-specific (cancer) database and the EAR can be used to predict clinical outcomes. Importantly, the approach described made use of digital data that is already routinely collected but underexploited by clinical health systems.

  9. Classifier ensemble based on feature selection and diversity measures for predicting the affinity of A(2B) adenosine receptor antagonists.

    PubMed

    Bonet, Isis; Franco-Montero, Pedro; Rivero, Virginia; Teijeira, Marta; Borges, Fernanda; Uriarte, Eugenio; Morales Helguera, Aliuska

    2013-12-23

    A(2B) adenosine receptor antagonists may be beneficial in treating diseases like asthma, diabetes, diabetic retinopathy, and certain cancers. This has stimulated research for the development of potent ligands for this subtype, based on quantitative structure-affinity relationships. In this work, a new ensemble machine learning algorithm is proposed for classification and prediction of the ligand-binding affinity of A(2B) adenosine receptor antagonists. This algorithm is based on the training of different classifier models with multiple training sets (composed of the same compounds but represented by diverse features). The k-nearest neighbor, decision trees, neural networks, and support vector machines were used as single classifiers. To select the base classifiers for combining into the ensemble, several diversity measures were employed. The final multiclassifier prediction results were computed from the output obtained by using a combination of selected base classifiers output, by utilizing different mathematical functions including the following: majority vote, maximum and average probability. In this work, 10-fold cross- and external validation were used. The strategy led to the following results: i) the single classifiers, together with previous features selections, resulted in good overall accuracy, ii) a comparison between single classifiers, and their combinations in the multiclassifier model, showed that using our ensemble gave a better performance than the single classifier model, and iii) our multiclassifier model performed better than the most widely used multiclassifier models in the literature. The results and statistical analysis demonstrated the supremacy of our multiclassifier approach for predicting the affinity of A(2B) adenosine receptor antagonists, and it can be used to develop other QSAR models.
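The combination functions named in this abstract (majority vote, average probability) are simple to state precisely. A minimal sketch with made-up base-classifier outputs for one compound:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine hard class labels from several base classifiers."""
    return Counter(predictions).most_common(1)[0][0]

def average_probability(prob_rows):
    """Average the per-class probability vectors, then take the argmax class."""
    n = len(prob_rows)
    mean = [sum(col) / n for col in zip(*prob_rows)]
    return max(range(len(mean)), key=mean.__getitem__)

# Hypothetical outputs of three base classifiers
# (class 1 = high-affinity antagonist).
print(majority_vote([1, 1, 0]))                                   # 1
print(average_probability([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]]))  # 1
```

The paper's contribution lies in which base classifiers enter the ensemble (selected by diversity measures), not in these combination rules themselves.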

  10. Hyperspectral imaging for predicting the allicin and soluble solid content of garlic with variable selection algorithms and chemometric models.

    PubMed

    Rahman, Anisur; Faqeerzada, Mohammad A; Cho, Byoung-Kwan

    2018-03-14

Allicin and soluble solid content (SSC) in garlic are responsible for its pungent flavor and odor. However, current conventional methods such as the use of high-pressure liquid chromatography and a refractometer have critical drawbacks in that they are time-consuming, labor-intensive and destructive procedures. The present study aimed to predict allicin and SSC in garlic using hyperspectral imaging in combination with variable selection algorithms and calibration models. Hyperspectral images of 100 garlic cloves were acquired that covered two spectral ranges, from which the mean spectra of each clove were extracted. The calibration models included partial least squares (PLS) and least squares-support vector machine (LS-SVM) regression, as well as different spectral pre-processing techniques, from which the highest performing spectral preprocessing technique and spectral range were selected. Then, variable selection methods, such as regression coefficients, variable importance in projection (VIP) and the successive projections algorithm (SPA), were evaluated for the selection of effective wavelengths (EWs). Furthermore, PLS and LS-SVM regression methods were applied to quantitatively predict the quality attributes of garlic using the selected EWs. Of the established models, the SPA-LS-SVM model obtained an R²pred of 0.90 and standard error of prediction (SEP) of 1.01% for SSC prediction, whereas the VIP-LS-SVM model produced the best result with an R²pred of 0.83 and SEP of 0.19 mg g⁻¹ for allicin prediction in the range 1000-1700 nm. Furthermore, chemical images of garlic were developed using the best predictive model to facilitate visualization of the spatial distributions of allicin and SSC. The present study clearly demonstrates that hyperspectral imaging combined with an appropriate chemometrics method can potentially be employed as a fast, non-invasive method to predict the allicin and SSC in garlic. © 2018 Society of Chemical Industry.
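As a rough stand-in for the effective-wavelength selection step described above (not the actual SPA or VIP algorithms), one can rank spectral bands by their correlation with the measured attribute and keep the strongest; a sketch on synthetic spectra:

```python
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient, stdlib only."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Synthetic "spectra": 20 samples x 5 wavelengths; band 2 carries the signal.
y = [random.random() for _ in range(20)]
spectra = [[random.random() for _ in range(5)] for _ in y]
for row, target in zip(spectra, y):
    row[2] = target + 0.05 * random.random()  # informative band + small noise

# Rank wavelengths by |correlation| with the measured attribute, keep top 2.
scores = [abs(pearson([row[j] for row in spectra], y)) for j in range(5)]
selected = sorted(range(5), key=scores.__getitem__, reverse=True)[:2]
print(selected[0])  # band 2 ranks first
```

A real pipeline would then fit PLS or LS-SVM on the selected bands; this only illustrates why discarding uninformative wavelengths can help.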

  11. One-Dimensional Simulations for Spall in Metals with Intra- and Inter-grain failure models

    NASA Astrophysics Data System (ADS)

    Ferri, Brian; Dwivedi, Sunil; McDowell, David

    2017-06-01

The objective of the present work is to model spall failure in metals with the coupled effects of intra-grain and inter-grain failure mechanisms. The two mechanisms are modeled by a void nucleation, growth, and coalescence (VNGC) model and a contact-cohesive model, respectively. Both models were implemented in a 1-D code to simulate spall in 6061-T6 aluminum at two impact velocities. The parameters of the VNGC model without inter-grain failure and parameters of the cohesive model without intra-grain failure were first determined to obtain pull-back velocity profiles in agreement with experimental data. With the same impact velocities, the same sets of parameters did not predict the velocity profiles when both mechanisms were simultaneously activated. A sensitivity study was performed to predict spall under combined mechanisms by varying critical stress in the VNGC model and maximum traction in the cohesive model. The study provided possible sets of the two parameters leading to spall. Results will be presented comparing the predicted velocity profile with experimental data using one such set of parameters for the combined intra-grain and inter-grain failures during spall. Work supported by grant HDTRA1-12-1-0004 and by the School of Mechanical Engineering GTA.

  12. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 µmol mol⁻¹), photosynthetic photon flux (PPF) (600 to 1100 µmol m⁻² s⁻¹), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to determine relatively long-range Pn predictions (≥6 days into the future).
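A third-order polynomial model like the statistical one above is an ordinary least-squares fit. A single-variable sketch (the study fit three predictors; the data here are made up) that recovers known cubic coefficients:

```python
def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations and
    Gaussian elimination with partial pivoting (stdlib only; fine at this scale)."""
    n = degree + 1
    X = [[x ** j for j in range(n)] for x in xs]          # Vandermonde matrix
    A = [[sum(X[k][i] * X[k][j] for k in range(len(xs))) for j in range(n)]
         for i in range(n)]                               # X^T X
    b = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]  # X^T y
    for col in range(n):                                  # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):                          # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Recover a known cubic, standing in for Pn = f(CO2, PPF, age) in one variable.
xs = [0, 1, 2, 3, 4, 5]
ys = [2 + 1 * x - 0.5 * x ** 2 + 0.25 * x ** 3 for x in xs]
print([round(c, 6) for c in fit_poly(xs, ys, 3)])  # [2.0, 1.0, -0.5, 0.25]
```

With three predictors, the design matrix simply gains cross and higher-order terms; the solve is identical.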

  13. A combined PHREEQC-2/parallel fracture model for the simulation of laminar/non-laminar flow and contaminant transport with reactions

    NASA Astrophysics Data System (ADS)

    Masciopinto, Costantino; Volpe, Angela; Palmiotta, Domenico; Cherubini, Claudia

    2010-09-01

    A combination of a parallel fracture model with the PHREEQC-2 geochemical model was developed to simulate sequential flow and chemical transport with reactions in fractured media where both laminar and turbulent flows occur. The integration of non-laminar flow resistances in one model produced relevant effects on water flow velocities, thus improving model prediction capabilities on contaminant transport. The proposed conceptual model consists of 3D rock-blocks, separated by horizontal bedding plane fractures with variable apertures. Particle tracking solved the transport equations for conservative compounds and provided input for PHREEQC-2. For each cluster of contaminant pathways, PHREEQC-2 determined the concentration for mass-transfer, sorption/desorption, ion exchange, mineral dissolution/precipitation and biodegradation, under kinetically controlled reactive processes of equilibrated chemical species. Field tests have been performed for the code verification. As an example, the combined model has been applied to a contaminated fractured aquifer of southern Italy in order to simulate the phenol transport. The code correctly fitted the field available data and also predicted a possible rapid depletion of phenols as a result of an increased biodegradation rate induced by a simulated artificial injection of nitrates, upgradient to the sources.

  14. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling.

    PubMed

    Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian

    2017-10-16

    Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.
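The concordance index used above to compare models generalizes AUC to time-to-event data. A minimal sketch of Harrell's C-index on hypothetical follow-up data (censoring handled only by skipping non-comparable pairs):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event), the fraction where the earlier failure got the higher risk."""
    conc = comp = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:  # comparable pair
                comp += 1
                if risks[i] > risks[j]:
                    conc += 1
                elif risks[i] == risks[j]:
                    conc += 0.5
    return conc / comp

# Hypothetical data: follow-up months, event flags, model risk scores.
times  = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risks  = [0.9, 0.6, 0.4, 0.2]
print(concordance_index(times, events, risks))  # 1.0: perfectly ranked
```

A C-index of 0.5 corresponds to random ranking, so the reported values around 0.70-0.71 indicate moderate discriminative ability.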

  15. Predictive Model for the Meniscus-Guided Coating of High-Quality Organic Single-Crystalline Thin Films.

    PubMed

    Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric

    2016-09-01

A model that describes solvent evaporation dynamics in meniscus-guided coating techniques is developed. In combination with a single fitting parameter, it is shown that this formula can accurately predict a processing window for various coating conditions. Organic thin-film transistors (OTFTs), fabricated by a zone-casting setup, indeed show the best performance at the predicted coating speeds, with mobilities reaching 7 cm² V⁻¹ s⁻¹. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Bayesian network approach for modeling local failure in lung cancer

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam

    2011-03-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, there was no reported significant improvement in their application prospectively. Based on recent studies of biomarker proteins' role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors with a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively, which comprises clinical and dosimetric variables only. The second dataset was collected prospectively in which in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  17. Predicting Chemical Toxicity from Proteomics and Computational Chemistry

    DTIC Science & Technology

    2008-07-30

similarity spaces, BD Gute and SC Basak, SAR QSAR Environ. Res., 17, 37-51 (2006). Predicting pharmacological and toxicological activity of heterocyclic...affinity of dibenzofurans: a hierarchical QSAR approach, authored jointly by Basak and Mills; Division of Chemical Toxicology iii. Prediction of blood...biodescriptors vis-à-vis chemodescriptors in predictive toxicology e) Development of integrated QSTR models using the combined set of chemodescriptors and

  18. Predicting the planform configuration of the braided Toklat River, AK with a suite of rule-based models

    USGS Publications Warehouse

    Podolak, Charles J.

    2013-01-01

    An ensemble of rule-based models was constructed to assess possible future braided river planform configurations for the Toklat River in Denali National Park and Preserve, Alaska. This approach combined an analysis of large-scale influences on stability with several reduced-complexity models to produce the predictions at a practical level for managers concerned about the persistence of bank erosion while acknowledging the great uncertainty in any landscape prediction. First, a model of confluence angles reproduced observed angles of a major confluence, but showed limited susceptibility to a major rearrangement of the channel planform downstream. Second, a probabilistic map of channel locations was created with a two-parameter channel avulsion model. The predicted channel belt location was concentrated in the same area as the current channel belt. Finally, a suite of valley-scale channel and braid plain characteristics were extracted from a light detection and ranging (LiDAR)-derived surface. The characteristics demonstrated large-scale stabilizing topographic influences on channel planform. The combination of independent analyses increased confidence in the conclusion that the Toklat River braided planform is a dynamically stable system due to large and persistent valley-scale influences, and that a range of avulsive perturbations are likely to result in a relatively unchanged planform configuration in the short term.

  19. Combining hygrothermal and corrosion models to predict corrosion of metal fasteners embedded in wood

    Treesearch

    Samuel L. Zelinka; Dominique Derome; Samuel V. Glass

    2011-01-01

    A combined heat, moisture, and corrosion model is presented and used to simulate the corrosion of metal fasteners embedded in solid wood exposed to the exterior environment. First, the moisture content and temperature at the wood/fastener interface is determined at each time step. Then, the amount of corrosion is determined spatially using an empirical corrosion rate...

  20. Proactive Supply Chain Performance Management with Predictive Analytics

    PubMed Central

    Stefanovic, Nenad

    2014-01-01

Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605

  1. Proactive supply chain performance management with predictive analytics.

    PubMed

    Stefanovic, Nenad

    2014-01-01

Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.

  2. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE PAGES

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; ...

    2017-12-15

This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data [1, 2] and the state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short-Term Memory (LSTM) architectures capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011-2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance [1], specifically for military rather than general populations [3] in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which adds to the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S. only. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and ILI activity patterns.
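Before an LSTM can nowcast or forecast a weekly ILI series, the series must be framed as supervised input/target windows. A minimal sketch of that framing (hypothetical ILI values; the networks themselves are omitted):

```python
def make_windows(series, lookback, horizon=1):
    """Frame a time series as supervised (X, y) pairs: each input is `lookback`
    past values, each target the value `horizon` steps ahead, the shape a
    recurrent forecaster would train on."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return X, y

# Hypothetical ILI proportions for eight consecutive weeks.
ili = [0.8, 1.1, 1.9, 2.5, 3.0, 2.7, 2.1, 1.4]
X, y = make_windows(ili, lookback=3, horizon=1)
print(X[0], y[0])  # [0.8, 1.1, 1.9] 2.5
```

Nowcasting corresponds to horizon=1 on the current week; forecasting simply increases the horizon. Joint ILI-plus-social-media models would concatenate the social media features onto each window.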

  3. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine

    This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data [1, 2] or state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short-Term Memory (LSTM) architectures capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011-2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of the existing work, we focus on developing models for local rather than national ILI surveillance [1], specifically for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance than models learned from historical ILI data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where historical ILI data are not available.
(d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on historical ILI data, which points to the great potential of alternative public data sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent (e.g., U.S.-wide) models. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and on ILI activity patterns.
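
    The core recurrence behind such nowcasting models can be sketched with a single LSTM cell stepped over a window of weekly feature vectors (illustrative Python/NumPy only; the feature dimension, hidden size, and linear readout are placeholders, not the architecture from the paper):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate order in the stacked weights: input, forget, cell, output."""
    z = W @ x + U @ h + b            # pre-activations for all four gates, shape (4H,)
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))         # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))    # forget gate
    g = np.tanh(z[2 * H:3 * H])              # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3 * H:]))     # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4                          # e.g. 3 social-media signals, hidden size 4 (toy sizes)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h = np.zeros(H)
c = np.zeros(H)
for x in rng.normal(size=(10, D)):   # 10 past weeks of extracted signals
    h, c = lstm_step(x, h, c, W, U, b)
ili_nowcast = float(np.ones(H) @ h)  # toy linear readout of the final hidden state
```

    In a trained model the readout and the gate parameters W, U, b would be fitted against held-out ILI rates rather than drawn at random.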

  4. Accounting for spatial variation of trabecular anisotropy with subject-specific finite element modeling moderately improves predictions of local subchondral bone stiffness at the proximal tibia.

    PubMed

    Nazemi, S Majid; Kalajahi, S Mehrdad Hosseini; Cooper, David M L; Kontulainen, Saija A; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-07-05

    Previously, a finite element (FE) model of the proximal tibia was developed and validated against experimentally measured local subchondral stiffness. This model offered modest predictions of stiffness (R² = 0.77, normalized root mean squared error (RMSE%) = 16.6%). Trabecular bone, though, was modeled with isotropic material properties despite its orthotropic anisotropy. The objective of this study was to identify the anisotropic FE modeling approach which best predicted (with the largest explained variance and the least error) local subchondral bone stiffness at the proximal tibia. Local stiffness was measured at the subchondral surface of 13 medial/lateral tibial compartments using in situ macro indentation testing. An FE model of each specimen was generated assuming uniform anisotropy with 14 different combinations of cortical- and trabecular-specific density-modulus relationships taken from the literature. Two FE models of each specimen were also generated which accounted for the spatial variation of trabecular bone anisotropy directly from clinical CT images using the grey-level structure tensor and Cowin's fabric-elasticity equations. Stiffness was calculated using FE and compared to measured stiffness in terms of R² and RMSE%. The uniform anisotropic FE models explained 53-74% of the measured stiffness variance, with RMSE% ranging from 12.4 to 245.3%. The models which accounted for spatial variation of trabecular bone anisotropy predicted 76-79% of the variance in stiffness, with RMSE% of 11.2-11.5%. Of the 16 finite element models evaluated in this study, the combination of Snyder and Schneider (for cortical bone) and Cowin's fabric-elasticity equations (for trabecular bone) best predicted local subchondral bone stiffness. Copyright © 2017 Elsevier Ltd. All rights reserved.
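
    The two modeling ingredients named above can be sketched in a few lines of Python (an illustration, not the study's code: the power-law coefficients and the fabric exponent are placeholder values, and real fabric-elasticity mappings act on the full stiffness tensor rather than three scalar moduli):

```python
def density_to_modulus(rho, a=6850.0, b=1.49):
    """Power-law density-modulus relation E = a * rho^b (E in MPa, rho in g/cm^3).
    Coefficients here are illustrative, not the calibrated values from the paper."""
    return a * rho ** b

def fabric_scaled_moduli(E_iso, fabric_eigvals, k=1.6):
    """Cowin-style idea in scalar form: the modulus along direction i grows with
    the fabric eigenvalue m_i (normalised to mean 1, so isotropy is unchanged)."""
    mean = sum(fabric_eigvals) / 3.0
    return [E_iso * (v / mean) ** k for v in fabric_eigvals]

E = density_to_modulus(0.5)                       # modulus for a voxel density
E1, E2, E3 = fabric_scaled_moduli(E, [1.3, 0.9, 0.8])  # anisotropic moduli
```

    An isotropic fabric ([1, 1, 1]) leaves the modulus untouched, which is the sanity check separating the two model families compared in the study.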

  5. Cross-scale assessment of potential habitat shifts in a rapidly changing climate

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Holcombe, Tracy R.; Bella, Elizabeth S.; Carlson, Matthew L.; Graziano, Gino; Lamb, Melinda; Seefeldt, Steven S.; Morisette, Jeffrey T.

    2014-01-01

    We assessed the ability of climatic, environmental, and anthropogenic variables to predict areas at high risk for plant invasion, and we consider the relative importance and contribution of these predictor variables by considering two spatial scales in a region of rapidly changing climate. We created predictive distribution models, using Maxent, for three highly invasive plant species (Canada thistle, white sweetclover, and reed canarygrass) in Alaska at both a regional scale and a local scale. Regional-scale models encompassed southern coastal Alaska and were developed from topographic and climatic data at a 2 km (1.2 mi) spatial resolution. These models were applied to future climate (2030). Local-scale models were spatially nested within the regional area; these models incorporated physiographic and anthropogenic variables at a 30 m (98.4 ft) resolution. Regional and local models performed well (AUC values > 0.7), with the exception of one species at each spatial scale. Regional models predict an increase in area of suitable habitat for all species by 2030, with a general shift to higher-elevation areas; however, the distribution of each species was driven by different climate and topographical variables. In contrast, local models indicate that distance to rights-of-way and elevation are associated with habitat suitability for all three species at this spatial level. Combining results from regional models, capturing long-term distribution, and local models, capturing near-term establishment and distribution, offers a new and effective tool for highlighting at-risk areas and provides insight on how variables acting at different scales contribute to suitability predictions. The combination also allows easy comparison, highlighting agreement between the two scales as well as areas where long-term distribution factors predict suitability while near-term factors do not, and vice versa.
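
    The cross-scale combination described above amounts to classifying each map cell by agreement between the two suitability layers. A minimal sketch (the threshold and class names are hypothetical, not from the paper):

```python
def risk_class(regional, local, thr=0.5):
    """Cross-scale agreement for one cell, given two suitability scores in [0, 1]:
    regional = long-term (climate-driven), local = near-term (establishment)."""
    r, l = regional >= thr, local >= thr
    if r and l:
        return "establish-now"    # suitable at both scales: highest priority
    if r:
        return "long-term-risk"   # climate suitable, no near-term foothold yet
    if l:
        return "transient"        # near-term foothold, climate marginal
    return "low-risk"
```

    Applied cell-by-cell over the 2030 regional projection and the 30 m local model, this yields exactly the agreement/disagreement map the abstract describes.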

  6. First Trimester Urine and Serum Metabolomics for Prediction of Preeclampsia and Gestational Hypertension: A Prospective Screening Study.

    PubMed

    Austdal, Marie; Tangerås, Line H; Skråstad, Ragnhild B; Salvesen, Kjell; Austgulen, Rigmor; Iversen, Ann-Charlotte; Bathen, Tone F

    2015-09-08

    Hypertensive disorders of pregnancy, including preeclampsia, are major contributors to maternal morbidity. The goal of this study was to evaluate the potential of metabolomics to predict preeclampsia and gestational hypertension from urine and serum samples in early pregnancy, and elucidate the metabolic changes related to the diseases. Metabolic profiles were obtained by nuclear magnetic resonance spectroscopy of serum and urine samples from 599 women at medium to high risk of preeclampsia (nulliparous or previous preeclampsia/gestational hypertension). Preeclampsia developed in 26 (4.3%) and gestational hypertension in 21 (3.5%) women. Multivariate analyses of the metabolic profiles were performed to establish prediction models for the hypertensive disorders individually and combined. Urinary metabolomic profiles predicted preeclampsia and gestational hypertension at 51.3% and 40% sensitivity, respectively, at 10% false positive rate, with hippurate as the most important metabolite for the prediction. Serum metabolomic profiles predicted preeclampsia and gestational hypertension at 15% and 33% sensitivity, respectively, with increased lipid levels and an atherogenic lipid profile as most important for the prediction. Combining maternal characteristics with the urinary hippurate/creatinine level improved the prediction rates of preeclampsia in a logistic regression model. The study indicates a potential future role of clinical importance for metabolomic analysis of urine in prediction of preeclampsia.
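
    Reporting sensitivity at a fixed 10% false-positive rate, as done above, can be computed directly from raw prediction scores (a generic sketch, not the study's analysis pipeline):

```python
def sensitivity_at_fpr(scores_pos, scores_neg, fpr=0.10):
    """Detection rate among cases when the decision threshold is set so that
    a fraction `fpr` of controls are (falsely) flagged as positive."""
    neg = sorted(scores_neg, reverse=True)
    k = max(1, int(round(fpr * len(neg))))
    thr = neg[k - 1]                        # k-th highest control score
    return sum(s >= thr for s in scores_pos) / len(scores_pos)
```

    With well-separated scores the function returns 1.0; with overlapping distributions it returns the partial detection rates (e.g. 51.3%) quoted in the abstract.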

  7. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly time series of Singapore tourist arrivals to Malaysia. The WSVM model is a combination of wavelet analysis and a support vector machine (SVM). The study has two parts: in the first, we compare kernel functions; in the second, we compare the developed model against the single SVM model. The results showed that the linear kernel function performs better than the RBF kernel, and that the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
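
    The wavelet half of such a hybrid model is typically a discrete wavelet transform whose coefficients are fed to the SVM as features. A single Haar decomposition level, the simplest case, can be sketched as follows (illustrative Python; the abstract does not state which mother wavelet the authors used):

```python
import math

def haar_step(x):
    """One level of the Haar DWT: pairwise scaled sums give the smooth
    (approximation) coefficients, pairwise scaled differences the detail."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

# monthly arrivals (toy numbers): the smooth part carries the trend,
# the detail part the month-to-month fluctuation
approx, detail = haar_step([1.0, 1.0, 2.0, 2.0])
```

    In a WSVM, the approximation and detail coefficients (possibly over several levels) replace the raw series as SVM inputs.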

  8. Combining Frequency Doubling Technology Perimetry and Scanning Laser Polarimetry for Glaucoma Detection

    PubMed Central

    Mwanza, Jean-Claude; Warren, Joshua L.; Hochberg, Jessica T.; Budenz, Donald L.; Chang, Robert T.; Ramulu, Pradeep Y.

    2014-01-01

    Purpose: To determine the ability of frequency doubling technology (FDT) perimetry and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. Methods: One hundred ten normal and 114 glaucomatous subjects were tested with the FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, the number of missed points on FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike's information criterion (AIC), and prediction confidence interval lengths (PIL). Results: For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, the number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT * NAP-FDT (AIC: 88.28; AUC: 0.959; sensitivity: 94.6%; specificity: 89.5%) outperformed the best single-variable model, provided by GDx-NFI (AIC: 120.88; AUC: 0.914; sensitivity: 87.8%; specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT * NAP-FDT consistently provided better discriminating abilities for detecting early, moderate, and severe glaucoma than the best single-variable models. Conclusions: The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT * NAP-FDT provides the best glaucoma prediction compared to all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared to using GDx or FDT alone. PMID:24777046
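
    The model comparison above rests on Akaike's information criterion, which trades goodness of fit against model complexity (lower is better). A minimal sketch; the log-likelihoods and parameter counts below are illustrative, chosen only to echo the reported AIC ordering:

```python
def aic(log_likelihood, n_params):
    """Akaike's information criterion: AIC = 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# the multivariable model fits better even after paying for its extra parameters
single = aic(log_likelihood=-57.4, n_params=3)   # illustrative values
multi = aic(log_likelihood=-40.1, n_params=5)
best = min(("single", single), ("multi", multi), key=lambda t: t[1])
```

    This is exactly why the combined GDx-TSNIT/NAP-FDT model can win on AIC despite having more terms than the univariable GDx-NFI model.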

  9. A Hierarchical Model Predictive Tracking Control for Independent Four-Wheel Driving/Steering Vehicles with Coaxial Steering Mechanism

    NASA Astrophysics Data System (ADS)

    Itoh, Masato; Hagimori, Yuki; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    In this study, we apply hierarchical model predictive control to an omni-directional mobile vehicle and improve its tracking performance. We deal with an independent four-wheel driving/steering vehicle (IFWDS) equipped with four coaxial steering mechanisms (CSM). The coaxial steering mechanism is a special mechanism composed of two steering joints on the same axis. In a previous study of IFWDS with ideal steering, we proposed a model predictive tracking control. However, that method did not consider the constraints of the coaxial steering mechanism, which cause steering delay. We also previously proposed a model predictive steering control that considers the constraints of this mechanism. In this study, we propose a hierarchical system for IFWDS combining these two control methods. An upper controller, which deals with vehicle kinematics, runs a model predictive tracking control, and a lower controller, which considers the constraints of the coaxial steering mechanism, runs a model predictive steering control tracking the steering angle optimized by the upper controller. We verify the superiority of this method by comparing it with the previous method.
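
    The two-layer structure can be sketched in Python (a toy, not the authors' controller: the upper layer solves an unconstrained finite-horizon tracking problem for a 1-D integrator by least squares and applies only the first input, while a slew-rate limit stands in for the coaxial-mechanism constraint handled by the lower layer):

```python
import numpy as np

def upper_mpc(x0, ref, N=5, w_u=0.1):
    """Receding-horizon tracking for x_{k+1} = x_k + u_k:
    minimise sum_k (x_k - ref)^2 + w_u * u_k^2 over the horizon, apply u_0."""
    L = np.tril(np.ones((N, N)))                 # x = x0 + L u (cumulative sums)
    A = np.vstack([L, np.sqrt(w_u) * np.eye(N)]) # stack tracking and effort terms
    b = np.concatenate([np.full(N, ref - x0), np.zeros(N)])
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    return u[0]                                  # first input only (receding horizon)

def lower_rate_limit(angle, target, max_step=0.1):
    """Lower layer: track the commanded steering angle under a slew-rate limit."""
    return angle + max(-max_step, min(max_step, target - angle))
```

    Iterating `x += upper_mpc(x, ref)` drives the state to the reference; the lower layer then realises each commanded angle as fast as the mechanism constraint allows.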

  10. Contaminant dispersal in bounded turbulent shear flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, J.M.; Bernard, P.S.; Chiang, K.F.

    The dispersion of smoke downstream of a line source at the wall and at y⁺ = 30 in a turbulent boundary layer has been predicted with a non-local model of the scalar fluxes ūc and v̄c. The predicted plume from the wall source has been compared to high-Schmidt-number experimental measurements using a combination of hot-wire anemometry, to obtain velocity component data, synchronously with concentration data obtained optically. The predicted plumes from the source at y⁺ = 30 and at the wall have also been compared to a low-Schmidt-number direct numerical simulation. Near the source, the non-local flux models give considerably better predictions than models which account solely for mean-gradient transport. At a sufficient distance downstream, the gradient models give reasonably good predictions.
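
    The mean-gradient closure that the non-local model improves upon writes the wall-normal scalar flux as diffusion down the mean concentration gradient. A one-dimensional finite-difference sketch (illustrative Python; the eddy viscosity profile and turbulent Schmidt number are placeholders):

```python
def gradient_flux(conc, nu_t, dy, sc_t=0.7):
    """Mean-gradient transport closure: vc = -(nu_t / Sc_t) * dC/dy,
    with a central finite difference for the gradient at interior points."""
    flux = []
    for j in range(1, len(conc) - 1):
        dCdy = (conc[j + 1] - conc[j - 1]) / (2.0 * dy)
        flux.append(-(nu_t[j] / sc_t) * dCdy)
    return flux
```

    A concentration profile decreasing away from the wall yields a positive (outward) flux; near the source this local closure is exactly what underpredicts the plume, per the abstract.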

  11. The NASA Seasonal-to-Interannual Prediction Project (NSIPP). [Annual Report for 2000

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele; Suarez, Max; Adamec, David; Koster, Randal; Schubert, Siegfried; Hansen, James; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    The goal of the project is to develop an assimilation and forecast system based on a coupled atmosphere-ocean-land-surface-sea-ice model capable of using a combination of satellite and in situ data sources to improve the prediction of ENSO and other major S-I signals and their global teleconnections. The objectives of this annual report are to: (1) demonstrate the utility of satellite data, especially surface height, surface winds, air-sea fluxes, and soil moisture, in a coupled model prediction system; and (2) aid in the design of the observing system for short-term climate prediction by conducting OSSEs and predictability studies.

  12. A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model

    PubMed Central

    Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin

    2009-01-01

    A decentralized model predictive controller applicable to systems which exhibit different dynamic characteristics in different channels is presented in this paper. Such systems can be regarded as combinations of a fast model and a slow model whose response speeds lie on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix by plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on the two time scales. A decentralized model predictive controller was then designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method is effective. PMID:19834542
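
    The premise of the two-time-scale split can be illustrated by grouping the eigenvalues of a linear system by magnitude (a toy NumPy sketch, not the singular-perturbation machinery used in the paper):

```python
import numpy as np

# Two decoupled channels: a slow pole at -0.1 and a fast pole at -50
A = np.array([[-0.1, 0.0],
              [0.0, -50.0]])
lam = np.abs(np.linalg.eigvals(A))
thr = np.sqrt(lam.min() * lam.max())   # geometric mean separates the two groups
slow = lam[lam < thr]
fast = lam[lam >= thr]
```

    Each group then gets its own reduced model and predictive controller, which is the decomposition the singular perturbation step formalises for transfer function matrices.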

  13. Forecast Model Analysis for the Morbidity of Tuberculosis in Xinjiang, China

    PubMed Central

    Zheng, Yan-Ling; Zhang, Li-Ping; Zhang, Xue-Liang; Wang, Kai; Zheng, Yu-Jian

    2015-01-01

    Tuberculosis is a major global public health problem, which also affects economic and social development. China has the second largest burden of tuberculosis in the world. The tuberculosis morbidity in Xinjiang is much higher than the national average; therefore, there is an urgent need to monitor and predict tuberculosis morbidity so as to make tuberculosis control more effective. Recently, the Box-Jenkins approach, specifically the autoregressive integrated moving average (ARIMA) model, has typically been applied to predict the morbidity of infectious diseases; it can take into account changing trends, periodic changes, and random disturbances in time series. Autoregressive conditional heteroscedasticity (ARCH) models are the prevalent tools used to deal with time series heteroscedasticity. In this study, based on data on tuberculosis morbidity from January 2004 to June 2014 in Xinjiang, we establish the single ARIMA(1,1,2)(1,1,1)₁₂ model and the combined ARIMA(1,1,2)(1,1,1)₁₂-ARCH(1) model, which can be used to predict tuberculosis morbidity in Xinjiang successfully. Comparative analyses show that the combined model is more effective. To the best of our knowledge, this is the first study to establish an ARIMA model and an ARIMA-ARCH model for predicting and monitoring the monthly morbidity of tuberculosis in Xinjiang. Based on the results of this study, the ARIMA(1,1,2)(1,1,1)₁₂-ARCH(1) model is suggested to support tuberculosis surveillance by providing estimates of tuberculosis morbidity trends in Xinjiang, China. PMID:25760345
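
    The two model components can be caricatured in a few lines: an AR(1) fit on the (already differenced) series, the autoregressive backbone of ARIMA, plus an ARCH(1) update for the conditional variance of its residuals. This is an illustration, not a full seasonal ARIMA implementation; in practice one would use a statistics package:

```python
def fit_ar1(series):
    """Least-squares AR(1) on a pre-differenced series: y_t = phi * y_{t-1} + e_t."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(y * y for y in series[:-1])
    return num / den

def arch1_variance(resid, a0, a1):
    """ARCH(1): conditional variance h_t = a0 + a1 * e_{t-1}^2,
    so large shocks inflate the variance of the next step."""
    return [a0 + a1 * e * e for e in resid[:-1]]
```

    The ARIMA part models the level of the morbidity series; the ARCH part models the time-varying spread of its residuals, which is what makes the combined model "more effective" on heteroscedastic data.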

  14. An inventory of the Aspergillus niger secretome by combining in silico predictions with shotgun proteomics data.

    PubMed

    Braaksma, Machtelt; Martens-Uzunova, Elena S; Punt, Peter J; Schaap, Peter J

    2010-10-19

    The ecological niche occupied by a fungal species, its pathogenicity, and its usefulness as a microbial cell factory to a large degree depend on its secretome. Protein secretion usually requires the presence of an N-terminal signal peptide (SP), and by scanning for this feature using available highly accurate SP-prediction tools, the fraction of potentially secreted proteins can be directly predicted. However, prediction of a SP does not guarantee that the protein is actually secreted, and current in silico prediction methods suffer from gene-model errors introduced during genome annotation. A majority-rule-based classifier that also evaluates signal peptide predictions from the best homologs of three neighbouring Aspergillus species was developed to create an improved list of potential signal-peptide-containing proteins encoded by the Aspergillus niger genome. As a complement to these in silico predictions, the secretome associated with growth and upon carbon source depletion was determined using a shotgun proteomics approach. Overall, some 200 proteins with a predicted signal peptide were identified as secreted proteins. Concordant changes in the secretome state were observed as a response to changes in growth/culture conditions. Additionally, two proteins secreted via a non-classical route operating in A. niger were identified. We were able to improve the in silico inventory of A. niger secretory proteins by combining different gene-model predictions from neighbouring Aspergilli, thereby avoiding prediction conflicts associated with inaccurate gene models. The expected accuracy of signal peptide prediction for proteins that lack homologous sequences in the proteomes of related species is 85%. An experimental validation of the predicted proteome confirmed the in silico predictions.
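
    The majority-rule classifier described above reduces to a vote over the signal-peptide calls for a protein and its best homologs (an illustrative sketch; the real classifier evaluates predictions from dedicated SP tools across three neighbouring Aspergillus genomes, and a homolog may simply be absent):

```python
def majority_secreted(calls):
    """Majority vote over signal-peptide calls (True = SP predicted) for a
    protein and its best homologs; None marks a missing homolog and is ignored."""
    votes = [c for c in calls if c is not None]
    return sum(votes) > len(votes) / 2
```

    Voting across species is what buffers the inventory against a gene-model error in any single genome annotation.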

  15. An inventory of the Aspergillus niger secretome by combining in silico predictions with shotgun proteomics data

    PubMed Central

    2010-01-01

    Background: The ecological niche occupied by a fungal species, its pathogenicity, and its usefulness as a microbial cell factory to a large degree depend on its secretome. Protein secretion usually requires the presence of an N-terminal signal peptide (SP), and by scanning for this feature using available highly accurate SP-prediction tools, the fraction of potentially secreted proteins can be directly predicted. However, prediction of a SP does not guarantee that the protein is actually secreted, and current in silico prediction methods suffer from gene-model errors introduced during genome annotation. Results: A majority-rule-based classifier that also evaluates signal peptide predictions from the best homologs of three neighbouring Aspergillus species was developed to create an improved list of potential signal-peptide-containing proteins encoded by the Aspergillus niger genome. As a complement to these in silico predictions, the secretome associated with growth and upon carbon source depletion was determined using a shotgun proteomics approach. Overall, some 200 proteins with a predicted signal peptide were identified as secreted proteins. Concordant changes in the secretome state were observed as a response to changes in growth/culture conditions. Additionally, two proteins secreted via a non-classical route operating in A. niger were identified. Conclusions: We were able to improve the in silico inventory of A. niger secretory proteins by combining different gene-model predictions from neighbouring Aspergilli, thereby avoiding prediction conflicts associated with inaccurate gene models. The expected accuracy of signal peptide prediction for proteins that lack homologous sequences in the proteomes of related species is 85%. An experimental validation of the predicted proteome confirmed the in silico predictions. PMID:20959013

  16. Climate-Induced Range Shifts and Possible Hybridisation Consequences in Insects

    PubMed Central

    Sánchez-Guillén, Rosa Ana; Muñoz, Jesús; Rodríguez-Tapia, Gerardo; Feria Arroyo, T. Patricia; Córdoba-Aguilar, Alex

    2013-01-01

    Many ectotherms have altered their geographic ranges in response to rising global temperatures. Current range shifts will likely increase the sympatry and hybridisation between recently diverged species. Here we predict future sympatric distributions and risk of hybridisation in seven Mediterranean ischnurid damselfly species (I. elegans, I. fountaineae, I. genei, I. graellsii, I. pumilio, I. saharensis and I. senegalensis). We used a maximum entropy modelling technique to predict future potential distribution under four different General Circulation Models and a realistic emissions scenario of climate change. We carried out a comprehensive data compilation of reproductive isolation (habitat, temporal, sexual, mechanical and gametic) between the seven studied species. Combining the potential distributions and the data on reproductive isolation at its different instances, we infer the risk of hybridisation in these insects. Our findings showed that all species but I. graellsii will decrease in distributional extent and that all species except I. senegalensis are predicted to undergo northern range shifts. Models of potential distribution predicted an increase of the likely overlapping ranges for 12 species combinations, out of a total of 42 combinations, 10 of which currently overlap. Moreover, the lack of complete reproductive isolation and the patterns of hybridisation detected between closely related ischnurids could lead to local extinctions of native species if the hybrids or the introgressed colonising species become more successful. PMID:24260411

  17. Assessing Risk Prediction Models Using Individual Participant Data From Multiple Studies

    PubMed Central

    Pennells, Lisa; Kaptoge, Stephen; White, Ian R.; Thompson, Simon G.; Wood, Angela M.; Tipping, Robert W.; Folsom, Aaron R.; Couper, David J.; Ballantyne, Christie M.; Coresh, Josef; Goya Wannamethee, S.; Morris, Richard W.; Kiechl, Stefan; Willeit, Johann; Willeit, Peter; Schett, Georg; Ebrahim, Shah; Lawlor, Debbie A.; Yarnell, John W.; Gallacher, John; Cushman, Mary; Psaty, Bruce M.; Tracy, Russ; Tybjærg-Hansen, Anne; Price, Jackie F.; Lee, Amanda J.; McLachlan, Stela; Khaw, Kay-Tee; Wareham, Nicholas J.; Brenner, Hermann; Schöttker, Ben; Müller, Heiko; Jansson, Jan-Håkan; Wennberg, Patrik; Salomaa, Veikko; Harald, Kennet; Jousilahti, Pekka; Vartiainen, Erkki; Woodward, Mark; D'Agostino, Ralph B.; Bladbjerg, Else-Marie; Jørgensen, Torben; Kiyohara, Yutaka; Arima, Hisatomi; Doi, Yasufumi; Ninomiya, Toshiharu; Dekker, Jacqueline M.; Nijpels, Giel; Stehouwer, Coen D. A.; Kauhanen, Jussi; Salonen, Jukka T.; Meade, Tom W.; Cooper, Jackie A.; Cushman, Mary; Folsom, Aaron R.; Psaty, Bruce M.; Shea, Steven; Döring, Angela; Kuller, Lewis H.; Grandits, Greg; Gillum, Richard F.; Mussolino, Michael; Rimm, Eric B.; Hankinson, Sue E.; Manson, JoAnn E.; Pai, Jennifer K.; Kirkland, Susan; Shaffer, Jonathan A.; Shimbo, Daichi; Bakker, Stephan J. L.; Gansevoort, Ron T.; Hillege, Hans L.; Amouyel, Philippe; Arveiler, Dominique; Evans, Alun; Ferrières, Jean; Sattar, Naveed; Westendorp, Rudi G.; Buckley, Brendan M.; Cantin, Bernard; Lamarche, Benoît; Barrett-Connor, Elizabeth; Wingard, Deborah L.; Bettencourt, Richele; Gudnason, Vilmundur; Aspelund, Thor; Sigurdsson, Gunnar; Thorsson, Bolli; Kavousi, Maryam; Witteman, Jacqueline C.; Hofman, Albert; Franco, Oscar H.; Howard, Barbara V.; Zhang, Ying; Best, Lyle; Umans, Jason G.; Onat, Altan; Sundström, Johan; Michael Gaziano, J.; Stampfer, Meir; Ridker, Paul M.; Michael Gaziano, J.; Ridker, Paul M.; Marmot, Michael; Clarke, Robert; Collins, Rory; Fletcher, Astrid; Brunner, Eric; Shipley, Martin; Kivimäki, Mika; Ridker, Paul M.; Buring, Julie; Cook, Nancy; Ford, Ian; Shepherd, James; Cobbe, Stuart M.; Robertson, Michele; Walker, Matthew; Watson, Sarah; Alexander, Myriam; Butterworth, Adam S.; Angelantonio, Emanuele Di; Gao, Pei; Haycock, Philip; Kaptoge, Stephen; Pennells, Lisa; Thompson, Simon G.; Walker, Matthew; Watson, Sarah; White, Ian R.; Wood, Angela M.; Wormser, David; Danesh, John

    2014-01-01

    Individual participant time-to-event data from multiple prospective epidemiologic studies enable detailed investigation into the predictive ability of risk models. Here we address the challenges in appropriately combining such information across studies. Methods are exemplified by analyses of log C-reactive protein and conventional risk factors for coronary heart disease in the Emerging Risk Factors Collaboration, a collation of individual data from multiple prospective studies with an average follow-up duration of 9.8 years (dates varied). We derive risk prediction models using Cox proportional hazards regression analysis stratified by study and obtain estimates of risk discrimination, Harrell's concordance index, and Royston's discrimination measure within each study; we then combine the estimates across studies using a weighted meta-analysis. Various weighting approaches are compared and lead us to recommend using the number of events in each study. We also discuss the calculation of measures of reclassification for multiple studies. We further show that comparison of differences in predictive ability across subgroups should be based only on within-study information and that combining measures of risk discrimination from case-control studies and prospective studies is problematic. The concordance index and discrimination measure gave qualitatively similar results throughout. While the concordance index was very heterogeneous between studies, principally because of differing age ranges, the increments in the concordance index from adding log C-reactive protein to conventional risk factors were more homogeneous. PMID:24366051
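
    The recommended weighting, combining per-study discrimination estimates with weights equal to each study's number of events, reduces to a weighted mean (a minimal sketch; a real meta-analysis would also carry standard errors and heterogeneity through the pooling):

```python
def pooled_estimate(stats, events):
    """Combine per-study discrimination estimates (e.g. concordance indices)
    with weights equal to the number of events in each study."""
    total = sum(events)
    return sum(s * e for s, e in zip(stats, events)) / total
```

    Event-based weights let large, event-rich cohorts dominate the pooled concordance, which is the weighting scheme the authors recommend after comparing alternatives.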

  18. Assessing risk prediction models using individual participant data from multiple studies.

    PubMed

    Pennells, Lisa; Kaptoge, Stephen; White, Ian R; Thompson, Simon G; Wood, Angela M

    2014-03-01

    Individual participant time-to-event data from multiple prospective epidemiologic studies enable detailed investigation into the predictive ability of risk models. Here we address the challenges in appropriately combining such information across studies. Methods are exemplified by analyses of log C-reactive protein and conventional risk factors for coronary heart disease in the Emerging Risk Factors Collaboration, a collation of individual data from multiple prospective studies with an average follow-up duration of 9.8 years (dates varied). We derive risk prediction models using Cox proportional hazards regression analysis stratified by study and obtain estimates of risk discrimination, Harrell's concordance index, and Royston's discrimination measure within each study; we then combine the estimates across studies using a weighted meta-analysis. Various weighting approaches are compared and lead us to recommend using the number of events in each study. We also discuss the calculation of measures of reclassification for multiple studies. We further show that comparison of differences in predictive ability across subgroups should be based only on within-study information and that combining measures of risk discrimination from case-control studies and prospective studies is problematic. The concordance index and discrimination measure gave qualitatively similar results throughout. While the concordance index was very heterogeneous between studies, principally because of differing age ranges, the increments in the concordance index from adding log C-reactive protein to conventional risk factors were more homogeneous.

  19. Ligand and structure-based methodologies for the prediction of the activity of G protein-coupled receptor ligands

    NASA Astrophysics Data System (ADS)

    Costanzi, Stefano; Tikhonova, Irina G.; Harden, T. Kendall; Jacobson, Kenneth A.

    2009-11-01

    Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationships (3D-QSAR) techniques, not requiring knowledge of the receptor structure, have been historically the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent from training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.
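
    A common way to build the ligand- and structure-based consensus mentioned above is rank averaging: each method ranks the compound library, and the per-compound ranks are averaged (a generic sketch of the idea, not a specific published protocol):

```python
def ranks(scores):
    """Rank compounds from best (rank 0) to worst by descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def consensus_rank(qsar_scores, docking_scores):
    """Ligand/structure consensus: average the 3D-QSAR rank and the docking
    rank for each compound; lower consensus rank = stronger candidate."""
    rq, rd = ranks(qsar_scores), ranks(docking_scores)
    return [(a + b) / 2 for a, b in zip(rq, rd)]
```

    Rank averaging sidesteps the different score scales of the two methods, which is one reason consensus schemes tend to be more robust than either component alone.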

  20. Does information available at admission for delivery improve prediction of vaginal birth after cesarean?

    PubMed Central

    Grobman, William A.; Lai, Yinglei; Landon, Mark B.; Spong, Catherine Y.; Leveno, Kenneth J.; Rouse, Dwight J.; Varner, Michael W.; Moawad, Atef H.; Simhan, Hyagriv N.; Harper, Margaret; Wapner, Ronald J.; Sorokin, Yoram; Miodovnik, Menachem; Carpenter, Marshall; O'sullivan, Mary J.; Sibai, Baha M.; Langer, Oded; Thorp, John M.; Ramin, Susan M.; Mercer, Brian M.

    2010-01-01

    Objective: To construct a predictive model for vaginal birth after cesarean (VBAC) that combines factors that can be ascertained only as the pregnancy progresses with those known at initiation of prenatal care. Study design: Using multivariable modeling, we constructed a predictive model for VBAC that included patient factors known at the initial prenatal visit as well as those that become evident only as the pregnancy progresses to the admission for delivery. Results: 9616 women were analyzed. The regression equation for VBAC success included multiple factors that could not be known at the first prenatal visit. The area under the curve for this model was significantly greater (P < .001) than that of a model that included only factors available at the first prenatal visit. Conclusion: A prediction model for VBAC success that incorporates factors that can be ascertained only as the pregnancy progresses adds to the predictive accuracy of a model that uses only factors available at a first prenatal visit. PMID:19813165
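
    The models above are compared by area under the ROC curve, which has a simple rank interpretation: the probability that a randomly chosen successful VBAC received a higher predicted probability than a randomly chosen failed one. An illustrative computation from raw scores:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (positive, negative) pairs ranked correctly; ties count 1/2."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

    Comparing the AUC of the admission-time model against the first-visit model is exactly the significance test the abstract reports.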

  1. On the interest of combining an analog model to a regression model for the adaptation of the downscaling link. Application to probabilistic prediction of precipitation over France.

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2016-04-01

Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the considered hydro-systems. Hence, surface weather scenarios obtained from global climate models and/or numerical weather prediction models are not really appropriate. Outputs of these models have to be post-processed, which is often carried out using Statistical Downscaling Methods (SDMs). Among these SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large-scale atmospheric predictors and the surface weather variable; this link is then used to predict the latter. However, the physical processes generating surface weather vary in time, as is well known for precipitation, for instance. The most relevant predictors and the regression link are therefore also likely to vary in time. Better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar, or analog, to the target date. In practice, these dates can be selected with an analog model. In this study, we explore the possibility of improving the local performance of an analog model, in which the analogy is applied to the 1000 and 500 hPa geopotential heights, using additional local-scale predictors for the probabilistic prediction of Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models, for the occurrence and the quantity of precipitation respectively, whose predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably increases prediction performance by adapting the downscaling link for each prediction day. Secondly, the selected predictors for a given prediction depend on the large-scale situation and on the considered region. Finally, even with such an adaptive predictor identification, the downscaling link appears to be robust: for the same prediction day, the predictors selected for different locations of a given region are similar, and the regression parameters are consistent within the region of interest.
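The analog step above can be sketched in a few lines: select the archive days whose large-scale fields are closest to the target day, then estimate precipitation occurrence and quantity from those dates. This is a minimal stand-in (empirical frequencies instead of the paper's fitted GLMs); the field values and station data below are hypothetical.

```python
import math

def analog_dates(target_field, archive, k=3):
    """Select the k archive days whose large-scale fields (e.g. geopotential
    heights) are closest to the target day, by Euclidean distance."""
    ranked = sorted(archive, key=lambda day: math.dist(day["field"], target_field))
    return ranked[:k]

def predict_precip(target_field, archive, k=3):
    """Occurrence probability and mean wet-day amount estimated from the
    analog dates (a simplified stand-in for the occurrence/quantity GLMs)."""
    analogs = analog_dates(target_field, archive, k)
    wet = [d["precip"] for d in analogs if d["precip"] > 0]
    p_occ = len(wet) / len(analogs)
    amount = sum(wet) / len(wet) if wet else 0.0
    return p_occ, amount

# hypothetical archive: [geopotential-like field, observed precipitation]
archive = [
    {"field": [5520.0, 1010.0], "precip": 0.0},
    {"field": [5500.0, 1000.0], "precip": 4.2},
    {"field": [5495.0, 998.0],  "precip": 6.0},
    {"field": [5600.0, 1030.0], "precip": 0.0},
]
p, amt = predict_precip([5498.0, 999.0], archive, k=3)
```

In the full method, the analog dates would instead supply the training sample for day-specific occurrence and quantity regressions.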

  2. Combination of Pre-Treatment DWI-Signal Intensity and S-1 Treatment: A Predictor of Survival in Patients with Locally Advanced Pancreatic Cancer Receiving Stereotactic Body Radiation Therapy and Sequential S-1.

    PubMed

    Zhang, Yu; Zhu, Xiaofei; Liu, Ri; Wang, Xianglian; Sun, Gaofeng; Song, Jiaqi; Lu, Jianping; Zhang, Huojun

    2018-04-01

To identify whether the combination of pre-treatment radiological and clinical factors can predict overall survival (OS) in patients with locally advanced pancreatic cancer (LAPC) treated with stereotactic body radiation therapy (SBRT) and sequential S-1 (a prodrug of 5-FU combined with two modulators) with improved accuracy compared with established clinical and radiologic risk models. Patients admitted with LAPC underwent a diffusion-weighted imaging (DWI) scan at 3.0 T (b = 600 s/mm²). The mean signal intensity (SI_b=600) of the region of interest (ROI) was measured. The log-rank test was applied to tumor location, biliary stent, S-1, and other treatments, and Cox regression analysis was used to identify independent prognostic factors for OS. Prediction error curves (PEC) were used to assess potential errors in the prediction of survival, and the accuracy of prediction was evaluated by the Integrated Brier Score (IBS) and the C-index. Forty-one patients were included in this study. The median OS was 11.7 months (range, 2.8-23.23 months), and the 1-year OS was 46%. Multivariate analysis showed that the pre-treatment SI_b=600 value and administration of S-1 were independent predictors of OS, and the two in combination performed better than either alone. The combination of pre-treatment SI_b=600 and S-1 treatment could therefore predict OS in patients with LAPC undergoing SBRT and sequential S-1 therapy with improved accuracy compared with established clinical and radiologic risk models.

  3. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition and its dynamics and for optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model from the patient's own data alone. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model from a pool of many possible models, and consequently combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to personalized time series prediction, able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
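A minimal sketch of the model-switching idea, assuming a pool of three toy forecasters (population prior, patient mean, last value; all names and numbers hypothetical): at each time step the model with the lowest recent error issues the forecast.

```python
def switching_forecast(series, models, window=3):
    """At each step, score every model by mean absolute error over the last
    `window` observations and forecast with the current best model."""
    preds = []
    for t in range(2, len(series)):
        lo = max(1, t - window)
        def score(m):
            errs = [abs(m(series[:i]) - series[i]) for i in range(lo, t)]
            return sum(errs) / len(errs)
        best = min(models, key=score)        # switch to the best recent model
        preds.append(best(series[:t]))
    return preds

population_mean = lambda hist: 10.0                   # population-level prior
patient_mean    = lambda hist: sum(hist) / len(hist)  # patient-specific model
last_value      = lambda hist: hist[-1]               # short-term individualized

series = [10.0, 12.0, 13.0, 13.5, 14.0]
preds = switching_forecast(series, [population_mean, patient_mean, last_value])
```

Early on the population model wins by default; as patient data accumulate, the short-term model takes over, which is the behavior the framework exploits.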

  4. A Bayesian hierarchical model with spatial variable selection: the effect of weather on insurance claims

    PubMed Central

    Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth

    2013-01-01

    Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890

  5. Toxicodynamic analysis of the combined cholinesterase inhibition by paraoxon and methamidophos in human whole blood

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosgra, Sieto; Eijkeren, Jan C.H. van; Schans, Marcel J. van der

    2009-04-01

Theoretical work has shown that the isobole method is not generally valid as a method for testing the absence or presence of interaction (in the biochemical sense) between chemicals. The present study illustrates how interaction can be tested by fitting a toxicodynamic model to the results of a mixture experiment. The inhibition of cholinesterases (ChE) in human whole blood by various dose combinations of paraoxon and methamidophos was measured in vitro. A toxicodynamic model describing the processes by which both OPs inhibit AChE activity was developed and fitted to the observed activities. This model, which contains no interaction between the two OPs, described the results from the mixture experiment well, and it was concluded that the OPs did not interact in the whole blood samples. While this approach of toxicodynamic modeling is the most appropriate method for predicting combined effects, it is not rapidly applicable. Therefore, we illustrate how toxicodynamic modeling can be used to explore under which conditions dose addition would give an acceptable approximation of the combined effects of various chemicals. In the specific case of paraoxon and methamidophos in whole blood samples, dose addition gave a reasonably accurate prediction of the combined effects, despite considerable differences in some of their rate constants and mildly non-parallel dose-response curves. Other possibilities for validating dose addition using toxicodynamic modeling are briefly discussed.
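The dose-addition approximation discussed above can be illustrated with simple Hill curves: toxic units d_i/EC50_i are summed and the combined effect is read off a reference curve. The potencies below are hypothetical placeholders, not the paper's fitted rate constants.

```python
def inhibition(dose, ec50, hill=1.0):
    """Fractional ChE inhibition from a single compound (Hill curve)."""
    return dose**hill / (dose**hill + ec50**hill)

def dose_addition(doses, ec50s):
    """Dose (concentration) addition: sum the toxic units d_i/EC50_i and
    evaluate a reference Hill curve with EC50 = 1 at that total."""
    tu = sum(d / e for d, e in zip(doses, ec50s))
    return tu / (tu + 1.0)

# hypothetical potencies standing in for paraoxon and methamidophos
combined = dose_addition([0.5, 2.0], [1.0, 4.0])  # 0.5 + 0.5 toxic units
```

A full toxicodynamic test would instead integrate the inhibition kinetics of both OPs and check whether a no-interaction model reproduces the mixture data.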

  6. Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions

    PubMed Central

    Fox, Naomi J.; Marion, Glenn; Davidson, Ross S.; White, Piran C. L.; Hutchings, Michael R.

    2012-01-01

    Simple Summary Parasitic helminths represent one of the most pervasive challenges to livestock, and their intensity and distribution will be influenced by climate change. There is a need for long-term predictions to identify potential risks and highlight opportunities for control. We explore the approaches to modelling future helminth risk to livestock under climate change. One of the limitations to model creation is the lack of purpose driven data collection. We also conclude that models need to include a broad view of the livestock system to generate meaningful predictions. Abstract Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. 
By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed. PMID:26486780

  7. Application of a fuzzy neural network model in predicting polycyclic aromatic hydrocarbon-mediated perturbations of the Cyp1b1 transcriptional regulatory network in mouse skin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larkin, Andrew; Department of Statistics, Oregon State University; Superfund Research Center, Oregon State University

    2013-03-01

Polycyclic aromatic hydrocarbons (PAHs) are present in the environment as complex mixtures with components that have diverse carcinogenic potencies and mostly unknown interactive effects. Non-additive PAH interactions have been observed in regulation of cytochrome P450 (CYP) gene expression in the CYP1 family. To better understand and predict biological effects of complex mixtures, such as environmental PAHs, an 11-gene-input, 1-gene-output fuzzy neural network (FNN) was developed for predicting PAH-mediated perturbations of dermal Cyp1b1 transcription in mice. Input values were generalized using fuzzy logic into low, medium, and high fuzzy subsets, and sorted using k-means clustering to create Mamdani logic functions for predicting Cyp1b1 mRNA expression. Model testing was performed with data from microarray analysis of skin samples from FVB/N mice treated with toluene (vehicle control), dibenzo[def,p]chrysene (DBC), benzo[a]pyrene (BaP), or 1 of 3 combinations of diesel particulate extract (DPE), coal tar extract (CTE), and cigarette smoke condensate (CSC), using leave-one-out cross-validation. Predictions were within 1 log₂ fold-change unit of the microarray data, with the exception of the DBC treatment group, where the unexpected down-regulation of Cyp1b1 expression was predicted but did not reach statistical significance on the microarrays. Adding CTE to DPE was predicted to increase Cyp1b1 expression, whereas adding CSC to CTE and DPE was predicted to have no effect, in agreement with microarray results. The aryl hydrocarbon receptor repressor (Ahrr) was determined to be the most significant input variable for model predictions using back-propagation and normalization of FNN weights.
Highlights:
► Tested a model to predict PAH mixture-mediated changes in Cyp1b1 expression
► Quantitative predictions in agreement with microarrays for Cyp1b1 induction
► Unexpected difference in expression between DBC and other treatments predicted
► Model predictions for combining PAH mixtures in agreement with microarrays
► Predictions highly dependent on aryl hydrocarbon receptor repressor expression
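The fuzzification step, mapping each gene-expression input into low/medium/high subsets, can be sketched with triangular membership functions; the breakpoints below are illustrative, not those of the published FNN.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x, lo=0.0, hi=1.0):
    """Map a normalized expression value to low/medium/high memberships,
    as in an FNN input layer (breakpoints are hypothetical)."""
    mid = (lo + hi) / 2
    return {
        "low":    tri(x, lo - (hi - lo), lo, mid),
        "medium": tri(x, lo, mid, hi),
        "high":   tri(x, mid, hi, hi + (hi - lo)),
    }

m = fuzzify(0.25)   # partly "low", partly "medium"
```

Downstream, Mamdani rules would combine such memberships across the 11 input genes to produce the predicted Cyp1b1 output.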

  8. Integrated PK-PD and agent-based modeling in oncology.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S

    2015-04-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.

  9. Integrated PK-PD and Agent-Based Modeling in Oncology

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Cristini, Vittorio

    2016-01-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed. PMID:25588379

  10. Unified constitutive material models for nonlinear finite-element structural analysis. [gas turbine engine blades and vanes

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Laflen, J. H.; Lindholm, U. S.

    1985-01-01

    Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.

  11. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  12. Influences of misprediction costs on solar flare prediction

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Wang, HuaNing; Dai, XingHua

    2012-10-01

The misprediction costs of flaring and non-flaring samples differ across applications of solar flare prediction; hence, solar flare prediction is a cost-sensitive problem. A cost-sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate, combined with an exhaustive search strategy, is used to determine the optimal combination of magnetic field parameters in an active region, and these selected parameters are applied as the inputs of the solar flare prediction model. The performance of the cost-sensitive model is evaluated for different thresholds of solar flares. It is found that, as the cost of wrongly predicting flaring samples as non-flaring increases, more flaring samples are correctly predicted and more non-flaring samples are wrongly predicted; a larger such cost is required for higher flare thresholds. This can serve as a guideline for choosing a proper cost to meet the requirements of different applications.
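The cost-sensitive decision itself reduces to comparing expected costs: predict a flare whenever the expected cost of a miss exceeds that of a false alarm. A minimal sketch (probabilities and costs hypothetical):

```python
def predict_flare(p_flare, cost_fn, cost_fp):
    """Predict 'flare' when the expected cost of a miss (false negative)
    exceeds the expected cost of a false alarm (false positive)."""
    return p_flare * cost_fn > (1.0 - p_flare) * cost_fp

# Raising the miss cost lowers the effective probability threshold, so more
# flaring samples are caught at the price of more false alarms.
low_cost  = predict_flare(0.3, cost_fn=1.0, cost_fp=1.0)  # threshold 0.5
high_cost = predict_flare(0.3, cost_fn=5.0, cost_fp=1.0)  # threshold ~0.17
```

This is the same trade-off the abstract describes: increasing the false-negative cost flips borderline cases toward the "flare" prediction.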

  13. Network-based Prediction of Lotic Thermal Regimes Across New England

    EPA Science Inventory

    Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data sour...

  14. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
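The validation strategy, predicting analytes held out of model fitting, can be sketched with a one-descriptor least-squares model and leave-one-out refits; the descriptor and response values below are invented, and a single linear term stands in for the GFA-selected models.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (one molecular descriptor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loo_predictions(xs, ys):
    """Leave-one-out validation: refit without each analyte, then predict it."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a + b * xs[i])
    return preds

# hypothetical descriptor vs. sensor response, exactly linear by construction
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
preds = loo_predictions(xs, ys)
```

With noise-free linear data every held-out prediction matches the observation; real descriptor sets would show scatter, which is what the statistical validation quantifies.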

  15. Micromechanical models for textile structural composites

    NASA Technical Reports Server (NTRS)

    Marrey, Ramesh V.; Sankar, Bhavani V.

    1995-01-01

    The objective is to develop micromechanical models for predicting the stiffness and strength properties of textile composite materials. Two models are presented to predict the homogeneous elastic constants and coefficients of thermal expansion of a textile composite. The first model is based on rigorous finite element analysis of the textile composite unit-cell. Periodic boundary conditions are enforced between opposite faces of the unit-cell to simulate deformations accurately. The second model implements the selective averaging method (SAM), which is based on a judicious combination of stiffness and compliance averaging. For thin textile composites, both models can predict the plate stiffness coefficients and plate thermal coefficients. The finite element procedure is extended to compute the thermal residual microstresses, and to estimate the initial failure envelope for textile composites.

  16. Thermal Pollution Mathematical Model. Volume 5: User's Manual for Three-Dimensional Rigid-Lid Model. [environment impact of thermal discharges from power plants

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1980-01-01

    A user's manual for a three dimensional, rigid lid model used for hydrothermal predictions of closed basins subjected to a heated discharge together with various other inflows and outflows is presented. The model has the capability to predict (1) wind driven circulation; (2) the circulation caused by inflows and outflows to the domain; and (3) the thermal effects in the domain, and to combine the above processes. The calibration procedure consists of comparing ground truth corrected airborne radiometer data with surface isotherms predicted by the model. The model was verified for accuracy at various sites and results are found to be fairly accurate in all verification runs.

  17. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect each model's skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
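The core BMA combination, likelihood-based weights over a training period and a weighted mixture as the predictive pdf, can be sketched as follows. This assumes a fixed Gaussian spread and a flat prior (the full method estimates weights and variances by EM or, here, particle filtering); the forecasts and observations are invented.

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bma_weights(forecasts, observations, sigma=1.0):
    """Posterior model weights proportional to each model's likelihood of the
    training observations (flat prior, fixed predictive spread)."""
    liks = []
    for fc in forecasts:  # fc: one model's forecasts over the training period
        liks.append(math.prod(gauss_pdf(o, f, sigma) for f, o in zip(fc, observations)))
    total = sum(liks)
    return [l / total for l in liks]

def bma_pdf(x, means, weights, sigma=1.0):
    """BMA predictive density: weighted mixture of the member pdfs."""
    return sum(w * gauss_pdf(x, m, sigma) for w, m in zip(weights, means))

train_fc = [[1.0, 2.0, 3.0], [1.5, 2.6, 3.4]]   # two member models
obs      = [1.4, 2.5, 3.5]
w = bma_weights(train_fc, obs)                   # second model fits better
```

The paper's contribution replaces the fixed Gaussian member pdfs here with evolving Gaussian-mixture densities derived by particle filtering.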

  18. Protein loop modeling using a new hybrid energy function and its application to modeling in inaccurate structural environments.

    PubMed

    Park, Hahnbeom; Lee, Gyu Rie; Heo, Lim; Seok, Chaok

    2014-01-01

    Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.

  19. Predictive Models and Tools for Screening Chemicals under TSCA: Consumer Exposure Models 1.5

    EPA Pesticide Factsheets

    CEM contains a combination of models and default parameters which are used to estimate inhalation, dermal, and oral exposures to consumer products and articles for a wide variety of product and article use categories.

  20. Constraining proposed combinations of ice history and Earth rheology using VLBI determined baseline length rates in North America

    NASA Technical Reports Server (NTRS)

    Mitrovica, J. X.; Davis, J. L.; Shapiro, I. I.

    1993-01-01

We predict the present-day rates of change of the lengths of 19 North American baselines due to the glacial isostatic adjustment process. Contrary to previously published research, we find that the three-dimensional motion of each of the sites defining a baseline, rather than only the radial motions of these sites, needs to be considered to obtain an accurate estimate of the rate of change of the baseline length. Predictions are generated using a suite of Earth models and late Pleistocene ice histories; these include specific combinations of the two which have been proposed in the literature as satisfying a variety of rebound-related geophysical observations from the North American region. A number of these published models are shown to predict rates which differ significantly from the VLBI observations.

  1. Survey of statistical techniques used in validation studies of air pollution prediction models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornstein, R D; Anderson, S F

    1979-03-01

    Statistical techniques used by meteorologists to validate predictions made by air pollution models are surveyed. Techniques are divided into the following three groups: graphical, tabular, and summary statistics. Some of the practical problems associated with verification are also discussed. Characteristics desired in any validation program are listed and a suggested combination of techniques that possesses many of these characteristics is presented.
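Typical summary statistics from such validation programs (mean bias, RMSE, and the correlation between predicted and observed concentrations) can be computed directly; the concentration values below are made up for illustration.

```python
import math

def validation_stats(predicted, observed):
    """Summary statistics commonly used to validate air-quality predictions:
    mean bias, root-mean-square error, and Pearson correlation."""
    n = len(predicted)
    bias = sum(p - o for p, o in zip(predicted, observed)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    mp, mo = sum(predicted) / n, sum(observed) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return {"bias": bias, "rmse": rmse, "r": cov / (sp * so)}

stats = validation_stats([1.0, 2.0, 3.0], [1.5, 2.0, 3.5])
```

Graphical techniques (scatter and quantile plots) and tabular breakdowns by stability class would complement these summary numbers, as the survey describes.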

  2. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    ERIC Educational Resources Information Center

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  3. Prediction of In Vivo Knee Joint Kinematics Using a Combined Dual Fluoroscopy Imaging and Statistical Shape Modeling Technique

    PubMed Central

    Li, Jing-Sheng; Tsai, Tsung-Yuan; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Freiberg, Andrew; Rubash, Harry E.; Li, Guoan

    2014-01-01

Using computed tomography (CT) or magnetic resonance (MR) images to construct 3D knee models has been widely used in biomedical engineering research. The statistical shape modeling (SSM) method is an alternative that provides a fast, cost-efficient, subject-specific knee modeling technique. This study aimed to evaluate the feasibility of using a combined dual fluoroscopic imaging system (DFIS) and SSM method to investigate in vivo knee kinematics. Three subjects were studied during treadmill walking, and the data were compared with kinematics obtained using a CT-based modeling technique. Geometric root-mean-square (RMS) errors between the knee models constructed using the SSM and CT-based modeling techniques were 1.16 mm and 1.40 mm for the femur and tibia, respectively. For the kinematics of the knee during the treadmill gait, the SSM model predicted the knee kinematics with RMS errors within 3.3 deg for rotation and within 2.4 mm for translation throughout the stance phase of the gait cycle, compared with those obtained using the CT-based knee models. The data indicated that the combined DFIS and SSM technique can be used for quick evaluation of knee joint kinematics. PMID:25320846

  4. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

In a task in which the observer must detect a signal at one of two locations, presenting a precue that predicts the location of the signal leads to improved performance with a valid cue (signal location matches the cue) compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained by a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum-of-weighted-likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. A limited-capacity attentional switching model was also analyzed and rejected. The sum-of-weighted-likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, not a weighted linear combination.
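The sum-of-weighted-likelihoods rule can be sketched for unit-variance Gaussian signals: each location's likelihood ratio is weighted by the cue validity before summing. The d' value and observations below are illustrative, not the study's stimulus parameters.

```python
import math

def likelihood_ratio(x, d_prime):
    """Likelihood ratio for signal vs. noise at one location, assuming
    unit-variance Gaussians separated by d'."""
    return math.exp(d_prime * x - d_prime ** 2 / 2)

def cued_decision(x_cued, x_uncued, prior_cued=0.8, d_prime=1.0):
    """Sum-of-weighted-likelihoods rule: weight each location's likelihood
    ratio by the cue validity and report 'signal' if the sum favors it."""
    s = prior_cued * likelihood_ratio(x_cued, d_prime) \
        + (1 - prior_cued) * likelihood_ratio(x_uncued, d_prime)
    return s > 1.0

hit  = cued_decision(1.2, -0.3)   # strong evidence at the cued location
miss = cued_decision(-0.5, 0.2)   # weak evidence everywhere
```

A linear-integration observer would instead weight the raw responses x themselves; the two rules diverge as SNR grows, which is what the experiment exploits.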

  5. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures

    PubMed Central

    Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enables cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
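    In its simplest form, the fixed-effects part of such a model reduces to a least-squares fit of the response against exposure time. A minimal closed-form sketch (the yellowness-index values and exposure times below are hypothetical, not the study's data):

```python
def ols_fit(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical yellowness-index measurements at exposure times (hours)
t = [0.0, 500.0, 1000.0, 1500.0, 2000.0]
yi = [1.0, 2.1, 3.0, 4.2, 5.0]
b0, b1 = ols_fit(t, yi)
```

    A mixed-effects version would additionally give each sample its own random intercept or slope, which is what the study's haze models required.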

  6. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information

    PubMed Central

    Wang, Xiaohong; Wang, Lizhi

    2017-01-01

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930
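    The Bayesian network in the paper encodes coupled failure mechanisms; as a much-reduced stand-in, the sketch below shows only how independent component survival probabilities propagate through series (logical AND) and redundant (logical OR) structure. The component probabilities and the solar-powered-system layout are hypothetical.

```python
def survive_series(ps):
    """All components must survive (logical AND of independent nodes)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def survive_parallel(ps):
    """At least one redundant component survives (logical OR)."""
    fail_all = 1.0
    for p in ps:
        fail_all *= (1.0 - p)
    return 1.0 - fail_all

# Hypothetical solar-powered system: panel and battery in series,
# with a redundant pair of communication units in parallel.
p_system = survive_series([0.99, 0.95, survive_parallel([0.9, 0.9])])
```

    The BN formulation generalizes this by letting nodes share parents, which is how correlations between accidental and degradation failures are represented.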

  7. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    PubMed

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.

  8. Overview of the 1986--1987 atomic mass predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.

    1988-07-01

    The need for a comprehensive update of earlier sets of atomic mass predictions is documented. A project that grew from this need and which resulted in the preparation of the 1986--1987 Atomic Mass Predictions is summarized. Ten sets of new mass predictions and expository text from a variety of types of mass models are combined with the latest evaluation of experimentally determined atomic masses. The methodology employed in constructing these mass predictions is outlined. The models are compared with regard to their reproduction of the experimental mass surface and their use of varying numbers of adjustable parameters. Plots are presented, for each set of predictions, of differences between model calculations and the measured masses. These plots may be used to estimate the reliability of the new mass predictions in unmeasured regions that border the experimentally known mass surface. copyright 1988 Academic Press, Inc.
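    The comparison described, how well each model reproduces the experimental mass surface, is commonly quantified as an RMS deviation between predicted and measured masses. A minimal sketch with hypothetical mass excesses (the values below are illustrative, not from the evaluation):

```python
import math

def rms_deviation(predicted, measured):
    """Root-mean-square difference between model and experimental masses (keV)."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical mass excesses (keV) from two models against experiment
experiment = [-1000.0, -500.0, 200.0]
model_a = [-980.0, -510.0, 230.0]
model_b = [-900.0, -450.0, 300.0]
```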

  9. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  10. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.

  11. Assessing Predictive Properties of Genome-Wide Selection in Soybeans

    PubMed Central

    Xavier, Alencar; Muir, William M.; Rainey, Katy Martin

    2016-01-01

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max (L.) Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
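    The cross-validation scheme used to compare training-set sizes and models can be sketched generically. In this sketch `fit`, `predict`, and `score` are caller-supplied placeholders, not the study's genomic prediction models; contiguous folds are an assumption made for simplicity.

```python
def k_fold_indices(n, k):
    """Split range(n) into k contiguous folds for cross-validation."""
    folds = []
    start = 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(x, y, k, fit, predict, score):
    """Mean held-out score over k folds; fit/predict/score supplied by caller."""
    scores = []
    for fold in k_fold_indices(len(x), k):
        held_out = set(fold)
        xtr = [xi for i, xi in enumerate(x) if i not in held_out]
        ytr = [yi for i, yi in enumerate(y) if i not in held_out]
        model = fit(xtr, ytr)
        scores.append(score([predict(model, x[i]) for i in fold],
                            [y[i] for i in fold]))
    return sum(scores) / k
```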

  12. Dengue Baidu Search Index data can improve the prediction of local dengue epidemic: A case study in Guangzhou, China

    PubMed Central

    Liu, Tao; Zhu, Guanghu; Lin, Hualiang; Zhang, Yonghui; He, Jianfeng; Deng, Aiping; Peng, Zhiqiang; Xiao, Jianpeng; Rutherford, Shannon; Xie, Runsheng; Zeng, Weilin; Li, Xing; Ma, Wenjun

    2017-01-01

    Background Dengue fever (DF) in Guangzhou, Guangdong province in China is an important public health issue. The problem was highlighted in 2014 by a large, unprecedented outbreak. In order to respond in a more timely manner and hence better control such potential outbreaks in the future, this study develops an early warning model that integrates internet-based query data into traditional surveillance data. Methodology and principal findings A Dengue Baidu Search Index (DBSI) was collected from the Baidu website for developing a predictive model of dengue fever in combination with meteorological and demographic factors. Generalized additive models (GAM) with or without DBSI were established. The generalized cross validation (GCV) score and deviance explained indexes, intraclass correlation coefficient (ICC) and root mean squared error (RMSE), were respectively applied to measure the fitness and the prediction capability of the models. Our results show that the DBSI with one-week lag has a positive linear relationship with the local DF occurrence, and the model with DBSI (ICC:0.94 and RMSE:59.86) has a better prediction capability than the model without DBSI (ICC:0.72 and RMSE:203.29). Conclusions Our study suggests that DBSI combined with traditional disease surveillance and meteorological data can improve the dengue early warning system in Guangzhou. PMID:28263988
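    The RMSE comparison reported above (59.86 with DBSI vs. 203.29 without) can be reproduced in form. A minimal sketch; the weekly case counts and the two models' predictions below are hypothetical:

```python
import math

def rmse(predicted, observed):
    """Root mean squared error between predicted and observed weekly case counts."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Hypothetical weekly dengue counts and two models' predictions
observed = [10.0, 40.0, 120.0, 300.0, 90.0]
with_dbsi = [12.0, 45.0, 110.0, 280.0, 95.0]
without_dbsi = [30.0, 20.0, 60.0, 150.0, 200.0]
```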

  13. Improving the prediction of arsenic contents in agricultural soils by combining the reflectance spectroscopy of soils and rice plants

    NASA Astrophysics Data System (ADS)

    Shi, Tiezhu; Wang, Junjie; Chen, Yiyun; Wu, Guofeng

    2016-10-01

    Visible and near-infrared reflectance spectroscopy provides a beneficial tool for investigating soil heavy metal contamination. This study aimed to investigate mechanisms of soil arsenic prediction using laboratory based soil and leaf spectra, compare the prediction of arsenic content using soil spectra with that using rice plant spectra, and determine whether the combination of both could improve the prediction of soil arsenic content. A total of 100 samples were collected and the reflectance spectra of soils and rice plants were measured using a FieldSpec3 portable spectroradiometer (350-2500 nm). After eliminating spectral outliers, the reflectance spectra were divided into calibration (n = 62) and validation (n = 32) data sets using the Kennard-Stone algorithm. A genetic algorithm (GA) was used to select useful spectral variables for soil arsenic prediction. Thereafter, the GA-selected spectral variables of the soil and leaf spectra were individually and jointly employed to calibrate the partial least squares regression (PLSR) models using the calibration data set. The regression models were validated and compared using the independent validation data set. Furthermore, the correlation coefficients of soil arsenic against soil organic matter, leaf arsenic and leaf chlorophyll were calculated, and the important wavelengths for PLSR modeling were extracted. Results showed that arsenic prediction using the leaf spectra (coefficient of determination in validation, Rv2 = 0.54; root mean square error in validation, RMSEv = 12.99 mg kg-1; and residual prediction deviation in validation, RPDv = 1.35) was slightly better than that using the soil spectra (Rv2 = 0.42, RMSEv = 13.35 mg kg-1, and RPDv = 1.31). However, results also showed that the combined use of soil and leaf spectra resulted in more accurate arsenic prediction (Rv2 = 0.63, RMSEv = 11.94 mg kg-1, RPDv = 1.47) compared with either soil or leaf spectra alone. 
Soil spectral bands near 480, 600, 670, 810, 1980, 2050 and 2290 nm, leaf spectral bands near 700, 890 and 900 nm in PLSR models were important wavelengths for soil arsenic prediction. Moreover, soil arsenic showed significantly positive correlations with soil organic matter (r = 0.62, p < 0.01) and leaf arsenic (r = 0.77, p < 0.01), and a significantly negative correlation with leaf chlorophyll (r = -0.67, p < 0.01). The results showed that the prediction of arsenic contents using soil and leaf spectra may be based on their relationships with soil organic matter and leaf chlorophyll contents, respectively. Although RPD of 1.47 was below the recommended RPD of >2 for soil analysis, arsenic prediction in agricultural soils can be improved by combining the leaf and soil spectra.
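    The three validation statistics quoted throughout this record (Rv2, RMSEv, RPDv) can be computed together. A minimal sketch; the prediction/observation values in the test are illustrative, not the study's data:

```python
import math

def validation_metrics(pred, obs):
    """Return (R2, RMSE, RPD) for an independent validation set.

    RPD is the ratio of the standard deviation of the observations to the
    RMSE of prediction; values above ~2 are conventionally taken to
    indicate a model usable for quantitative soil analysis.
    """
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((p - o) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / n)
    return 1 - ss_res / ss_tot, rmse, sd / rmse
```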

  14. A systems approach to model the relationship between aflatoxin gene cluster expression, environmental factors, growth and toxin production by Aspergillus flavus

    PubMed Central

    Abdel-Hadi, Ahmed; Schmidt-Heydt, Markus; Parra, Roberto; Geisen, Rolf; Magan, Naresh

    2012-01-01

    A microarray analysis was used to examine the effect of combinations of water activity (aw, 0.995–0.90) and temperature (20–42°C) on the activation of aflatoxin biosynthetic genes (30 genes) in Aspergillus flavus grown on a conducive YES (20 g yeast extract, 150 g sucrose, 1 g MgSO4·7H2O) medium. The relative expression of 10 key genes (aflF, aflD, aflE, aflM, aflO, aflP, aflQ, aflX, aflR and aflS) in the biosynthetic pathway was examined in relation to different environmental factors and phenotypic aflatoxin B1 (AFB1) production. These data, plus data on relative growth rates and AFB1 production under different aw × temperature conditions were used to develop a mixed-growth-associated product formation model. The gene expression data were normalized and then used as a linear combination of the data for all 10 genes and combined with the physical model. This was used to relate gene expression to aw and temperature conditions to predict AFB1 production. The observed AFB1 production showed a good linear regression fit to the production predicted by the model. The model was then validated by examining datasets outside the model fitting conditions used (37°C, 40°C and different aw levels). The relationship between structural genes (aflD, aflM) in the biosynthetic pathway and the regulatory genes (aflS, aflJ) was examined in relation to aw and temperature by developing ternary diagrams of relative expression. These findings are important in developing a more integrated systems approach by combining gene expression, ecophysiological influences and growth data to predict mycotoxin production. This could help in developing a more targeted approach to develop prevention strategies to control such carcinogenic natural metabolites that are prevalent in many staple food products. The model could also be used to predict the impact of climate change on toxin production. PMID:21880616

  15. Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception

    PubMed Central

    Davis, Matthew H.

    2016-01-01

    Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. 
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
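    The contrast between the two coding schemes can be stated schematically over a vector of feature channels. This is only a caricature of the computational models in the paper: sharpening is represented as multiplicative enhancement of expected features, and prediction error as subtraction of the expectation from the input.

```python
def sharpened_signal(sensory, prior):
    """Sharpening: expected features are multiplicatively enhanced,
    then renormalized across channels."""
    product = [s * p for s, p in zip(sensory, prior)]
    total = sum(product)
    return [v / total for v in product]

def prediction_error(sensory, prior):
    """Predictive coding: expected features are subtracted, leaving
    the unexpected part of the input to be processed further."""
    return [s - p for s, p in zip(sensory, prior)]
```

    Note the opposite behavior when the input matches the prior: the sharpened representation grows more distinct, while the prediction error shrinks toward zero, which is the signature the multivariate fMRI analysis tested for.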

  16. Discovering Synergistic Drug Combination from a Computational Perspective.

    PubMed

    Ding, Pingjian; Luo, Jiawei; Liang, Cheng; Xiao, Qiu; Cao, Buwen; Li, Guanghui

    2018-03-30

    Synergistic drug combinations play an important role in the treatment of complex diseases. Identifying effective drug combinations is vital to further reducing side effects and improving therapeutic efficiency. Until recently, in vitro screening has been the main route to discovering synergistic drug combinations, but it is limited by time and resource consumption. With the rapid development of computational models and the explosive growth of large-scale phenotypic data, computational methods for discovering synergistic drug combinations have therefore become an efficient and promising tool that contributes to precision medicine. How the computational model is constructed is the key question, since different computational strategies yield different performance. In this review, recent advances in computational methods for predicting effective drug combinations are summarized from multiple aspects. First, the various datasets used to discover synergistic drug combinations are summarized. Second, feature-based approaches are discussed and partitioned into two classes: methods built on similarity measures and methods built on machine learning. Third, network-based approaches for uncovering synergistic drug combinations are discussed. Finally, computational methods for predicting effective drug combinations are analyzed and their prospects considered.

  17. Development of a Threshold Model to Predict Germination of Populus tomentosa Seeds after Harvest and Storage under Ambient Condition

    PubMed Central

    Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan

    2013-01-01

    Effects of temperature, storage time and their combination on germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time under 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT) which assumed a two-phased linear relationship between germination rate and temperature was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all the germination temperatures. An aging thermal time (ATT) model combining both the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all the temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy for predicting seed germination. Both the MLT and ATT models indicate that germination of aspen seeds has distinct physiological responses to temperature within a suboptimal range. PMID:23658654
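    The base thermal-time (TT) model that the study modifies has a simple closed form in the suboptimal range: germination rate GR = (T − Tb)/θ, so predicted time to germination is θ/(T − Tb). The sketch below shows only this base form, not the two-phase MTT or the ATT models; the base temperature `Tb` and thermal-time constant `theta` are illustrative values.

```python
def thermal_time_germination(T, Tb=2.0, theta=40.0):
    """Predicted time to germination (days) under the classic thermal-time
    model in the suboptimal range: GR = (T - Tb) / theta, so time = theta
    / (T - Tb) for temperature T (degrees C) above the base temperature Tb;
    theta is the thermal-time constant (degree C * days). Illustrative
    parameters, not fitted values from the study.
    """
    if T <= Tb:
        return float("inf")  # no germination at or below the base temperature
    return theta / (T - Tb)
```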

  18. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from many years of photovoltaic (PV) power-generation data and improve the accuracy of PV power forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on a neural network. Historical data are screened by clustering analysis into three weather types: sunny, cloudy and rainy days. After screening, BP neural network prediction models are trained on the screened data. The six prediction models (three weather types, before and after data screening) are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective way to improve the precision of photovoltaic power-generation forecasting.

  19. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Depeursinge, Adrien, E-mail: adrien.depeursinge@hevs.ch; Yanagawa, Masahiro; Leung, Ann N.

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). 
Conclusions: This study constitutes a novel perspective on how to interpret imaging information from CT examinations by suggesting that most of the information related to adenocarcinoma aggressiveness is related to the intensity and morphological properties of solid components of the tumor. The prediction of adenocarcinoma relapse was found to have low specificity but very high sensitivity. Our results could be useful in clinical practice to identify patients for which no recurrence is expected with a very high confidence using a presurgical CT scan only. It also provided an accurate estimation of the risk of recurrence after a given duration t from surgical resection (i.e., C-index = 0.81 ± 0.02).
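    The concordance index quoted above (C-index = 0.81 ± 0.02) is Harrell's C for censored survival data. A minimal O(n²) sketch, assuming at least one comparable pair; the survival times, event flags, and risk scores in the test are illustrative:

```python
def concordance_index(times, events, risks):
    """Harrell's concordance index (C-index).

    A pair (i, j) is comparable when the subject with the shorter
    follow-up time actually had the event (events[i] is truthy); the
    pair is concordant when that subject also has the higher predicted
    risk. Tied risks count as half-concordant.
    """
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```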

  20. Program of research in severe storms

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Two modeling areas, the development of a mesoscale chemistry-meteorology interaction model, and the development of a combined urban chemical kinetics-transport model are examined. The problems associated with developing a three dimensional combined meteorological-chemical kinetics computer program package are defined. A similar three dimensional hydrostatic real time model which solves the fundamental Navier-Stokes equations for nonviscous flow is described. An urban air quality simulation model, developed to predict the temporal and spatial distribution of reactive and nonreactive gases in and around an urban area and to support a remote sensor evaluation program is reported.

  1. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters-and the predictions that depend on them-arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.

  2. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.
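    In the simplest scalar case, a Gaussian framework combines two independent sources (e.g. an NWP output and a statistical estimate) by precision weighting. The sketch below shows only that one step, not the paper's full multivariate space-time model; the means and variances are hypothetical.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Precision-weighted combination of two independent Gaussian estimates.

    Each source is weighted by its inverse variance (precision); the
    combined variance is the inverse of the total precision, so the
    fused estimate is never less certain than either source.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var
```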

  3. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE PAGES

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    2018-03-01

    We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  4. Airborne Wireless Communication Modeling and Analysis with MATLAB

    DTIC Science & Technology

    2014-03-27

    This research develops a physical layer model that combines antenna modeling using computational electromagnetics with the two-ray propagation model to predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey...

  5. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  6. Identification of the feedforward component in manual control with predictable target signals.

    PubMed

    Drop, Frank M; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus M; Mulder, Max

    2013-12-01

    In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in the literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was done in which the relative strengths of the target and disturbance signals were systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics. Model fits show that this behavior can be modeled best by the combined feedforward and feedback model.
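    The hypothesized control structure can be illustrated with a toy simulation: a first-order plant tracked either by pure error feedback or by feedback plus a feedforward term that approximately inverts the plant dynamics, mirroring the finding that the control action on the target resembles the inverse system dynamics. The plant, gains, and reference below are illustrative assumptions, not the paper's identified models.

```python
import numpy as np

# First-order plant y[k+1] = a*y[k] + b*u[k] (illustrative values)
a, b, K = 0.5, 1.0, 0.3
N = 200
r = np.sin(np.linspace(0, 4 * np.pi, N + 1))  # predictable target signal

def simulate(use_feedforward):
    y = np.zeros(N + 1)
    for k in range(N):
        u = K * (r[k] - y[k])  # feedback acting on the tracking error
        if use_feedforward:
            # feedforward: approximate inverse of the plant dynamics,
            # driven by the (known, predictable) reference
            u += (r[k + 1] - a * r[k]) / b
        y[k + 1] = a * y[k] + b * u
    return np.sqrt(np.mean((r - y) ** 2))

rms_fb = simulate(False)
rms_ff_fb = simulate(True)
print(rms_ff_fb < rms_fb)  # True: feedforward improves tracking
```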

  7. Transmembrane Topology and Signal Peptide Prediction Using Dynamic Bayesian Networks

    PubMed Central

    Reynolds, Sheila M.; Käll, Lukas; Riffle, Michael E.; Bilmes, Jeff A.; Noble, William Stafford

    2008-01-01

    Hidden Markov models (HMMs) have been successfully applied to the tasks of transmembrane protein topology prediction and signal peptide prediction. In this paper we expand upon this work by making use of the more powerful class of dynamic Bayesian networks (DBNs). Our model, Philius, is inspired by a previously published HMM, Phobius, and combines a signal peptide submodel with a transmembrane submodel. We introduce a two-stage DBN decoder that combines the power of posterior decoding with the grammar constraints of Viterbi-style decoding. Philius also provides protein type, segment, and topology confidence metrics to aid in the interpretation of the predictions. We report a relative improvement of 13% over Phobius in full-topology prediction accuracy on transmembrane proteins, and a sensitivity and specificity of 0.96 in detecting signal peptides. We also show that our confidence metrics correlate well with the observed precision. In addition, we have made predictions on all 6.3 million proteins in the Yeast Resource Center (YRC) database. This large-scale study provides an overall picture of the relative numbers of proteins that include a signal-peptide and/or one or more transmembrane segments as well as a valuable resource for the scientific community. All DBNs are implemented using the Graphical Models Toolkit. Source code for the models described here is available at http://noble.gs.washington.edu/proj/philius. A Philius Web server is available at http://www.yeastrc.org/philius, and the predictions on the YRC database are available at http://www.yeastrc.org/pdr. PMID:18989393

  8. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely simulate the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that was used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines that are limited to single stage fans with design tip relative Mach numbers greater than one.

  9. Second order closure modeling of turbulent buoyant wall plumes

    NASA Technical Reports Server (NTRS)

    Zhu, Gang; Lai, Ming-Chia; Shih, Tsan-Hsing

    1992-01-01

    Non-intrusive measurements of scalar and momentum transport in turbulent wall plumes, using a combined technique of laser Doppler anemometry and laser-induced fluorescence, have shown some interesting features not present in free jets or plumes. First, buoyancy-generation of turbulence is shown to be important throughout the flow field. Combined with low-Reynolds-number turbulence and near-wall effects, this may raise the anisotropic turbulence structure beyond the prediction of eddy-viscosity models. Second, the transverse scalar fluxes do not correspond only to the mean scalar gradients, as would be expected from gradient-diffusion modeling. Third, higher-order velocity-scalar correlations which describe turbulent transport phenomena could not be predicted using simple turbulence models. A second-order closure simulation of turbulent adiabatic wall plumes, taking into account the recent progress in scalar transport, near-wall effects and buoyancy, is reported in the current study to compare with the non-intrusive measurements. In spite of the small velocity scale of the wall plumes, the results showed that low-Reynolds-number correction is not critically important to predict the adiabatic cases tested and cannot be applied beyond the maximum velocity location. The mean and turbulent velocity profiles are very closely predicted by the second-order closure models, but the scalar field is less satisfactory, with the scalar fluctuation level underpredicted. Strong intermittency of the low-Reynolds-number flow field is suspected to be the cause of these discrepancies. The trends in second- and third-order velocity-scalar correlations, which describe turbulent transport phenomena, are also predicted in general, with the cross-streamwise correlations better than the streamwise ones. Buoyancy terms modeling the pressure-correlation are shown to improve the prediction slightly. The effects of equilibrium time-scale ratio and boundary condition are also discussed.

  10. Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.

    PubMed

    Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza

    2015-09-15

    The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters in conditions with missing data, or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Demonstrating the improvement of predictive maturity of a computational model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as a basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  12. Transient excitation and mechanical admittance test techniques for prediction of payload vibration environments

    NASA Technical Reports Server (NTRS)

    Kana, D. D.; Vargas, L. M.

    1977-01-01

    Transient excitation forces were applied separately to simple beam-and-mass launch vehicle and payload models to develop complex admittance functions for the interface and other appropriate points on the structures. These measured admittances were then analytically combined by a matrix representation to obtain a description of the coupled system dynamic characteristics. Response of the payload model to excitation of the launch vehicle model was predicted and compared with results measured on the combined models. These results are also compared with results of earlier work in which a similar procedure was employed except that steady-state sinusoidal excitation techniques were included. It is found that the method employing transient tests produces results that are better overall than the steady state methods. Furthermore, the transient method requires far less time to implement, and provides far better resolution in the data. However, the data acquisition and handling problem is more complex for this method. It is concluded that the transient test and admittance matrix prediction method can be a valuable tool for development of payload vibration tests.

  13. Predicting clinical diagnosis in Huntington's disease: An imaging polymarker

    PubMed Central

    Daws, Richard E.; Soreq, Eyal; Johnson, Eileanoir B.; Scahill, Rachael I.; Tabrizi, Sarah J.; Barker, Roger A.; Hampshire, Adam

    2018-01-01

    Objective Huntington's disease (HD) gene carriers can be identified before clinical diagnosis; however, statistical models for predicting when overt motor symptoms will manifest are too imprecise to be useful at the level of the individual. Perfecting this prediction is integral to the search for disease modifying therapies. This study aimed to identify an imaging marker capable of reliably predicting real‐life clinical diagnosis in HD. Method A multivariate machine learning approach was applied to resting‐state and structural magnetic resonance imaging scans from 19 premanifest HD gene carriers (preHD, 8 of whom developed clinical disease in the 5 years postscanning) and 21 healthy controls. A classification model was developed using cross‐group comparisons between preHD and controls, and within the preHD group in relation to “estimated” and “actual” proximity to disease onset. Imaging measures were modeled individually, and combined, and permutation modeling robustly tested classification accuracy. Results Classification performance for preHDs versus controls was greatest when all measures were combined. The resulting polymarker predicted converters with high accuracy, including those who were not expected to manifest in that time scale based on the currently adopted statistical models. Interpretation We propose that a holistic multivariate machine learning treatment of brain abnormalities in the premanifest phase can be used to accurately identify those patients within 5 years of developing motor features of HD, with implications for prognostication and preclinical trials. Ann Neurol 2018;83:532–543 PMID:29405351

  14. Object detection in natural backgrounds predicted by discrimination performance and models

    NASA Technical Reports Server (NTRS)

    Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d′s with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
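    The pooling step the three models share can be written directly: a Minkowski summation of absolute differences between the background and object-plus-background images, with the exponent controlling how strongly the largest difference dominates (exponent infinity reduces to the maximum). A minimal sketch; the image arrays are invented:

```python
import numpy as np

def minkowski_summation(img_a, img_b, beta):
    """Minkowski summation of absolute differences between two images.

    beta = 2 pools like a vector magnitude; larger beta weights the
    largest differences more; beta = inf returns the maximum difference.
    """
    d = np.abs(np.asarray(img_a, float) - np.asarray(img_b, float)).ravel()
    if np.isinf(beta):
        return d.max()
    return (d ** beta).sum() ** (1.0 / beta)

a = np.array([[0.0, 1.0], [2.0, 3.0]])  # background
b = np.array([[0.0, 1.5], [1.0, 3.0]])  # object plus background
# absolute differences: 0, 0.5, 1, 0
print(minkowski_summation(a, b, np.inf))  # 1.0 (the largest difference)
```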

  15. Intelligent postoperative morbidity prediction of heart disease using artificial intelligence techniques.

    PubMed

    Hsieh, Nan-Chen; Hung, Lun-Ping; Shih, Chun-Che; Keh, Huan-Chao; Chan, Chien-Hui

    2012-06-01

    Endovascular aneurysm repair (EVAR) is an advanced minimally invasive surgical technology that is helpful for reducing patients' recovery time, postoperative morbidity and mortality. This study proposes an ensemble model to predict postoperative morbidity after EVAR. The ensemble model was developed using a training set of consecutive patients who underwent EVAR between 2000 and 2009. All data required for prediction modeling, including patient demographics, preoperative data, co-morbidities, and complications as outcome variables, was collected prospectively and entered into a clinical database. A discretization approach was used to categorize numerical values into an informative feature space. Then, the Bayesian network (BN), artificial neural network (ANN), and support vector machine (SVM) were adopted as base models, and stacking was used to combine the multiple models. The research outcomes consisted of an ensemble model to predict postoperative morbidity after EVAR, the occurrence of postoperative complications prospectively recorded, and the causal effect knowledge by BNs with the Markov blanket concept.

  16. Efficient Reduction and Analysis of Model Predictive Error

    NASA Astrophysics Data System (ADS)

    Doherty, J.

    2006-12-01

    Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.
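    The solution-space/null-space decomposition described above can be sketched with an SVD of the observation sensitivity (Jacobian) matrix: singular vectors with nonzero singular values span the calibration solution space, the rest span the null space, and a prediction's sensitivity vector splits exactly between the two. The matrix and prediction sensitivity below are invented for illustration.

```python
import numpy as np

# Jacobian of observations with respect to parameters (illustrative):
# 2 observations, 4 parameters -> at most 2 inferable parameter combinations
X = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
U, s, Vt = np.linalg.svd(X)
rank = int(np.sum(s > 1e-10))
V1 = Vt[:rank].T   # spans the calibration solution space
V2 = Vt[rank:].T   # spans the calibration null space

# Sensitivity of a prediction to the parameters (illustrative)
y = np.array([1.0, -1.0, 2.0, 0.0])

# Decompose the prediction sensitivity into the two subspaces
y_sol = V1 @ (V1.T @ y)   # constrained by calibration
y_null = V2 @ (V2.T @ y)  # not inferable -> contributes predictive error
print(np.allclose(y_sol + y_null, y))  # True: the split is exact
```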

  17. Combining a Spatial Model and Demand Forecasts to Map Future Surface Coal Mining in Appalachia

    PubMed Central

    Strager, Michael P.; Strager, Jacquelyn M.; Evans, Jeffrey S.; Dunscomb, Judy K.; Kreps, Brad J.; Maxwell, Aaron E.

    2015-01-01

    Predicting the locations of future surface coal mining in Appalachia is challenging for a number of reasons. Economic and regulatory factors impact the coal mining industry, and forecasts of future coal production do not specifically predict changes in the location of future coal production. With the potential environmental impacts from surface coal mining, prediction of the location of future activity would be valuable to decision makers. The goal of this study was to provide a method for predicting future surface coal mining extents under changing economic and regulatory forecasts through the year 2035. This was accomplished by integrating a spatial model with production demand forecasts to predict land cover change at a 1 km² grid cell size. Combining these two inputs was possible with a ratio which linked coal extraction quantities to a unit-area extent. The result was a spatial distribution of probabilities allocated over forecasted demand for the Appalachian region including the northern, central, southern, and eastern Illinois coal regions. The results can be used to better plan for land use alterations and potential cumulative impacts. PMID:26090883
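    The quantity-to-area link can be sketched as a simple allocation rule: convert forecast production into a number of 1 km² cells via a tons-per-unit-area ratio, then allocate those cells to the highest-probability locations. This is a schematic stand-in for the paper's method; the grid, ratio, and demand figures are invented.

```python
import numpy as np

def allocate_demand(prob_grid, demand_tons, tons_per_km2):
    """Allocate a coal production forecast over a spatial probability grid.

    Each 1 km^2 cell is assigned in order of decreasing mining probability
    until the area implied by the demand/ratio is exhausted.
    """
    cells_needed = int(round(demand_tons / tons_per_km2))
    order = np.argsort(prob_grid.ravel())[::-1]  # highest probability first
    mined = np.zeros(prob_grid.size, dtype=bool)
    mined[order[:cells_needed]] = True
    return mined.reshape(prob_grid.shape)

probs = np.array([[0.9, 0.2],
                  [0.7, 0.1]])
mined = allocate_demand(probs, demand_tons=200.0, tons_per_km2=100.0)
print(mined.sum())  # 2 cells: the two highest-probability locations
```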

  18. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints of the optimization objective often cannot be met when predictive control is applied to an industrial production process, and the online predictive controller then fails to find a feasible solution or a global optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, a nonlinear programming method is used to discuss the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is not feasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.

  19. Tornado damage risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhold, T.A.; Ellingwood, B.

    1982-09-01

    Several proposed models were evaluated for predicting tornado wind speed probabilities at nuclear plant sites as part of a program to develop statistical data on tornadoes needed for probability-based load combination analysis. A unified model was developed which synthesized the desired aspects of tornado occurrence and damage potential. The sensitivity of wind speed probability estimates to various tornado modeling assumptions are examined, and the probability distributions of tornado wind speed that are needed for load combination studies are presented.

  20. Monthly streamflow forecasting at varying spatial scales in the Rhine basin

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2018-02-01

    Model output statistics (MOS) methods can be used to empirically relate an environmental variable of interest to predictions from earth system models (ESMs). This variable often belongs to a spatial scale not resolved by the ESM. Here, using the linear model fitted by least squares, we regress monthly mean streamflow of the Rhine River at Lobith and Basel against seasonal predictions of precipitation, surface air temperature, and runoff from the European Centre for Medium-Range Weather Forecasts. To address potential effects of a scale mismatch between the ESM's horizontal grid resolution and the hydrological application, the MOS method is further tested with an experiment conducted at the subcatchment scale. This experiment applies the MOS method to 133 additional gauging stations located within the Rhine basin and combines the forecasts from the subcatchments to predict streamflow at Lobith and Basel. In doing so, the MOS method is tested for catchment areas covering 4 orders of magnitude. Using data from the period 1981-2011, the results show that skill, with respect to climatology, is restricted on average to the first month ahead. This result holds for both the predictor combination that mimics the initial conditions and the predictor combinations that additionally include the dynamical seasonal predictions. The latter, however, reduce the mean absolute error of the former in the range of 5 to 12 %, which is consistently reproduced at the subcatchment scale. An additional experiment conducted for 5-day mean streamflow indicates that the dynamical predictions help to reduce uncertainties up to about 20 days ahead, but it also reveals some shortcomings of the present MOS method.
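    The MOS step itself is ordinary least squares: regress observed monthly streamflow on the ESM predictors and measure skill against climatology (the observed mean). A self-contained sketch on synthetic data; the coefficients and noise levels are invented, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly ESM predictors: precipitation, temperature, runoff
n = 120
X = rng.normal(size=(n, 3))
# Synthetic streamflow response plus noise (illustrative coefficients)
beta_true = np.array([2.0, -0.5, 1.5])
q = X @ beta_true + 10.0 + rng.normal(scale=0.1, size=n)

# MOS: regress observed streamflow on the ESM predictions (least squares)
A = np.column_stack([np.ones(n), X])  # intercept + predictors
coef, *_ = np.linalg.lstsq(A, q, rcond=None)
q_hat = A @ coef

# Skill relative to climatology (mean absolute error comparison)
mae_mos = np.mean(np.abs(q - q_hat))
mae_clim = np.mean(np.abs(q - q.mean()))
print(mae_mos < mae_clim)  # True: the MOS regression beats climatology here
```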

  1. Predicting stress urinary incontinence during pregnancy: combination of pelvic floor ultrasound parameters and clinical factors.

    PubMed

    Chen, Ling; Luo, Dan; Yu, Xiajuan; Jin, Mei; Cai, Wenzhi

    2018-05-12

    The aim of this study was to develop and validate a predictive tool combining pelvic floor ultrasound parameters and clinical factors for stress urinary incontinence during pregnancy. A total of 535 women in the first or second trimester were included for an interview and transperineal ultrasound assessment from two hospitals. Imaging data sets were analyzed offline to assess bladder neck vertical position, urethra angles (α, β, and γ angles), hiatal area and bladder neck funneling. All significant continuous variables at univariable analysis were analyzed by receiver-operating characteristic curves. Three multivariable logistic models were built on clinical factors, alone and in combination with ultrasound parameters. The final predictive model with the best performance and fewest variables was selected to establish a nomogram. Internal and external validation of the nomogram were performed by both discrimination, represented by the C-index, and calibration, measured by the Hosmer-Lemeshow test. A decision curve analysis was conducted to determine the clinical utility of the nomogram. After excluding 14 women with invalid data, 521 women were analyzed. β angle, γ angle and hiatal area had limited predictive value for stress urinary incontinence during pregnancy, with areas under the curve of 0.558-0.648. The final predictive model included body mass index gain since pregnancy, constipation, previous delivery mode, β angle at rest, and bladder neck funneling. The nomogram based on the final model showed good discrimination with a C-index of 0.789 and satisfactory calibration (P=0.828), both of which were supported by external validation. Decision curve analysis showed that the nomogram was clinically useful. The nomogram incorporating both the pelvic floor ultrasound parameters and clinical factors has been validated to show good discrimination and calibration, and could be an important tool for stress urinary incontinence risk prediction at an early stage of pregnancy.
This article is protected by copyright. All rights reserved.

  2. Gender Dimorphic ACL Strain In Response to Combined Dynamic 3D Knee Joint Loading: Implications for ACL Injury Risk

    PubMed Central

    Mizuno, Kiyonori; Andrish, Jack T.; van den Bogert, Antonie J.; McLean, Scott G.

    2009-01-01

    While gender-based differences in knee joint anatomies/laxities are well documented, the potential for them to precipitate gender-dimorphic ACL loading and resultant injury risk has not been considered. To this end, we generated gender-specific models of ACL strain as a function of any six degrees of freedom (6DOF) knee joint load state via a combined cadaveric and analytical approach. Continuously varying joint forces and torques were applied to five male and five female cadaveric specimens and recorded along with synchronous knee flexion and ACL strain data. All data (~10,000 samples) were submitted to specimen-specific regression analyses, affording ACL strain predictions as a function of the combined 6DOF knee loads. Following individual model verifications, generalized gender-specific models were generated and subjected to 6DOF external load scenarios consistent with both a clinical examination and a dynamic sports maneuver. The ensuing model-based strain predictions were subsequently examined for gender-based discrepancies. Male and female specimen-specific models predicted ACL strain within 0.51% ± 0.10% and 0.52% ± 0.07% of the measured data respectively, and explained more than 75% of the associated variance in each case. Predicted female ACL strains were also significantly larger than respective male values for both simulated 6DOF load scenarios. Outcomes suggest that the female ACL will rupture in response to comparatively smaller external load applications. Future work must address the underlying anatomical/laxity contributions to knee joint mechanics and resultant ACL loading, ultimately affording prevention strategies that may cater to individual joint vulnerabilities. PMID:19464897

  3. Validation of variants in SLC28A3 and UGT1A6 as genetic markers predictive of anthracycline-induced cardiotoxicity in children.

    PubMed

    Visscher, H; Ross, C J D; Rassekh, S R; Sandor, G S S; Caron, H N; van Dalen, E C; Kremer, L C; van der Pal, H J; Rogers, P C; Rieder, M J; Carleton, B C; Hayden, M R

    2013-08-01

    The use of anthracyclines as effective antineoplastic drugs is limited by the occurrence of cardiotoxicity. Multiple genetic variants predictive of anthracycline-induced cardiotoxicity (ACT) in children were recently identified. The current study aimed to assess replication of these findings in an independent cohort of children. Twenty-three variants were tested for association with ACT in an independent cohort of 218 patients. Predictive models including genetic and clinical risk factors were constructed in the original cohort and assessed in the current replication cohort. We confirmed the association of rs17863783 in UGT1A6 and ACT in the replication cohort (P = 0.0062, odds ratio (OR) 7.98). Additional evidence for association of rs7853758 (P = 0.058, OR 0.46) and rs885004 (P = 0.058, OR 0.42) in SLC28A3 was found (combined P = 1.6 × 10(-5) and P = 3.0 × 10(-5), respectively). A previously constructed prediction model did not significantly improve risk prediction in the replication cohort over clinical factors alone. However, an improved prediction model constructed using replicated genetic variants as well as clinical factors discriminated significantly better between cases and controls than clinical factors alone in both the original (AUC 0.77 vs. 0.68, P = 0.0031) and replication cohort (AUC 0.77 vs. 0.69, P = 0.060). We validated genetic variants in two genes predictive of ACT in an independent cohort. A prediction model combining replicated genetic variants as well as clinical risk factors might be able to identify high- and low-risk patients who could benefit from alternative treatment options. Copyright © 2013 Wiley Periodicals, Inc.
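    The discrimination metric used to compare the models (AUC) can be computed directly from risk scores with the Mann-Whitney formulation: the fraction of (case, control) pairs the model ranks correctly, with ties counted as half. The risk scores below are hypothetical, not the study's data.

```python
def auc_mann_whitney(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Counts the fraction of (case, control) pairs ranked correctly,
    counting ties as half a win.
    """
    wins = 0.0
    for s_case in case_scores:
        for s_ctrl in control_scores:
            if s_case > s_ctrl:
                wins += 1.0
            elif s_case == s_ctrl:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical risk scores from a combined genetic + clinical model
print(round(auc_mann_whitney([0.9, 0.8, 0.6], [0.1, 0.4, 0.6]), 4))  # 0.9444
```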

  4. [Determination of acidity and vitamin C in apples using portable NIR analyzer].

    PubMed

    Yang, Fan; Li, Ya-Ting; Gu, Xuan; Ma, Jiang; Fan, Xing; Wang, Xiao-Xuan; Zhang, Zhuo-Yong

    2011-09-01

    Near-infrared (NIR) spectroscopy based on a portable NIR analyzer, combined with the kernel Isomap algorithm and a generalized regression neural network (GRNN), was applied to establish quantitative models for predicting acidity and vitamin C in six kinds of apple samples. The results demonstrated that the fit and predictive accuracy of the models using the kernel Isomap algorithm were satisfactory. For the acidity model, the correlation between actual and predicted values was 0.9994 for calibration samples (Rc) and 0.9799 for prediction samples (Rp), and the root mean square error of the prediction set (RMSEP) was 0.0558. For the vitamin C model, Rc was 0.9891, Rp was 0.9272, and RMSEP was 4.0431. These results show that the portable NIR analyzer can be a feasible tool for determining acidity and vitamin C in apples.

  5. Prediction of a service demand using combined forecasting approach

    NASA Astrophysics Data System (ADS)

    Zhou, Ling

    2017-08-01

    Forecasting helps a logistics service provider cut operational and management costs while maintaining service levels. Our case study investigates how to forecast short-term logistics demand for a less-than-truckload (LTL) carrier. A combined approach draws on several forecasting methods simultaneously instead of a single one; it can offset the weaknesses of one method with the strengths of another, improving prediction accuracy. The main issues in combined forecast modeling are how to select the methods to combine and how to determine the weight coefficients among them. The selection principles are that each method should be applicable to the forecasting problem itself, and that the methods should differ in character as much as possible. Based on these principles, exponential smoothing, ARIMA, and a neural network were chosen to form the combined approach. The least-squares technique was then employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three single methods. This work helps managers select prediction methods in practice.
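    The least-squares weighting step described in this record can be sketched as follows; the in-sample forecasts and observed demands are invented for illustration, not the paper's data.

```python
import numpy as np

# Columns: hypothetical in-sample forecasts from three methods
# (exponential smoothing, ARIMA, neural network) for six periods.
F = np.array([
    [10.2,  9.8, 10.5],
    [11.0, 11.4, 10.9],
    [12.1, 11.9, 12.4],
    [11.5, 11.8, 11.2],
    [13.0, 12.6, 13.3],
    [12.4, 12.7, 12.1],
])
y = np.array([10.1, 11.2, 12.2, 11.6, 12.9, 12.5])  # observed demand

# Least-squares combination weights: minimize ||F w - y||^2.
w, *_ = np.linalg.lstsq(F, y, rcond=None)
combined = F @ w

# In-sample, the combined forecast fits at least as well as any single
# method, since each single method is a special case (w = unit vector).
sse_single = ((F - y[:, None]) ** 2).sum(axis=0)
sse_combined = ((combined - y) ** 2).sum()
```

    Variants of this scheme constrain the weights to be non-negative or to sum to one; the unconstrained form above is the simplest sketch.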

  6. Sweat loss prediction using a multi-model approach

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojiang; Santee, William R.

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significant for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will evaluate MMA using additional physiological data to expand the scope of populations and conditions.
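    The core idea of this record, averaging two models whose errors partly cancel, can be shown in a few lines; the observations and model outputs below are made up to illustrate the effect, not the study's data.

```python
import numpy as np

# Illustrative sweat-loss observations and predictions from two
# hypothetical models with roughly opposite biases.
observed = np.array([450., 520., 610., 380., 700.])
model_a  = np.array([400., 560., 650., 350., 640.])
model_b  = np.array([500., 470., 580., 420., 760.])

# Multi-model approach: simple average of the two predictions.
mma = (model_a + model_b) / 2.0

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    return np.sqrt(np.mean((pred - obs) ** 2))

rmsd_a = rmsd(model_a, observed)
rmsd_b = rmsd(model_b, observed)
rmsd_mma = rmsd(mma, observed)
```

    The averaging only helps when the individual models' errors are not strongly positively correlated; if both models err in the same direction, the average inherits the shared bias.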

  7. Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.

    PubMed

    Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A

    2016-08-02

    Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics to the cyanobacteria Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture with the experimental 96 h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385 indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
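    The concentration addition (CA) prediction used in this record follows from the components' individual EC50s and their fractions in the mixture. The EC50 values below are illustrative placeholders, not the study's measurements.

```python
# Hypothetical single-substance EC50s (umol/L) for the three antibiotics.
ec50 = {"trimethoprim": 10.0, "tylosin": 0.5, "lincomycin": 2.0}

# Equimolar mixture: each substance contributes one third.
fraction = {"trimethoprim": 1 / 3, "tylosin": 1 / 3, "lincomycin": 1 / 3}

# Concentration addition: 1 / EC50_mix = sum_i p_i / EC50_i,
# where p_i is substance i's fraction of the total mixture concentration.
ec50_mix = 1.0 / sum(fraction[s] / ec50[s] for s in ec50)
```

    By construction the CA mixture EC50 lies between the most and least potent components, with the potent ones (here tylosin) dominating, which matches the record's finding that tylosin contributed most to the risk.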

  8. Combining correlative and mechanistic habitat suitability models to improve ecological compensation.

    PubMed

    Meineri, Eric; Deville, Anne-Sophie; Grémillet, David; Gauthier-Clerc, Michel; Béchet, Arnaud

    2015-02-01

    Only a few studies have shown positive impacts of ecological compensation on species dynamics affected by human activities. We argue that this is due to inappropriate methods used to forecast required compensation in environmental impact assessments. These assessments are mostly descriptive and only valid at limited spatial and temporal scales. However, habitat suitability models developed to predict the impacts of environmental changes on potential species' distributions should provide rigorous science-based tools for compensation planning. Here we describe the two main classes of predictive models: correlative models and individual-based mechanistic models. We show how these models can be used alone or synoptically to improve compensation planning. While correlative models are easier to implement, they tend to ignore underlying ecological processes and lack accuracy. On the contrary, individual-based mechanistic models can integrate biological interactions, dispersal ability and adaptation. Moreover, among mechanistic models, those considering animal energy balance are particularly efficient at predicting the impact of foraging habitat loss. However, mechanistic models require more field data compared to correlative models. Hence we present two approaches which combine both methods for compensation planning, especially in relation to the spatial scale considered. We show how the availability of biological databases and software enabling fast and accurate population projections could be advantageously used to assess ecological compensation requirement efficiently in environmental impact assessments. © 2014 The Authors. Biological Reviews © 2014 Cambridge Philosophical Society.

  9. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  10. A Reliability Estimation in Modeling Watershed Runoff With Uncertainties

    NASA Astrophysics Data System (ADS)

    Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.

    1990-10-01

    The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
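    The Monte Carlo flavor of the reliability analysis described here can be sketched with a toy rational-method peak-discharge model; the distributions, threshold, and watershed area are assumptions for illustration, not values from the study.

```python
import random

random.seed(1)

# Toy rational-formula peak discharge Q = 0.278 * C * i * A (SI units),
# with uncertain runoff coefficient C and rainfall intensity i.
A = 12.0  # watershed area, km^2 (assumed)

def sample_peak():
    C = random.gauss(0.45, 0.05)   # runoff coefficient uncertainty
    i = random.gauss(25.0, 4.0)    # rainfall intensity, mm/h
    return 0.278 * C * i * A       # peak discharge, m^3/s

# Propagate the input uncertainties to a distribution of the output.
peaks = sorted(sample_peak() for _ in range(10_000))

# Prediction reliability expressed via the output distribution, e.g.
# the probability of exceeding a design threshold.
threshold = 40.0
p_exceed = sum(q > threshold for q in peaks) / len(peaks)
median_peak = peaks[len(peaks) // 2]
```

    First-order second-moment methods approximate the same output moments analytically from input means, variances, and model sensitivities, at far lower computational cost than sampling.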

  11. Forecasting Zika Incidence in the 2016 Latin America Outbreak Combining Traditional Disease Surveillance with Search, Social Media, and News Report Data

    PubMed Central

    McGough, Sarah F.; Brownstein, John S.; Hawkins, Jared B.; Santillana, Mauricio

    2017-01-01

    Background: Over 400,000 people across the Americas are thought to have been infected with Zika virus as a consequence of the 2015–2016 Latin American outbreak. Official government-led case count data in Latin America are typically delayed by several weeks, making it difficult to track the disease in a timely manner. Thus, timely disease tracking systems are needed to design and assess interventions to mitigate disease transmission. Methodology/Principal Findings: We combined information from Zika-related Google searches, Twitter microblogs, and the HealthMap digital surveillance system with historical Zika suspected case counts to track and predict estimates of suspected weekly Zika cases during the 2015–2016 Latin American outbreak, up to three weeks ahead of the publication of official case data. We evaluated the predictive power of these data and used a dynamic multivariable approach to retrospectively produce predictions of weekly suspected cases for five countries: Colombia, El Salvador, Honduras, Venezuela, and Martinique. Models that combined Google (and Twitter data where available) with autoregressive information showed the best out-of-sample predictive accuracy for 1-week ahead predictions, whereas models that used only Google and Twitter typically performed best for 2- and 3-week ahead predictions. Significance: Given the significant delay in the release of official government-reported Zika case counts, we show that these Internet-based data streams can be used as timely and complementary ways to assess the dynamics of the outbreak. PMID:28085877
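    A minimal version of the "autoregressive term plus Internet signal" model described here can be sketched on synthetic data; the case counts and search series below are simulated, not Zika surveillance data, and the model form (one AR lag plus one exogenous lag, fit by least squares) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a search-volume proxy that leads cases by one week.
T = 60
search = rng.gamma(2.0, 1.0, size=T)
cases = np.empty(T)
cases[0] = 5.0
for t in range(1, T):
    cases[t] = 0.7 * cases[t - 1] + 3.0 * search[t - 1] + rng.normal(0, 0.3)

# Regress this week's cases on last week's cases (autoregressive term)
# and last week's search volume (exogenous Internet signal).
X = np.column_stack([np.ones(T - 1), cases[:-1], search[:-1]])
y = cases[1:]
train = slice(0, 45)
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# One-week-ahead predictions for the held-out weeks, versus a
# "persistence" baseline that just repeats last week's count.
pred = X[45:] @ coef
err_model = np.mean((pred - y[45:]) ** 2)
err_persistence = np.mean((cases[45:-1] - y[45:]) ** 2)
```

    In this construction the search signal genuinely leads the cases, so the combined model beats persistence out of sample; with real surveillance data that advantage has to be demonstrated, as the study does.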

  12. IMHOTEP—a composite score integrating popular tools for predicting the functional consequences of non-synonymous sequence variants

    PubMed Central

    Knecht, Carolin; Mort, Matthew; Junge, Olaf; Cooper, David N.; Krawczak, Michael

    2017-01-01

    The in silico prediction of the functional consequences of mutations is an important goal of human pathogenetics. However, bioinformatic tools that classify mutations according to their functionality employ different algorithms, so that predictions may vary markedly between tools. We therefore integrated nine popular prediction tools (PolyPhen-2, SNPs&GO, MutPred, SIFT, MutationTaster2, Mutation Assessor and FATHMM, as well as the conservation-based Grantham Score and PhyloP) into a single predictor. The optimal combination of these tools was selected by means of a wide range of statistical modeling techniques, drawing upon 10,029 disease-causing single nucleotide variants (SNVs) from the Human Gene Mutation Database and 10,002 putatively 'benign' non-synonymous SNVs from UCSC. Predictive performance was found to be markedly improved by model-based integration, whilst maximum predictive capability was obtained with either random forest, decision tree or logistic regression analysis. A combination of PolyPhen-2, SNPs&GO, MutPred, MutationTaster2 and FATHMM was found to perform as well as all tools combined. Comparison of our approach with other integrative approaches such as Condel, CoVEC, CAROL, CADD, MetaSVM and MetaLR using an independent validation dataset revealed the superiority of our newly proposed integrative approach. An online implementation of this approach, IMHOTEP ('Integrating Molecular Heuristics and Other Tools for Effect Prediction'), is provided at http://www.uni-kiel.de/medinfo/cgi-bin/predictor/. PMID:28180317

  13. Using built environment characteristics to predict walking for exercise

    PubMed Central

    Lovasi, Gina S; Moudon, Anne V; Pearson, Amber L; Hurvitz, Philip M; Larson, Eric B; Siscovick, David S; Berke, Ethan M; Lumley, Thomas; Psaty, Bruce M

    2008-01-01

    Background: Environments conducive to walking may help people avoid sedentary lifestyles and associated diseases. Recent studies developed walkability models combining several built environment characteristics to optimally predict walking. Developing and testing such models with the same data could lead to overestimating one's ability to predict walking in an independent sample of the population. More accurate estimates of model fit can be obtained by splitting a single study population into training and validation sets (holdout approach) or through developing and evaluating models in different populations. We used these two approaches to test whether built environment characteristics near the home predict walking for exercise. Study participants lived in western Washington State and were adult members of a health maintenance organization. The physical activity data used in this study were collected by telephone interview and were selected for their relevance to cardiovascular disease. In order to limit confounding by prior health conditions, the sample was restricted to participants in good self-reported health and without a documented history of cardiovascular disease. Results: For 1,608 participants meeting the inclusion criteria, the mean age was 64 years, 90 percent were white, 37 percent had a college degree, and 62 percent of participants reported that they walked for exercise. Single built environment characteristics, such as residential density or connectivity, did not significantly predict walking for exercise. Regression models using multiple built environment characteristics to predict walking were not successful at predicting walking for exercise in an independent population sample. In the validation set, none of the logistic models had a C-statistic confidence interval excluding the null value of 0.5, and none of the linear models explained more than one percent of the variance in time spent walking for exercise. We did not detect significant differences in walking for exercise among census areas or postal codes, which were used as proxies for neighborhoods. Conclusion: None of the built environment characteristics significantly predicted walking for exercise, nor did combinations of these characteristics predict walking for exercise when tested using a holdout approach. These results reflect a lack of neighborhood-level variation in walking for exercise for the population studied. PMID:18312660

  14. Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

    NASA Astrophysics Data System (ADS)

    Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; de Zeeuw, Chris I.

    2016-11-01

    Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity.

  15. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    PubMed Central

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  16. Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

    PubMed Central

    Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; De Zeeuw, Chris I.

    2016-01-01

    Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity. PMID:27805050

  17. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
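    The probability-weighted averaging of per-model predictions and variances described here can be sketched in a few lines; the posterior probabilities, estimates, and variances below are invented placeholders, not the report's numbers.

```python
# Posterior probabilities of the retained alternative models (already
# normalized to sum to one) and their per-model predictions/variances.
posterior = [0.40, 0.30, 0.20, 0.10]
predictions = [2.1, 2.4, 1.8, 2.9]   # e.g. kriged log-permeability
variances = [0.04, 0.09, 0.05, 0.16] # per-model kriging variances

# Model-averaged prediction: posterior-weighted mean.
mean_avg = sum(w * p for w, p in zip(posterior, predictions))

# Total variance = weighted within-model variance
#                + between-model spread around the averaged mean.
var_avg = sum(
    w * (v + (p - mean_avg) ** 2)
    for w, p, v in zip(posterior, predictions, variances)
)
```

    The between-model term is what conceptual-model uncertainty adds on top of parameter uncertainty; selecting a single "best" model and discarding the rest drops that term entirely.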

  18. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    PubMed

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance component of several weight and ultrasound scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes and then to investigate the effect of fitting additive and dominance effects on accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitting model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound scanned body composition traits particularly in cross-bred population; however, improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.

  19. Modeling to predict growth/no growth boundaries and kinetic behavior of Salmonella on cutting board surfaces.

    PubMed

    Yoon, Hyunjoo; Lee, Joo-Yeon; Suk, Hee-Jin; Lee, Sunah; Lee, Heeyoung; Lee, Soomin; Yoon, Yohan

    2012-12-01

    This study developed models to predict the growth probabilities and kinetic behavior of Salmonella enterica strains on cutting boards. Polyethylene coupons (3 by 5 cm) were rubbed with pork belly, and pork purge was then sprayed on the coupon surface, followed by inoculation of a five-strain Salmonella mixture onto the surface of the coupons. These coupons were stored at 13 to 35°C for 12 h, and total bacterial and Salmonella cell counts were enumerated on tryptic soy agar and xylose lysine deoxycholate (XLD) agar, respectively, every 2 h, which produced 56 combinations. The combinations that had growth of ≥0.5 log CFU/cm² of Salmonella bacteria recovered on XLD agar were given the value 1 (growth), and the combinations that had growth of <0.5 log CFU/cm² were assigned the value 0 (no growth). These growth response data from XLD agar were analyzed by logistic regression for producing growth/no growth interfaces of Salmonella bacteria. In addition, a linear model was fitted to the Salmonella cell counts to calculate the growth rate (log CFU/cm²/h) and initial cell count (log CFU/cm²), following secondary modeling with the square root model. All of the models developed were validated with observed data, which were not used for model development. Growth of total bacteria and Salmonella cells was observed at 28, 30, 33, and 35°C, but there was no growth detected below 20°C within the time frame investigated. Moreover, various indices indicated that the performance of the developed models was acceptable. The results suggest that the models developed in this study may be useful in predicting the growth/no growth interface and kinetic behavior of Salmonella bacteria on polyethylene cutting boards.
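    The square-root secondary model mentioned in this record (Ratkowsky form, sqrt(mu) = b(T - Tmin)) can be sketched as follows; the parameter values are hypothetical, chosen only so that no growth occurs at or below 20°C, consistent with the abstract's observation, and are not the paper's fitted estimates.

```python
# Assumed square-root-model parameters (illustrative, not fitted).
b, Tmin = 0.018, 20.0

def growth_rate(T_celsius):
    """Predicted growth rate (log CFU/cm^2/h); zero at or below Tmin."""
    if T_celsius <= Tmin:
        return 0.0
    # sqrt(mu) = b * (T - Tmin)  =>  mu = (b * (T - Tmin))^2
    return (b * (T_celsius - Tmin)) ** 2

# Rates at a few of the storage temperatures from the study design.
rates = {T: growth_rate(T) for T in (13, 20, 28, 35)}
```

    In the full modeling workflow, a primary (linear) model extracts the growth rate at each temperature, and the square-root model then describes how that rate varies with temperature across the growth region.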

  20. Modelling and simulation of the consolidation behavior during thermoplastic prepreg composites forming process

    NASA Astrophysics Data System (ADS)

    Xiong, H.; Hamila, N.; Boisse, P.

    2017-10-01

    Pre-impregnated thermoplastic composites have recently attracted increasing interest in the automotive industry for their excellent mechanical properties and rapid manufacturing cycles. Modelling and numerical simulation of forming processes for composite parts with complex geometry are necessary to predict and optimize manufacturing practices, especially regarding consolidation effects. A viscoelastic relaxation model is proposed to characterize the consolidation behavior of thermoplastic prepregs, based on compaction tests over a range of temperatures. The intimate contact model is employed to predict the evolution of consolidation, which permits prediction of the void microstructure present through the prepreg. Within a hyperelastic framework, several simulation tests are launched by combining a newly developed solid-shell finite element with the consolidation models.
