Sample records for effect prediction method

  1. A unified frame of predicting side effects of drugs by using linear neighborhood similarity.

    PubMed

    Zhang, Wen; Yue, Xiang; Liu, Feng; Chen, Yanlin; Tu, Shikui; Zhang, Xining

    2017-12-14

    Drug side effects are one of the main concerns in drug discovery and have gained wide attention. Investigating drug side effects is of great importance, and computational prediction can help to guide wet experiments. As far as we know, a great number of computational methods have been proposed for side effect prediction. The assumption that similar drugs may induce the same side effects is usually employed for modeling, and how to calculate drug-drug similarity is critical in side effect prediction. In this paper, we present a novel measure of drug-drug similarity named "linear neighborhood similarity", which is calculated in a drug feature space by exploring linear neighborhood relationships. Then, we transfer the similarity from the feature space into the side effect space, and predict drug side effects by propagating known side effect information through a similarity-based graph. Under a unified frame based on the linear neighborhood similarity, we propose the method "LNSM" and its extension "LNSM-SMI" to predict side effects of new drugs, and propose the method "LNSM-MSE" to predict unobserved side effects of approved drugs. We evaluate the performance of LNSM and LNSM-SMI in predicting side effects of new drugs, and evaluate the performance of LNSM-MSE in predicting missing side effects of approved drugs. The results demonstrate that the linear neighborhood similarity can improve the performance of side effect prediction, and that linear neighborhood similarity-based methods can outperform existing side effect prediction methods. More importantly, the proposed methods can predict side effects of new drugs as well as unobserved side effects of approved drugs under a unified frame.
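
    The propagation step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: each drug is reconstructed as a nonnegative, roughly sum-to-one combination of its k nearest neighbors in feature space, and the resulting weights propagate known side-effect labels over the similarity graph. All data, parameter values, and function names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def linear_neighborhood_similarity(X, k=10):
        """Approximate each row of X as a nonnegative combination of its
        k nearest neighbors; the normalized weights act as similarities."""
        n = X.shape[0]
        W = np.zeros((n, n))
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
        for i in range(n):
            nbrs = np.argsort(d[i])[1:k + 1]          # skip the drug itself
            w, _ = nnls(X[nbrs].T, X[i])              # nonnegative least squares
            if w.sum() > 0:
                w /= w.sum()                          # enforce sum-to-one (approximately)
            W[i, nbrs] = w
        return W

    def propagate_labels(W, Y, alpha=0.9):
        """Propagate known drug/side-effect labels Y (n_drugs x n_effects)
        through the graph: F = (1 - alpha) * (I - alpha * W)^-1 * Y."""
        n = W.shape[0]
        return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * W, Y)

    # toy usage: 30 drugs, 8 chemical features, 5 side effects
    rng = np.random.default_rng(0)
    X = rng.random((30, 8))
    Y = (rng.random((30, 5)) > 0.7).astype(float)
    scores = propagate_labels(linear_neighborhood_similarity(X, k=5), Y)
    ```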

  2. Quantitative prediction of drug side effects based on drug-related features.

    PubMed

    Niu, Yanqing; Zhang, Wen

    2017-09-01

    Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
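
    A rough sketch of the scoring and averaging ideas, under stated assumptions (synthetic data, and ridge regression as the per-feature learner, which the abstract does not specify):

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_drugs, n_effects = 200, 50

    # binary side-effect profiles and random (empirical) per-effect weights
    profiles = (rng.random((n_drugs, n_effects)) > 0.8).astype(float)
    weights = rng.random(n_effects)
    scores = profiles @ weights          # quantitative score = weighted sum of side effects

    # three illustrative feature blocks: substructures, targets, indications
    features = [rng.random((n_drugs, d)) for d in (100, 40, 20)]

    # average scoring ensemble: one model per feature type, predictions averaged
    train, test = np.arange(150), np.arange(150, 200)
    preds = []
    for X in features:
        model = Ridge(alpha=1.0).fit(X[train], scores[train])
        preds.append(model.predict(X[test]))
    ensemble_pred = np.mean(preds, axis=0)
    ```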

  3. Prediction of ground effects on aircraft noise

    NASA Technical Reports Server (NTRS)

    Pao, S. P.; Wenzel, A. R.; Oncley, P. B.

    1978-01-01

    A unified method is recommended for predicting ground effects on noise. This method may be used in flyover noise predictions and in correcting static test-stand data to free-field conditions. The recommendation is based on a review of recent progress in the theory of ground effects and of the experimental evidence which supports this theory. It is shown that a surface wave must sometimes be included in the prediction method. Prediction equations are conveniently collected in a single section of the paper. Methods of measuring ground impedance and the resulting ground-impedance data are also reviewed, because the recommended method is based on a locally reactive impedance boundary model. Current practices for estimating ground effects are reviewed, and consideration is given to practical problems in applying the recommended method. These problems include finite frequency-band filters, finite source dimensions, wind and temperature gradients, and signal incoherence.

  4. Flight effects on exhaust noise for turbojet and turbofan engines: Comparison of experimental data with prediction

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1976-01-01

    It was demonstrated that static and in-flight jet engine exhaust noise can be predicted with reasonable accuracy when the multiple-source nature of the problem is taken into account. Jet mixing noise was predicted from the interim prediction method. Provisional methods of estimating internally generated noise and shock noise flight effects were used, based partly on existing prediction methods and partly on recently reported engine data.

  5. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5,214 genotyped and 9,374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
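
    For readers unfamiliar with GBLUP, the following sketch shows the basic machinery the compared methods build on: a VanRaden-style genomic relationship matrix and a single-variance mixed-model solve. It is a simplified stand-in, not the study's implementation; DRP weighting, polygenic effects, and the single-step combined relationship matrix are omitted.

    ```python
    import numpy as np

    def vanraden_G(M):
        """Genomic relationship matrix from a marker matrix M (individuals x SNPs,
        coded 0/1/2), following VanRaden's first method."""
        p = M.mean(axis=0) / 2.0                      # allele frequencies
        Z = M - 2 * p                                 # center by twice the frequency
        return Z @ Z.T / (2 * np.sum(p * (1 - p)))

    def gblup(G, y, h2=0.3):
        """Genomic breeding values from a simple single-variance GBLUP model:
        u_hat = G (G + lambda I)^-1 (y - mean), with lambda = (1 - h2) / h2."""
        lam = (1 - h2) / h2
        yc = y - y.mean()                             # absorb a fixed overall mean
        return G @ np.linalg.solve(G + lam * np.eye(len(y)), yc)

    rng = np.random.default_rng(2)
    M = rng.integers(0, 3, size=(100, 500)).astype(float)
    y = rng.normal(size=100)
    G = vanraden_G(M)
    # the paper's "adjusted" variants rescale G so that its scale matches the
    # pedigree relationship matrix (sketched here only as a comment)
    ebv = gblup(G, y)
    ```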

  6. Prediction techniques for jet-induced effects in hover on STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Wardwell, Douglas A.; Kuhn, Richard E.

    1991-01-01

    Prediction techniques for jet-induced lift effects during hover are available, relatively easy to use, and produce adequate results for preliminary design work. Although deficiencies of the current method were found, it is still the best way to estimate jet-induced lift effects short of using computational fluid dynamics. Its use is summarized. The newly summarized method represents the first step toward the use of surface pressure data in an empirical method, as opposed to just balance data in the current method, for calculating jet-induced effects. Although the new method is currently limited to flat-plate configurations having two circular jets of equal thrust, it has the potential of more accurately predicting jet-induced effects, including a means for estimating the pitching moment in hover. As this method was developed from a very limited amount of data, broader application of the method requires the inclusion of new data on additional configurations. However, within this small data base, the new method does a better job of predicting jet-induced effects in hover than the current method.

  7. Experimental studies on thermodynamic effects of developed cavitation

    NASA Technical Reports Server (NTRS)

    Ruggeri, R. S.

    1974-01-01

    A method for predicting thermodynamic effects of cavitation (changes in cavity pressure relative to stream vapor pressure) is presented. The prediction method accounts for changes in liquid, liquid temperature, flow velocity, and body scale. Both theoretical and experimental studies used in formulating the method are discussed. The prediction method provided good agreement between predicted and experimental results for geometrically scaled venturis handling four different liquids of widely diverse physical properties. Use of the method requires geometric similarity of the body and cavitated region and a known reference cavity-pressure depression at one operating condition.

  8. A survey of the broadband shock associated noise prediction methods

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Krejsa, Eugene A.; Khavaran, Abbas

    1992-01-01

    Several different prediction methods for estimating the broadband shock-associated noise of a supersonic jet are introduced and compared with experimental data at various test conditions. The nozzle geometries considered for comparison include a convergent and a convergent-divergent nozzle, both axisymmetric. Capabilities and limitations of the prediction methods in incorporating the two nozzle geometries, flight effects, and temperature effects are discussed. The predicted noise field shows the best agreement for a convergent nozzle geometry under static conditions. Predicted results for nozzles in flight show larger discrepancies from data, and more dependable flight data are required for further comparison. Qualitative effects of jet temperature, as observed in experiments, are reproduced in the predicted results.

  9. The effect of using genealogy-based haplotypes for genomic prediction

    PubMed Central

    2013-01-01

    Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971

  10. Micromechanical Prediction of the Effective Coefficients of Thermo-Piezoelectric Multiphase Composites

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob

    1998-01-01

    The micromechanical generalized method of cells model is employed for the prediction of the effective elastic, piezoelectric, dielectric, pyroelectric, and thermal-expansion constants of multiphase composites with embedded piezoelectric materials. The predicted effective constants are compared with those of other micromechanical methods available in the literature, and good agreement is obtained.

  11. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions that the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
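
    A minimal sketch of the core idea: predict a parameter value's next performance by linear regression on its recent history, then convert the forecasts into selection probabilities. The data and the probability-matching rule below are illustrative assumptions.

    ```python
    import numpy as np

    def forecast_next(performance_history, window=10):
        """Forecast the next performance value of one parameter setting by
        fitting a linear trend to its most recent observations."""
        y = np.asarray(performance_history[-window:], dtype=float)
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, 1)       # ordinary least squares line
        return slope * len(y) + intercept            # extrapolate one step ahead

    # selection probabilities proportional to the forecast for each value
    histories = {0.1: [0.20, 0.25, 0.30], 0.5: [0.40, 0.38, 0.35], 0.9: [0.10, 0.20, 0.40]}
    forecasts = {v: max(forecast_next(h), 1e-9) for v, h in histories.items()}
    total = sum(forecasts.values())
    probs = {v: f / total for v, f in forecasts.items()}
    ```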

  12. Link prediction measures considering different neighbors’ effects and application in social networks

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Wu, Chong; Li, Yongli

    Link prediction measures have attracted particular attention in the field of mathematical physics. In this paper, we consider the different effects of neighbors in link prediction and focus on four different situations: considering only the individual's own effects; considering the effects of the individual, neighbors, and neighbors' neighbors; considering the effects of the individual and neighbors up to four steps away; and considering the effects of all network participants. Then, according to the four situations, we present our link prediction models, which also take the effects of social characteristics into consideration. An artificial network is adopted to illustrate the parameter estimation based on logistic regression. Furthermore, we compare our methods with some other link prediction methods (LPMs) to examine the validity of our proposed models in online social networks. The results show the superiority of our proposed link prediction methods compared with others. In the application part, our models are applied to study social network evolution and to recommend friends and cooperators in social networks.
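
    The following sketch illustrates the general recipe: estimate a logistic regression over neighborhood-based pair features. The toy graph and feature choices are assumptions, not the paper's exact covariates.

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.linear_model import LogisticRegression

    # toy graph; features for a candidate pair (u, v) capture increasingly wide
    # neighborhoods, loosely mirroring the paper's four situations
    G = nx.erdos_renyi_graph(60, 0.08, seed=3)

    def pair_features(G, u, v):
        deg = G.degree(u) + G.degree(v)                       # individual effect
        cn = len(list(nx.common_neighbors(G, u, v)))          # direct neighborhood
        try:
            dist = nx.shortest_path_length(G, u, v)           # wider network effect
        except nx.NetworkXNoPath:
            dist = 99
        return [deg, cn, dist]

    pairs = [(u, v) for u in G for v in G if u < v]
    X = np.array([pair_features(G, u, v) for u, v in pairs])
    y = np.array([G.has_edge(u, v) for u, v in pairs], dtype=int)
    clf = LogisticRegression(max_iter=1000).fit(X, y)         # parameter estimation
    link_scores = clf.predict_proba(X)[:, 1]                  # rank pairs for recommendation
    ```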

  13. The effect of using genealogy-based haplotypes for genomic prediction.

    PubMed

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.

  14. Nondestructive testing methods to predict effect of degradation on wood : a critical assessment

    Treesearch

    J. Kaiserlik

    1978-01-01

    Results are reported for an assessment of methods for predicting the strength of wood, wood-based, or related materials. Research directly applicable to nondestructive strength prediction was very limited. In wood, strength prediction research is limited to vibration decay, wave attenuation, and multiparameter "degradation models." Nonwood methods with potential...

  15. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new avenue for predicting long-period strong ground motion.

  16. An Ensemble Approach for Drug Side Effect Prediction

    PubMed Central

    Jahid, Md Jamiul; Ruan, Jianhua

    2014-01-01

    In silico prediction of drug side-effects in the early stages of drug development is becoming more popular nowadays; it not only reduces the time for drug design but also reduces drug development costs. In this article we propose an ensemble approach to predict the side-effects of drug molecules based on their chemical structure. Our idea originates from the observation that similar drugs have similar side-effects. Based on this observation, we design an ensemble approach that combines the results from different classification models, where each model is generated by a different set of similar drugs. We applied our approach to 1385 side-effects in the SIDER database for 888 drugs. Results show that our approach outperformed previously published approaches and standard classifiers. Furthermore, we applied our method to a number of uncharacterized drug molecules in the DrugBank database and predicted their side-effect profiles for future use. Results from various sources confirm that our method is able to predict the side-effects of uncharacterized drugs and, more importantly, is able to predict rare side-effects which are often ignored by other approaches. The method described in this article can be useful for predicting side-effects at an early stage of drug design, to reduce experimental cost and time. PMID:25327524
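
    A toy version of the ensemble idea, assuming synthetic fingerprints and k-means to form the groups of similar drugs (the paper's actual grouping and base classifiers may differ):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n_drugs, n_bits, n_effects = 300, 64, 10
    fps = rng.integers(0, 2, size=(n_drugs, n_bits)).astype(float)   # chemical fingerprints
    labels = (rng.random((n_drugs, n_effects)) > 0.8).astype(int)    # side-effect labels

    # group drugs by structural similarity; one model per group per side effect
    groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(fps)

    def predict_side_effect(effect_idx, query):
        """Average the predictions of the per-group models (the ensemble)."""
        votes = []
        for g in range(5):
            idx = np.where(groups == g)[0]
            yg = labels[idx, effect_idx]
            if yg.min() == yg.max():                 # group has one class only
                votes.append(float(yg[0]))
                continue
            m = LogisticRegression(max_iter=1000).fit(fps[idx], yg)
            votes.append(m.predict_proba(query[None, :])[0, 1])
        return np.mean(votes)

    score = predict_side_effect(0, fps[0])
    ```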

  17. Integrating Crop Growth Models with Whole Genome Prediction through Approximate Bayesian Computation.

    PubMed

    Technow, Frank; Messina, Carlos D; Totir, L Radu; Cooper, Mark

    2015-01-01

    Genomic selection, enabled by whole genome prediction (WGP) methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E) continue to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs) attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC), a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics.
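
    The flavor of ABC-based whole genome prediction can be conveyed with a rejection-sampling toy: propose marker effects from a prior, push them through a (here, fabricated) crop growth model, and keep the proposals whose simulated phenotypes best match the data. Everything below is illustrative, including the stand-in CGM.

    ```python
    import numpy as np

    def crop_growth_model(genetic_value, environment):
        """Stand-in for a CGM: a nonlinear map from a genetic value and an
        environment covariate to yield (illustrative, not a real CGM)."""
        return environment * genetic_value / (1.0 + np.abs(genetic_value))

    rng = np.random.default_rng(5)
    n, p = 200, 50
    M = rng.integers(0, 3, size=(n, p)).astype(float)       # marker matrix
    true_beta = rng.normal(0, 0.1, p)
    env = rng.uniform(0.5, 1.5, n)
    y_obs = crop_growth_model(M @ true_beta, env) + rng.normal(0, 0.05, n)

    # ABC rejection with a quantile-based tolerance: keep the best 1% of draws
    draws, dists = [], []
    for _ in range(5000):
        beta = rng.normal(0, 0.1, p)                        # prior draw
        y_sim = crop_growth_model(M @ beta, env)            # simulate through the CGM
        draws.append(beta)
        dists.append(np.sqrt(np.mean((y_sim - y_obs) ** 2)))
    keep = np.argsort(dists)[:50]
    posterior_mean = np.mean([draws[i] for i in keep], axis=0)
    ```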

  18. Integrating Crop Growth Models with Whole Genome Prediction through Approximate Bayesian Computation

    PubMed Central

    Technow, Frank; Messina, Carlos D.; Totir, L. Radu; Cooper, Mark

    2015-01-01

    Genomic selection, enabled by whole genome prediction (WGP) methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E) continue to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs) attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC), a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics. PMID:26121133

  19. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    NASA Astrophysics Data System (ADS)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    Given the limitations of existing low voltage ride through (LVRT) technology for traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the output characteristics and response speed of the photovoltaic inverter. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and reference currents is adopted as the cost function of the model predictive control. According to the MCPC, the optimal space voltage vector is selected. The photovoltaic inverter automatically switches between two control modes, prioritizing active or reactive power according to the operating state, which effectively improves the LVRT capability of the inverter. Simulation and experimental results prove that the proposed method is correct and effective.
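
    A generic finite-control-set sketch of the described cost function: enumerate the eight switching states of a two-level inverter, predict the next-step current for each voltage vector with a one-step Euler model, and pick the state minimizing the absolute current error. The circuit parameters and the L-R filter model are assumptions.

    ```python
    import numpy as np

    # two-level inverter switching states and the resulting voltage vectors
    # in the alpha-beta frame (standard Clarke mapping)
    Vdc, L, R, Ts = 400.0, 5e-3, 0.1, 1e-4
    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

    def voltage_vector(sa, sb, sc):
        va = Vdc * (2 * sa - sb - sc) / 3.0
        vb = Vdc * (sb - sc) / np.sqrt(3.0)
        return np.array([va, vb])

    def predict_current(i_now, v, e_grid):
        """One-step Euler prediction of the grid current for an L-R filter:
        di/dt = (v - e - R * i) / L."""
        return i_now + Ts * (v - e_grid - R * i_now) / L

    def best_vector(i_now, i_ref, e_grid):
        """Enumerate all switching states; minimize |i_ref - i_pred| (the cost)."""
        costs = []
        for s in states:
            i_pred = predict_current(i_now, voltage_vector(*s), e_grid)
            costs.append(np.sum(np.abs(i_ref - i_pred)))
        return states[int(np.argmin(costs))]

    s_opt = best_vector(np.array([1.0, 0.0]), np.array([5.0, 0.0]), np.array([311.0, 0.0]))
    ```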

  20. Predicting drug side-effect profiles: a chemical fragment-based approach

    PubMed Central

    2011-01-01

    Background Drug side-effects, or adverse drug reactions, have become a major public health concern. They are one of the main causes of failure in the process of drug development, and of drug withdrawal once drugs have reached the market. Therefore, in silico prediction of potential side-effects early in the drug discovery process, before reaching the clinical stages, is of great interest to improve this long and expensive process and to provide new, efficient, and safe therapies for patients. Results In the present work, we propose a new method to predict potential side-effects of drug candidate molecules based on their chemical structures, applicable to large molecular databanks. A unique feature of the proposed method is its ability to extract correlated sets of chemical substructures (or chemical fragments) and side-effects. This is made possible using sparse canonical correlation analysis (SCCA). In the results, we show the usefulness of the proposed method by predicting 1385 side-effects in the SIDER database from the chemical structures of 888 approved drugs. These predictions are performed with simultaneous extraction of correlated ensembles formed by a set of chemical substructures shared by drugs that are likely to have a set of side-effects. We also conduct a comprehensive side-effect prediction for many uncharacterized drug molecules stored in DrugBank, and were able to confirm interesting predictions using independent sources of information. Conclusions The proposed method is expected to be useful in various stages of the drug development process. PMID:21586169
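
    A compact sketch of sparse CCA in the spirit of penalized matrix decomposition, alternating soft-thresholded updates of the canonical vectors. This is a generic illustration on synthetic fingerprints and profiles, not the paper's exact algorithm.

    ```python
    import numpy as np

    def soft_threshold(a, lam):
        return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

    def sparse_cca(X, Y, lam_u=0.05, lam_v=0.05, n_iter=100):
        """One pair of sparse canonical vectors via alternating
        soft-thresholded power updates on the cross-covariance."""
        C = X.T @ Y                                  # substructures x side effects
        v = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
        u = np.zeros(X.shape[1])
        for _ in range(n_iter):
            u = soft_threshold(C @ v, lam_u)
            if np.linalg.norm(u) > 0:
                u /= np.linalg.norm(u)
            v = soft_threshold(C.T @ u, lam_v)
            if np.linalg.norm(v) > 0:
                v /= np.linalg.norm(v)
        return u, v                                  # nonzero entries: a correlated set

    rng = np.random.default_rng(6)
    X = (rng.random((100, 300)) > 0.9).astype(float)   # drug x substructure fingerprints
    Y = (rng.random((100, 150)) > 0.9).astype(float)   # drug x side-effect profiles
    u, v = sparse_cca(X - X.mean(0), Y - Y.mean(0))
    ```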

  1. Prediction of polypharmacological profiles of drugs by the integration of chemical, side effect, and therapeutic space.

    PubMed

    Cheng, Feixiong; Li, Weihua; Wu, Zengrui; Wang, Xichuan; Zhang, Chen; Li, Jie; Liu, Guixia; Tang, Yun

    2013-04-22

    Prediction of the polypharmacological profiles of drugs enables us to investigate drug side effects and further find new indications for them, i.e. drug repositioning, which could reduce the costs while increasing the productivity of drug discovery. Here we describe a new computational framework to predict polypharmacological profiles of drugs by the integration of chemical, side effect, and therapeutic space. On the basis of our previously developed drug side effect database, named MetaADEDB, a drug side effect similarity inference (DSESI) method was developed for drug-target interaction (DTI) prediction on a known DTI network connecting 621 approved drugs and 893 target proteins. The area under the receiver operating characteristic curve was 0.882 ± 0.011, averaged over 100 simulated tests of 10-fold cross-validation for the DSESI method, which is comparable with drug structural similarity inference and drug therapeutic similarity inference methods. Seven newly predicted candidate target proteins for seven approved drugs were confirmed by published experiments, with a success rate of more than 15.9%. Moreover, network visualization of drug-target interactions and off-target side effect associations provided new mechanism-of-action insights for three approved antipsychotic drugs in a case study. The results indicate that the proposed methods could be helpful for the prediction of polypharmacological profiles of drugs.
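
    The similarity-inference step can be sketched generically: score each drug-target pair by similarity-weighted voting over known interactions, with the drug-drug similarity computed here (as an assumption) from Jaccard overlap of side-effect profiles.

    ```python
    import numpy as np

    def similarity_inference(S, A):
        """Score drug-target pairs by combining a drug-drug similarity matrix
        S (n x n) with the known drug-target adjacency A (n x m):
        score = row-normalized S @ A."""
        norm = S.sum(axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        return (S @ A) / norm

    rng = np.random.default_rng(7)
    n_drugs, n_targets, n_effects = 50, 30, 200
    P = (rng.random((n_drugs, n_effects)) > 0.9).astype(float)   # side-effect profiles

    # Jaccard similarity between side-effect profiles (illustrative choice)
    inter = P @ P.T
    union = P.sum(1)[:, None] + P.sum(1)[None, :] - inter
    S = np.where(union > 0, inter / np.maximum(union, 1e-9), 0.0)
    np.fill_diagonal(S, 0.0)                                     # exclude self-similarity

    A = (rng.random((n_drugs, n_targets)) > 0.95).astype(float)  # known interactions
    scores = similarity_inference(S, A)                          # rank unknown pairs
    ```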

  2. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    NASA Astrophysics Data System (ADS)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    High temperature low cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high temperature low cycle fatigue life prediction model for the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; therefore, a certain prediction error is bound to arise when the Manson-Coffin method is used. In order to solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83 times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for the two titanium alloys, giving better fatigue life predictions with a smaller standard deviation and scatter band. The life prediction results of both methods are better for the TC4 titanium alloy than for the TC11 titanium alloy.
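
    The contrast between the two fitting strategies can be shown on synthetic strain-life data: the Manson-Coffin relation is a straight line in log-log coordinates, while the exponential alternative is fitted in the original coordinates. The exponential form used below is assumed for illustration; the paper's exact formulation may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # illustrative data: plastic strain range vs reversals to failure
    dep = np.array([0.010, 0.008, 0.006, 0.004, 0.003, 0.002])
    two_Nf = np.array([120, 260, 700, 2500, 6200, 21000], dtype=float)

    # Manson-Coffin: dep = ef' * (2Nf)^c, i.e. linear in log-log coordinates
    c, log_ef = np.polyfit(np.log(two_Nf), np.log(dep), 1)
    mc_pred = np.exp(log_ef) * two_Nf ** c

    # exponential-function alternative (form assumed for illustration):
    # 2Nf = a * exp(b * dep), fitted directly in the original coordinates
    def expo(x, a, b):
        return a * np.exp(b * x)

    (a, b), _ = curve_fit(expo, dep, two_Nf, p0=(1e4, -300))
    exp_pred_life = expo(dep, a, b)
    ```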

  3. Analysis of deep learning methods for blind protein contact prediction in CASP12.

    PubMed

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2018-03-01

    Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn complex sequence-structure relationship including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strength and weakness of our method. © 2017 Wiley Periodicals, Inc.

  4. Staff Study on Cost and Training Effectiveness of Proposed Training Systems. TAEG Report 1.

    ERIC Educational Resources Information Center

    Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.

    A study began the development and initial testing of a method for predicting cost and training effectiveness of proposed training programs. A prototype Training Effectiveness and Cost Effectiveness Prediction (TECEP) model was developed and tested. The model was a method for optimization of training media allocation on the basis of fixed training…

  5. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite.

    PubMed

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Ji, Wei

    2015-01-01

    Short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting model, precipitation character, and satellite resolution on the predictability of QPN, using images from the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F) which covered all intensive observations since its launch, despite totaling only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation systems was indicated by 2338 images from 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions ranging from 0.1 to 0.6. The results show that the PPLK improves the predictability of QPN, with better performance than the other comparison methods. The predictability of the QPN is significantly determined by the precipitation system, and a coarse spatial resolution of the satellite reduces the predictability of QPN.
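
    A minimal nowcasting step in the spirit of PPLK, using OpenCV's pyramid Lucas-Kanade tracker to estimate cloud motion and advecting the latest image by the median displacement. A real QPN scheme would use per-pixel advection and an intensity-to-rain-rate conversion; this sketch is only the extrapolation core.

    ```python
    import numpy as np
    import cv2

    def nowcast_next_frame(prev_img, curr_img):
        """Estimate motion between two consecutive satellite images with
        pyramid Lucas-Kanade optical flow, then shift the latest image by
        the median displacement to extrapolate the next frame."""
        pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=200,
                                      qualityLevel=0.01, minDistance=5)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        flow = (nxt[ok] - pts[ok]).reshape(-1, 2)
        dx, dy = np.median(flow, axis=0)               # dominant motion vector
        M = np.float32([[1, 0, dx], [0, 1, dy]])       # pure translation model
        return cv2.warpAffine(curr_img, M, curr_img.shape[::-1])

    # toy usage with synthetic 8-bit images
    a = np.zeros((128, 128), np.uint8); a[40:60, 40:60] = 255
    b = np.roll(a, 3, axis=1)                          # "cloud" moved 3 px east
    forecast = nowcast_next_frame(a, b)
    ```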

  6. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite

    PubMed Central

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Ji, Wei

    2015-01-01

    Short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting model, precipitation character, and satellite resolution on the predictability of QPN, using images from the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F) which covered all intensive observations since its launch, despite totaling only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation systems was indicated by 2338 images from 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions ranging from 0.1 to 0.6. The results show that the PPLK improves the predictability of QPN, with better performance than the other comparison methods. The predictability of the QPN is significantly determined by the precipitation system, and a coarse spatial resolution of the satellite reduces the predictability of QPN. PMID:26447470

  7. Protein contact prediction by integrating deep multiple sequence alignments, coevolution and machine learning.

    PubMed

    Adhikari, Badri; Hou, Jie; Cheng, Jianlin

    2018-03-01

    In this study, we report the evaluation of the residue-residue contacts predicted by our three different methods in the CASP12 experiment, focusing on studying the impact of multiple sequence alignment, residue coevolution, and machine learning on contact prediction. The first method (MULTICOM-NOVEL) uses only traditional features (sequence profile, secondary structure, and solvent accessibility) with deep learning to predict contacts and serves as a baseline. The second method (MULTICOM-CONSTRUCT) uses our new alignment algorithm to generate deep multiple sequence alignment to derive coevolution-based features, which are integrated by a neural network method to predict contacts. The third method (MULTICOM-CLUSTER) is a consensus combination of the predictions of the first two methods. We evaluated our methods on 94 CASP12 domains. On a subset of 38 free-modeling domains, our methods achieved an average precision of up to 41.7% for top L/5 long-range contact predictions. The comparison of the three methods shows that the quality and effective depth of multiple sequence alignments, coevolution-based features, and machine learning integration of coevolution-based features and traditional features drive the quality of predicted protein contacts. On the full CASP12 dataset, the coevolution-based features alone can improve the average precision from 28.4% to 41.6%, and the machine learning integration of all the features further raises the precision to 56.3%, when top L/5 predicted long-range contacts are evaluated. The correlation between the precision of contact prediction and the logarithm of the number of effective sequences in alignments is 0.66. © 2017 Wiley Periodicals, Inc.

  8. FUN-LDA: A Latent Dirichlet Allocation Model for Predicting Tissue-Specific Functional Effects of Noncoding Variation: Methods and Applications.

    PubMed

    Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana

    2018-05-03

    We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources). Copyright © 2018 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  9. Predicting longitudinal trajectories of health probabilities with random-effects multinomial logit regression.

    PubMed

    Liu, Xian; Engel, Charles C

    2012-12-20

    Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed to correctly predict outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, which are not substantively meaningful, into conditional effects on the predicted probabilities. The empirical illustration uses longitudinal data from the Asset and Health Dynamics among the Oldest Old. Our analysis compared three sets of predicted probabilities of three health states at six time points, obtained, respectively, from the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglecting to retransform random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities, as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
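
    The retransformation issue can be illustrated by integrating the softmax over the random-effect distribution by Monte Carlo, rather than plugging in a zero random effect. The model structure below (category-specific normal intercepts) is an assumption made for the sketch.

    ```python
    import numpy as np

    def predicted_probs(x, beta, sigma_u, n_draws=5000, rng=None):
        """Population-averaged category probabilities from a random-intercept
        multinomial logit: average the softmax over sampled random effects
        instead of using u = 0 (the naive, biased shortcut)."""
        rng = rng or np.random.default_rng(0)
        u = rng.normal(0.0, sigma_u, size=(n_draws, beta.shape[0]))
        eta = x @ beta.T + u                                   # non-reference categories
        eta = np.column_stack([eta, np.zeros(n_draws)])        # reference category
        e = np.exp(eta - eta.max(axis=1, keepdims=True))
        return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

    beta = np.array([[0.8, -0.5], [0.2, 0.3]])                 # 2 non-reference categories
    x = np.array([1.0, 2.0])                                   # covariate vector
    p_avg = predicted_probs(x, beta, sigma_u=1.0)              # sums to 1 over 3 categories
    ```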

  10. Aircraft Noise Prediction Program theoretical manual: Propeller aerodynamics and noise

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E. (Editor); Weir, D. S. (Editor)

    1986-01-01

    The prediction sequence used in the Aircraft Noise Prediction Program (ANOPP) is described. The elements of the sequence are called program modules. The first group of modules analyzes the propeller geometry, the aerodynamics, including both potential and boundary-layer flow, the propeller performance, and the surface loading distribution. This group of modules is based entirely on aerodynamic strip theory. The next group of modules predicts the noise from the output of the first group. Predictions of periodic thickness and loading noise are determined with time-domain methods. Broadband noise is predicted by a semiempirical method. Near-field predictions of fuselage surface pressures include the effects of boundary-layer refraction and scattering. Far-field predictions include atmospheric and ground effects.

  11. An efficient approach to understanding and predicting the effects of multiple task characteristics on performance.

    PubMed

    Richardson, Miles

    2017-04-01

    In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.

  12. Clustering gene expression data based on predicted differential effects of GV interaction.

    PubMed

    Pan, Hai-Yan; Zhu, Jun; Han, Dan-Fu

    2005-02-01

    Microarray has become a popular biotechnology in biological and medical research. However, systematic and stochastic variabilities in microarray data are expected and unavoidable, with the result that the raw measurements carry inherent "noise" within microarray experiments. Currently, logarithmic ratios are usually analyzed directly by various clustering methods, which may introduce biased interpretation in identifying groups of genes or samples. In this paper, a statistical method based on mixed model approaches is proposed for microarray data cluster analysis. The underlying rationale of this method is to partition the observed total gene expression level into variations caused by different factors using an ANOVA model, and to predict the differential effects of GV (gene by variety) interaction using the adjusted unbiased prediction (AUP) method. The predicted GV interaction effects can then be used as the inputs for cluster analysis. We illustrate the application of our method with a gene expression dataset and elucidate the utility of our approach using an external validation.
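
    A fixed-effects caricature of the idea: partition cell means into a grand mean, gene and variety main effects, and GV interaction, then cluster genes on the interaction effects instead of raw log-ratios. The paper's mixed-model AUP predictor is replaced here by simple ANOVA estimates on synthetic data.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    n_genes, n_varieties, n_reps = 40, 4, 3
    # expression[g, v, r] = mean + gene + variety + GV interaction + noise
    expr = (5.0 + rng.normal(0, 1.0, (n_genes, 1, 1))
            + rng.normal(0, 0.5, (1, n_varieties, 1))
            + rng.normal(0, 0.8, (n_genes, n_varieties, 1))
            + rng.normal(0, 0.3, (n_genes, n_varieties, n_reps)))

    # ANOVA-style partition: estimate the GV interaction after removing the
    # grand mean and the gene and variety main effects
    cell = expr.mean(axis=2)                        # gene x variety cell means
    grand = cell.mean()
    gene_eff = cell.mean(axis=1, keepdims=True) - grand
    var_eff = cell.mean(axis=0, keepdims=True) - grand
    gv = cell - grand - gene_eff - var_eff          # predicted interaction effects

    # cluster genes on GV interaction effects, not on raw log-ratios
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(gv)
    ```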

  13. Benchmarking protein-protein interface predictions: why you should care about protein size.

    PubMed

    Martin, Juliette

    2014-07-01

    A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
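
    The size bias and its rank-based correction are easy to reproduce: a constant per-protein score that merely favors small proteins yields a pooled AUC well above 0.5, and within-protein rank normalization removes the artifact. A toy simulation, with interface fractions assumed inversely related to protein size:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(9)
    labels_all, scores_raw, scores_fixed = [], [], []

    for _ in range(200):                             # 200 "proteins" of varied size
        n = rng.integers(50, 500)                    # residues in the protein
        frac_interface = 0.4 * 50 / n                # small proteins: larger fraction
        y = rng.random(n) < frac_interface           # interface labels
        s = np.full(n, 1.0 / n)                      # constant per-protein score:
                                                     # no residue information, only a
                                                     # preference for small proteins
        labels_all.append(y)
        scores_raw.append(s)
        scores_fixed.append(np.argsort(np.argsort(s)) / n)   # within-protein ranks

    # pooled AUC: well above 0.5 for the size-biased score, ~0.5 after correction
    y = np.concatenate(labels_all)
    print(roc_auc_score(y, np.concatenate(scores_raw)))
    print(roc_auc_score(y, np.concatenate(scores_fixed)))
    ```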

  14. Prediction of jump phenomena in rotationally-coupled maneuvers of aircraft, including nonlinear aerodynamic effects

    NASA Technical Reports Server (NTRS)

    Young, J. W.; Schy, A. A.; Johnson, K. G.

    1977-01-01

    An analytical method has been developed for predicting critical control inputs for which nonlinear rotational coupling may cause sudden jumps in aircraft response. The analysis includes the effect of aerodynamics which are nonlinear in angle of attack. The method involves the simultaneous solution of two polynomials in roll rate, whose coefficients are functions of angle of attack and the control inputs. Results obtained using this procedure are compared with calculated time histories to verify the validity of the method for predicting jump-like instabilities.

  15. Molecular effective coverage surface area of optical clearing agents for predicting optical clearing potential

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Ma, Ning; Zhu, Dan

    2015-03-01

    Improved methods for predicting optical clearing agents have an important impact on tissue optical clearing techniques. Molecular dynamics simulation is one of the most convincing and simplest approaches to predicting the optical clearing potential (OCP) of agents, by analyzing the hydrogen bonds, hydrogen bridges, and hydrogen bridge types formed between agents and collagen. However, these analysis methods still suffer from problems, such as the analysis of cyclic molecules, by reason of molecular conformation. In this study, a molecular effective coverage surface area based on molecular dynamics simulation was proposed to predict the optical clearing potential of agents. Several typical cyclic molecules, fructose and glucose, and chain molecules, sorbitol and xylitol, were analyzed by calculating their molecular effective coverage surface areas, hydrogen bonds, hydrogen bridges, and hydrogen bridge types, respectively. To verify this analysis method, the optical clearing efficacy of in vitro skin samples was measured with a 1951 USAF resolution test target after 25 min of immersion in fructose, glucose, sorbitol, and xylitol solutions at a concentration of 3.5 M. The experimental results accord with the predictions based on molecular effective coverage surface area. A further comparison with the other parameters shows that the molecular effective coverage surface area performs better in predicting the OCP of agents.

  16. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability, and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing the protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
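
    A schematic of the ensemble-learning half of the approach: fit a gradient-boosted tree model on concatenated energy terms, conservation, and structural descriptors. All features and targets below are synthetic stand-ins, and scikit-learn's booster substitutes for the paper's SGB-DT implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(10)
    n = 500
    # illustrative per-mutation features: semi-empirical energy terms,
    # sequence conservation, and simple structural descriptors
    energy_terms = rng.normal(size=(n, 4))
    conservation = rng.random((n, 1))
    structure = rng.normal(size=(n, 3))
    X = np.hstack([energy_terms, conservation, structure])
    ddG = (energy_terms @ np.array([0.6, 0.3, -0.4, 0.2])
           + 1.5 * conservation.ravel()
           + rng.normal(0, 0.3, n))                  # synthetic stability change

    Xtr, Xte, ytr, yte = train_test_split(X, ddG, random_state=0)
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                      learning_rate=0.05).fit(Xtr, ytr)
    r = np.corrcoef(model.predict(Xte), yte)[0, 1]   # correlation, the paper's metric
    ```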

  17. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    PubMed Central

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419

  18. PREDICTING THE EFFECTIVENESS OF CHEMICAL-PROTECTIVE CLOTHING MODEL AND TEST METHOD DEVELOPMENT

    EPA Science Inventory

    A predictive model and test method were developed for determining the chemical resistance of protective polymeric gloves exposed to liquid organic chemicals. The prediction of permeation through protective gloves by solvents was based on theories of the solution thermodynamics of...

  19. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO), and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
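
    A condensed sketch of the ANN-plus-PSO pipeline (EM omitted): train a neural network surrogate on pre-computed dispersion scenarios, then run a small particle swarm over the source parameters to minimize the misfit to sensor observations. The plume function is a toy stand-in for a real dispersion model, and all parameter ranges are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def plume(q, x0, y0, xs, ys):
        """Toy Gaussian-plume-like field (illustrative, not a real model)."""
        r2 = (xs - x0) ** 2 + (ys - y0) ** 2
        return q * np.exp(-r2 / 200.0)

    rng = np.random.default_rng(11)
    sensors = rng.uniform(0, 100, size=(15, 2))      # fixed sensor locations

    # train the ANN on many pre-determined scenarios: source params -> readings
    params = rng.uniform([1, 0, 0], [10, 100, 100], size=(3000, 3))
    readings = np.array([plume(q, x0, y0, sensors[:, 0], sensors[:, 1])
                         for q, x0, y0 in params])
    ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(params, readings)

    # source estimation: PSO over (q, x0, y0) minimizing misfit to observations
    obs = plume(5.0, 40.0, 60.0, sensors[:, 0], sensors[:, 1])
    def cost(p):
        return np.sum((ann.predict(p[None, :])[0] - obs) ** 2)

    pos = rng.uniform([1, 0, 0], [10, 100, 100], size=(30, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_c = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_c.argmin()].copy()
    for _ in range(100):
        r1, r2 = rng.random((2, 30, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [1, 0, 0], [10, 100, 100])
        c = np.array([cost(p) for p in pos])
        better = c < pbest_c
        pbest[better], pbest_c[better] = pos[better], c[better]
        gbest = pbest[pbest_c.argmin()].copy()       # estimated source parameters
    ```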

  20. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes the existing traffic flow prediction algorithms and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal parameter values. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model shows good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, making it suitable for real-time traffic flow prediction. PMID:27872637
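
    The core flow equation described above amounts to a weighted sum over upstream roads; a minimal sketch, with invented flows and transfer probabilities (the paper's Newton interior-point parameter fitting is omitted):

```python
import numpy as np

# Hypothetical current flows on three upstream roads (vehicles per interval).
upstream_flow = np.array([120.0, 80.0, 60.0])

# Transfer probabilities: the fraction of each upstream road's flow that turns
# onto the target road (estimated from historical data; they need not sum to 1
# because vehicles can also turn elsewhere or exit).
p_transfer = np.array([0.5, 0.3, 0.2])

# Flow equation: predicted flow on the target road at the next time step.
predicted_flow = float(upstream_flow @ p_transfer)
print(predicted_flow)  # 120*0.5 + 80*0.3 + 60*0.2 = 96.0
```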

  1. Comparison of theoretically predicted lateral-directional aerodynamic characteristics with full-scale wind tunnel data on the ATLIT airplane

    NASA Technical Reports Server (NTRS)

    Griswold, M.; Roskam, J.

    1980-01-01

    An analytical method is presented for predicting the lateral-directional aerodynamic characteristics of light twin-engine propeller-driven airplanes. This method is applied to the Advanced Technology Light Twin (ATLIT) airplane, and the calculated characteristics are correlated against full-scale wind tunnel data. The method predicted the sideslip derivatives fairly well, although the effects of angle-of-attack variations were not well predicted. Spoiler performance was predicted somewhat high but was still reasonable. The rudder derivatives were not well predicted, in particular the effect of angle of attack. The predicted dynamic derivatives could not be correlated due to a lack of experimental data.

  2. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of natural gas-fired generators is significant for energy conservation and emission reduction. The traditional partial least squares (PLS) method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input, based on a radial basis function neural network (RBFNN), is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the RBFNN hidden-layer nodes form the extension of the original independent input matrix. Partial least squares regression is then performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to evaluate the effectiveness of the proposed method against PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
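
    A minimal sketch of the extended-input construction on synthetic data: RBF hidden-layer activations (centers from k-means, an assumed gamma) are appended to the original input matrix before an ordinary PLS regression. The number of centers and PLS components are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)

# Synthetic stand-ins for spectra (X) and gas concentrations (Y).
X = rng.normal(size=(200, 40))
Y = np.column_stack([np.sin(X[:, 0]) + 0.1 * rng.normal(size=200),
                     X[:, 1] ** 2 + 0.1 * rng.normal(size=200)])

# RBF hidden layer: centers from k-means, Gaussian activations as new features.
centers = KMeans(n_clusters=10, n_init=10, random_state=2).fit(X).cluster_centers_
H = rbf_kernel(X, centers, gamma=0.05)   # hidden-layer outputs, shape (200, 10)

# Extended input = original variables plus the nonlinear RBF features,
# then ordinary PLS regression on the extended matrix.
X_ext = np.hstack([X, H])
pls = PLSRegression(n_components=8).fit(X_ext, Y)
print(pls.predict(X_ext)[:3])
```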

  3. Predicted changes in advanced turboprop noise with shaft angle of attack

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Block, P. J. W.

    1984-01-01

    Advanced turboprop blade designs and new propeller installation schemes motivated an effort to include unsteady loading effects in existing propeller noise prediction computer programs. The present work validates the prediction capability while studying the effects of shaft inclination on the radiated sound field. Classical methods of propeller performance analysis supply the time-dependent blade loading needed to calculate noise. Polar plots of the sound pressure level (SPL) of the first four harmonics and the overall SPL indicate the change in directivity pattern as a function of propeller angle of attack. Noise predictions are compared with newly available wind tunnel data, and the accuracy and applicability of the prediction method are discussed. It is concluded that unsteady blade loading caused by inclining the propeller with respect to the flow changes the directionality and intensity of the radiated noise. These changes are well modeled by the present quasi-steady prediction method.

  4. Using the surface panel method to predict the steady performance of ducted propellers

    NASA Astrophysics Data System (ADS)

    Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long

    2009-12-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential-based surface panel method was applied to both the duct and the propeller, and the interaction between them was solved by an induced-velocity-potential iterative method. Compared with the induced-velocity iterative method, the presented method saves programming and computation time. Numerical results for the JD simplified ducted propeller series showed that the method is effective for predicting the steady hydrodynamic performance of ducted propellers.

  5. Landing Gear Noise Prediction and Analysis for Tube-and-Wing and Hybrid-Wing-Body Aircraft

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    Improvements and extensions to landing gear noise prediction methods are developed. New features include installation effects such as reflection from the aircraft, gear truck angle effect, local flow calculation at the landing gear locations, gear size effect, and directivity for various gear designs. These new features have not only significantly improved the accuracy and robustness of the prediction tools, but also have enabled applications to unconventional aircraft designs and installations. Systematic validations of the improved prediction capability are then presented, including parametric validations in functional trends as well as validations in absolute amplitudes, covering a wide variety of landing gear designs, sizes, and testing conditions. The new method is then applied to selected concept aircraft configurations in the portfolio of the NASA Environmentally Responsible Aviation Project envisioned for the timeframe of 2025. The landing gear noise levels are on the order of 2 to 4 dB higher than previously reported predictions due to increased fidelity in accounting for installation effects and gear design details. With the new method, it is now possible to reveal and assess the unique noise characteristics of landing gear systems for each type of aircraft. To address the inevitable uncertainties in predictions of landing gear noise models for future aircraft, an uncertainty analysis is given, using the method of Monte Carlo simulation. The standard deviation of the uncertainty in predicting the absolute level of landing gear noise is quantified and determined to be 1.4 EPNL dB.

  6. New approach to predict photoallergic potentials of chemicals based on murine local lymph node assay.

    PubMed

    Maeda, Yosuke; Hirosaki, Haruka; Yamanaka, Hidenori; Takeyoshi, Masahiro

    2018-05-23

    Photoallergic dermatitis, caused by pharmaceuticals and other consumer products, is an important issue in human health. However, the S10 guidelines of the International Conference on Harmonization do not recommend the existing photoallergy prediction methods because of their low predictability in human cases. We applied the local lymph node assay (LLNA), a reliable, quantitative skin sensitization test, to develop a new photoallergy prediction method. This method involves a three-step approach: (1) ultraviolet (UV) absorption analysis; (2) determination of the no-observed-adverse-effect level for skin phototoxicity based on the LLNA; and (3) photoallergy evaluation based on the LLNA. The photoallergic potential of chemicals was evaluated by comparing lymph node cell proliferation among groups treated with chemicals at minimal-effect levels for skin sensitization and skin phototoxicity, with UV irradiation (UV+) or without UV irradiation (UV-). A case showing a significant difference (P < .05) in lymph node cell proliferation rates between the UV- and UV+ groups was considered positive for a photoallergic reaction. Of the 13 chemicals tested, the seven human photoallergens tested positive, and the other six, which show no evidence of causing photoallergic dermatitis and no UV absorption, tested negative. Among these chemicals, doxycycline hydrochloride and minocycline hydrochloride are both tetracycline antibiotics with different photoallergic properties, and the new method clearly distinguished between them. These findings suggest the high predictability of our method, which is therefore promising and effective for predicting human photoallergens. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Investigation of the validity of radiosity for sound-field prediction in cubic rooms

    NASA Astrophysics Data System (ADS)

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-12-01

    This paper explores acoustical (or time-dependent) radiosity using predictions made in four cubic enclosures. The methods and algorithms used are those presented in a previous paper by the same authors [Nosal, Hodgson, and Ashdown, J. Acoust. Soc. Am. 116(2), 970-980 (2004)]. First, the algorithm, methods, and conditions for convergence are investigated by comparison of numerous predictions for the four cubic enclosures. Here, variables and parameters used in the predictions are varied to explore the effect of absorption distribution, the necessary conditions for convergence of the numerical solution to the analytical solution, form-factor prediction methods, and the computational requirements. The predictions are also used to investigate the effect of absorption distribution on sound fields in cubic enclosures with diffusely reflecting boundaries. Acoustical radiosity is then compared to predictions made in the four enclosures by a ray-tracing model that can account for diffuse reflection. Comparisons are made of echograms, room-acoustical parameters, and discretized echograms.

  8. Investigation of the validity of radiosity for sound-field prediction in cubic rooms.

    PubMed

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-12-01

    This paper explores acoustical (or time-dependent) radiosity using predictions made in four cubic enclosures. The methods and algorithms used are those presented in a previous paper by the same authors [Nosal, Hodgson, and Ashdown, J. Acoust. Soc. Am. 116(2), 970-980 (2004)]. First, the algorithm, methods, and conditions for convergence are investigated by comparison of numerous predictions for the four cubic enclosures. Here, variables and parameters used in the predictions are varied to explore the effect of absorption distribution, the necessary conditions for convergence of the numerical solution to the analytical solution, form-factor prediction methods, and the computational requirements. The predictions are also used to investigate the effect of absorption distribution on sound fields in cubic enclosures with diffusely reflecting boundaries. Acoustical radiosity is then compared to predictions made in the four enclosures by a ray-tracing model that can account for diffuse reflection. Comparisons are made of echograms, room-acoustical parameters, and discretized echograms.

  9. An improved method for predicting the effects of flight on jet mixing noise

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1979-01-01

    The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at higher jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. The new method agrees better with the database than a recently proposed SAE method.

  10. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main-effect models usually employed in prediction studies, from data-analytic, decision-analytic, and practical perspectives. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main-effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on the prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. The predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables that was originally applied to the dataset, and the 2-rule model required, on average, evaluation of only 3 cues. The RuleFit algorithm therefore appears to be a promising method for creating decision tools that are less time-consuming and easier to apply in psychological practice, with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
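
    To illustrate what a two-rule, fast-and-frugal decision tool looks like in practice, here is a hypothetical sketch; the cue names and cut-offs are invented, not the rules RuleFit derived from the Penninx et al. data.

```python
def predict_persistence(severity: float, duration_months: int) -> bool:
    """A hypothetical fast-and-frugal tree with two sequential decision rules.

    Cues are checked one at a time and each rule can trigger an exit, so a
    prediction often requires evaluating fewer cues than a full model would.
    (Cue names and cut-offs are illustrative, not the published model.)
    """
    if severity >= 20:           # rule 1: high baseline severity -> poor course
        return True
    if duration_months >= 24:    # rule 2: long prior duration -> poor course
        return True
    return False                 # otherwise predict remission

print(predict_persistence(severity=25, duration_months=6))   # True after 1 cue
print(predict_persistence(severity=10, duration_months=30))  # True after 2 cues
```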

  11. Hierarchical Interactions Model for Predicting Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD) Conversion

    PubMed Central

    Li, Han; Liu, Yashu; Gong, Pinghua; Zhang, Changshui; Ye, Jieping

    2014-01-01

    Identifying patients with Mild Cognitive Impairment (MCI) who are likely to convert to dementia has recently attracted increasing attention in Alzheimer's disease (AD) research. An accurate prediction of conversion from MCI to AD can help clinicians initiate treatments at an early stage and monitor their effectiveness. However, existing prediction systems based on the original biosignatures are not satisfactory. In this paper, we propose to fit the prediction models using pairwise biosignature interactions, thus capturing higher-order relationships among biosignatures. Specifically, we employ hierarchical constraints and sparsity regularization to prune the high-dimensional input features. Based on the significant biosignatures and underlying interactions identified, we build classifiers to predict the conversion probability from the selected features. We further analyze the underlying interaction effects of different biosignatures based on so-called stable expectation scores. We used 293 MCI subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database with MRI measurements at baseline to evaluate the effectiveness of the proposed method. Our proposed method achieves better classification performance than state-of-the-art methods. Moreover, we discover several significant interactions predictive of MCI-to-AD conversion. These results shed light on improving prediction performance using interaction features. PMID:24416143
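
    A minimal sketch of the pairwise-interaction idea on synthetic data: expand the features with all pairwise products and let L1 regularization prune them. The paper's hierarchical constraints and stable expectation scores are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for baseline biosignatures of MCI subjects.
X, y = make_classification(n_samples=293, n_features=15, n_informative=5,
                           random_state=0)

# Pairwise interactions capture higher-order relations among biosignatures.
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_int = poly.fit_transform(X)   # main effects + all pairwise products

# L1 (sparsity) regularization prunes the expanded feature set; the paper's
# additional hierarchical constraints are not reproduced in this sketch.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_int, y)

kept = np.flatnonzero(clf.coef_)
print(f"{kept.size} of {X_int.shape[1]} features kept")
```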

  12. Linear Elastic and Cohesive Fracture Analysis to Model Hydraulic Fracture in Brittle and Ductile Rocks

    NASA Astrophysics Data System (ADS)

    Yao, Yao

    2012-05-01

    Hydraulic fracturing technology is widely used within the oil and gas industry for both waste injection and unconventional gas production wells. It is essential to predict the behavior of hydraulic fractures accurately based on an understanding of the fundamental mechanisms. The prevailing approach to hydraulic fracture modeling continues to rely on computational methods based on Linear Elastic Fracture Mechanics (LEFM). Generally, these methods give reasonable predictions for hard-rock hydraulic fracture processes, but they have inherent limitations, especially when fluid injection is performed in soft rock/sand or other non-conventional formations, where they typically give very conservative predictions of fracture geometry and inaccurate estimates of the required fracture pressure. One reason the LEFM-based methods fail to give accurate predictions for these materials is that the fracture process zone ahead of the crack tip and the softening effect should not be neglected in ductile rock fracture analysis. A 3D pore-pressure cohesive zone model has been developed and applied to predict hydraulic fracturing under fluid injection. The cohesive zone method is a numerical tool developed to model crack initiation and growth in quasi-brittle materials while considering the material softening effect. The pore-pressure cohesive zone model has been applied to investigate hydraulic fractures with different rock properties. Hydraulic fracture predictions for a three-layer water injection case have been compared using the pore-pressure cohesive zone model with revised parameters, an LEFM-based pseudo-3D model, a Perkins-Kern-Nordgren (PKN) model, and an analytical solution. Based on the size of the fracture process zone and its effect on crack extension in ductile rock, the fundamental mechanical difference between LEFM and cohesive fracture mechanics-based methods is discussed. An effective fracture toughness method has been proposed to account for the effect of the fracture process zone on ductile rock fracture.

  13. Prediction of the effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes with flaps retracted

    NASA Technical Reports Server (NTRS)

    Weil, Joseph; Sleeman, William C., Jr.

    1949-01-01

    The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.

  14. Prediction of Central Nervous System Side Effects Through Drug Permeability to Blood-Brain Barrier and Recommendation Algorithm.

    PubMed

    Fan, Jun; Yang, Jing; Jiang, Zhenran

    2018-04-01

    Drug side effects are a major public health concern. Using powerful machine-learning methods to predict potential side effects before drugs reach the clinical stages is of great importance for reducing development time and protecting the safety of patients. Recently, researchers have shown that the central nervous system (CNS) side effects of a drug are closely related to its permeability across the blood-brain barrier (BBB). Inspired by this, we propose an extended neighborhood-based recommendation method to predict CNS side effects using drug permeability to the BBB together with other known drug features. To the best of our knowledge, this is the first attempt to predict CNS side effects by considering drug permeability to the BBB. Computational experiments demonstrate that drug permeability to the BBB is an important factor in CNS side effect prediction. Moreover, we built an ensemble recommendation model and obtained higher AUC scores (area under the receiver operating characteristic curve) and AUPR scores (area under the precision-recall curve) on the CNS side effect dataset by integrating various drug features.
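
    The neighborhood-based recommendation step can be sketched as a similarity-weighted average of known side-effect profiles; everything below (feature matrix, similarity choice, neighborhood size) is an invented stand-in rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical inputs: a drug feature matrix F (e.g., BBB permeability plus
# other descriptors) and a known drug x side-effect matrix S (1 = reported).
F = rng.normal(size=(6, 10))
S = rng.integers(0, 2, size=(6, 4)).astype(float)

# Cosine similarity between the query drug (last row) and the approved drugs.
q, others = F[-1], F[:-1]
sim = others @ q / (np.linalg.norm(others, axis=1) * np.linalg.norm(q))
sim = np.clip(sim, 0.0, None)          # keep only positively similar neighbours

# Neighborhood-based score: similarity-weighted average of neighbour profiles.
k = 3
top = np.argsort(sim)[-k:]
scores = sim[top] @ S[:-1][top] / (sim[top].sum() + 1e-12)
print(scores)   # one score per side effect; rank or threshold for prediction
```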

  15. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
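
    The delete-one jackknife used for the sampling variances can be illustrated generically; the statistic and data below are arbitrary, and the t-statistic is simply the estimate divided by its jackknife standard error.

```python
import numpy as np

def jackknife(stat, data):
    """Delete-one jackknife estimate of a statistic and its sampling variance."""
    n = len(data)
    theta_full = stat(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    bias = (n - 1) * (loo.mean() - theta_full)
    return theta_full - bias, var       # bias-corrected estimate, variance

rng = np.random.default_rng(4)
x = rng.normal(loc=2.0, scale=1.0, size=50)

est, var = jackknife(np.var, x)
t = est / np.sqrt(var)                  # t-statistic for testing variance > 0
print(f"estimate={est:.3f}, jackknife SE={np.sqrt(var):.3f}, t={t:.2f}")
```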

  16. Predictive analysis effectiveness in determining the epidemic disease infected area

    NASA Astrophysics Data System (ADS)

    Ibrahim, Najihah; Akhir, Nur Shazwani Md.; Hassan, Fadratul Hafinaz

    2017-10-01

    Epidemic disease outbreaks have led communities to raise great concern over methods for controlling, preventing, and handling infectious disease in order to reduce the rate of disease dissemination and the size of the infected area. A backpropagation method was used for countermeasure and prediction analysis of epidemic disease. Predictive analysis based on backpropagation can be carried out via a machine learning process that applies artificial intelligence to pattern recognition, statistics, and feature selection. This computational learning process is integrated with data mining by measuring the output score as a classifier for a given set of input features through a classification technique. The classification technique selects, as features, the disease dissemination factors that are likely to be strongly interconnected in causing infectious disease outbreaks. This preliminary study introduces a predictive analysis of epidemic disease for determining the infected area, using the backpropagation method and drawing on the findings of other studies. The study classifies the epidemic disease dissemination factors as features for weight adjustment in the prediction of epidemic disease outbreaks. Through this preliminary study, predictive analysis is shown to be an effective method for determining the epidemic disease infected area by minimizing the error value through feature classification.

  17. A Method of Calculating Functional Independence Measure at Discharge from Functional Independence Measure Effectiveness Predicted by Multiple Regression Analysis Has a High Degree of Predictive Accuracy.

    PubMed

    Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru

    2017-09-01

    Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from the mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from the mFIM effectiveness predicted by multiple regression analysis thus had a higher degree of predictive accuracy than directly predicting mFIM at discharge. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
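
    Since the conversion formula is given explicitly in the abstract, method A reduces to simple arithmetic once mFIM effectiveness has been predicted (the function name and example values below are illustrative):

```python
def mfim_at_discharge(mfim_admission: float, predicted_effectiveness: float) -> float:
    """Method A from the study: convert predicted mFIM effectiveness into a
    predicted motor FIM score at discharge (91 points is the mFIM maximum)."""
    return predicted_effectiveness * (91 - mfim_admission) + mfim_admission

# Example: admission mFIM of 40 and a regression-predicted effectiveness of 0.6
# recovers 60% of the remaining 51 points -> 40 + 30.6 = 70.6 points.
print(mfim_at_discharge(40, 0.6))  # 70.6
```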

  18. Fluid mechanics of dynamic stall. II - Prediction of full scale characteristics

    NASA Technical Reports Server (NTRS)

    Ericsson, L. E.; Reding, J. P.

    1988-01-01

    Analytical extrapolations are made from experimental subscale dynamics to predict the full-scale characteristics of dynamic stall. The method proceeds by establishing analytic relationships between dynamic and static aerodynamic characteristics induced by viscous flow effects. The method is then validated by predicting dynamic test results on the basis of corresponding static test data obtained at the same subscale flow conditions, and the effect of Reynolds number on the static aerodynamic characteristics is determined from subscale to full-scale flow conditions.

  19. Predicting S-wave velocities for unconsolidated sediments at low effective pressure

    USGS Publications Warehouse

    Lee, Myung W.

    2010-01-01

    Accurate S-wave velocities for shallow sediments are important for performing reliable elastic inversion for gas hydrate-bearing sediments and for evaluating velocity models that predict S-wave velocities, but few S-wave velocities are measured at low effective pressure. Predicting S-wave velocities by conventional methods based on the Biot-Gassmann theory appears to be inaccurate for laboratory-measured velocities at effective pressures less than about 4-5 megapascals (MPa). Measured laboratory and well-log velocities show two distinct trends for S-wave velocity with respect to P-wave velocity: one for S-wave velocities less than about 0.6 kilometer per second (km/s), which approximately corresponds to effective pressures of about 4-5 MPa, and the other for S-wave velocities greater than 0.6 km/s. To accurately predict S-wave velocities at effective pressures less than about 4-5 MPa, a pressure-dependent parameter that relates the consolidation parameter to the shear modulus of the sediments at low effective pressure is proposed. The proposed method worked well for predicting S-wave velocities of water-saturated sands measured in the laboratory. However, it underestimates the well-log S-wave velocities measured in the Gulf of Mexico, whereas the conventional method performs well for the well-log velocities. The P-wave velocity dispersion due to fluid in the pore spaces, which is more pronounced at high frequency and effective pressures less than about 4 MPa, is probably a cause of this discrepancy.

  20. A Coarse-Grained Elastic Network Atom Contact Model and Its Use in the Simulation of Protein Dynamics and the Prediction of the Effect of Mutations

    PubMed Central

    Frappier, Vincent; Najmanovich, Rafael J.

    2014-01-01

    Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form, and different choices trade off speed against accuracy. At one extreme are accurate but slow molecular-dynamics-based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, making it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms them in the generation of conformational ensembles. Here we introduce a new application for NMA methods, using ENCoM to predict the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for predicting the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We also show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased in predicting stabilizing mutations. PMID:24762569
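
    For readers unfamiliar with elastic network models, the sketch below builds the Hessian of a plain anisotropic network model (uniform springs within a cutoff) and diagonalizes it to obtain normal modes; ENCoM's pairwise atom-type term is deliberately omitted, and the coordinates are random stand-ins for Cα positions.

```python
import numpy as np

def anm_modes(coords, cutoff=15.0, gamma=1.0):
    """Normal modes of a simple anisotropic elastic network (Cα-only, uniform
    spring constant). ENCoM additionally modulates springs by pairwise
    atom-type contacts, which this sketch omits."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2   # 3x3 super-element
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    vals, vecs = np.linalg.eigh(H)
    # For a connected network, the first six eigenvalues are ~0 (rigid-body).
    return vals[6:], vecs[:, 6:]

rng = np.random.default_rng(5)
ca = rng.uniform(0, 30, size=(40, 3))   # fake Cα coordinates
freqs2, modes = anm_modes(ca)
print(freqs2[:5])                       # lowest internal-mode eigenvalues
```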

  1. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for the sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at a limited number of locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (hard90 and hard70). We developed optimal predictive models of seabed hardness using random forest (RF) based on point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. The effects of highly correlated, important, and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach of pre-selecting predictive variables by excluding highly correlated variables needs to be re-examined; 4) identifying the important and unimportant predictors provides useful guidance for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency, and caution should be taken when applying filter FS methods to select predictive models. PMID:26890307

  2. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for the sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at a limited number of locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (hard90 and hard70). We developed optimal predictive models of seabed hardness using random forest (RF) based on point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. The effects of highly correlated, important, and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach of pre-selecting predictive variables by excluding highly correlated variables needs to be re-examined; 4) identifying the important and unimportant predictors provides useful guidance for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency, and caution should be taken when applying filter FS methods to select predictive models.
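
    A minimal sketch of the averaged variable importance (AVI) idea on synthetic data: average impurity importances over repeated random forest fits, keep the top predictors, and compare cross-validated accuracy. Feature counts and thresholds are illustrative only, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for multibeam-derived predictors and 4 hardness classes.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Averaged variable importance (AVI): average impurity importances over
# repeated fits to damp the run-to-run noise of a single forest.
imps = np.mean([RandomForestClassifier(n_estimators=200, random_state=s)
                .fit(X, y).feature_importances_ for s in range(10)], axis=0)

# Keep the top predictors and compare cross-validated accuracy.
top = np.argsort(imps)[::-1][:8]
full = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
slim = cross_val_score(RandomForestClassifier(random_state=0), X[:, top], y, cv=5).mean()
print(f"all 20 features: {full:.3f}   top 8 by AVI: {slim:.3f}")
```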

  3. Efficient network disintegration under incomplete information: the comic effect of link prediction.

    PubMed

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-10

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion, and perturbing cancer networks. The crux of the matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments on both synthetic and real networks suggest that, by using link prediction to recover part of the missing links in advance, the method can largely improve network disintegration performance. Moreover, to our surprise, we find that when the amount of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the "comic effect" of link prediction: the network is reshaped through the addition of links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, in which the important parts are emphasized.
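
    The two-stage recipe, recovering likely missing links first and then attacking the reshaped network, can be sketched with networkx; the similarity index (resource allocation), attack strategy (highest degree), and fraction of hidden links are choices made for the example, not necessarily the paper's.

```python
import networkx as nx

# Observed (incomplete) network: a random graph with some links hidden.
G_true = nx.barabasi_albert_graph(200, 3, seed=7)
G_obs = G_true.copy()
hidden = list(G_true.edges())[::10]          # pretend ~10% of links are unknown
G_obs.remove_edges_from(hidden)

# Step 1: recover likely missing links with a local similarity index
# (resource allocation), then add the top-scoring candidate pairs.
scores = nx.resource_allocation_index(G_obs, nx.non_edges(G_obs))
top = sorted(scores, key=lambda t: t[2], reverse=True)[:len(hidden)]
G_obs.add_edges_from((u, v) for u, v, _ in top)

# Step 2: attack the reshaped network by removing the highest-degree nodes,
# then measure the damage on the *true* network.
attack = sorted(G_obs.degree, key=lambda t: t[1], reverse=True)[:10]
G_true.remove_nodes_from(n for n, _ in attack)
giant = max(len(c) for c in nx.connected_components(G_true))
print(f"giant component after attack: {giant} of 200 nodes")
```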

  4. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors of a memristive oscillator with cubic nonlinearities via the harmonic balance method, also called the describing function method, which was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and present a prediction of the existence of chaotic behaviors. To ensure that the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  5. Application of ride quality technology to predict ride satisfaction for commuter-type aircraft

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Kuhlthau, A. R.; Richards, L. G.

    1975-01-01

    A method was developed to predict passenger satisfaction with the ride environment of a transportation vehicle. This general approach was applied to a commuter-type aircraft for illustrative purposes, and the effects of terrain, altitude, and seat location were examined. The method predicts the variation in the proportion of passengers satisfied for any set of flight conditions. In addition, several non-commuter aircraft were analyzed for comparison, and other uses of the model are described. The method has advantages for design, evaluation, and operating decisions.

  6. Single-Event Effects Ground Testing and On-Orbit Rate Prediction Methods: The Past, Present and Future

    NASA Technical Reports Server (NTRS)

    Reed, Robert A.; Kinnison, Jim; Pickel, Jim; Buchner, Stephen; Marshall, Paul W.; Kniffin, Scott; LaBel, Kenneth A.

    2003-01-01

    Over the past 27 years or so, increased concern over single-event effects (SEE) in spacecraft systems has resulted in research, development, and engineering activities centered on a better understanding of the space radiation environment, single-event effect predictive methods, ground test protocols, and test facility development. This research has led to fairly well-developed methods for assessing the impact of the space radiation environment on systems that contain SEE-sensitive devices, and to the development of mitigation strategies at either the system or the device level.

  7. A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques

    NASA Astrophysics Data System (ADS)

    Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.

    Stochastic prediction techniques, including autocovariance, autoregressive, autoregressive moving average, and neural network methods, were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of discrete wavelet transform (DWT) filtering with a stochastic prediction method. The results of the combination of LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs and that, for the combined forecast methods, the mean prediction errors for 1 to about 70 days into the future are of the same order as those of the method used by the IERS RS/PC.
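
    A minimal sketch of combination procedure 1 on a synthetic LODR-like series: a least-squares fit of trend plus annual and semiannual harmonics is extrapolated, and an AR model fitted to the LS residuals is iterated forward and added back. The series, model orders, and horizon are invented for the example, not IERS data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic LODR-like series: linear trend, an annual term, and AR(1) noise.
t = np.arange(3000.0)
ar_noise = np.zeros_like(t)
for i in range(1, t.size):
    ar_noise[i] = 0.95 * ar_noise[i - 1] + rng.normal(scale=0.05)
x = 1e-4 * t + 0.3 * np.sin(2 * np.pi * t / 365.25) + ar_noise

# Step 1: least-squares fit of trend + annual/semiannual harmonics.
def design(tt):
    w1, w2 = 2 * np.pi / 365.25, 2 * np.pi / 182.63
    return np.column_stack([np.ones_like(tt), tt,
                            np.sin(w1 * tt), np.cos(w1 * tt),
                            np.sin(w2 * tt), np.cos(w2 * tt)])

coef, *_ = np.linalg.lstsq(design(t), x, rcond=None)
resid = x - design(t) @ coef

# Step 2: AR(p) model fitted to the LS residuals, then iterated forward.
p, horizon = 5, 30
rows = np.column_stack([resid[p - k - 1:resid.size - k - 1] for k in range(p)])
phi, *_ = np.linalg.lstsq(rows, resid[p:], rcond=None)

hist, ar_fc = list(resid[-p:]), []
for _ in range(horizon):
    nxt = float(phi @ np.array(hist[-p:][::-1]))  # most recent value = lag 1
    ar_fc.append(nxt)
    hist.append(nxt)

# Combined forecast: LS extrapolation plus the stochastic residual forecast.
t_fut = np.arange(t.size, t.size + horizon, dtype=float)
forecast = design(t_fut) @ coef + np.array(ar_fc)
print(forecast[:5])
```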

  8. Progress in space weather predictions and applications

    NASA Astrophysics Data System (ADS)

    Lundstedt, H.

    Today's methods for predicting space weather and its effects are far more advanced than before: yesterday's statistical methods have been replaced by integrated knowledge-based neuro-computing models and MHD methods. Within the ESA Space Weather Programme Study, a real-time forecast service has been developed for space weather and its effects, and this prototype is now being implemented for specific users. Today's applications are not only far more numerous but also much more advanced and user-oriented. A scientist needs real-time predictions of a global index as input for an MHD model calculating the radiation dose for EVAs. A power company system operator needs a prediction of the local value of a geomagnetically induced current. A science tourist needs to know whether or not an aurora will occur. Soon we might even be able to predict tropospheric climate changes and weather caused by space weather.

  9. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used because they predict welding deformation effectively. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of the two members. In this paper, we propose a new approach to simulate the deformation of both members. The method simulates fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients along the thickness direction, with fixed temperatures at the intersection nodes. For verification purposes, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method, and the proposed method. Compared with the experimental results, the proposed method can effectively predict welding deformation for fillet welds.

  10. Development of Predictive Energy Management Strategies for Hybrid Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Baker, David

    Studies have shown that obtaining and utilizing information about the future state of vehicles can improve vehicle fuel economy (FE). However, there has been a lack of research into the impact of real-world prediction error on FE improvements, and into whether near-term technologies can be utilized to improve FE. This study investigates the effect of prediction error on FE. First, a speed prediction method is developed and trained with real-world driving data gathered only from the subject vehicle (a local data collection method). This speed prediction method informs a predictive powertrain controller that determines the optimal engine operation for various prediction durations. The optimal engine operation is input into a high-fidelity fuel economy model of a Toyota Prius. A tradeoff analysis between prediction duration and prediction fidelity was completed to determine which prediction duration resulted in the largest FE improvement. Results demonstrate that 60-90 second predictions produced the highest FE improvement over the baseline, achieving up to a 4.8% FE increase. A second speed prediction method utilizing simulated vehicle-to-vehicle (V2V) communication was developed to determine whether near-term technologies could further improve prediction fidelity. This prediction method produced lower variation in speed prediction error and realized a larger FE improvement than the local prediction method for longer prediction durations, achieving up to a 6% FE improvement. This study concludes that speed prediction and prediction-informed optimal vehicle energy management can produce FE improvements under real-world prediction error and drive cycle variability, as up to 85% of the FE benefit of perfect speed prediction was achieved with the proposed prediction methods.

  11. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially when computing WLP models with a hard-limiting weighting function: a sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
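
    The forward-backward idea can be sketched with the classical modified-covariance formulation, in which the same coefficients must predict each sample from its past (forward) and from its future (backward); the QCP temporal weighting itself is omitted, and the test signal is synthetic.

```python
import numpy as np

def fb_lpc(x, order):
    """Linear prediction coefficients minimizing the summed forward and
    backward prediction error (the 'modified covariance' formulation).
    Each interior sample is thus predicted from both past and future samples."""
    n = len(x)
    rows, targets = [], []
    for i in range(order, n):                 # forward: x[i] from x[i-1..i-p]
        rows.append(x[i - order:i][::-1])
        targets.append(x[i])
    for i in range(n - order):                # backward: x[i] from x[i+1..i+p]
        rows.append(x[i + 1:i + order + 1])
        targets.append(x[i])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return a

# Synthetic vowel-like signal: a decaying resonance plus noise.
rng = np.random.default_rng(9)
t = np.arange(400)
x = np.sin(2 * np.pi * 0.07 * t) * np.exp(-t / 300) + 0.01 * rng.normal(size=400)

a = fb_lpc(x, order=10)
roots = np.roots(np.r_[1.0, -a])             # poles of the all-pole model
freqs = np.angle(roots[np.imag(roots) > 0])  # formant-like pole angles (rad)
print(np.sort(freqs) / (2 * np.pi))          # normalized freqs; ~0.07 expected
```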

  12. Forecasting Occurrences of Activities.

    PubMed

    Minor, Bryan; Cook, Diane J

    2017-07-01

    While activity recognition has been shown to be valuable for pervasive computing applications, less work has focused on techniques for forecasting the future occurrence of activities. We present an activity forecasting method to predict the time that will elapse until a target activity occurs. This method generates an activity forecast using a regression tree classifier and offers an advantage over sequence prediction methods in that it can predict expected time until an activity occurs. We evaluate this algorithm on real-world smart home datasets and provide evidence that our proposed approach is most effective at predicting activity timings.
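
    A minimal sketch of forecasting time-until-activity with a regression tree; the snapshot features, the target activity, and the toy timing pattern are all invented, not the smart-home datasets used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(10)

# Hypothetical smart-home snapshots: [hour of day, minutes since last motion,
# last activity code]; target = minutes until the resident next cooks.
X = np.column_stack([rng.integers(0, 24, 500),
                     rng.uniform(0, 120, 500),
                     rng.integers(0, 6, 500)])
y = np.abs(17 - X[:, 0]) * 60 + rng.normal(scale=20, size=500)  # toy pattern

tree = DecisionTreeRegressor(max_depth=5).fit(X, y)
print(tree.predict([[16, 10.0, 2]]))   # forecast minutes until the activity
```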

  13. Predicting missing links via correlation between nodes

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhang, Yi-Cheng

    2015-10-01

    As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of an existing link between two nodes based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed so far. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
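
    A small sketch of the correlation idea on a standard test graph: build a high-order path-count matrix, take Pearson correlations between its rows as node similarity, and score unobserved pairs. The path-length weighting is an arbitrary choice, and the fusion with the resource allocation method is not shown.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

# High-order path counts: paths of length 2 and (down-weighted) length 3.
P = A @ A + 0.1 * (A @ A @ A)

# Pearson correlation between the path-count profiles of every pair of nodes;
# row i of P describes how node i "reaches" the rest of the network.
sim = np.corrcoef(P)

# Score only unobserved pairs and list the top candidates.
cand = [(sim[i, j], i, j) for i in range(len(A)) for j in range(i + 1, len(A))
        if A[i, j] == 0]
for s, i, j in sorted(cand, reverse=True)[:5]:
    print(f"({i}, {j}) score={s:.3f}")
```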

  14. Prediction of missing links and reconstruction of complex networks

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng-Jun; Zeng, An

    2016-04-01

    Predicting missing links in complex networks is of great significance from both theoretical and practical points of view; it not only helps us understand the evolution of real systems but also relates to many applications in social, biological, and online systems. In this paper, we study the features of different simple link prediction methods, revealing that they may distort networks' structural and dynamical properties. Moreover, we find that high prediction accuracy does not necessarily correspond to high performance in preserving network properties when link prediction methods are used to reconstruct networks. Our work highlights the importance of considering the feedback effect of link prediction methods on network properties when designing such algorithms.

  15. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2018-05-01

    The laser powder bed fusion (L-PBF) process has been investigated extensively for building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the microstructural phases of the current layer was modeled, which is the key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation of an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part, whereas the layer-heating method is a potential approach. Experiments were conducted to validate the model predictions.

  16. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2017-12-01

    The laser powder bed fusion (L-PBF) process has been investigated extensively for building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the microstructural phases of the current layer was modeled, which is the key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation of an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part, whereas the layer-heating method is a potential approach. Experiments were conducted to validate the model predictions.

  17. An improved method for predicting the effects of flight on jet mixing noise

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1979-01-01

    A method for predicting the effects of flight on jet mixing noise has been developed on the basis of the jet noise theory of Ffowcs-Williams (1963) and data derived from model-jet/free-jet simulated flight tests. Predicted and experimental values are compared for the J85 turbojet engine on the Bertin Aerotrain, the low-bypass refanned JT8D engine on a DC-9, and the high-bypass JT9D engine on a DC-10. Over the jet velocity range from 280 to 680 m/sec, the predictions show a standard deviation of 1.5 dB.

  18. Simplified methods of predicting aircraft rolling moments due to vortex encounters

    DOT National Transportation Integrated Search

    1977-05-01

    Computational methods suitable for fast and accurate prediction of rolling moments on aircraft : encountering wake vortices are presented. Appropriate modifications to strip theory are developed which account for the effects of finite wingspan. It is...

  19. Ensemble positive unlabeled learning for disease gene identification.

    PubMed

    Yang, Peng; Li, Xiaoli; Chua, Hon-Nian; Kwoh, Chee-Keong; Ng, See-Kiong

    2014-01-01

    An increasing number of genes have been experimentally confirmed in recent years as causative genes of various human diseases. This newly available knowledge can be exploited by machine learning methods to discover additional unknown genes that are likely to be associated with diseases. In particular, positive-unlabeled learning (PU learning) methods, which require only a positive training set P (confirmed disease genes) and an unlabeled set U (the unknown candidate genes) instead of a negative training set N, have been shown to be effective in uncovering new disease genes in this scenario. However, using only a single source of data for prediction is susceptible to bias due to incompleteness and noise in the genomic data, and a single machine learning predictor is prone to bias caused by the inherent limitations of individual methods. In this paper, we propose an effective PU learning framework that integrates multiple biological data sources and an ensemble of powerful machine learning classifiers for disease gene identification. Our proposed method integrates data from multiple biological sources for training PU learning classifiers. A novel ensemble-based PU learning method, EPU, is then used to integrate multiple PU learning classifiers to achieve accurate and robust disease gene predictions. Our evaluation experiments across six disease groups showed that EPU achieved significantly better results than various state-of-the-art prediction methods as well as ensemble learning classifiers. By integrating multiple biological data sources for training and the outputs of an ensemble of PU learning classifiers for prediction, we are able to minimize the potential bias and errors of individual data sources and machine learning algorithms, achieving more accurate and robust disease gene predictions. EPU thus provides an effective framework for integrating additional biological and computational resources for better disease gene prediction.
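
    A generic bagging-style PU learning sketch (in the spirit of, but not identical to, the EPU ensemble): random subsamples of the unlabelled set stand in as negatives, a classifier is trained per round, and out-of-bag unlabelled examples accumulate averaged scores. The data and model choices are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(11)

# Synthetic gene features; only some positives (disease genes) are labelled.
X, y_true = make_classification(n_samples=1000, n_features=25, random_state=0)
pos = np.flatnonzero(y_true == 1)
P = rng.choice(pos, size=60, replace=False)          # known disease genes
U = np.setdiff1d(np.arange(len(X)), P)               # everything else: unlabelled

# Bagging PU learning: repeatedly treat a random subsample of U as negatives,
# train a classifier, and average the scores it assigns to the rest of U.
scores = np.zeros(len(X))
counts = np.zeros(len(X))
for _ in range(50):
    neg = rng.choice(U, size=len(P), replace=False)
    idx = np.r_[P, neg]
    lab = np.r_[np.ones(len(P)), np.zeros(len(neg))]
    clf = DecisionTreeClassifier(max_depth=5).fit(X[idx], lab)
    out = np.setdiff1d(U, neg)                       # out-of-bag unlabelled genes
    scores[out] += clf.predict_proba(X[out])[:, 1]
    counts[out] += 1

rank = np.argsort(-(scores / np.maximum(counts, 1)))
print(rank[:10])        # top candidate disease genes
```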

  20. Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.

    PubMed

    Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R

    2018-01-26

    Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.

  1. Recent NASA Research on Aerodynamic Modeling of Post-Stall and Spin Dynamics of Large Transport Airplanes

    NASA Technical Reports Server (NTRS)

    Murch, Austin M.; Foster, John V.

    2007-01-01

    A simulation study was conducted to investigate aerodynamic modeling methods for prediction of post-stall flight dynamics of large transport airplanes. The research approach involved integrating dynamic wind tunnel data from rotary balance and forced oscillation testing with static wind tunnel data to predict aerodynamic forces and moments during highly dynamic departure and spin motions. Several state-of-the-art aerodynamic modeling methods were evaluated, and the flight dynamics predicted by these various approaches were compared. Results showed that the different modeling methods had varying effects on the predicted flight dynamics, and the differences were most significant during uncoordinated maneuvers. Preliminary wind tunnel validation data indicated the potential of the various methods for predicting steady spin motions.

  2. Developing and utilizing an Euler computational method for predicting the airframe/propulsion effects for an aft-mounted turboprop transport. Volume 2: User guide

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Neback, H. E.; Kao, T. J.; Yu, N. Y.; Kusunose, K.

    1991-01-01

    This manual explains how to use an Euler-based computational method for predicting the airframe/propulsion integration effects for an aft-mounted turboprop transport. The propeller power effects are simulated by the actuator disk concept. The method consists of a global flow field analysis and an embedded flow solution for predicting the detailed flow characteristics in the local vicinity of an aft-mounted propfan engine. The computational procedure includes the use of several computer programs performing four main functions: grid generation, Euler solution, grid embedding, and streamline tracing. This user's guide provides information for these programs, including input data preparation with sample input decks, output descriptions, and sample Unix scripts for program execution in the UNICOS environment.

  3. Link prediction based on nonequilibrium cooperation effect

    NASA Astrophysics Data System (ADS)

    Li, Lanxi; Zhu, Xuzhen; Tian, Hui

    2018-04-01

    Link prediction in complex networks has become a common focus of many researchers. But most existing methods concentrate on neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large degree heterogeneity as nonequilibrium between nodes. This nonequilibrium facilitates stable cooperation between endpoints, so that two endpoints with large degree heterogeneity tend to connect stably. We name this phenomenon the nonequilibrium cooperation effect. This paper therefore proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. A theoretical analysis is presented first, and experiments are then performed on 12 real-world networks to compare mainstream methods with our indices through numerical analysis.
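
    As a purely hypothetical illustration of the idea (the abstract does not spell out the proposed indices), one could score a candidate link by its common-neighbor support boosted by the degree heterogeneity of its endpoints:

        import networkx as nx

        def nonequilibrium_score(G, x, y, alpha=1.0):
            # Common-neighbor support between the two endpoints.
            cn = len(list(nx.common_neighbors(G, x, y)))
            # Degree heterogeneity |k_x - k_y| as the "nonequilibrium" boost.
            heterogeneity = abs(G.degree(x) - G.degree(y))
            return cn * (1.0 + heterogeneity) ** alpha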

  4. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis.

    PubMed

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it.

  6. A Hybrid Supervised/Unsupervised Machine Learning Approach to Solar Flare Prediction

    NASA Astrophysics Data System (ADS)

    Benvenuto, Federico; Piana, Michele; Campi, Cristina; Massone, Anna Maria

    2018-01-01

    This paper introduces a novel method for flare forecasting, combining prediction accuracy with the ability to identify the most relevant predictive variables. This result is obtained by means of a two-step approach: first, a supervised regularization method for regression, namely LASSO, is applied, where a sparsity-enhancing penalty term allows the identification of the significance with which each data feature contributes to the prediction; then, an unsupervised fuzzy clustering technique for classification, namely Fuzzy C-Means, is applied, where the regression outcome is partitioned through the minimization of a cost function and without focusing on the optimization of a specific skill score. This approach is therefore hybrid, since it combines supervised and unsupervised learning; realizes classification in an automatic, skill-score-independent way; and provides effective prediction performance even in the case of imbalanced data sets. Its prediction power is verified against NOAA Space Weather Prediction Center data, using as a test set data in the range between 1996 August and 2010 December, and as a training set data in the range between 1988 December and 1996 June. To validate the method, we computed several skill scores typically utilized in flare prediction and compared the values provided by the hybrid approach with the ones provided by several standard (non-hybrid) machine learning methods. The results showed that the hybrid approach performs classification better than all other supervised methods and with an effectiveness comparable to that of clustering methods; but, in addition, it provides a reliable ranking of the weights with which the data properties contribute to the forecast.
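
    A minimal sketch of the two-step hybrid in Python, assuming toy stand-in data: LASSO yields a continuous flare-intensity estimate, and a small one-dimensional Fuzzy C-Means then splits those estimates into "flare" and "no flare" clusters without tuning any skill-score threshold.

        import numpy as np
        from sklearn.linear_model import Lasso

        def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100):
            centers = np.percentile(x, np.linspace(10, 90, c))
            for _ in range(n_iter):
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u = 1.0 / d ** (2.0 / (m - 1.0))
                u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
                centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
            return u, centers

        rng = np.random.default_rng(0)                         # toy stand-in data
        X_train, y_train = rng.normal(size=(200, 10)), rng.normal(size=200)
        X_test = rng.normal(size=(50, 10))

        lasso = Lasso(alpha=0.01).fit(X_train, y_train)        # step 1: sparse regression
        u, centers = fuzzy_cmeans_1d(lasso.predict(X_test))    # step 2: unsupervised split
        flare = u[:, np.argmax(centers)] > 0.5                 # high-center cluster = "flare"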

  7. Performance and robustness of penalized and unpenalized methods for genetic prediction of complex human disease.

    PubMed

    Abraham, Gad; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael

    2013-02-01

    A central goal of medical genetics is to accurately predict complex disease from genotypes. Here, we present a comprehensive analysis of simulated and real data using lasso and elastic-net penalized support-vector machine models, a mixed-effects linear model, a polygenic score, and unpenalized logistic regression. In simulation, the sparse penalized models achieved lower false-positive rates and higher precision than the other methods for detecting causal SNPs. The common practice of prefiltering SNP lists for subsequent penalized modeling was examined and shown to substantially reduce the ability to recover the causal SNPs. Using genome-wide SNP profiles across eight complex diseases within cross-validation, lasso and elastic-net models achieved substantially better predictive ability in celiac disease, type 1 diabetes, and Crohn's disease, and had equivalent predictive ability in the rest, with the results in celiac disease strongly replicating between independent datasets. We investigated the effect of linkage disequilibrium on the predictive models, showing that the penalized methods leverage this information to their advantage, compared with methods that assume SNP independence. Our findings show that sparse penalized approaches are robust across different disease architectures, producing phenotype predictions and variance explained as good as or better than those of the other methods. This has fundamental ramifications for the selection and future development of methods to genetically predict human disease. © 2012 WILEY PERIODICALS, INC.
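
    For orientation, a hedged sketch of the penalized-model side of such a comparison (scikit-learn's elastic-net logistic regression on toy 0/1/2 genotype codes, not the authors' penalized SVM implementation):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        G = rng.integers(0, 3, size=(500, 1000)).astype(float)  # toy SNP genotypes
        y = rng.integers(0, 2, size=500)                        # toy case/control labels

        model = LogisticRegression(penalty="elasticnet", solver="saga",
                                   l1_ratio=0.5, C=1.0, max_iter=5000)
        auc = cross_val_score(model, G, y, cv=5, scoring="roc_auc").mean()
        nonzero = (np.abs(model.fit(G, y).coef_) > 0).sum()     # sparsity of the fit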

  8. Prediction of Process-Induced Distortions in L-Shaped Composite Profiles Using Path-Dependent Constitutive Law

    NASA Astrophysics Data System (ADS)

    Ding, Anxin; Li, Shuxin; Wang, Jihui; Ni, Aiqing; Sun, Liangliang; Chang, Lei

    2016-10-01

    In this paper, the corner spring-in angles of AS4/8552 L-shaped composite profiles with different thicknesses are predicted using a path-dependent constitutive law that accounts for the variation of material properties due to phase change during curing. The prediction accuracy mainly depends on the properties in the rubbery and glassy states, which are obtained by a homogenization method rather than by experimental measurement. Both analytical and finite element (FE) homogenization methods are applied to predict the overall properties of the AS4/8552 composite. The effect of fiber volume fraction on the properties is investigated for both the rubbery and glassy states using both methods, and the predicted results are compared with experimental measurements for the glassy state. Good agreement is achieved between the predicted results and the available experimental data, showing the reliability of the homogenization method. Furthermore, the corner spring-in angles of the L-shaped composite profiles are measured experimentally, validating the path-dependent constitutive law as well as the properties predicted by the FE homogenization method.

  9. Chronic and Acute Stress, Gender, and Serotonin Transporter Gene-Environment Interactions Predicting Depression Symptoms in Youth

    ERIC Educational Resources Information Center

    Hammen, Constance; Brennan, Patricia A.; Keenan-Miller, Danielle; Hazel, Nicholas A.; Najman, Jake M.

    2010-01-01

    Background: Many recent studies of serotonin transporter gene-by-environment effects predicting depression have used stress assessments with undefined or poor psychometric methods, possibly contributing to wide variation in findings. The present study attempted to distinguish between effects of acute and chronic stress to predict depressive…

  10. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, i.e., the region of the filter coefficient space where the coefficients play a major role in multiple removal. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computational burden effectively while achieving similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
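
    The solver at the heart of this approach is standard: FISTA for the L1-regularized least-squares problem min_x ||Ax - b||_2^2 + lam*||x||_1. The textbook version below (not the authors' seismic implementation) is the kind of routine applied to the filter coefficients in the limited supporting region.

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            L = 2.0 * np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                g = 2.0 * A.T @ (A @ z - b)            # gradient of the data-fit term at z
                w = z - g / L
                x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
                x, t = x_new, t_new
            return x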

  11. Improved nonlinear prediction method

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have long been addressed by researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena, involving data that are contaminated by noise. Various techniques to improve prediction have therefore been introduced to analyze and predict time series data. In view of the importance of the analysis and the accuracy of the prediction results, a study was undertaken to test the effectiveness of an improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Phase space reconstruction is then performed on the composite (one-dimensional) data to reconstruct a number of space dimensions, and finally the local linear approximation method is employed to make a prediction based on the phase space. This improved method was tested on logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that, using the improved method, the predictions were in close agreement with the observed values, and the correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improved way to analyze and predict noisy time series data, without involving any noise reduction method, was introduced.
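
    The core of the method is easy to state in code. A minimal sketch (embedding dimension, delay and neighbour count are arbitrary choices here): delay-embed the series, find the nearest neighbours of the current state, and fit a local linear map to their one-step futures.

        import numpy as np

        def local_linear_predict(series, dim=3, tau=1, k=10):
            # series: 1-D NumPy array. Row j of the embedding is
            # [x_j, x_{j+tau}, ..., x_{j+(dim-1)tau}].
            n = len(series) - (dim - 1) * tau
            emb = np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])
            query, past = emb[-1], emb[:-1]
            targets = series[(dim - 1) * tau + 1 :]            # one-step futures of `past`
            idx = np.argsort(np.linalg.norm(past - query, axis=1))[:k]
            X = np.c_[np.ones(k), past[idx]]
            coef, *_ = np.linalg.lstsq(X, targets[idx], rcond=None)
            return np.r_[1.0, query] @ coef                    # one-step-ahead forecast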

  12. Computational Prediction of Metabolism: Sites, Products, SAR, P450 Enzyme Dynamics, and Mechanisms

    PubMed Central

    2012-01-01

    Metabolism of xenobiotics remains a central challenge for the discovery and development of drugs, cosmetics, nutritional supplements, and agrochemicals. Metabolic transformations are frequently related to the incidence of toxic effects that may result from the emergence of reactive species, the systemic accumulation of metabolites, or the induction of metabolic pathways. Experimental investigation of the metabolism of small organic molecules is particularly resource demanding; hence, computational methods are of considerable interest to complement experimental approaches. This review provides a broad overview of structure- and ligand-based computational methods for the prediction of xenobiotic metabolism. Current computational approaches to address xenobiotic metabolism are discussed from three major perspectives: (i) prediction of sites of metabolism (SOMs), (ii) elucidation of potential metabolites and their chemical structures, and (iii) prediction of direct and indirect effects of xenobiotics on metabolizing enzymes, where the focus is on the cytochrome P450 (CYP) superfamily of enzymes, the cardinal xenobiotic-metabolizing enzymes. For each of these domains, a variety of approaches and their applications are systematically reviewed, including expert systems, data mining approaches, quantitative structure–activity relationships (QSARs), machine learning-based methods, pharmacophore-based algorithms, shape-focused techniques, molecular interaction fields (MIFs), reactivity-focused techniques, protein–ligand docking, molecular dynamics (MD) simulations, and combinations of methods. Predictive metabolism is a developing area, and there is still enormous potential for improvement. However, it is clear that the combination of rapidly increasing amounts of available ligand- and structure-related experimental data (in particular, quantitative data) with novel and diverse simulation and modeling approaches is accelerating the development of effective tools for prediction of in vivo metabolism, which is reflected by the diverse and comprehensive data sources and methods for metabolism prediction reviewed here. This review attempts to survey the range and scope of computational methods applied to metabolism prediction and also to compare and contrast their applicability and performance. PMID:22339582

  13. Propeller noise prediction

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.

    1983-01-01

    Analytic propeller noise prediction involves a sequence of computations culminating in the application of acoustic equations. The prediction sequence currently used by NASA in its ANOPP (aircraft noise prediction) program is described. The elements of the sequence are called program modules. The first group of modules analyzes the propeller geometry, the aerodynamics, including both potential and boundary layer flow, the propeller performance, and the surface loading distribution. This group of modules is based entirely on aerodynamic strip theory. The next group of modules deals with the actual noise prediction, based on data from the first group. Deterministic predictions of periodic thickness and loading noise are made using Farassat's time-domain methods. Broadband noise is predicted by the semi-empirical Schlinker-Amiet method. Near-field predictions of fuselage surface pressures include the effects of boundary layer refraction and (for a cylinder) scattering. Far-field predictions include atmospheric and ground effects. Experimental data from subsonic and transonic propellers are compared, and NASA's future directions in propeller noise technology development are indicated.

  14. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
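
    The non-iterative ingredient is classical: once Z = U S V' is available, shrinkage estimates of marker effects come straight from the SVD. A sketch of the ridge/SNP-BLUP building block (not the full SVD-based BayesC procedure, which additionally computes posterior probabilities and marker-specific variances):

        import numpy as np

        def marker_effects_svd(Z, y, lam):
            """Solve b = (Z'Z + lam*I)^-1 Z'y via the SVD Z = U S V'."""
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            shrink = s / (s ** 2 + lam)           # one filter factor per singular value
            return Vt.T @ (shrink * (U.T @ y))    # estimated marker effects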

  15. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the performance of the prediction models. The experimental results show that among the three tested algorithms, the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
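
    For intuition, a bare-bones PSO tuning an SVM regressor's hyperparameters is sketched below (plain PSO only; the natural-selection and simulated-annealing refinements that turn PSO into NAPSO are omitted, and all constants are illustrative).

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def fitness(p, X, y):                      # p = (log10 C, log10 gamma)
            svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
            return -cross_val_score(svr, X, y, cv=3,
                                    scoring="neg_mean_squared_error").mean()

        def pso_tune_svr(X, y, n=20, iters=30, seed=2):
            rng = np.random.default_rng(seed)
            pos = rng.uniform(-2, 2, size=(n, 2)); vel = np.zeros((n, 2))
            pbest = pos.copy()
            pcost = np.array([fitness(p, X, y) for p in pos])
            for _ in range(iters):
                g = pbest[np.argmin(pcost)]        # global best particle
                vel = (0.7 * vel
                       + 1.5 * rng.random((n, 2)) * (pbest - pos)
                       + 1.5 * rng.random((n, 2)) * (g - pos))
                pos = pos + vel
                cost = np.array([fitness(p, X, y) for p in pos])
                better = cost < pcost
                pbest[better], pcost[better] = pos[better], cost[better]
            return pbest[np.argmin(pcost)]         # best (log10 C, log10 gamma)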

  16. Analysis of algae growth mechanism and water bloom prediction under the effect of multi-affecting factor.

    PubMed

    Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin

    2017-03-01

    Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with intelligent models is put forward. First, through the process of photosynthesis, the main factors that affect the reproduction of the algae are analyzed. A compensation prediction method for multivariate time series analysis based on a neural network and a Support Vector Machine is put forward, combined with Kernel Principal Component Analysis for dimension reduction of the factors influencing blooms. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method can better compensate the multivariate time series prediction model and is an effective way to improve the description accuracy of algae growth and the prediction precision of water blooms.
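
    A minimal sketch of the dimension-reduction step feeding a predictor (scikit-learn Kernel PCA in front of an SVM regressor; the regressor stands in for the GA-optimized BP network and LS-SVM models, and all hyperparameters are illustrative):

        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import KernelPCA
        from sklearn.svm import SVR

        bloom_model = make_pipeline(
            KernelPCA(n_components=3, kernel="rbf"),   # compress correlated factors
            SVR(C=10.0),                               # predict algal density / chlorophyll-a
        )
        # bloom_model.fit(X_factors, y_density); bloom_model.predict(X_new)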

  17. Prediction of global and local model quality in CASP8 using the ModFOLD server.

    PubMed

    McGuffin, Liam J

    2009-01-01

    The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.

  18. A temperature match based optimization method for daily load prediction considering DLC effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.

    This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.

  19. Effects of Barometric Fluctuations on Well Water-Level Measurements and Aquifer Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spane, Frank A.

    1999-12-16

    This report examines the effects of barometric fluctuations on well water-level measurements and evaluates adjustment and removal methods for determining areal aquifer head conditions and aquifer test analysis. Two examples of Hanford Site unconfined aquifer tests are examined that demonstrate barometric response analysis and illustrate the predictive/removal capabilities of various methods for well water-level and aquifer total head values. Good predictive/removal characteristics were demonstrated, with the best corrective results provided by multiple-regression deconvolution methods.

  20. Failure prediction using machine learning and time series in optical network.

    PubMed

    Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo

    2017-08-07

    In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
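
    The DES half of the method is small enough to show whole. A minimal Holt-style double exponential smoother (smoothing constants are illustrative) extrapolates an equipment state parameter one or more steps ahead; the SVM then classifies the extrapolated state.

        def des_forecast(series, alpha=0.5, beta=0.3, horizon=1):
            """Holt-style double exponential smoothing; returns a point forecast."""
            level, trend = series[0], series[1] - series[0]
            for x in series[1:]:
                prev_level = level
                level = alpha * x + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return level + horizon * trend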

  1. The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2014-01-01

    A micromechanical method is employed for predicting the behavior of unidirectional composites in which the fiber orientation can possess various statistical misalignment distributions. The method relies on the probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which the initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment, but cannot provide the local fields. Fully random fiber distribution is presented as a special case of the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.

  2. Effect of reference genome selection on the performance of computational methods for genome-wide protein-protein interaction prediction.

    PubMed

    Muley, Vijaykumar Yogesh; Ranjan, Akash

    2012-01-01

    Recent progress in computational methods for predicting physical and functional protein-protein interactions has provided new insights into the complexity of biological processes. Most of these methods assume that functionally interacting proteins are likely to have a shared evolutionary history. This history can be traced out for the protein pairs of a query genome by correlating different evolutionary aspects of their homologs in multiple genomes known as the reference genomes. These methods include phylogenetic profiling, gene neighborhood and co-occurrence of the orthologous protein coding genes in the same cluster or operon, collectively known as genomic context methods. On the other hand, a method called mirrortree is based on the similarity of phylogenetic trees between two interacting proteins. Comprehensive performance analyses of these methods have been frequently reported in the literature. However, very few studies provide insight into the effect of reference genome selection on detection of meaningful protein interactions. We analyzed the performance of four methods and their variants to understand the effect of reference genome selection on prediction efficacy. We used six sets of reference genomes, sampled in accordance with phylogenetic diversity and relationship between organisms from 565 bacteria. We used Escherichia coli as a model organism and the gold standard datasets of interacting proteins reported in the DIP, EcoCyc and KEGG databases to compare the performance of the prediction methods. Higher performance for predicting protein-protein interactions was achievable even with 100-150 bacterial genomes out of the 565 genomes. Inclusion of archaeal genomes in the reference genome set improves performance. We find that, in order to obtain good performance, it is better to sample a few genomes of related genera of prokaryotes from the large number of available genomes. Moreover, such sampling allows for selecting 50-100 genomes for comparable prediction accuracy when computational resources are limited.
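
    The simplest of these genomic-context signals is easy to make concrete. Given 0/1 presence/absence profiles of two proteins across the chosen reference genomes (the very set this study varies), their phylogenetic-profile similarity can be scored, for example, with the Jaccard index:

        import numpy as np

        def profile_similarity(p1, p2):
            """Jaccard similarity of two 0/1 phylogenetic profiles."""
            both = np.logical_and(p1, p2).sum()
            either = np.logical_or(p1, p2).sum()
            return both / either if either else 0.0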

  3. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA), which combines the advantages of the LM and BC methods, is then proposed. In conjunction with the first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods better handle the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
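
    Two of the treatments above, in miniature and with illustrative parameter values (lam, a, b are assumptions, not the paper's calibrated values): the implicit Box-Cox route transforms flows before computing residuals, while the explicit LM route models residual standard deviation as a linear function of simulated flow.

        import numpy as np

        def boxcox_residuals(obs, sim, lam=0.3):
            # obs, sim: NumPy arrays of strictly positive flows.
            tf = lambda q: (q ** lam - 1.0) / lam if lam != 0 else np.log(q)
            return tf(obs) - tf(sim)          # residuals in transformed space

        def lm_standardized_residuals(obs, sim, a=0.1, b=0.0):
            sigma = a * sim + b               # LM: error sd grows linearly with flow
            return (obs - sim) / sigma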

  4. On the behavior of isolated and embedded carbon nano-tubes in a polymeric matrix

    NASA Astrophysics Data System (ADS)

    Rahimian-Koloor, Seyed Mostafa; Moshrefzadeh-Sani, Hadi; Mehrdad Shokrieh, Mahmood; Majid Hashemianzadeh, Seyed

    2018-02-01

    In the classical micro-mechanical method, the moduli of the reinforcement and the matrix are used to predict the stiffness of composites. However, using the classical micro-mechanical method to predict the stiffness of CNT/epoxy nanocomposites leads to overestimated results. One of the main reasons for this overestimation is that the method uses the stiffness of the isolated CNT and ignores the CNT nanoscale effect. In the present study, non-equilibrium molecular dynamics simulation was used to study the influence of CNT length on the stiffness of the nanocomposites through the isothermal-isobaric ensemble. The results indicated that, due to the nanoscale effects, the reinforcing efficiency of the embedded CNT is not constant and decreases with decreasing length. Based on the results, a relationship was derived that predicts the effective stiffness of an embedded CNT in terms of its length. It was shown that using this relationship leads to more accurate predictions of the elastic modulus of the nanocomposite, which was validated against experimental counterparts.

  5. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    PubMed

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as the similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility into three methods for predicting protein interactions based on the similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it improves neither these methods nor the original mirrortree method in predicting other types of interactions. That improvement comes at no cost in terms of applicability, since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.
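
    The mirrortree core that these methods share fits in a few lines: co-evolution is scored as the correlation between the two families' inter-species distance matrices, and the accessibility filter simply restricts which alignment columns contribute to those distances. A minimal sketch, assuming both matrices share the same species ordering:

        import numpy as np

        def mirrortree_score(dist_a, dist_b):
            """Pearson correlation of the upper triangles of two distance matrices."""
            iu = np.triu_indices_from(dist_a, k=1)
            return np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]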

  6. Prediction of essential proteins based on gene expression programming.

    PubMed

    Zhong, Jiancheng; Wang, Jianxin; Peng, Wei; Zhang, Zhen; Pan, Yi

    2013-01-01

    Essential proteins are indispensable for cell survival, and identifying them is very important for improving our understanding of how a cell works. There are various types of features related to the essentiality of proteins, and many methods have been proposed to combine some of them to predict essential proteins. However, it is still a big challenge to design an effective method that predicts essential proteins by integrating different features and explains how the selected features determine the essentiality of a protein. Gene expression programming (GEP) is a learning algorithm that learns relationships between variables in sets of data and then builds models to explain these relationships. In this work, we propose a GEP-based method to predict essential proteins by combining biological features and topological features. We carry out experiments on S. cerevisiae data. The experimental results show that our method achieves better prediction performance than methods using individual features. Moreover, our method outperforms some machine learning methods and performs as well as a method obtained by combining the outputs of eight machine learning methods. The accuracy of predicting essential proteins can thus be improved by using the GEP method to combine topological and biological features.

  7. Optimization of global model composed of radial basis functions using the term-ranking approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize a global model composed of radial basis functions and improve its predictability. The effectiveness of the proposed method is examined on numerical simulations and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.

  8. A new method for enhancer prediction based on deep belief network.

    PubMed

    Bu, Hongda; Gan, Yanglan; Wang, Yang; Zhou, Shuigeng; Guan, Jihong

    2017-10-16

    Studies have shown that enhancers are significant regulatory elements that play crucial roles in gene expression regulation. Since enhancer function is unrelated to the orientation of, and can act at large distances from, target genes, accurately predicting distal enhancers remains a challenging task. In the past years, with the development of high-throughput ChIP-seq technologies, several computational techniques have emerged to predict enhancers using epigenetic or genomic features. Nevertheless, the inconsistency of computational models across different cell lines and the unsatisfactory prediction performance call for further research in this area. Here, we propose a new Deep Belief Network (DBN) based computational method for enhancer prediction, called EnhancerDBN. This method combines diverse features, composed of DNA sequence compositional features, DNA methylation and histone modifications. Our computational results indicate that (1) EnhancerDBN outperforms 13 existing methods in prediction, and (2) GC content and DNA methylation can serve as relevant features for enhancer prediction. Deep learning is effective in boosting the performance of enhancer prediction.

  9. Construction of ontology augmented networks for protein complex prediction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian

    2013-01-01

    Protein complexes are of great importance in understanding the principles of cellular organization and function. The increase in available protein-protein interaction data, gene ontology and other resources makes it possible to develop computational methods for protein complex prediction. Most existing methods focus mainly on the topological structure of protein-protein interaction networks, and largely ignore the gene ontology annotation information. In this article, we constructed ontology augmented networks with protein-protein interaction data and gene ontology, which effectively unify the topological structure of protein-protein interaction networks and the similarity of gene ontology annotations into a single distance measure. After constructing ontology augmented networks, a novel method (clustering based on ontology augmented networks) was proposed to predict protein complexes, which is capable of taking into account the topological structure of the protein-protein interaction network as well as the similarity of gene ontology annotations. Our method was applied to two different yeast protein-protein interaction datasets and predicted many well-known complexes. The experimental results showed that (i) ontology augmented networks and the unified distance measure can effectively combine structural closeness and gene ontology annotation similarity; (ii) our method is valuable in predicting protein complexes and has higher F1 and accuracy compared to other competing methods.

  10. Does experience of the 'occult' predict use of complementary medicine? Experience of, and beliefs about, both complementary medicine and ways of telling the future.

    PubMed

    Furnham, A

    2000-12-01

    This study looked at the relationship between ratings of the perceived effectiveness of 24 methods for telling the future, 39 complementary medicine (CM) therapies, and 12 specific attitude statements about science and medicine. A total of 159 participants took part. The results showed that the participants were deeply sceptical of the effectiveness of the methods for telling the future, which factored into meaningful and interpretable factors. Participants were much more positive about particular, but not all, specialties of complementary medicine, which also factored into a meaningful factor structure. Finally, the 12 attitude-to-science/medicine statements revealed four factors: scepticism of medicine; the importance of psychological factors; patient protection; and the importance of scientific evaluation. Regression analysis showed that belief in the total effectiveness of different ways of predicting the future was best predicted by beliefs in the effectiveness of the CM therapies. Although interest in the occult was associated with interest in CM, participants were able to distinguish between the two, and displayed scepticism about the effectiveness of methods of predicting the future and some CM therapies. Copyright 2000 Harcourt Publishers Ltd.

  11. Internal performance predictions for Langley scramjet engine module

    NASA Technical Reports Server (NTRS)

    Pinckney, S. Z.

    1978-01-01

    A one-dimensional theoretical method for the prediction of the internal performance of a scramjet engine is presented. The effects of changes in vehicle forebody flow parameters and characteristics on predicted thrust for the scramjet engine were evaluated using this method, and results are presented. A theoretical evaluation of the effects of changes in the scramjet engine's internal parameters is also presented. Theoretical internal performance predictions, in terms of thrust coefficient and specific impulse, are provided for the scramjet engine for free-stream Mach numbers of 5, 6, and 7, a free-stream dynamic pressure of 23,940 N/sq m, forebody surface angles of 4.6 deg to 14.6 deg, and a fuel equivalence ratio of 1.0.

  12. Aircraft noise prediction program theoretical manual, part 1

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.

    1982-01-01

    Aircraft noise prediction theoretical methods are given. The prediction of data which affect noise generation and propagation is addressed. These data include the aircraft flight dynamics, the source noise parameters, and the propagation effects.

  13. Predicting flight delay based on multiple linear regression

    NASA Astrophysics Data System (ADS)

    Ding, Yi

    2017-08-01

    Flight delay has been regarded as one of the toughest difficulties in aviation control, and establishing an effective model to handle the delay prediction problem is significant work. To address the difficulty of predicting flight delay, this study proposes a method to model arriving flights and a multiple linear regression algorithm to predict delay, compared against the Naive Bayes and C4.5 approaches. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model is approximately 80%, an improvement over the Naive Bayes and C4.5 approaches. Testing shows that this method is convenient to compute and can predict flight delays effectively, providing a decision basis for airport authorities.
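
    A hedged sketch of the regression step (the predictors named in the comment are invented for illustration; the paper's exact feature set is not given in the abstract):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 4))   # e.g. scheduled hour, traffic, weather, carrier
        y = rng.normal(size=1000)        # delay in minutes (toy stand-in data)

        mlr = LinearRegression().fit(X[:800], y[:800])
        predicted_delay = mlr.predict(X[800:])   # delays for held-out flights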

  14. Guidelines for reporting and using prediction tools for genetic variation analysis.

    PubMed

    Vihinen, Mauno

    2013-02-01

    Computational prediction methods are widely used for the analysis of human genome sequence variants and their effects on gene/protein function, splice site aberration, pathogenicity, and disease risk. New methods are frequently developed. We believe that guidelines are essential for those writing articles about new prediction methods, as well as for those applying these tools in their research, so that the necessary details are reported. This will enable readers to gain the full picture of technical information, performance, and interpretation of results, and to facilitate comparisons of related methods. Here, we provide instructions on how to describe new methods, report datasets, and assess the performance of predictive tools. We also discuss what details of predictor implementation are essential for authors to understand. Similarly, these guidelines for the use of predictors provide instructions on what needs to be delineated in the text, as well as how researchers can avoid unwarranted conclusions. They are applicable to most prediction methods currently utilized. By applying these guidelines, authors will help reviewers, editors, and readers to more fully comprehend prediction methods and their use. © 2012 Wiley Periodicals, Inc.

  15. Comparative Study on Prediction Effects of Short Fatigue Crack Propagation Rate by Two Different Calculation Methods

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao

    2017-05-01

    To describe the complicated nonlinear process of fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated from a replica fatigue short crack test with nine smooth funnel-shaped specimens and observation of the replica films, according to the effective short fatigue crack principle. Owing to the fast decay and nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic searching and global optimization of genetic algorithms, a genetic wavelet neural network can capture the implicit complex nonlinear relationship when multiple influencing factors are considered synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared using the Genetic Wavelet Neural Network. The simulation results show that the Genetic Wavelet Neural Network is a rational and effective method for studying the evolution behavior of the fatigue short crack propagation rate. Meanwhile, a traditional data-fitting method for a short crack growth model is also utilized to fit the test data and is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects of these two methods is interpreted.

  16. Evaluation of methods for determining hardware projected life

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life prediction techniques. The results indicate that there are no accurate quantitative means to predict life for system-level hardware. The effectiveness of test programs and the causes of hardware failures are also considered.

  17. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because the designer must meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods, a modified linear theory method and a nonlinear method, for signature shapes that were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in sonic boom disturbance, and the importance of three-dimensional effects that could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  18. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    PubMed Central

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands, and then construct the architecture of the prediction model. Several base predictors are adopted to compose the ensemble model, and the structure and learning algorithm of the fuzzy neural network are investigated. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, different criteria are adopted to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896

  19. HIV-1 protease cleavage site prediction based on two-stage feature selection method.

    PubMed

    Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong

    2013-03-01

    Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an increase in accuracy over the original feature set of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.

  20. Comparison of four statistical and machine learning methods for crash severity prediction.

    PubMed

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, namely Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; development of a crash-costs-based approach for comparing crash severity prediction methods; and investigation of the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC improved MNL and RF but weakened NNC. The overall correct prediction rate gave almost exactly the opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
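
    The costs-based idea can be made concrete in a few lines. A sketch with placeholder cost values (the paper's actual crash-cost figures are not given in the abstract): each prediction is penalized by the cost discrepancy it implies rather than counted as simply right or wrong.

        import numpy as np

        COST = {0: 2_500, 1: 60_000, 2: 1_200_000}   # e.g. PDO / injury / fatal (illustrative)

        def cost_based_accuracy(y_true, y_pred):
            true_cost = np.array([COST[c] for c in y_true])
            pred_cost = np.array([COST[c] for c in y_pred])
            return 1.0 - np.abs(pred_cost - true_cost).sum() / true_cost.sum()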

  1. A Comparison of Prediction Methods for Design of Pump as Turbine for Small Hydro Plant: Implemented Plant

    NASA Astrophysics Data System (ADS)

    Naeimi, Hossein; Nayebi Shahabi, Mina; Mohammadi, Sohrab

    2017-08-01

    In developing countries, small and micro hydropower plants are very effective sources of electricity generation, with an energy pay-back time (EPBT) shorter than that of other conventional electricity generation systems. Using a pump as turbine (PAT) is an attractive and cost-effective alternative. Pump manufacturers do not normally provide characteristic curves for their pumps working as turbines, so choosing an appropriate pump to work as a turbine is essential when implementing small-hydro plants. In this paper, in order to find the best-fitting method for choosing a PAT, the results of a small-hydro plant implemented on the by-pass of a Pressure Reducing Valve (PRV) in the city of Urmia, Iran are presented. Several methods for predicting the best efficiency point (BEP) of PATs are derived, the results of the implemented project are compared with the predictions, the deviations from measured data are considered and discussed, and the method that predicts the PAT specifications most accurately is determined. Finally, the energy pay-back time for the plant is calculated.
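    Two BEP conversion correlations frequently quoted in the PAT literature (Stepanoff-type and Sharma-type forms) are sketched below; the exponents are given as commonly cited and should be checked against a primary source before use:

    ```python
    def stepanoff(q_bep, h_bep, eta_bep):
        """Stepanoff-type conversion: Qt = Qp / sqrt(eta), Ht = Hp / eta."""
        return q_bep / eta_bep ** 0.5, h_bep / eta_bep

    def sharma(q_bep, h_bep, eta_bep):
        """Sharma-type conversion: Qt = Qp / eta**0.8, Ht = Hp / eta**1.2."""
        return q_bep / eta_bep ** 0.8, h_bep / eta_bep ** 1.2

    # Illustrative pump BEP: 50 L/s flow, 20 m head, 78% efficiency.
    for name, conv in (("Stepanoff", stepanoff), ("Sharma", sharma)):
        qt, ht = conv(0.050, 20.0, 0.78)
        print(f"{name}: Qt = {qt * 1000:.1f} L/s, Ht = {ht:.1f} m")
    ```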

  2. [Effect of stock abundance and environmental factors on the recruitment success of small yellow croaker in the East China Sea].

    PubMed

    Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua

    2015-02-01

    Multiple hypotheses are available to explain recruitment rate. Model selection methods can be used to identify the model that best supports a particular hypothesis. However, using a single model to estimate recruitment success is often inadequate for overexploited populations because of high model uncertainty. In this study, stock-recruitment data of small yellow croaker in the East China Sea, collected from fishery-dependent and fishery-independent surveys between 1992 and 2012, were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist criteria (AIC, maximum adjusted R2 and P-values) and a Bayesian method (Bayesian model averaging, BMA) were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP) and runoff of the Changjiang River (RCR). Mean absolute error, mean squared predictive error and the continuous ranked probability score were calculated to evaluate the predictive performance for recruitment success. The results showed that model structures were not consistent across the three model selection methods: the predictive variables were spawning abundance and MWS by AIC, spawning abundance alone by P-values, and spawning abundance, MWS and RCR by maximum adjusted R2. Recruitment success decreased linearly with stock abundance (P < 0.01), suggesting that an overcompensation effect in recruitment success might be due to cannibalism or food competition. Meridional wind intensity showed a marginally significant positive effect on recruitment success (P = 0.06), while runoff of the Changjiang River showed a marginally negative effect (P = 0.07). Based on mean absolute error and the continuous ranked probability score, the predictive error of models obtained from BMA was the smallest among the approaches, while that of models selected by the P-values of the independent variables was the highest; by mean squared predictive error, models selected by maximum adjusted R2 were the worst. We found that the BMA method could improve the prediction of recruitment success, derive more accurate prediction intervals and quantitatively evaluate model uncertainty.
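    As a lightweight stand-in for full BMA, the sketch below averages candidate linear models with Akaike weights on mock data; the predictor names are illustrative and the weighting scheme is an approximation, not the paper's Bayesian procedure:

    ```python
    import itertools
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(8)
    n = 80
    # Mock standardized predictors, e.g. spawning abundance, MWS, RCR.
    X_all = rng.normal(size=(n, 3))
    y = 1.5 - 0.8 * X_all[:, 0] + 0.3 * X_all[:, 1] + rng.normal(0, 0.5, n)

    fits = []
    for k in range(1, 4):
        for cols in itertools.combinations(range(3), k):
            m = LinearRegression().fit(X_all[:, cols], y)
            rss = np.sum((y - m.predict(X_all[:, cols])) ** 2)
            aic = n * np.log(rss / n) + 2 * (len(cols) + 1)
            fits.append((aic, cols, m))

    aic = np.array([f[0] for f in fits])
    w = np.exp(-0.5 * (aic - aic.min()))
    w /= w.sum()                                   # Akaike weights
    # Model-averaged prediction: weighted sum over all candidate models.
    y_avg = sum(wi * m.predict(X_all[:, cols]) for wi, (_, cols, m) in zip(w, fits))
    print("model weights:", np.round(w, 2))
    ```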

  3. Progressive damage, fracture predictions and post mortem correlations for fiber composites

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Lewis Research Center is involved in the development of computational mechanics methods for predicting the structural behavior and response of composite structures. In conjunction with the analytical methods development, experimental programs including post failure examination are conducted to study various factors affecting composite fracture such as laminate thickness effects, ply configuration, and notch sensitivity. Results indicate that the analytical capabilities incorporated in the CODSTRAN computer code are effective in predicting the progressive damage and fracture of composite structures. In addition, the results being generated are establishing a data base which will aid in the characterization of composite fracture.

  4. Rectification of depth measurement using pulsed thermography with logarithmic peak second derivative method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin

    2018-03-01

    The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography, and it is widely accepted that the method is independent of defect size. Its theoretical model is based on the one-dimensional solution of heat conduction, which neglects the effect of defect size. When a decay term accounting for the defect aspect ratio is introduced into the solution to correct for three-dimensional thermal diffusion, our analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, and verified it against experimental results for stainless steel and glass fiber reinforced polymer (GFRP) samples. We also propose an improved LPSD method for depth prediction that accounts for the effect of defect size, and the rectification results for the stainless steel and GFRP samples are presented and discussed.
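    A numerical sketch of the LPSD idea under the standard one-dimensional assumption (method-of-images cooling curve for a plate, illustrative material constants, no three-dimensional correction): locate the peak of the second derivative of log temperature versus log time.

    ```python
    import numpy as np

    alpha = 4.0e-6   # thermal diffusivity, m^2/s (illustrative value)
    L = 1.0e-3       # defect depth, m

    t = np.logspace(-3, 2, 4000)
    # 1D surface temperature over a plate of thickness L after a Dirac pulse
    # (method-of-images series; amplitude constants dropped).
    series = sum(np.exp(-(n * L) ** 2 / (alpha * t)) for n in range(1, 50))
    T = (1.0 + 2.0 * series) / np.sqrt(t)

    # Second derivative of log temperature vs log time; its peak defines the
    # characteristic time used for depth inversion.
    lnT, lnt = np.log(T), np.log(t)
    d2 = np.gradient(np.gradient(lnT, lnt), lnt)
    print(f"characteristic time: {t[np.argmax(d2)]:.3f} s")
    ```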

  5. SPRINT: ultrafast protein-protein interaction prediction of the entire human interactome.

    PubMed

    Li, Yiwei; Ilie, Lucian

    2017-11-15

    Proteins usually perform their functions by interacting with other proteins, and predicting which proteins interact is a fundamental problem. Experimental methods are slow, expensive, and have a high error rate. Many computational methods have been proposed, among which sequence-based ones are very promising. However, so far no such method has been able to predict the entire human interactome effectively: they require too much time or memory. We present SPRINT (Scoring PRotein INTeractions), a new sequence-based algorithm and tool for predicting protein-protein interactions. We comprehensively compare SPRINT with state-of-the-art programs on seven of the most reliable human PPI datasets and show that it is more accurate while running orders of magnitude faster and using very little memory. SPRINT is the only sequence-based program that can effectively predict the entire human interactome: it requires between 15 and 100 min, depending on the dataset. Our goal is to transform the very challenging problem of predicting the entire human interactome into a routine task. The source code of SPRINT is freely available from https://github.com/lucian-ilie/SPRINT/ and the datasets and predicted PPIs from www.csd.uwo.ca/faculty/ilie/SPRINT/.

  6. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in the parameterization schemes of sub-grid scale physical processes, and the numerical values of these parameters are currently specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
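    A toy sketch of the EPPES sample-score-update loop, reduced to one scalar parameter and a trivial "forecast model"; all names and constants are illustrative, and the real system updates the proposal per operational ensemble cycle:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_theta, obs_sigma = 1.7, 0.3
    mu, sigma = 0.0, 1.0                      # proposal distribution moments

    for cycle in range(50):
        theta = rng.normal(mu, sigma, size=20)         # one draw per member
        obs = true_theta + rng.normal(0.0, obs_sigma)  # verifying observation
        # Toy "forecast model": the parameter itself is the forecast, so the
        # likelihood scores how well each member matches the observation.
        w = np.exp(-0.5 * ((theta - obs) / obs_sigma) ** 2)
        w /= w.sum()
        # Feed the relative merits back into the proposal distribution.
        mu = np.sum(w * theta)
        sigma = max(0.05, np.sqrt(np.sum(w * (theta - mu) ** 2)))

    print(f"estimated parameter: {mu:.2f} (truth: {true_theta})")
    ```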

  7. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements of our method should be reduced.
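    For orientation, a small sketch of where both matrices come from in a toy univariate mixed model (y = Xb + Zu + e with independent random effects); this illustrates the quantities discussed above, not the paper's correction itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, n_groups, n_animals = 40, 4, 10

    # Incidence matrices: contemporary groups (fixed) and animals (random).
    X = np.zeros((n, n_groups)); X[np.arange(n), rng.integers(0, n_groups, n)] = 1
    Z = np.zeros((n, n_animals)); Z[np.arange(n), rng.integers(0, n_animals, n)] = 1
    sigma_e2, sigma_u2 = 1.0, 0.5
    lam = sigma_e2 / sigma_u2

    # Henderson's mixed model equations for y = Xb + Zu + e, u ~ N(0, I*sigma_u2).
    C = np.block([[X.T @ X,            X.T @ Z],
                  [Z.T @ X, Z.T @ Z + lam * np.eye(n_animals)]])
    Cinv = np.linalg.pinv(C)   # pinv guards against confounded toy designs

    PEV = sigma_e2 * Cinv[n_groups:, n_groups:]  # prediction error (co)variance
    Vb  = sigma_e2 * Cinv[:n_groups, :n_groups]  # var-cov of estimated fixed effects
    print(np.round(np.diag(PEV), 3))
    ```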

  8. Computational Prediction of the Global Functional Genomic Landscape: Applications, Methods and Challenges

    PubMed Central

    Zhou, Weiqiang; Sherwood, Ben; Ji, Hongkai

    2017-01-01

    Technological advances have led to an explosive growth of high-throughput functional genomic data. Exploiting the correlation among different data types, it is possible to predict one functional genomic data type from other data types. Prediction tools are valuable in understanding the relationship among different functional genomic signals. They also provide a cost-efficient solution to inferring the unknown functional genomic profiles when experimental data are unavailable due to resource or technological constraints. The predicted data may be used for generating hypotheses, prioritizing targets, interpreting disease variants, facilitating data integration, quality control, and many other purposes. This article reviews various applications of prediction methods in functional genomics, discusses analytical challenges, and highlights some common and effective strategies used to develop prediction methods for functional genomic data. PMID:28076869

  9. Application of statistical classification methods for predicting the acceptability of well-water quality

    NASA Astrophysics Data System (ADS)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-06-01

    The application of statistical classification methods for predicting the acceptability of well-water quality is investigated, in comparison with spatial interpolation methods, for a situation in which an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, the aquifer is locally affected by saline water, and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict whether the chloride concentration in a water well will exceed the allowable concentration, making the water unfit for the intended use. A statistical classification algorithm achieved the best predictive performance, and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems in hydrogeological systems that are too difficult to describe analytically or to simulate effectively.

  10. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    PubMed Central

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is critical for effective maintenance decision making, and remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction for cases where sufficient run-to-failure condition monitoring data are available has been researched thoroughly, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal operation to failure; only a limited number of condition indicators over a limited period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Fluctuating condition features also degrade prediction. To address these difficulties, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination by increase rate, change point detection, and rolling prediction. The proposed method has two principal strengths: it does not need to assume that the degradation trajectory follows a particular distribution, and it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873

  11. A Bayesian antedependence model for whole genome prediction.

    PubMed

    Yang, Wenzhao; Tempelman, Robert J

    2012-04-01

    Hierarchical mixed effects models have been demonstrated to be powerful for predicting the genomic merit of livestock and plants on the basis of high-density single-nucleotide polymorphism (SNP) marker panels, and their use is being increasingly advocated for genomic predictions in human health. Two particularly popular approaches, labeled BayesA and BayesB, are based on specifying all SNP-associated effects to be independent of each other. BayesB extends BayesA by allowing a large proportion of SNP markers to be associated with null effects. We further extend these two models to specify SNP effects as being spatially correlated due to the chromosomally proximal effects of causal variants. These two models, which we dub ante-BayesA and ante-BayesB, are based on a first-order nonstationary antedependence specification between SNP effects. In a simulation study involving 20 replicate data sets, each analyzed at six different SNP marker densities with average LD levels ranging from r² = 0.15 to 0.31, the antedependence methods had significantly (P < 0.01) higher accuracies than their corresponding classical counterparts at higher LD levels (r² > 0.24), with differences exceeding 3%. A cross-validation study was also conducted on the heterogeneous stock mice data resource (http://mus.well.ox.ac.uk/mouse/HS/) using 6-week body weights as the phenotype. The antedependence methods increased cross-validation prediction accuracies by up to 3.6% compared to their classical counterparts (P < 0.001). Finally, we applied our method to other benchmark data sets and demonstrated that the antedependence methods were more accurate than their classical counterparts for genomic predictions, even for individuals several generations beyond the training data.

  12. NEW METHODS TO SCREEN FOR DEVELOPMENTAL NEUROTOXICITY.

    EPA Science Inventory

    The development of alternative methods for toxicity testing is driven by the need for scientifically valid data (i.e. predictive of a toxic effect) that can be obtained in a rapid and cost-efficient manner. These predictions will enable decisions to be made as to whether further ...

  13. The importance of information on relatives for the prediction of genomic breeding values and the implications for the makeup of reference data sets in livestock breeding schemes.

    PubMed

    Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J

    2012-02-09

    The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals for the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set retained substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the overall effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.

  14. High performance computation of residual stress and distortion in laser welded 301L stainless sheets

    DOE PAGES

    Huang, Hui; Tsutsumi, Seiichiro; Wang, Jiandong; ...

    2017-07-11

    Transient thermo-mechanical simulation of a stainless steel plate laser welding process was performed with a highly efficient and accurate approach: a hybrid iterative substructure and adaptive mesh method. In particular, residual stress prediction was enhanced by considering various heat effects in the numerical model. The influence of laser welding heat input on the residual stress and welding distortion of stainless thin sheets was investigated by experiment and simulation. X-ray diffraction (XRD) and the contour method were used to measure the surface and internal residual stress, respectively. The effect of strain hardening, annealing and melting on residual stress prediction was clarified through a parametric study. It was shown that these heat effects must be taken into account for accurate prediction of residual stresses in laser welded stainless sheets. Reasonable agreement among the residual stresses obtained by the numerical method, XRD and the contour method was achieved. Buckling-type welding distortion was also well reproduced by the developed thermo-mechanical FEM.

  15. The prediction of pressure distributions on an arrow-wing configuration including the effect of camber, twist, and a wing fin

    NASA Technical Reports Server (NTRS)

    Bobbitt, P. J.; Manro, M. E.; Kulfan, R. M.

    1980-01-01

    Wind tunnel tests of an arrow-wing body configuration consisting of flat, twisted, and cambered-twisted wings were conducted at Mach numbers from 0.40 to 2.50 to provide an experimental data base for comparison with theoretical methods. A variety of leading- and trailing-edge control surface deflections were included in these tests, and in addition the cambered-twisted wing was tested with an outboard vertical fin to determine its effect on wing and control surface loads. Theory-experiment comparisons show that current state-of-the-art linear and nonlinear attached-flow methods are adequate at the small angles of attack typical of cruise conditions. The incremental effects of the outboard fin, wing twist, and wing camber are most accurately predicted by the advanced panel method PANAIR. Results of the advanced panel separated-flow method, obtained with an early version of the program, show promise that accurate detailed pressure predictions may soon be possible for an aeroelastically deformed wing at high angles of attack.

  17. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zhangxing; Li, Yi; Shao, Yimin

    To achieve vehicle light-weighting, chopped carbon fiber sheet molding compound (SMC) has been identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical properties of chopped carbon fiber SMC, owing to the high complexity of its microstructure features and its anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, the Voronoi diagram-based method and the chip packing method, are developed for predicting RVE material properties. The two methods are compared in terms of the predicted elastic modulus, and the predicted results are validated against Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of the two methods are discussed in terms of the required input information and their convenience of use in integrated processing-microstructure-property analysis.

  18. Self-consistency test reveals systematic bias in programs for prediction change of stability upon mutation.

    PubMed

    Usmanova, Dinara R; Bogatyreva, Natalya S; Ariño Bernad, Joan; Eremina, Aleksandra A; Gorshkova, Anastasiya A; Kanevskiy, German M; Lonishin, Lyubov R; Meister, Alexander V; Yakupova, Alisa G; Kondrashov, Fyodor A; Ivankov, Dmitry N

    2018-05-02

    Computational prediction of the effect of mutations on protein stability is used by researchers in many fields. The utility of the prediction methods is affected by their accuracy and bias. Bias, a systematic shift of the predicted change of stability, has been noted as an issue for several methods, but has not been investigated systematically. The presence of bias may lead to misleading results, especially when exploring the effects of combinations of different mutations. Here we use a protocol to measure the bias as a function of the number of introduced mutations. It is based on a self-consistency test of the reciprocity of the effect of a mutation; an advantage of this approach is that it relies solely on crystal structures, without experimentally measured stability values. We applied the protocol to four popular algorithms for predicting the change of protein stability upon mutation, FoldX, Eris, Rosetta, and I-Mutant, and found an inherent bias. For one program, FoldX, we managed to substantially reduce the bias using additional relaxation by Modeller. Authors using algorithms for predicting the effects of mutations should be aware of the bias described here. ivankov13@gmail.com. Supplementary data are available at Bioinformatics online.
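    The reciprocity check itself is simple to state in code: for an unbiased predictor, the forward and reverse ΔΔG of the same mutation should sum to zero on average. A sketch with mock predicted values (not data from the paper):

    ```python
    import numpy as np

    # Mock predicted stability changes for a set of mutations, in kcal/mol:
    # forward (wild-type -> mutant) and reverse (mutant -> wild-type).
    ddg_fwd = np.array([1.2, -0.4, 2.1, 0.3, 1.8])
    ddg_rev = np.array([-0.7, 0.9, -1.2, 0.4, -1.0])

    # For an unbiased predictor, ddg_fwd + ddg_rev scatters around zero;
    # a systematic offset estimates the per-mutation bias.
    bias = np.mean(ddg_fwd + ddg_rev) / 2.0
    print(f"estimated systematic bias: {bias:+.2f} kcal/mol per mutation")
    ```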

  19. Network-based de-noising improves prediction from microarray data.

    PubMed

    Kato, Tsuyoshi; Murata, Yukio; Miura, Koh; Asai, Kiyoshi; Horton, Paul B; Tsuda, Koji; Fujibuchi, Wataru

    2006-03-20

    Prediction of human cell responses to anti-cancer drugs (compounds) from microarray data is a challenging problem, due to the noise properties of microarrays as well as the high variance of living cell responses to drugs. Hence there is a strong need for methods that are more practical and robust than standard methods for real-valued prediction. We devised an extended version of the off-subspace noise-reduction (de-noising) method that incorporates heterogeneous network data, such as sequence similarity or protein-protein interactions, into a single framework. Using that method, we first de-noise the gene expression data for the training and test data, as well as the drug-response data for the training data. Then we predict the unknown responses of each drug from the de-noised input data. To ascertain whether de-noising improves prediction, we carry out 12-fold cross-validation and use Pearson's correlation coefficient between the true and predicted response values as the measure of prediction performance. De-noising improves the prediction performance for 65% of drugs. Furthermore, we found that this noise reduction method is robust and effective even when a large amount of artificial noise is added to the input data. Our extended off-subspace noise-reduction method combining heterogeneous biological data is thus successful and quite useful for improving the prediction of human cancer cell drug responses from microarray data.

  20. A deep learning-based multi-model ensemble method for cancer prediction.

    PubMed

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
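    The described architecture is essentially stacking with a neural-network meta-learner. A hedged, self-contained sketch with scikit-learn on mock expression data (the five base models below are generic choices, not necessarily the paper's):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier,
                                  RandomForestClassifier, StackingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Mock matrix standing in for differentially expressed gene features.
    X, y = make_classification(n_samples=600, n_features=50, random_state=3)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

    base = [("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(random_state=3)),
            ("gb", GradientBoostingClassifier(random_state=3)),
            ("knn", KNeighborsClassifier()),
            ("svm", SVC(probability=True, random_state=3))]
    # A small neural network learns to combine the five classifiers' outputs,
    # replacing simple majority voting.
    ens = StackingClassifier(estimators=base,
                             final_estimator=MLPClassifier(max_iter=2000,
                                                           random_state=3))
    print("ensemble accuracy:", round(ens.fit(X_tr, y_tr).score(X_te, y_te), 3))
    ```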

  1. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision, and dynamic measurement error prediction is an important part of error correction; support vector machines (SVM) are often used to predict the dynamic measurement errors of sensors. Traditionally, the SVM parameters have been set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to escape local optima. To verify the performance of NAPSO-SVM, three algorithms are used to optimize the SVM parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data, and the root mean squared error and mean absolute percentage error are employed to evaluate the performance of the prediction models. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
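    A sketch of the underlying idea, plain PSO tuning two SVR hyperparameters by cross-validated error on mock regression data; the natural-selection and simulated-annealing refinements that define NAPSO are deliberately omitted, and all constants are illustrative:

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=150, n_features=5, noise=5.0, random_state=4)
    rng = np.random.default_rng(4)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # log10 C, log10 gamma

    def fitness(p):
        svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(svr, X, y, cv=3,
                               scoring="neg_root_mean_squared_error").mean()

    n_p = 10
    pos = rng.uniform(lo, hi, size=(n_p, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    for _ in range(15):
        gbest = pbest[np.argmax(pbest_f)]
        vel = (0.7 * vel
               + 1.5 * rng.random((n_p, 1)) * (pbest - pos)
               + 1.5 * rng.random((n_p, 1)) * (gbest - pos))
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    print("best (log10 C, log10 gamma):", np.round(pbest[np.argmax(pbest_f)], 2))
    ```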

  2. Development of a program for toric intraocular lens calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position.

    PubMed

    Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung

    2016-10-01

    To evaluate a toric intraocular lens (IOL) calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatism, as well as ELP, were estimated from a literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with those derived using methods A, B, and C through Monte Carlo simulation. Method A considered keratometric astigmatism and incision-induced keratometric astigmatism; method B added posterior corneal astigmatism; method C added incision-induced posterior corneal astigmatism; and method D added ELP. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D). Linear regression analysis indicated excellent goodness-of-fit between the IAPC and the ray-tracing simulation for the predicted toric IOL cylinder power and its axis. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.

  3. Prediction of Drug-Target Interaction Networks from the Integration of Protein Sequences and Drug Chemical Structures.

    PubMed

    Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Zhou, Yong; An, Ji-Yong

    2017-07-05

    Knowledge of drug-target interactions (DTI) plays an important role in discovering new drug candidates. Unfortunately, the experimental approach to determining DTI has unavoidable shortcomings, including its time-consuming and expensive nature. This motivates the development of effective computational methods to predict DTI from protein sequence. In this paper, we propose a novel computational approach based on protein sequence, namely PDTPS (Predicting Drug Targets with Protein Sequence). The PDTPS method combines bi-gram probabilities (BIGP), Position Specific Scoring Matrix (PSSM), and Principal Component Analysis (PCA) with a Relevance Vector Machine (RVM). To evaluate the prediction capacity of PDTPS, experiments were carried out on enzyme, ion channel, GPCR, and nuclear receptor datasets using five-fold cross-validation tests. The proposed PDTPS method achieved average accuracies of 97.73%, 93.12%, 86.78%, and 87.78% on the enzyme, ion channel, GPCR and nuclear receptor datasets, respectively. The experimental results show that our method has good prediction performance. Furthermore, to evaluate its prediction performance more thoroughly, we compared PDTPS with a state-of-the-art support vector machine (SVM) classifier on the enzyme and ion channel datasets, and with other existing methods on all four datasets. The promising comparison results further demonstrate the efficiency and robustness of the proposed PDTPS method, making it a useful tool for predicting DTI as well as for other bioinformatics tasks.

  5. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory.

    PubMed

    Yang, Haimin; Pan, Zhisong; Tao, Qing

    2017-01-01

    Online time series prediction is the mainstream method in a wide range of fields, from speech analysis and noise cancellation to stock market analysis. However, real-world data often contain many outliers, increasingly so as the length of the time series grows. These outliers can mislead the learned model if treated as normal points in the prediction process. To address this issue, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) networks predicting time series with outliers. The method tunes the learning rate of the stochastic gradient algorithm adaptively during prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average, by modifying Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large relative prediction error corresponds to a small learning rate, and vice versa. Experiments on both synthetic data and real time series show that our method achieves better performance than existing LSTM-based methods.
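    A toy sketch of only the learning-rate scaling idea as described above, with invented constants; the full method modifies the Adam update inside LSTM training, which is not reproduced here:

    ```python
    # Track the relative prediction error with a weighted average and shrink
    # the step when it spikes (outlier), so the outlier moves the weights less.
    base_lr, r, loss_prev = 0.01, 1.0, 1.0
    for t, loss in enumerate([0.90, 0.85, 5.00, 0.80, 0.78], start=1):  # 5.00 ~ outlier
        rel = loss / loss_prev                 # relative prediction error
        r = 0.9 * r + 0.1 * rel                # weighted running average
        lr = base_lr / max(r, 1.0)             # large error -> small learning rate
        print(f"step {t}: loss={loss:.2f} lr={lr:.5f}")
        loss_prev = loss
    ```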

  6. The 'Arm Force Field' method to predict manual arm strength based on only hand location and force direction.

    PubMed

    La Delfa, Nicholas J; Potvin, Jim R

    2017-03-01

    This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. The method used an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and included a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r² = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r² = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm than current approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Integrative Approaches for Predicting in vivo Effects of Chemicals from their Structural Descriptors and the Results of Short-term Biological Assays

    PubMed Central

    Low, Yen S.; Sedykh, Alexander; Rusyn, Ivan; Tropsha, Alexander

    2017-01-01

    Cheminformatics approaches such as Quantitative Structure Activity Relationship (QSAR) modeling have been used traditionally for predicting chemical toxicity. In recent years, high throughput biological assays have been increasingly employed to elucidate mechanisms of chemical toxicity and predict toxic effects of chemicals in vivo. The data generated in such assays can be considered as biological descriptors of chemicals that can be combined with molecular descriptors and employed in QSAR modeling to improve the accuracy of toxicity prediction. In this review, we discuss several approaches for integrating chemical and biological data for predicting biological effects of chemicals in vivo and compare their performance across several data sets. We conclude that while no method consistently shows superior performance, the integrative approaches rank consistently among the best yet offer enriched interpretation of models over those built with either chemical or biological data alone. We discuss the outlook for such interdisciplinary methods and offer recommendations to further improve the accuracy and interpretability of computational models that predict chemical toxicity. PMID:24805064

  8. Prediction of cancer cell sensitivity to natural products based on genomic and chemical properties.

    PubMed

    Yue, Zhenyu; Zhang, Wenna; Lu, Yongming; Yang, Qiaoyue; Ding, Qiuying; Xia, Junfeng; Chen, Yan

    2015-01-01

    Natural products play a significant role in cancer chemotherapy. They are likely to provide many lead structures, which can be used as templates for the construction of novel drugs with enhanced antitumor activity. Traditional research approaches studied the structure-activity relationships of natural products and identified key structural properties, such as chemical bonds or groups, with the purpose of ascertaining their effect on a single cell line or a single tissue type. Here, for the first time, we develop a machine learning method to comprehensively predict natural product responses against a panel of cancer cell lines based on both gene expression and the chemical properties of the natural products. The results on two datasets, a training set and an independent test set, show that the proposed method yields significantly better prediction accuracy. In addition, we demonstrate the predictive power of our method by modeling cancer cell sensitivity to two natural products, Curcumin and Resveratrol, indicating that the method can effectively predict the response of cancer cell lines to these compounds. Taken together, the method will facilitate the identification of natural products as cancer therapies and the development of precision medicine by linking the features of patient genomes to natural product sensitivity.

  9. OVERBURDEN MINERALOGY AS RELATED TO GROUND-WATER CHEMICAL CHANGES IN COAL STRIP MINING

    EPA Science Inventory

    A research program was initiated to define and develop an inclusive, effective, and economical method for predicting potential ground-water quality changes resulting from the strip mining of coal in the Western United States. To utilize the predictive method, it is necessary to s...

  10. Research agenda for integrated landscape modeling

    Treesearch

    Samuel A. Cushman; Donald McKenzie; David L. Peterson; Jeremy Littell; Kevin S. McKelvey

    2007-01-01

    Reliable predictions of how changing climate and disturbance regimes will affect forest ecosystems are crucial for effective forest management. Current fire and climate research in forest ecosystem and community ecology offers data and methods that can inform such predictions. However, research in these fields occurs at different scales, with disparate goals, methods,...

  11. Utilization of the Generalized Method of Cells to Analyze the Deformation Response of Laminated Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2012-01-01

    In order to practically utilize ceramic matrix composites in aircraft engine components, robust analysis tools are required that can simulate the material response in a computationally efficient manner. The MAC/GMC software developed at NASA Glenn Research Center, based on the Generalized Method of Cells micromechanics method, has the potential to meet this need. Using MAC/GMC, the effective stiffness properties, proportional limit stress and ultimate strength can be predicted from the properties and response of the individual constituents. In this paper, the effective stiffness and strength properties of a representative laminated ceramic matrix composite with a large-diameter fiber are predicted for a variety of fiber orientation angles and laminate orientations. As part of the analytical study, methods are developed to determine the in-situ stiffness and strength properties of the constituents required to appropriately simulate the effective composite response. The stiffness properties of the representative composite were adequately predicted for all of the fiber orientations and laminate configurations examined in this study. The proportional limit stresses and strains and the ultimate stresses and strains were predicted with varying levels of accuracy, depending on the laminate orientation. For the cases where the predictions did not reach the desired level of accuracy, the specific issues in the micromechanics theory responsible for the difficulties were identified and can be addressed in future work.

  12. The Causal Meaning of Genomic Predictors and How It Affects Construction and Comparison of Genome-Enabled Selection Models

    PubMed Central

    Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.

    2015-01-01

    The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318

  13. Predicting Drug Combination Index and Simulating the Network-Regulation Dynamics by Mathematical Modeling of Drug-Targeted EGFR-ERK Signaling Pathway

    NASA Astrophysics Data System (ADS)

    Huang, Lu; Jiang, Yuyang; Chen, Yuzong

    2017-01-01

    Synergistic drug combinations enable enhanced therapeutics. Their discovery typically involves the measurement and assessment of the drug combination index (CI), which can be facilitated by the development and application of in-silico CI prediction tools. In this work, we developed and tested the ability of a mathematical model of the drug-targeted EGFR-ERK pathway to predict CIs and to analyze multiple synergistic drug combinations against observations. Our mathematical model was validated against literature-reported signaling, drug response dynamics, and the EGFR-MEK drug combination effect. The predicted CIs and combination therapeutic effects of the EGFR-BRaf, BRaf-MEK, FTI-MEK, and FTI-BRaf inhibitor combinations showed consistent synergism. Our results suggest that existing pathway models may be extended into drug-targeted pathway models to predict drug combination CI values, isobolograms, and drug-response surfaces, as well as to analyze the dynamics of individual drugs and their combinations. With our model, the efficacy of potential drug combinations can be predicted. Our method complements existing in-silico methods (e.g., chemogenomic profiles and statistically inferred network models) by predicting drug combination effects from the perspective of pathway dynamics using experimental or validated molecular kinetic constants, thereby facilitating the collective prediction of drug combination effects in diverse disease systems.
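    For reference, the combination index is commonly quantified with the Chou-Talalay form; a minimal sketch with illustrative doses (not values from this study):

    ```python
    def combination_index(d1, d2, dx1, dx2):
        """Chou-Talalay-style CI: doses d1, d2 used in combination, versus the
        doses dx1, dx2 of each drug alone producing the same effect level."""
        return d1 / dx1 + d2 / dx2

    # Illustrative numbers: CI < 1 suggests synergy, CI > 1 antagonism.
    print(round(combination_index(2.0, 1.0, 8.0, 6.0), 2))  # 0.42 -> synergistic
    ```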

  14. DETERMINING EFFECTIVE INTERFACIAL TENSION AND PREDICTING FINGER SPACING FOR DNAPL PENETRATION INTO WATER-SATURATED POROUS MEDIA. (R826157)

    EPA Science Inventory

    The difficulty in determining the effective interfacial tension limits the prediction of the wavelength of fingering of immiscible fluids in porous media. A method to estimate the effective interfacial tension using fractal concepts was presented by Chang et al. [Water Resour. Re...

  15. RRCRank: a fusion method using rank strategy for residue-residue contact prediction.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian

    2017-09-02

    In structural biology, protein residue-residue contacts play a crucial role in protein structure prediction; predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. Over the last few decades, researchers have developed various methods to predict residue-residue contacts, and in recent years significant performance gains have been achieved by fusion methods. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike traditional regression or classification strategies, the contact prediction task is treated as a ranking task. Two kinds of features are extracted from correlated mutation methods and ensemble machine-learning classifiers, and the proposed method then uses a learning-to-rank algorithm to predict the contact probability of each residue pair. First, we performed two benchmark tests of the proposed fusion method (RRCRank) on the CASP11 and CASP12 datasets. The results show that RRCRank outperforms other well-developed methods, especially for medium and short range contacts. Second, to verify the superiority of the ranking strategy, we predicted contacts using the traditional regression and classification strategies based on the same features. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for all three contact types, in particular for long range contacts. Third, RRCRank was compared with several state-of-the-art methods in CASP11 and CASP12; it achieves comparable prediction precision and is better than three of the methods on most assessment metrics. The learning-to-rank algorithm is thus introduced to develop a novel rank-based method for residue-residue contact prediction, which achieves state-of-the-art performance in this extensive assessment.
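    To make the contrast with regression/classification concrete, here is a generic pairwise reduction of ranking (RankSVM-style) on mock residue-pair features; this is a stand-in for the idea, not the authors' algorithm:

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(5)
    X = rng.normal(size=(30, 8))               # mock residue-pair feature vectors
    score = X @ rng.normal(size=8)             # latent "contact propensity"

    # Pairwise reduction: learn w so that (x_i - x_j) separates "i ranks above j".
    pairs = [X[i] - X[j] for i, j in combinations(range(30), 2)]
    labels = [int(score[i] > score[j]) for i, j in combinations(range(30), 2)]
    clf = LinearSVC().fit(np.array(pairs), labels)

    ranking = np.argsort(-(X @ clf.coef_.ravel()))  # highest-scoring pairs first
    print("top-5 candidate contacts:", ranking[:5])
    ```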

  16. Efficient network disintegration under incomplete information: the comic effect of link prediction

    NASA Astrophysics Data System (ADS)

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of the matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information, proposing an effective method to find the critical nodes with the assistance of link prediction techniques. Extensive experiments on both synthetic and real networks suggest that using link prediction to recover part of the missing links in advance can largely improve network disintegration performance. Moreover, to our surprise, we find that when the amount of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the "comic effect" of link prediction: the network is reshaped through the addition of links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, in which the important parts are emphasized.
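    A toy pipeline in this spirit, assuming networkx: recover likely missing links with a similarity index (resource allocation, one of several choices), then attack high-degree nodes of the reshaped network. The specific index and attack strategy are illustrative choices, not necessarily the paper's:

    ```python
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 3, seed=6)          # "true" network
    G_obs = G.copy()
    hidden = list(G.edges())[::10]                        # hide ~10% of the links
    G_obs.remove_edges_from(hidden)

    # Link prediction: restore the top-scoring non-edges before the attack.
    scores = sorted(nx.resource_allocation_index(G_obs), key=lambda t: -t[2])
    G_obs.add_edges_from((u, v) for u, v, _ in scores[:len(hidden)])

    # Disintegration: target the highest-degree nodes of the reshaped network,
    # then measure the damage on the true network.
    targets = [n for n, _ in sorted(G_obs.degree, key=lambda t: -t[1])[:20]]
    H = G.copy()
    H.remove_nodes_from(targets)
    giant = max(nx.connected_components(H), key=len)
    print("giant component after attack:", len(giant))
    ```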

  17. A prediction model of drug-induced ototoxicity developed by an optimal support vector machine (SVM) method.

    PubMed

    Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong

    2014-08-01

    Drug-induced ototoxicity, as a toxic side effect, is an important issue that needs to be considered in drug discovery. Nevertheless, the current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for large-scale evaluation of drug-induced ototoxicity in the early stages of drug discovery. In this investigation we therefore established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also built on the same training sets. Among all the prediction models, GA-CG-SVM model II showed the best performance, with prediction accuracies of 85.33% and 83.05% on two independent test sets, respectively. Overall, the good performance of GA-CG-SVM model II indicates that it could be used to predict drug-induced ototoxicity in the early stages of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Effects of Prediction and Contextual Support on Lexical Processing: Prediction takes Precedence

    PubMed Central

    Brothers, Trevor; Swaab, Tamara Y.; Traxler, Matthew J.

    2014-01-01

    Readers may use contextual information to anticipate and pre-activate specific lexical items during reading. However, prior studies have not clearly dissociated the effects of accurate lexical prediction from other forms of contextual facilitation such as plausibility or semantic priming. In this study, we measured electrophysiological responses to predicted and unpredicted target words in passages providing varying levels of contextual support. This method was used to isolate the neural effects of prediction from other potential contextual influences on lexical processing. While both prediction and discourse context influenced ERP amplitudes within the time range of the N400, the effects of prediction occurred much more rapidly, preceding contextual facilitation by approximately 100 ms. In addition, a frontal post-N400 positivity (PNP) was modulated by both prediction accuracy and the overall plausibility of the preceding passage. These results suggest a unique temporal primacy for prediction in facilitating lexical access. They also suggest that the frontal PNP may index the costs of revising discourse representations following an incorrect lexical prediction. PMID:25497522

  20. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 more model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercepts of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines and the GK method gave important savings in computing time compared with the multi-environment G×E interaction models with unstructured variance-covariances, but at lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.

  1. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 more model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercepts of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines and the GK method gave important savings in computing time compared with the multi-environment G×E interaction models with unstructured variance-covariances, but at lower genomic prediction accuracy. PMID:29476023
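
    For concreteness, a sketch of the two kernels the study compares, used here inside ordinary kernel ridge regression rather than the paper's Bayesian multi-environment models; the marker matrix and phenotype are simulated placeholders.

```python
# Sketch of a linear genomic-relationship kernel (GB) and a Gaussian kernel
# (GK), each plugged into plain kernel ridge regression.
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(0, 3, size=(100, 500)).astype(float)  # marker codes 0/1/2
y = X[:, :20].sum(axis=1) + rng.normal(size=100)       # toy phenotype

Xc = X - X.mean(axis=0)
K_gb = Xc @ Xc.T / X.shape[1]                          # linear (GBLUP) kernel

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared distances
K_gk = np.exp(-d2 / np.median(d2[d2 > 0]))             # Gaussian kernel

def kernel_ridge_fit_predict(K, y, train, test, lam=1.0):
    """Solve for alpha on the training block, predict the test block."""
    a = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)),
                        y[train])
    return K[np.ix_(test, train)] @ a

train, test = np.arange(80), np.arange(80, 100)
for name, K in [("GB", K_gb), ("GK", K_gk)]:
    yhat = kernel_ridge_fit_predict(K, y, train, test)
    print(name, "accuracy (corr):", round(np.corrcoef(yhat, y[test])[0, 1], 3))
```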

  2. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-processed based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established on the preprocessed data to predict SCB. This paper uses precise SCB data with different sampling intervals provided by the IGS (International Global Navigation Satellite System Service) to carry out short-term prediction experiments, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
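
    A minimal first-order Takagi-Sugeno inference sketch, with illustrative (unfitted) rule centres and coefficients standing in for the parameters the paper trains as a fuzzy neural network:

```python
# First-order Takagi-Sugeno inference: Gaussian memberships weight local
# linear models, and the output is their normalized weighted average.
import numpy as np

centres = np.array([-1.0, 0.0, 1.0])        # rule centres in input space
sigma = 0.7                                  # membership width
coef = np.array([[0.8, -0.1],                # per-rule linear model [a, b]:
                 [1.0,  0.0],                #   y_r = a * x + b
                 [0.9,  0.2]])

def ts_predict(x):
    w = np.exp(-((x - centres) ** 2) / (2 * sigma ** 2))   # firing strengths
    y_rules = coef[:, 0] * x + coef[:, 1]                  # rule outputs
    return float(np.dot(w, y_rules) / w.sum())             # weighted average

print(ts_predict(0.3))
```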

  3. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of motion information of the space target is the premise for realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, dynamic parameters of the target, such as its inertia, angular momentum, and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be acquired by substituting these identified parameters into the Euler equations of the target. Accurate prediction requires precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm, which starts at the rough estimate obtained in the first step and, guided by the gradient of the objective function, finds a global minimum of the objective function. The IPM search for the minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
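
    A sketch of the two-step identification under simplifying assumptions (torque-free Euler equations in inertia-ratio form; scipy's trust-constr solver, an interior-point-type method, standing in for the paper's IPM):

```python
# Step 1: a rough parameter guess; step 2: refine it by minimizing the
# mismatch between observed and predicted body rates.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def euler_rates(w, t, k1, k2, k3):
    # Torque-free Euler equations with inertia ratios
    # k1=(I2-I3)/I1, k2=(I3-I1)/I2, k3=(I1-I2)/I3.
    return [k1 * w[1] * w[2], k2 * w[2] * w[0], k3 * w[0] * w[1]]

t = np.linspace(0.0, 10.0, 200)
w_obs = odeint(euler_rates, [0.1, 0.5, -0.2], t, args=(0.5, -0.8, 0.3))
w_obs += np.random.default_rng(3).normal(scale=1e-3, size=w_obs.shape)

def cost(k):
    # Squared difference between observed and predicted motion.
    w_pred = odeint(euler_rates, w_obs[0], t, args=tuple(k))
    return np.sum((w_pred - w_obs) ** 2)

rough = [0.4, -0.6, 0.2]                             # step 1: rough estimate
best = minimize(cost, rough, method="trust-constr")  # step 2: refinement
print("identified inertia ratios:", best.x.round(3))
```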

  4. Prediction of Protein-Protein Interaction Sites with Machine-Learning-Based Data-Cleaning and Post-Filtering Procedures.

    PubMed

    Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun

    2016-04-01

    Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance with a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure. First, a machine-learning-based data-cleaning procedure is applied to remove, from the majority samples, those marginal targets that may have a negative effect on training a model with a clear classification boundary, thus relieving the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPI predictors and can supplement existing PPI prediction methods.
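
    A minimal sketch of the data-cleaning idea, with out-of-fold probabilities from a random forest flagging marginal majority samples; the paper's exact cleaning rule and post-filtering step are not reproduced.

```python
# Relieve class imbalance by dropping majority-class samples that a
# preliminary model finds ambiguous, then retrain on the cleaned set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=4)

# Out-of-fold probability of the minority class from a preliminary model.
proba = cross_val_predict(RandomForestClassifier(random_state=4), X, y,
                          cv=5, method="predict_proba")[:, 1]

# Keep all minority samples; drop majority samples whose minority-class
# probability creeps toward the decision boundary (illustrative threshold).
keep = (y == 1) | ((y == 0) & (proba < 0.35))
final = RandomForestClassifier(random_state=4).fit(X[keep], y[keep])
print("training size: %d -> %d" % (len(y), keep.sum()))
```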

  5. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    PubMed

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.

  6. Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato.

    PubMed

    Stich, Benjamin; Van Inghelandt, Delphine

    2018-01-01

    Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato, modeling additive, dominance, and epistatic effects, (ii) investigate the effects of the size and makeup of the training set and of the number of test environments and molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. Four different prediction methods were used for genomic prediction: genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO. They were applied to relative area under the disease progress curve after a Phytophthora infestans infection, plant maturity, maturity-corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones, or subsets thereof, genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general, and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended, and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that can be realized in commercial potato breeding programs.

  7. Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato

    PubMed Central

    Stich, Benjamin; Van Inghelandt, Delphine

    2018-01-01

    Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato, modeling additive, dominance, and epistatic effects, (ii) investigate the effects of the size and makeup of the training set and of the number of test environments and molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. Four different prediction methods were used for genomic prediction: genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO. They were applied to relative area under the disease progress curve after a Phytophthora infestans infection, plant maturity, maturity-corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones, or subsets thereof, genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general, and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended, and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that can be realized in commercial potato breeding programs. PMID:29563919

  8. Biot-Gassmann theory for velocities of gas hydrate-bearing sediments

    USGS Publications Warehouse

    Lee, M.W.

    2002-01-01

    Elevated elastic velocities are a distinct physical property of gas hydrate-bearing sediments. A number of velocity models and equations (e.g., pore-filling model, cementation model, effective medium theories, weighted equations, and time-average equations) have been used to describe this effect. In particular, the weighted equation and the effective medium theory predict the elastic properties of unconsolidated gas hydrate-bearing sediments reasonably well. A weakness of the weighted equation is its use of the empirical relationship of the time-average equation as one of its elements. A drawback of the effective medium theory is that it predicts unreasonably high shear-wave velocities at high porosities, so that the predicted velocity ratio does not agree well with the observed one. To overcome these weaknesses, a method is proposed, based on Biot-Gassmann theories, that assumes the velocity ratio (shear to compressional velocity) of an unconsolidated formation is related to the velocity ratio of the formation's matrix material and to its porosity. Using the Biot coefficient calculated from either the weighted equation or the effective medium theory, the proposed method accurately predicts the elastic properties of unconsolidated sediments with or without gas hydrate. This method was applied to the observed velocities at the Mallik 2L-39 well, Mackenzie Delta, Canada.
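
    For reference, the standard Gassmann fluid substitution that such Biot-Gassmann approaches build on (the paper's specific velocity-ratio and Biot-coefficient relations are not reproduced); moduli in GPa, density in g/cm^3, and the numbers are illustrative.

```python
# Gassmann: K_sat = K_dry + (1 - K_dry/K_min)^2
#                   / (phi/K_fl + (1-phi)/K_min - K_dry/K_min^2)
import numpy as np

def gassmann_vp(k_dry, mu, k_min, k_fl, phi, rho):
    """P-wave velocity (km/s) of the fluid-saturated rock."""
    k_sat = k_dry + (1 - k_dry / k_min) ** 2 / (
        phi / k_fl + (1 - phi) / k_min - k_dry / k_min ** 2)
    return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)

# Water-saturated unconsolidated sand: dry-frame K, shear modulus, quartz
# mineral K, water K, porosity, bulk density.
print(gassmann_vp(k_dry=2.0, mu=1.5, k_min=36.6, k_fl=2.25,
                  phi=0.35, rho=2.1))
```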

  9. [Study on the experimental application of floating-reference method to noninvasive blood glucose sensing].

    PubMed

    Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie

    2012-03-01

    A weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other blood components make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, collects spectra at a reference point, where the light-intensity variations caused by absorption and scattering cancel each other, and at a measurement point, where they are largest. By using the spectrum at the reference point as a reference, the floating-reference method can reduce interference from variations in the physiological environment and experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments, comparing models whose data were processed with and without the method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method can reduce the influence of changes in sample state, instrument noise, and drift, and effectively improve the models' prediction precision and stability.

  10. AOP-informed assessment of endocrine disruption in freshwater crustaceans

    EPA Science Inventory

    To date, most research on developing more efficient and cost-effective methods to predict toxicity has focused on human biology. However, there is also a need for effective high-throughput tools to predict toxicity to other species that perform critical ecosystem functio...

  11. Advanced prediction technique for the low speed aerodynamics of V/STOL aircraft. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.; Worthey, M. K.

    1984-01-01

    The V/STOL Aircraft Propulsive Effects (VAPE) computerized prediction method is evaluated. The program analyzes viscous effects, various jet, inlet, and Short TakeOff and Landing (STOL) models, and examines the aerodynamic configurations of V/STOL aircraft.

  12. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species-specific and assemblage-based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched collection sites well. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters, or its predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify the habitat units most and least likely to support those species. This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.

  13. Accurate prediction of protein-protein interactions by integrating potential evolutionary information embedded in PSSM profile and discriminative vector machine classifier.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Li, Li-Ping; Huang, De-Shuang; Yan, Gui-Ying; Nie, Ru; Huang, Yu-An

    2017-04-04

    Identification of protein-protein interactions (PPIs) is of critical importance for deciphering the underlying mechanisms of almost all cellular processes and provides great insight into the study of human disease. Although much effort has been devoted to identifying PPIs in various organisms, existing high-throughput biological techniques are time-consuming, expensive, and suffer from high false-positive and false-negative rates. It is thus highly urgent to develop in silico methods to predict PPIs efficiently and accurately in this post-genomic era. In this article, we report a novel computational model combining our newly developed discriminative vector machine classifier (DVM) and an improved Weber local descriptor (IWLD) for the prediction of PPIs. Two components, differential excitation and orientation, are exploited to build evolutionary features for each protein sequence. The main characteristic of the proposed method lies in introducing an effective feature descriptor, IWLD, which can capture highly discriminative evolutionary information from position-specific scoring matrices (PSSMs) of protein data, and in employing the powerful and robust DVM classifier. When applying the proposed method to the Yeast and H. pylori data sets, we obtained excellent prediction accuracies as high as 96.52% and 91.80%, respectively, significantly better than previous methods. Extensive experiments were then performed for predicting cross-species PPIs, and the predictive results were also promising. To further validate the performance of the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on a Human data set. The experimental results indicate that our method is highly effective for PPI prediction and can be taken as a supplementary tool for future proteomics research.

  14. Use of QSARs in international decision-making frameworks to predict ecologic effects and environmental fate of chemical substances.

    PubMed Central

    Cronin, Mark T D; Walker, John D; Jaworska, Joanna S; Comber, Michael H I; Watts, Christopher D; Worth, Andrew P

    2003-01-01

    This article is a review of the use, by regulatory agencies and authorities, of quantitative structure-activity relationships (QSARs) to predict ecologic effects and environmental fate of chemicals. For many years, the U.S. Environmental Protection Agency has been the most prominent regulatory agency using QSARs to predict the ecologic effects and environmental fate of chemicals. However, as increasing numbers of standard QSAR methods are developed and validated to predict ecologic effects and environmental fate of chemicals, it is anticipated that more regulatory agencies and authorities will find them to be acceptable alternatives to chemical testing. PMID:12896861

  15. Prediction of aircraft sideline noise attenuation

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.

    1978-01-01

    A computational study is made using the ground effect theory recommended by Pao, Wenzel, and Oncley. It is shown that this theory adequately predicts the ground attenuation data measured by Parkin and Scholes, which is the only available large data set. It is also shown, however, that the ground effect theory does not predict the measured lateral attenuations from actual aircraft flyovers. There remain one or more important lateral effects on aircraft noise, such as sideline shielding of sources, which must be incorporated in the prediction methods. Experiments at low elevation angles (0 deg to 10 deg) and low-to-intermediate frequencies are recommended to further validate the ground effect theory.

  16. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype × environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  17. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype × environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  18. Simulation analysis of adaptive cruise prediction control

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Cui, Sheng Min

    2017-09-01

    Predictive control is suitable for controlling multi-variable, multi-constraint systems. In order to examine the effect of predictive control on vehicle longitudinal motion, this paper establishes an expected-spacing model that combines variable spacing with a safety-distance strategy. Model predictive control theory and an optimization method based on quadratic programming are used to obtain and track the best expected acceleration trajectory quickly. Simulation models are established for both predictive control and adaptive fuzzy control. Simulation results show that predictive control can realize the basic functions of the system while ensuring safety. Applying the predictive and fuzzy adaptive algorithms under cruise conditions indicates that predictive control performs better.
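
    A minimal receding-horizon sketch of such a spacing controller, assuming a kinematic point-mass model, a constant-speed lead vehicle, and illustrative weights; a general-purpose bounded optimizer stands in for a dedicated quadratic-programming solver.

```python
# A quadratic cost over predicted spacing/speed errors is minimized over a
# bounded acceleration sequence; only the first command would be applied.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.2, 20                 # step (s), horizon length
v_lead = 15.0                   # lead-vehicle speed (m/s), assumed constant
headway, d0 = 1.5, 5.0          # time-headway (variable) spacing policy

def cost(a, gap0, v0):
    gap, v, J = gap0, v0, 0.0
    for k in range(N):
        gap += (v_lead - v) * dt            # relative kinematics
        v += a[k] * dt                      # ego-vehicle speed update
        d_des = d0 + headway * v            # desired spacing grows with speed
        J += (gap - d_des) ** 2 + 0.1 * (v - v_lead) ** 2 + 0.5 * a[k] ** 2
    return J

res = minimize(cost, np.zeros(N), args=(40.0, 10.0),
               bounds=[(-3.0, 2.0)] * N)    # comfort/actuator limits
print("first commanded acceleration: %.2f m/s^2" % res.x[0])
```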

  19. Computational chemistry in 25 years

    NASA Astrophysics Data System (ADS)

    Abagyan, Ruben

    2012-01-01

    Here we make some predictions based on three methods: straightforward extrapolation of existing trends; self-fulfilling prophecy; and picking current grievances and predicting that they will be addressed or solved. We predict the growth of multicore computing and a dramatic growth of data, as well as improvements in force fields and sampling methods. We also predict that the effects of therapeutic and environmental molecules on the human body, as well as complex natural chemical signalling, will be understood in terms of three-dimensional models of their binding to specific pockets.

  20. Seventy-meter antenna performance predictions: GTD analysis compared with traditional ray-tracing methods

    NASA Technical Reports Server (NTRS)

    Schredder, J. M.

    1988-01-01

    A comparative analysis was performed, using both the Geometrical Theory of Diffraction (GTD) and traditional pathlength-error analysis techniques, for predicting RF antenna gain performance and pointing corrections. The NASA/JPL 70-meter antenna with its shaped surface was analyzed for gravity loading over the range of elevation angles. Also analyzed were the effects of lateral and axial displacements of the subreflector. Significant differences were noted between the predictions of the two methods, in the effect of subreflector displacements, and in the optimal subreflector positions for focusing a gravity-deformed main reflector. The results are of relevance to future design procedures.
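
    Traditional pathlength-error analyses of this kind typically reduce the surface distortion to an RMS half-pathlength error and convert it to gain loss with the standard Ruze equation; that conversion (not the GTD analysis) is sketched below with illustrative numbers.

```python
# Ruze equation: G/G0 = exp(-(4*pi*eps/lambda)^2), eps = RMS surface error.
import numpy as np

def ruze_gain_loss_db(rms_error_m, freq_hz):
    """Gain loss (dB) from an RMS surface error at a given frequency."""
    lam = 3e8 / freq_hz
    return -10 * np.log10(np.exp(-(4 * np.pi * rms_error_m / lam) ** 2))

# 0.7 mm RMS error on a large reflector at X-band (8.45 GHz).
print("%.2f dB" % ruze_gain_loss_db(0.7e-3, 8.45e9))
```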

  1. Development of a Jet Noise Prediction Method for Installed Jet Configurations

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Thomas, Russell H.

    2003-01-01

    This paper describes the development of the Jet3D noise prediction method and its application to heated jets with complex three-dimensional flow fields and installation effects. Noise predictions were made for four separate-flow, bypass ratio five nozzle configurations tested in the NASA Langley Jet Noise Laboratory. These configurations consist of a round core and fan nozzle with and without a pylon, and an eight-chevron core nozzle and round fan nozzle with and without a pylon. Predicted SPL data were in good agreement with experimental noise measurements up to a 121 deg inlet angle, beyond which Jet3D under-predicted low-frequency levels. This is due to inherent limitations in the formulation of Lighthill's Acoustic Analogy used in Jet3D, and will be corrected in ongoing development. Jet3D did an excellent job predicting full-scale EPNL for non-chevron configurations, and captured the effect of the pylon, correctly predicting a reduction in EPNL. EPNL predictions for chevron configurations were not in good agreement with measured data, likely due to the lower mixing and longer potential cores in the CFD simulations of these cases.

  2. CONFOLD2: improved contact-driven ab initio protein structure modeling.

    PubMed

    Adhikari, Badri; Cheng, Jianlin

    2018-01-25

    Contact-guided protein structure prediction methods are becoming more and more successful because of the latest advances in residue-residue contact prediction. To support contact-driven structure prediction, effective tools are needed that can quickly build tertiary structural models of good quality from predicted contacts. We develop an improved contact-driven protein modelling method, CONFOLD2, and study how it may be effectively used for ab initio protein structure prediction with predicted contacts as input. It builds models using various subsets of the input contacts to explore the fold space under the guidance of a soft square energy function, and then clusters the models to obtain the top five. CONFOLD2 obtains an average reconstruction accuracy of 0.57 TM-score for the 150 proteins in the PSICOV contact prediction dataset. When benchmarked on CASP11 contacts predicted using CONSIP2 and CASP12 contacts predicted using Raptor-X, CONFOLD2 achieves a mean TM-score of 0.41 on both datasets. CONFOLD2 makes it possible to quickly generate the top five structural models for a protein sequence when its secondary structure and contact predictions are at hand. The source code of CONFOLD2 is publicly available at https://github.com/multicom-toolbox/CONFOLD2/ .

  3. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics). The Single Event Figure of Merit method was also revised to use the solar-minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently, a series of commercial codes was developed by TRAD (Test & Radiations), including the OMERE code, which calculates single event effects. There are other error rate prediction methods that use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.

  4. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques under the Mahalanobis-Taguchi System developed specifically for multivariate data prediction. Prediction using the T-Method is always possible, even with a very limited sample size. Users of the T-Method must clearly understand the population data trend, since the method does not consider the effect of outliers. Outliers may cause apparent non-normality and cause classical methods to break down. There exist robust parameter estimates that provide satisfactory results whether or not the data contain outliers. The Shamos-Bickel (SB) and Hodges-Lehmann (HL) robust estimates of location and scale are such alternatives to the classical mean and standard deviation. Embedding them into the T-Method normalization stage may help enhance the accuracy of the T-Method and allows the robustness of the T-Method itself to be analysed. However, in the larger-sample case study, the T-Method had the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB had the lowest error percentage (4.67%) on data without extreme outliers, with only a minimal difference from the T-Method. The trend in prediction error percentages was reversed in the smaller-sample case study. The results show that with minimal sample sizes, where the risk from outliers is low, the T-Method performed better, and for larger samples with extreme outliers the T-Method likewise showed better prediction than the alternatives. For the case studies conducted in this research, the T-Method's own normalization gave satisfactory results, and adapting HL and SB (or the ordinary mean and standard deviation) into it is not worthwhile, since doing so changes the error percentages only minimally. Normalization using the T-Method is still considered to carry lower risk from outlier effects.
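
    The two robust estimators are simple to state: HL is the median of the Walsh (pairwise) averages, and the Shamos scale is the median of the pairwise absolute differences. A minimal sketch follows; the paper's T-Method integration is not reproduced.

```python
# Hodges-Lehmann location and Shamos scale vs. the classical mean and SD.
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Median of all Walsh averages (x_i + x_j)/2 for i <= j."""
    pairs = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return float(np.median(pairs + list(x)))

def shamos(x):
    """Median of all pairwise absolute differences (a robust scale)."""
    return float(np.median([abs(a - b) for a, b in combinations(x, 2)]))

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])   # one extreme outlier
print("mean %.2f vs HL %.2f" % (x.mean(), hodges_lehmann(x)))
print("std  %.2f vs Shamos %.2f" % (x.std(ddof=1), shamos(x)))
```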

  5. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with a genotype effect (G) and a genotype × environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into the seed direct genetic effect (G0), the cytoplasm genetic effect (C), and the maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype × environment interaction effect (GE) can likewise be partitioned into a direct genetic by environment interaction effect (G0E), a cytoplasm genetic by environment interaction effect (CE), and a maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents and their reciprocal F1 and F2 seeds is suitable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and unbiased estimates of covariance components between two traits can also be obtained by this method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in t-tests of parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.

  6. Relative Packing Groups in Template-Based Structure Prediction: Cooperative Effects of True Positive Constraints

    PubMed Central

    Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert

    2011-01-01

    Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of the starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important for building nearer-native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729

  7. Influence of outliers on accuracy estimation in genomic prediction in plant breeding.

    PubMed

    Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter

    2014-10-01

    Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we use simulation to evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies. We simulated 1000 datasets for each of 10 scenarios, defined by the number of genotypes, the marker effect variance, and the magnitude of the outliers, to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. To mimic outliers, we added, in turn, 5, 8, and 10 times the error SD used to simulate small and large phenotypic datasets to one observation in each simulated dataset. The effect of outliers on accuracy estimation was evaluated by comparing deviations between the estimated and true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of the genetic variance or the number of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performances of the other five methods, which use cross-validation, were less consistent and varied widely across scenarios. The computing time for the methods increased as the size of the outliers and the sample size increased and the genetic variance decreased. Copyright © 2014 Ould Estaghvirou et al.
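
    A toy version of this protocol, with ridge regression standing in for the paper's seven estimation methods and a single shifted record mimicking the outlier:

```python
# Compare a cross-validated accuracy estimate with and without one outlier
# injected at a multiple of the simulation error SD.
import numpy as np

rng = np.random.default_rng(7)
n, p, err_sd = 120, 300, 1.0
X = rng.normal(size=(n, p))                    # marker matrix
u = rng.normal(scale=0.1, size=p)              # marker effects
y = X @ u + rng.normal(scale=err_sd, size=n)   # phenotypes

def cv_accuracy(X, y, lam=10.0, folds=5):
    """Correlation of out-of-fold ridge predictions with the phenotype."""
    order = np.random.default_rng(0).permutation(len(y))  # fixed folds
    preds = np.empty_like(y)
    for te in np.array_split(order, folds):
        tr = np.setdiff1d(np.arange(len(y)), te)
        b = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                            X[tr].T @ y[tr])
        preds[te] = X[te] @ b
    return np.corrcoef(preds, y)[0, 1]

clean = cv_accuracy(X, y)
y_out = y.copy(); y_out[0] += 10 * err_sd      # inject a 10-SD outlier
print("accuracy clean %.3f, with outlier %.3f" % (clean, cv_accuracy(X, y_out)))
```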

  8. Cloud prediction of protein structure and function with PredictProtein for Debian.

    PubMed

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Staniewski, Cedric; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome.

  9. Cloud Prediction of Protein Structure and Function with PredictProtein for Debian

    PubMed Central

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome. PMID:23971032

  10. Effect of missing data on multitask prediction methods.

    PubMed

    de la Vega de León, Antonio; Chen, Beining; Gillet, Valerie J

    2018-05-22

    There has been a growing interest in multitask prediction in chemoinformatics, helped by the increasing use of deep neural networks in this field. This technique is applied to multitarget data sets, where compounds have been tested against different targets, with the aim of developing models to predict a profile of biological activities for a given compound. However, multitarget data sets tend to be sparse; i.e., not all compound-target combinations have experimental values. There has been little research on the effect of missing data on the performance of multitask methods. We used two complete data sets to simulate sparseness by removing data from the training set, comparing different schemes for removing the data. These sparse sets were used to train two different multitask methods: deep neural networks and Macau, a Bayesian probabilistic matrix factorization technique. Results from both methods were remarkably similar and showed that the performance decrease caused by missing data is at first small, before accelerating once large amounts of data are removed. This work provides a first approximation to assessing how much data is required to produce good performance in multitask prediction exercises.
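
    A minimal sketch of the sparseness simulation, with a per-target ridge model standing in for the deep network and Macau models of the paper:

```python
# Mask entries of a complete compound-by-target matrix at increasing rates
# before training, and watch test error grow.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 64))                  # compound descriptors
Y = X @ rng.normal(size=(64, 5)) + rng.normal(scale=0.5, size=(300, 5))

for frac in [0.0, 0.3, 0.6, 0.9]:
    mask = rng.random(Y.shape) >= frac          # True = value retained
    errs = []
    for t in range(Y.shape[1]):                 # one model per target
        tr = mask[:200, t]
        model = Ridge().fit(X[:200][tr], Y[:200, t][tr])
        errs.append(np.mean((model.predict(X[200:]) - Y[200:, t]) ** 2))
    print("missing %.0f%% -> test MSE %.3f" % (100 * frac, np.mean(errs)))
```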

  11. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences must first be mapped into numerical sequences. Effective numerical mappings of DNA sequences therefore play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, the existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction using a Short-Time Fourier Transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein-coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
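
    A sketch of the 3-periodicity SNR that the GCC values are optimized for, using the fixed EIIP per-base mapping as a stand-in for the optimized codon-context values:

```python
# Map the sequence to numbers, take the power spectrum, and compare the
# power at frequency N/3 with the average power.
import numpy as np

mapping = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}  # EIIP values

def snr_3periodicity(seq):
    x = np.array([mapping[b] for b in seq], dtype=float)
    power = np.abs(np.fft.fft(x - x.mean())) ** 2
    return power[len(seq) // 3] / power[1:].mean()   # peak vs mean power

rng = np.random.default_rng(9)
random_seq = "".join(rng.choice(list("ACGT"), size=300))
coding_like = "ATG" + "GCC" * 99                     # strongly 3-periodic
print("random %.2f, coding-like %.2f" %
      (snr_3periodicity(random_seq), snr_3periodicity(coding_like)))
```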

  12. Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints

    PubMed Central

    Thompson, John R; Spata, Enti; Abrams, Keith R

    2015-01-01

    We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing–remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions. PMID:26271918

  13. Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints.

    PubMed

    Bujkiewicz, Sylwia; Thompson, John R; Spata, Enti; Abrams, Keith R

    2017-10-01

    We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing-remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions.

  14. Recent advances, and unresolved issues, in the application of computational modelling to the prediction of the biological effects of nanomaterials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkler, David A., E-mail: dave.winkler@csiro.au

    2016-05-15

    Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned that they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly QSAR-based ones, have made in understanding and predicting potentially adverse biological effects of nanomaterials, as well as their limitations and pitfalls. Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data-driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.

  15. Methods for using groundwater model predictions to guide hydrogeologic data collection, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.

    2003-01-01

    Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.

  16. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method, and application of Method R for the estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed-model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods on the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.

  17. Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating

    PubMed Central

    Wang, Bingkun; Huang, Yongfeng; Li, Xing

    2016-01-01

    E-commerce is developing rapidly. Learning from, and taking good advantage of, the myriad reviews from online customers has become crucial to success, which calls for ever greater accuracy in the sentiment classification of these reviews. Finer-grained review rating prediction is therefore preferred over rough binary sentiment classification. There are two main types of method in current review rating prediction. One comprises methods based on review text content, which focus almost exclusively on textual content and seldom relate to the reviewers and items remarked upon in other relevant reviews. The other comprises methods based on collaborative filtering, which extract information from previous records in the reviewer-item rating matrix while ignoring review textual content. Here we propose a framework for review rating prediction that effectively combines the two, and we further propose three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework performs better than those previous methods. PMID:26880879

  18. Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating.

    PubMed

    Wang, Bingkun; Huang, Yongfeng; Li, Xing

    2016-01-01

    E-commerce is developing rapidly. Learning from and taking good advantage of the myriad reviews from online customers has become crucial to success, which calls for increasing accuracy in the sentiment classification of these reviews. Finer-grained review rating prediction is therefore preferred over rough binary sentiment classification. Current review rating prediction methods fall into two main types. Methods based on review text content focus almost exclusively on textual content and seldom relate a review to the reviewers and items discussed in other relevant reviews. Methods based on collaborative filtering extract information from previous records in the reviewer-item rating matrix but ignore review text content. Here we propose a framework for review rating prediction that effectively combines the two, and we further propose three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework performs better than previous methods.
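
    Under strong simplifying assumptions, the combination of the two sources can be illustrated as a linear fusion of a text-based predictor and a collaborative-filtering predictor, with the mixing weight fit on held-out reviews; the paper's three specific methods are not reproduced here.

      import numpy as np

      def fit_alpha(text_val, cf_val, y_val):
          """Closed-form 1-D least squares for the mixing weight alpha."""
          text_val, cf_val, y_val = (np.asarray(a, dtype=float)
                                     for a in (text_val, cf_val, y_val))
          num = np.dot(y_val - cf_val, text_val - cf_val)
          den = np.dot(text_val - cf_val, text_val - cf_val)
          return float(np.clip(num / den, 0.0, 1.0))

      def hybrid_rating(text_pred, cf_pred, alpha):
          # fused rating: alpha weights the text-content component
          return alpha * np.asarray(text_pred) + (1 - alpha) * np.asarray(cf_pred)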

  19. Artificial Neural Network with Regular Graph for Maximum Air Temperature Forecasting: The Effect of Decrease in Node Degree on Learning

    NASA Astrophysics Data System (ADS)

    Ghaderi, A. H.; Darooneh, A. H.

    The behavior of nonlinear systems can be analyzed by artificial neural networks, and air temperature change is one example of such a nonlinear system. In this work, a new neural network method is proposed for forecasting the maximum air temperature in two cities. In this method, the regular graph concept is used to construct partially connected neural networks that have regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared. In some cases, the proposed method gives better results than the conventional ANN. After specifying the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns directly improves prediction accuracy.
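
    The partially connected structure can be sketched as a binary connectivity mask applied to a layer's weight matrix so that every unit keeps the same, reduced degree; the construction below is a loose analogue of building the network from a regular graph, and all parameters are illustrative.

      import numpy as np

      def regular_mask(n_in, n_out, degree):
          """Binary mask in which each output unit connects to `degree`
          consecutive inputs, keeping node degrees uniform.  Multiplying
          the weight matrix by this mask elementwise after each update
          trains a partially connected, regular-structure network."""
          mask = np.zeros((n_in, n_out))
          for j in range(n_out):
              start = (j * n_in) // n_out
              for k in range(degree):
                  mask[(start + k) % n_in, j] = 1.0
          return mask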

  20. Temporal effects in trend prediction: identifying the most popular nodes in the future.

    PubMed

    Zhou, Yanbo; Zeng, An; Wang, Wei-Hong

    2015-01-01

    Prediction is an important problem in many scientific domains. In this paper, we focus on trend prediction in complex networks, i.e., identifying the most popular nodes in the future. Due to the preferential attachment mechanism in real systems, nodes' recent degree and cumulative degree have been successfully applied to design trend prediction methods. Here we take into account more detailed information about the network evolution and propose a temporal-based predictor (TBP). The TBP predicts the future trend by node strength in a weighted network, with each link weighted by its exponential aging. Three data sets with time information are used to test the performance of the new method. We find that the TBP has high general accuracy in predicting the future most popular nodes. More importantly, it can identify many potential objects with low popularity in the past but high popularity in the future. The effect of the decay speed of the exponential aging on the results is discussed in detail.

  1. Temporal Effects in Trend Prediction: Identifying the Most Popular Nodes in the Future

    PubMed Central

    Zhou, Yanbo; Zeng, An; Wang, Wei-Hong

    2015-01-01

    Prediction is an important problem in many scientific domains. In this paper, we focus on trend prediction in complex networks, i.e., identifying the most popular nodes in the future. Due to the preferential attachment mechanism in real systems, nodes' recent degree and cumulative degree have been successfully applied to design trend prediction methods. Here we take into account more detailed information about the network evolution and propose a temporal-based predictor (TBP). The TBP predicts the future trend by node strength in a weighted network, with each link weighted by its exponential aging. Three data sets with time information are used to test the performance of the new method. We find that the TBP has high general accuracy in predicting the future most popular nodes. More importantly, it can identify many potential objects with low popularity in the past but high popularity in the future. The effect of the decay speed of the exponential aging on the results is discussed in detail. PMID:25806810
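
    The predictor itself reduces to a time-weighted node strength. The sketch below assumes a list of time-stamped edges and a decay constant lam; both the data layout and the parameter value are illustrative.

      import math
      from collections import defaultdict

      def temporal_based_predictor(edges, t_now, lam=0.1):
          """Rank nodes by strength in the aged, weighted network: each link
          (u, v, t_link) contributes exp(-lam * (t_now - t_link)) to both
          endpoints, so recent links dominate the predicted popularity."""
          strength = defaultdict(float)
          for u, v, t_link in edges:
              w = math.exp(-lam * (t_now - t_link))
              strength[u] += w
              strength[v] += w
          return sorted(strength.items(), key=lambda kv: -kv[1])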

  2. A hybrid intelligent method for three-dimensional short-term prediction of dissolved oxygen content in aquaculture.

    PubMed

    Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang

    2018-01-01

    A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, the K-means and subtractive clustering methods were employed to determine the hyperparameters required by the RBF neural network model. Comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.
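
    A minimal sketch of the clustering-plus-RBF idea, with K-means standing in for the paper's full K-means plus subtractive-clustering step: cluster centres become the RBF hidden units and the output weights are solved by ridge least squares. The kernel width, centre count, and regularization below are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def fit_rbf(X, y, n_centers=10, gamma=1.0, ridge=1e-6):
          """RBF-network regression with clustered centres."""
          centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          Phi = np.exp(-gamma * d2)                   # hidden-layer activations
          w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)

          def predict(Xq):
              d2q = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              return np.exp(-gamma * d2q) @ w

          return predict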

  3. Rolling Bearing Life Prediction-Past, Present, and Future

    NASA Technical Reports Server (NTRS)

    Zaretsky, E V; Poplawski, J. V.; Miller, C. R.

    2000-01-01

    Comparisons were made between the life prediction formulas of Lundberg and Palmgren, Ioannides and Harris, and Zaretsky, and full-scale ball and roller bearing life data. The effect of Weibull slope on bearing life prediction was determined. Life factors are proposed to adjust the respective life formulas to the normalized statistical life distribution of each bearing type. The Lundberg-Palmgren method resulted in the most conservative life predictions, while the Ioannides-Harris and Zaretsky methods produced statistically similar results. Roller profile can have significant effects on bearing life prediction results; roller edge loading can reduce life by as much as 98 percent. The resultant predicted life depends not only on the life equation used but also on the Weibull slope assumed, with the least variation occurring with the Zaretsky equation. The load-life exponent p of 10/3 used in the American National Standards Institute (ANSI)/American Bearing Manufacturers Association (ABMA)/International Organization for Standardization (ISO) standards is inconsistent with the majority of roller bearings designed and used today.

  4. Prediction of pi-turns in proteins using PSI-BLAST profiles and secondary structure information.

    PubMed

    Wang, Yan; Xue, Zhi-Dong; Shi, Xiao-Hong; Xu, Jin

    2006-09-01

    Due to the structural and functional importance of tight turns, methods have been proposed to predict gamma-turns, beta-turns, and alpha-turns in proteins. Although pi-turns have been studied in the past, no prediction approach has been developed so far, so it would be useful to have a method for identifying pi-turns in a protein sequence. In this paper, the support vector machine (SVM) method is introduced to predict pi-turns from the amino acid sequence. The training and testing of this approach are performed with a newly collected data set of 640 non-homologous protein chains containing 1931 pi-turns. Different sequence encoding schemes have been explored in order to investigate their effects on the prediction performance. With multiple sequence alignment and predicted secondary structure, the final SVM model yields a Matthews correlation coefficient (MCC) of 0.556 by 7-fold cross-validation. A web server implementing the prediction method is available at the following URL: http://210.42.106.80/piturn/.
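
    The sequence encoding step can be illustrated with a simple sliding-window one-hot scheme; the published model uses PSI-BLAST profiles and predicted secondary structure instead, so the features below are a simplified stand-in.

      import numpy as np
      from sklearn.svm import SVC

      AA = "ACDEFGHIKLMNPQRSTVWY"
      IDX = {a: i for i, a in enumerate(AA)}

      def window_features(seq, center, w=3):
          """One-hot encoding of a (2w+1)-residue window around `center`;
          positions outside the chain are left as zeros."""
          vec = np.zeros((2 * w + 1) * 20)
          for k, pos in enumerate(range(center - w, center + w + 1)):
              if 0 <= pos < len(seq) and seq[pos] in IDX:
                  vec[k * 20 + IDX[seq[pos]]] = 1.0
          return vec

      # clf = SVC(kernel="rbf", class_weight="balanced")  # pi-turn vs. not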

  5. An improved predictive functional control method with application to PMSM systems

    NASA Astrophysics Data System (ADS)

    Li, Shihua; Liu, Huixian; Fu, Wenshu

    2017-01-01

    In the common design of prediction model-based control methods, disturbances are usually considered neither in the prediction model nor in the control design. For control systems with large-amplitude or strong disturbances, it is difficult to precisely predict the future outputs with the conventional prediction model, and thus the desired optimal closed-loop performance is degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances, so the influence of disturbances on the system is taken into account in the optimisation procedure. Finally, considering the speed control problem for permanent magnet synchronous motor (PMSM) servo systems, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform confirm the effectiveness of the proposed algorithm.
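
    The core idea, prediction with the disturbance estimate embedded, can be sketched for a discrete linear model; the matrices, the control sequence, and the observer output d_hat below are placeholders, and the disturbance observer itself is omitted.

      import numpy as np

      def composite_predict(A, B, x0, u_seq, d_hat):
          """Horizon prediction with the lumped-disturbance estimate in the
          model: x_{k+1} = A x_k + B u_k + d_hat.  Setting d_hat to zero
          recovers the conventional PFC prediction model."""
          xs = [np.asarray(x0, dtype=float)]
          for u in u_seq:
              xs.append(A @ xs[-1] + B @ np.atleast_1d(u) + d_hat)
          return np.array(xs[1:])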

  6. Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness

    NASA Astrophysics Data System (ADS)

    Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng

    2018-05-01

    Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper; this configuration is widely used to simulate multi-row film cooling on a turbine vane. The film cooling effectiveness of the double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzes the factors that affect film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement: results obtained by superposition prediction are nearly the same as the experimental values for staggered arrangements, whereas for in-line configurations the superposed values of film cooling effectiveness are much higher than the experimental data. Among the hole shapes considered, the accuracy of superposition predictions for converging-expanding holes is better than for cylindrical holes and compound angle holes. For the two hole spacings examined in this paper, predictions show good agreement with the experimental results.
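
    Superposition predictions of this kind are commonly computed with a classical Sellers-type relation, sketched below for single-row effectivenesses measured at the same surface location.

      import numpy as np

      def superposed_effectiveness(etas):
          """Sellers-type superposition of single-row film cooling
          effectivenesses: eta_total = 1 - prod_i (1 - eta_i).  Per the
          study, agreement with experiment is best at low blowing ratios
          and for staggered hole arrangements."""
          etas = np.asarray(etas, dtype=float)
          return 1.0 - np.prod(1.0 - etas, axis=0)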

  7. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    NASA Astrophysics Data System (ADS)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent behavior of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that considers the passing effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs, with HMD providing the training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is presented to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, and three benchmark wind speed prediction methods are used to compare the performance. The results show that the proposed model has the best performance over different time horizons.
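
    The time-dependent mapping can be sketched by pairing each NWP value with the previous-slot measurement as model inputs; the SVR hyperparameters below are illustrative, and in multi-step-ahead use the corrected prediction would be fed back in place of the measurement.

      import numpy as np
      from sklearn.svm import SVR

      def build_training_set(nwp, hmd):
          """X rows are [NWP_t, HMD_{t-1}]; labels are the measured speeds,
          so the model learns the passing effect from the previous slot."""
          X = np.column_stack([nwp[1:], hmd[:-1]])
          y = hmd[1:]
          return X, y

      # model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)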

  8. Applications of Protein Thermodynamic Database for Understanding Protein Mutant Stability and Designing Stable Mutants.

    PubMed

    Gromiha, M Michael; Anoosha, P; Huang, Liang-Tsung

    2016-01-01

    Protein stability is the free energy difference between the unfolded and folded states of a protein, which lies in the range of 5-25 kcal/mol. Experimentally, protein stability is measured with circular dichroism, differential scanning calorimetry, and fluorescence spectroscopy using thermal and denaturant denaturation methods. These experimental data have been accumulated in ProTherm, a thermodynamic database for proteins and mutants. It also contains sequence and structure information for each protein, experimental methods and conditions, and literature information. Features such as search, display, and sorting options and visualization tools have been incorporated in the database. ProTherm is a valuable resource for understanding/predicting the stability of proteins, and it can be accessed at http://www.abren.net/protherm/ . ProTherm has been effectively used to examine the relationship among the thermodynamics, structure, and function of proteins. We describe recent progress on the development of methods for understanding/predicting protein stability, such as (1) general trends of mutational effects on stability, (2) the relationship between the stability of protein mutants and amino acid properties, (3) applications of protein three-dimensional structures for predicting stability changes upon point mutations, (4) prediction of protein stability changes upon single mutations from the amino acid sequence, and (5) prediction methods addressing double mutants. A list of online resources for stability prediction is also provided.

  9. Micromechanical Prediction of the Effective Behavior of Fully Coupled Electro-Magneto-Thermo-Elastic Multiphase Composites

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob

    2000-01-01

    The micromechanical generalized method of cells model is employed for the prediction of the effective moduli of electro-magneto-thermo-elastic composites. These include the effective elastic, piezoelectric, piezomagnetic, dielectric, magnetic permeability, electromagnetic coupling moduli, as well as the effective thermal expansion coefficients and the associated pyroelectric and pyromagnetic constants. Results are given for fibrous and periodically bilaminated composites.

  10. Prediction of Frequency for Simulation of Asphalt Mix Fatigue Tests Using MARS and ANN

    PubMed Central

    Fakhri, Mansour

    2014-01-01

    The fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and frequency of the tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and the stress and strain response pulses in both the longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective length of the response pulses was determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Networks (ANN), were then employed to predict the effective length (i.e., frequency) of the tensile stress and strain pulses in the longitudinal and transverse directions based on the haversine waveform. It is shown that, under both controlled-stress and controlled-strain modes, the two methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy; the ANN method is, however, more accurate than the MARS method. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation. PMID:24688400

  11. Prediction of frequency for simulation of asphalt mix fatigue tests using MARS and ANN.

    PubMed

    Ghanizadeh, Ali Reza; Fakhri, Mansour

    2014-01-01

    The fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and frequency of the tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and the stress and strain response pulses in both the longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective length of the response pulses was determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Networks (ANN), were then employed to predict the effective length (i.e., frequency) of the tensile stress and strain pulses in the longitudinal and transverse directions based on the haversine waveform. It is shown that, under both controlled-stress and controlled-strain modes, the two methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy; the ANN method is, however, more accurate than the MARS method. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation.

  12. Compensation for the distortion in satellite laser range predictions due to varying pulse travel times

    NASA Technical Reports Server (NTRS)

    Paunonen, Matti

    1993-01-01

    A method for compensating for the effect of the varying travel time of a transmitted laser pulse to a satellite is described. The 'observed minus predicted' range differences then appear to be linear, which makes data screening or use in range gating more effective.

  13. Research agenda for integrated landscape modeling

    Treesearch

    Samuel A. Cushman; Donald McKenzie; David L. Peterson; Jeremy Littell; Kevin S. McKelvey

    2006-01-01

    Reliable predictions of the effects that changing climate and disturbance regimes will have on forest ecosystems are crucial for effective forest management. Current fire and climate research in forest ecosystem and community ecology offers data and methods that can inform such predictions. However, research in these fields occurs at different scales, with disparate goals,...

  14. Critical Features of Fragment Libraries for Protein Structure Prediction

    PubMed Central

    dos Santos, Karina Baptista

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has been proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined, and this study investigates some of these features. In particular, we analyze the effect of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of the fragments within libraries. To obtain a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, with longer fragments proving more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement over predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows a greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction. PMID:28085928

  15. Critical Features of Fragment Libraries for Protein Structure Prediction.

    PubMed

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has been proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined, and this study investigates some of these features. In particular, we analyze the effect of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of the fragments within libraries. To obtain a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, with longer fragments proving more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement over predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows a greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction.

  16. Research study on high energy radiation effect and environment solar cell degradation methods

    NASA Technical Reports Server (NTRS)

    Horne, W. E.; Wilkinson, M. C.

    1974-01-01

    The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies occurred in heavily damaged cells, particularly proton-damaged cells, in which a gradient in damage existed across the cell. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons; in such cases, the damage coefficient method overestimates the damage. Comparisons between the degradation predictions made by the two methods and measured flight data confirmed the above findings.

  17. An improved method for predicting the lightning performance of high and extra-high-voltage substation shielding

    NASA Astrophysics Data System (ADS)

    Vinh, T.

    1980-08-01

    There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes, but convenient analytical lightning models adequate for engineering usage are needed. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection of high and extra-high-voltage substations against direct strokes.

  18. A correlation method to predict the surface pressure distribution on an infinite plate from which a jet is issuing. [effects of a lifting jet

    NASA Technical Reports Server (NTRS)

    Perkins, S. C., Jr.; Menhall, M. R.

    1978-01-01

    A correlation method to predict pressures induced on an infinite plate by a jet issuing from the plate into a subsonic free stream was developed. The complete method consists of an analytical method which models the blockage and entrainment properties of the jet and a correlation which accounts for the effects of separation. The method was developed for jet velocity ratios up to ten and for radial distances up to five diameters from the jet. Correlation curves and data comparisons are presented for jets issuing normally from a flat plate with velocity ratios one to twelve. Also, a list of references which deal with jets in a crossflow is presented.

  19. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has studied the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focus on the influence of the temperature distribution of the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies show that prediction performance is improved by the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  20. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error introduced by constant-capacitance models, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function; a corresponding state-of-charge calculation approach is also developed. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods: the accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared in order to study their effects on the ultracapacitor's power capability.
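
    A minimal sketch of the variable-capacitance idea: capacitance follows a piecewise linear function of voltage, and the state of charge is the stored charge relative to the full-voltage charge. The knot values below are invented for illustration.

      import numpy as np

      # hypothetical knots: capacitance grows piecewise-linearly with voltage
      V_KNOTS = np.array([0.0, 1.0, 2.0, 2.7])      # V
      C_KNOTS = np.array([280., 300., 330., 350.])  # F

      def capacitance(v):
          return np.interp(v, V_KNOTS, C_KNOTS)

      def stored_charge(v, n=200):
          """Q(v) = integral of C(u) du from 0 to v (trapezoid rule)."""
          u = np.linspace(0.0, v, n)
          return np.trapz(capacitance(u), u)

      def soc(v, v_max=2.7):
          # state of charge consistent with the variable-capacitance model
          return stored_charge(v) / stored_charge(v_max)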

  1. Relative Humidity Has Uneven Effects on Shifts From Snow to Rain Over the Western U.S.

    NASA Astrophysics Data System (ADS)

    Harpold, A. A.; Rajagopal, S.; Crews, J. B.; Winchell, T.; Schumer, R.

    2017-10-01

    Predicting the phase of precipitation is fundamental to water supply and hazard forecasting. Phase prediction methods (PPMs) are used to predict snow fraction, or the ratio of snowfall to total precipitation. Common temperature-based regression (Dai method) and threshold at freezing (0°C) PPMs had comparable accuracy to a humidity-based PPM (TRH method) using 6 and 24 h observations. Using a daily climate data set from 1980 to 2015, the TRH method estimates 14% and 6% greater precipitation-weighted snow fraction than the 0°C and Dai methods, respectively. The TRH method predicts four times less area with declining snow fraction than the Dai method (2.1% and 8.1% of the study domain, respectively) from 1980 to 2015, with the largest differences in the Cascade and Sierra Nevada mountains and southwestern U.S. Future Representative Concentration Pathway (RCP) 8.5 projections suggest warming temperatures of 4.2°C and declining relative humidity of 1% over the 21st century. The TRH method predicts a smaller reduction in snow fraction than temperature-only PPMs by 2100, consistent with lower humidity buffering declines in snow fraction caused by regional warming.
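
    For illustration, a freezing-threshold PPM and a humidity-aware rule in the spirit of the TRH method are sketched below; the humidity rule's coefficients are invented assumptions, not the published fit.

      import math

      def snow_frac_threshold(t_air, t0=0.0):
          """0 deg C threshold PPM: all snow at or below t0, else all rain."""
          return 1.0 if t_air <= t0 else 0.0

      def snow_frac_trh(t_air, rh, t0=0.0, k=0.5, s=1.0):
          """Drier air raises the effective rain-snow threshold (evaporative
          cooling of falling hydrometeors); a logistic ramp then converts
          the temperature excess into a snow fraction."""
          t_thresh = t0 + k * (100.0 - rh) / 10.0   # deg C
          return 1.0 / (1.0 + math.exp((t_air - t_thresh) / s))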

  2. Tools for in silico target fishing.

    PubMed

    Cereto-Massagué, Adrià; Ojeda, María José; Valls, Cristina; Mulero, Miquel; Pujadas, Gerard; Garcia-Vallve, Santiago

    2015-01-01

    Computational target fishing methods are designed to identify the most probable target of a query molecule. This process may allow the prediction of the bioactivity of a compound, the identification of the mode of action of known drugs, the detection of drug polypharmacology, drug repositioning or the prediction of the adverse effects of a compound. The large amount of information regarding the bioactivity of thousands of small molecules now allows the development of these types of methods. In recent years, we have witnessed the emergence of many methods for in silico target fishing. Most of these methods are based on the similarity principle, i.e., that similar molecules might bind to the same targets and have similar bioactivities. However, the difficult validation of target fishing methods hinders comparisons of the performance of each method. In this review, we describe the different methods developed for target prediction, the bioactivity databases most frequently used by these methods, and the publicly available programs and servers that enable non-specialist users to obtain these types of predictions. It is expected that target prediction will have a large impact on drug development and on the functional food industry.

  3. Safe Life Propulsion Design Technologies (3rd Generation Propulsion Research and Technology)

    NASA Technical Reports Server (NTRS)

    Ellis, Rod

    2000-01-01

    The tasks outlined in this viewgraph presentation on safe life propulsion design technologies (third generation propulsion research and technology) include the following: (1) Ceramic matrix composite (CMC) life prediction methods; (2) Life prediction methods for ultra high temperature polymer matrix composites for reusable launch vehicle (RLV) airframe and engine application; (3) Enabling design and life prediction technology for cost effective large-scale utilization of MMCs and innovative metallic material concepts; (4) Probabilistic analysis methods for brittle materials and structures; (5) Damage assessment in CMC propulsion components using nondestructive characterization techniques; and (6) High temperature structural seals for RLV applications.

  4. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering.
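
    The testing logic can be sketched as a Monte Carlo significance estimate; simulate_catalog and count_hits are user-supplied callables (hypothetical interfaces), and the simulator must reproduce the observed clustering for the resulting significance level to be trustworthy.

      import numpy as np

      def prediction_significance(hits_real, simulate_catalog, count_hits,
                                  n_sim=1000):
          """Fraction of simulated catalogs whose hit count matches or beats
          the real catalog's; a small value suggests genuine skill, but only
          under a simulation model with realistic clustering."""
          sim = np.array([count_hits(simulate_catalog()) for _ in range(n_sim)])
          return float((sim >= hits_real).mean())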

  5. Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft

    NASA Technical Reports Server (NTRS)

    Ross, J. C.

    1984-01-01

    The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll-up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll-up had only a small effect on the force and moment predictions. The effect of the displacement thickness modeling was to slightly reduce the lift curve slope, bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated-wake modeling and by the addition of a leading-edge separation model.

  6. Solvation effect on isomer stability and electronic structures of protonated serotonin

    NASA Astrophysics Data System (ADS)

    Omidyan, Reza; Amanollahi, Zohreh; Azimi, Gholamhassan

    2017-07-01

    The microsolvation effect on the geometry and transition energies of protonated serotonin has been investigated by the MP2 and CC2 quantum chemical methods. In addition, the conductor-like screening model, recently implemented in the MP2 and ADC(2) methods, was used to address the effect of a bulk water environment on the isomer stability and electronic transition energies of protonated serotonin. It is predicted that the dipole moment of the gas-phase isomers plays the main role in isomer stabilization in aqueous solution and in the electronic transition shifts. Both red- and blue-shifts of the electronic transition energies are predicted to occur upon hydration.

  7. [An ADAA model and its analysis method for agronomic traits based on the double-cross mating design].

    PubMed

    Xu, Z C; Zhu, J

    2000-01-01

    According to the double-cross mating design and using the principles of Cockerham's general genetic model, a genetic model with additive, dominance and epistatic effects (ADAA model) was proposed for the analysis of agronomic traits. Components of genetic effects were derived for different generations. Monte Carlo simulation was conducted to analyze the ADAA model and its reduced AD model using different generations. It was indicated that genetic variance components could be estimated without bias by the MINQUE(1) method and that genetic effects could be predicted effectively by the AUP method; at least three generations (parents, F1 of single crosses, and F1 of double-crosses) were necessary for analyzing the ADAA model, while only two generations (parents and F1 of double-crosses) were sufficient for the reduced AD model. When epistatic effects were taken into account, a new approach for predicting the heterosis of agronomic traits of double-crosses was given on the basis of unbiased prediction of the genotypic merits of parents and their crosses. In addition, genotype x environment interaction effects and interaction heterosis due to G x E interaction were discussed briefly.

  8. Generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-05-01

    The feasibility of using a generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures has been examined. Using this method, the problem is reduced to the solution of simpler, specially averaged problems for composites with single inclusions and corresponding transition layers in the medium examined. The dimensions of the transition layers are defined by the correlation radii of the random structure of the composite, while the heterogeneous elastic properties of the transition layers account for the probabilities of variation in the size and configuration of the inclusions through averaged special indicator functions. Results are given for a numerical calculation of the averaged indicator functions and an analysis of the effect of micropores in the matrix-fiber interface region on the effective elastic properties of unidirectional fiberglass-epoxy using the generalized self-consistent method, and are compared with experimental data and reported solutions.

  9. Predicting double negativity using transmitted phase in space coiling metamaterials.

    PubMed

    Maurya, Santosh K; Pandey, Abhishek; Shukla, Shobha; Saxena, Sumit

    2018-05-01

    Metamaterials are engineered materials that offer the flexibility to manipulate the incident waves leading to exotic applications such as cloaking, extraordinary transmission, sub-wavelength imaging and negative refraction. These concepts have largely been explored in the context of electromagnetic waves. Acoustic metamaterials, similar to their optical counterparts, demonstrate anomalous effective elastic properties. Recent developments have shown that coiling up the propagation path of acoustic wave results in effective elastic response of the metamaterial beyond the natural response of its constituent materials. The effective response of metamaterials is generally evaluated using the 'S' parameter retrieval method based on amplitude of the waves. The phase of acoustic waves contains information of wave pressure and particle velocity. Here, we show using finite-element methods that phase reversal of transmitted waves may be used to predict extreme acoustic properties in space coiling metamaterials. This change is the difference in the phase of the transmitted wave with respect to the incident wave. This method is simpler when compared with the more rigorous 'S' parameter retrieval method. The inferences drawn using this method have been verified experimentally for labyrinthine metamaterials by showing negative refraction for the predicted band of frequencies.

  10. Fatigue life prediction of liquid rocket engine combustor with subscale test verification

    NASA Astrophysics Data System (ADS)

    Sung, In-Kyung

    Reusable rocket systems such as the Space Shuttle introduced a new era in propulsion system design for economic feasibility. Practical reusable systems require an order of magnitude increase in life; to achieve this, improved methods are needed to assess failure mechanisms and to predict the life cycles of rocket combustors. A general goal of the research was to demonstrate the use of a subscale rocket combustor prototype in a cost-effective test program. Life-limiting factors and metal behavior under repeated loads were surveyed and reviewed, and life prediction theories are presented with an emphasis on studies that used subscale test hardware for model validation. From this review, low cycle fatigue (LCF) and creep-fatigue interaction (ratcheting) were identified as the main life-limiting factors of the combustor. Several life prediction methods, including conventional and advanced viscoplastic models, were used to predict life cycles due to low cycle thermal stress, transient effects, and creep rupture damage; creep-fatigue interaction and cyclic hardening were also investigated. A prediction method based on 2D beam theory was modified using 3D plate deformation theory to provide an extended prediction method. For experimental validation, two small-scale annular plug nozzle thrusters were designed, built and tested. The test article was composed of a water-cooled liner, a plug annular nozzle, and a 200 psia precombustor that used decomposed hydrogen peroxide as the oxidizer and JP-8 as the fuel. The first combustor was tested cyclically at the Advanced Propellants and Combustion Laboratory at Purdue University; testing was stopped after 140 cycles because of an unpredicted failure mechanism, an increasing hot spot, at the location where failure was predicted. A second combustor was designed to avoid the previous failure; however, it was over-pressurized and deformed beyond repair during a cold-flow test. The test results are discussed and compared to the analytical and numerical predictions, although a detailed comparison could not be performed owing to the lack of test data resulting from the failure of the test article. Some theoretical and experimental aspects, such as the fin effect and rounded corners, were found to reduce the discrepancy between predictions and test results.
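
    As background for the LCF portion, a generic strain-life (Basquin plus Coffin-Manson) estimate is sketched below; it is a textbook relation, not the report's viscoplastic models, and the material constants are assumptions to be replaced with measured values.

      from scipy.optimize import brentq

      def strain_life_cycles(delta_eps, E, sf, b, ef, c):
          """Solve delta_eps/2 = (sf/E)*(2N)^b + ef*(2N)^c for cycles N.
          sf, b: Basquin (elastic) constants; ef, c: Coffin-Manson
          (plastic) constants; E: elastic modulus."""
          f = lambda n2: (sf / E) * n2**b + ef * n2**c - delta_eps / 2.0
          return brentq(f, 1.0, 1e9) / 2.0   # root is 2N; return N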

  11. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating, over the feature space, the difference between the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

  12. Numerical method for predicting flow characteristics and performance of nonaxisymmetric nozzles. Part 2: Applications

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1980-01-01

    A computer-implemented numerical method for predicting the flow in and about an isolated three-dimensional jet exhaust nozzle is summarized. The approach is based on an implicit numerical method to solve the unsteady Navier-Stokes equations in a boundary-conforming curvilinear coordinate system. Recent improvements to the original numerical algorithm are summarized. Equations are given for evaluating nozzle thrust and discharge coefficient in terms of computed flowfield data. The final formulation of the models used to simulate flow turbulence effects is presented. Results are presented from numerical experiments exploring the effect of various quantities on the rate of convergence to steady state and on the final flowfield solution. Detailed flowfield predictions for several two- and three-dimensional nozzle configurations are presented and compared with wind tunnel experimental data.
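
    The two performance metrics can be sketched as generic control-volume evaluations over exit-plane flowfield data; these are standard definitions, not necessarily the report's exact equations, and the argument layout is assumed.

      import numpy as np

      def nozzle_metrics(rho, u, p, dA, p_inf, mdot_ideal):
          """Thrust F = sum((rho*u**2 + (p - p_inf)) * dA) over exit cells;
          discharge coefficient Cd = actual mass flow / ideal mass flow."""
          mdot = float(np.sum(rho * u * dA))
          thrust = float(np.sum((rho * u**2 + (p - p_inf)) * dA))
          return thrust, mdot / mdot_ideal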

  13. Real-time implementation of model predictive control on Maricopa-Stanfield irrigation and drainage district's WM canal

    USDA-ARS?s Scientific Manuscript database

    Water resources are limited in many agricultural areas. One method to improve the effective use of water is to improve delivery service from irrigation canals. This can be done by applying automatic control methods that control the gates in an irrigation canal. The model predictive control MPC is ...

  14. Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng

    This paper proposes a new hybrid method for super-short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance; in addition, solar power dynamics are fast and essentially inertia-less. Accurate super-short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), where the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved constituent series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and most recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
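
    A compressed sketch of the hybrid pipeline is given below: DWT subbands are each fit with a linear ARMA model, and the recombined forecast's residual is then learned by a small neural network standing in for the NARX stage. The wavelet choice, model orders, and lag count are illustrative, and the fit shown is in-sample.

      import numpy as np
      import pywt
      from statsmodels.tsa.arima.model import ARIMA
      from sklearn.neural_network import MLPRegressor

      def hybrid_fit(power, wavelet="db4", level=2, p=2, q=1, lags=4):
          """Wavelet-ARMA linear stage plus neural residual correction."""
          coeffs = pywt.wavedec(power, wavelet, level=level)
          fitted = [ARIMA(c, order=(p, 0, q)).fit().predict() for c in coeffs]
          arma = pywt.waverec(fitted, wavelet)[: len(power)]
          resid = power - arma
          X = np.column_stack([resid[i:len(resid) - lags + i]
                               for i in range(lags)])
          nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
          nn.fit(X, resid[lags:])
          return arma[lags:] + nn.predict(X)      # corrected in-sample fit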

  15. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, its development has been limited in the context of linear mixed effect models and, in particular, of small area estimation, where linear mixed effect models are the backbone. In this article, we consider area-level data and fit a varying-coefficient linear mixed effect model in which the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of the random effects that can be implemented using standard software. For measuring prediction uncertainty, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite-sample simulation studies.

  16. Measurement of pattern roughness and local size variation using CD-SEM: current status

    NASA Astrophysics Data System (ADS)

    Fukuda, Hiroshi; Kawasaki, Takahiro; Kawada, Hiroki; Sakai, Kei; Kato, Takashi; Yamaguchi, Satoru; Ikota, Masami; Momonoi, Yoshinori

    2018-03-01

    Measurement of line edge roughness (LER) is discussed from four aspects: edge detection, PSD prediction, sampling strategy, and noise mitigation, and general guidelines and practical solutions for LER measurement today are introduced. Advanced edge detection algorithms such as the wave-matching method are shown to be effective for robustly detecting edges in low-SNR images, while a conventional algorithm with weak filtering is still effective in suppressing SEM noise and aliasing. Advanced PSD prediction methods such as the multi-taper method are effective in suppressing sampling noise within a line edge, although a large number of lines is still required to suppress line-to-line variation. Two types of SEM noise mitigation methods, 'apparent noise floor' subtraction and LER-noise decomposition using regression analysis, are verified to successfully mitigate SEM noise in PSD curves. These results are extended to LCDU measurement to clarify the impact of SEM noise and sampling noise on LCDU.

  17. Correlation and prediction of the transport properties of refrigerants using two modified rough hard-sphere models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teja, A.S.; King, R.K.; Sun, T.F.

    1999-01-01

    Two methods are presented for the correlation and prediction of the viscosities and thermal conductivities of refrigerants R11, R12, R22, R32, R124, R125, R134a, R141b, and R152 and their mixtures. The first (termed RHS1) is a modified rough-hard-sphere method based on the smooth hard-sphere correlations of Assael et al. The method requires two or three parameters to characterize each refrigerant but is able to correlate transport properties over wide ranges of pressure and temperature. The second method (RHS2) is also a modified rough-hard-sphere method, but is based on an effective hard-sphere diameter for Lennard-Jones (LJ) fluids. The LJ parameters and the effective hard-sphere diameter required in this method are determined from a knowledge of the density-temperature behavior of the fluid at saturation. Comparisons with the rough-hard-sphere method of Assael and co-workers (RHS3) are shown; they indicate that the RHS2 method can be used to correlate as well as predict the transport properties of refrigerants.

  18. Prediction of forces and moments for hypersonic flight vehicle control effectors

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.; Long, Lyle N.; Guilmette, Neal; Pagano, Peter

    1993-01-01

    This research project includes three distinct phases. For completeness, all three phases of the work are briefly described in this report. The goal was to develop methods of predicting flight control forces and moments for hypersonic vehicles which could be used in a preliminary design environment. The first phase included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicated the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was conducted in which a better understanding was sought for the reasons behind the successes and failures of the methods considered, particularly for the cases at hypersonic Mach numbers. This second phase involved using computational fluid dynamics methods to examine the flow fields in detail. Through these detailed predictions, the deficiencies in the simple surface inclination methods were determined. In the third phase of this work, an improvement to the surface inclination methods was developed. This used a novel method for including viscous effects by modifying the geometry to include the viscous/shock layer.

  19. Mortality prediction system for heart failure with orthogonal relief and dynamic radius means.

    PubMed

    Wang, Zhe; Yao, Lijuan; Li, Dongdong; Ruan, Tong; Liu, Min; Gao, Ju

    2018-07-01

    This paper constructs a mortality prediction system based on a real-world dataset, with the aim of predicting mortality in heart failure (HF) patients. Effective mortality prediction can improve resource allocation and clinical outcomes, avoiding inappropriate overtreatment of low-mortality patients and discharge of high-mortality patients. The system covers three mortality prediction targets: in-hospital mortality, 30-day mortality, and 1-year mortality. HF data were collected from the Shanghai Shuguang hospital: 10,203 in-patient records were extracted from encounters occurring between March 2009 and April 2016, involving 4682 patients, including 539 death cases. A feature selection method called the Orthogonal Relief (OR) algorithm is first used to reduce the dimensionality. Then, a classification algorithm named Dynamic Radius Means (DRM) is proposed to predict mortality in HF patients. The comparative experimental results demonstrate that the mortality prediction system achieves high performance on all targets with DRM. It is noteworthy that in-hospital mortality prediction achieves an AUC of 87.3% (a 35.07% improvement), and the AUCs of 30-day and 1-year mortality prediction reach 88.45% and 84.84%, respectively. Notably, the system remains effective and does not deteriorate when the dimensionality of the samples is sharply reduced. The proposed system, with its own DRM method, can predict mortality in HF patients and achieves high performance on all three mortality targets; an effective feature selection strategy further boosts the system. This system shows its importance in real-world applications, assisting clinicians in HF treatment by providing crucial decision information.

  20. Development of procedures for calculating stiffness and damping properties of elastomers. Part 3: The effects of temperature, dissipation level and geometry

    NASA Technical Reports Server (NTRS)

    Smalley, A. J.; Tessarzik, J. M.

    1975-01-01

    The effects of temperature, dissipation level and geometry on the dynamic behavior of elastomer elements were investigated. Force-displacement relationships in elastomer elements, and the effects of frequency, geometry and temperature upon these relationships, are reviewed. Based on this review, methods for reducing stiffness and damping data for shear and compression test elements to material properties (storage and loss moduli) and empirical geometric factors are developed and tested using previously generated experimental data. A prediction method that accounts for large amplitudes of deformation is developed on the assumption that their effect is to increase the temperature throughout the elastomer, thereby modifying the local material properties. Various simple methods of predicting the radial stiffness of ring cartridge elements are developed and compared. Material properties were determined from the shear specimen tests as a function of frequency and temperature. Using these material properties, numerical predictions of stiffness and damping for cartridge and compression specimens were made and compared with corresponding measurements at different temperatures, with encouraging results.

  1. Comparison of Biophysical Characteristics and Predicted Thermophysiological Responses of Three Prototype Body Armor Systems Versus Baseline U.S. Army Body Armor Systems

    DTIC Science & Technology

    2015-06-19

    Predictive modeling provides an effective and scientifically valid method of making comparisons of clothing and equipment changes prior to conducting human research. METHODS: Three different body armor (BA) plus clothing ensembles were…

  2. Transient flow thrust prediction for an ejector propulsion concept

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1989-01-01

    A method for predicting transient thrust augmenting ejector characteristics is introduced. The analysis blends classic self-similar turbulent jet descriptions with a mixing region control volume analysis to predict transient effects in a new way. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  3. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from many years of PV power generation data and improve the accuracy of the PV power generation forecasting model, this paper studies the application of clustering analysis in this field and establishes forecasting models based on neural networks. Using three different weather types (sunny, cloudy and rainy days), the research screens samples of historical data by the clustering analysis method. After screening, BP neural network prediction models are established using the screened data as training data. The six types of photovoltaic power generation prediction models before and after data screening are then compared. Results show that the prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
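
    As a minimal sketch of the screen-then-train idea, assuming scikit-learn: cluster the historical days into weather-like groups, then fit one neural-network regressor per group. All data, feature names and the cluster count below are hypothetical, not the paper's.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        X = rng.random((365, 3))  # daily features, e.g. irradiance, temperature, humidity
        y = 5.0 * X[:, 0] + rng.normal(scale=0.1, size=365)  # daily PV output (toy)

        # screening: cluster the historical days into three weather-like groups
        labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

        # one BP-style network per group, trained only on its screened samples
        models = {c: MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=1).fit(X[labels == c], y[labels == c])
                  for c in range(3)}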

  4. Groundwater depth prediction in a shallow aquifer in north China by a quantile regression model

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Wei, Wan; Zhao, Yong; Qiao, Jiale

    2017-01-01

    There is a close relationship between the groundwater level in a shallow aquifer and the surface ecological environment; hence, it is important to accurately simulate and predict the groundwater level in eco-environmental construction projects. The multiple linear regression (MLR) model is one of the most useful methods to predict groundwater level (depth); however, the values predicted by this model only reflect the mean distribution of the observations and cannot effectively fit extreme-distribution data (outliers). The study reported here builds a prediction model of groundwater-depth dynamics in a shallow aquifer using the quantile regression (QR) method on the basis of observed groundwater depth and related factors. The proposed approach was applied to five sites in Tianjin city, north China, and the groundwater depth was calculated at different quantiles, from which the optimal quantile was screened out according to the box plot method and compared to the values predicted by the MLR model. The results showed that the related factors at the five sites did not follow the standard normal distribution and that there were outliers in the precipitation and last-month (initial state) groundwater-depth factors, so the basic assumptions of the MLR model could not be satisfied, thereby causing errors. These conditions had no effect on the QR model, however, as it could more effectively describe the distribution of the original data and fitted the outliers with higher precision.
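
    A minimal sketch of the contrast the abstract draws, assuming statsmodels and synthetic data with heavy-tailed noise: an OLS (MLR-style) fit alongside quantile fits at several quantiles, the latter being robust to the outliers. The predictors and their coefficients are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 120
        precip = rng.gamma(2.0, 20.0, n)        # monthly precipitation (mm)
        prev = 10 + rng.normal(0, 1, n)         # last-month groundwater depth (m)
        depth = 0.9 * prev - 0.01 * precip + rng.standard_t(3, n)  # heavy-tailed noise

        X = sm.add_constant(np.column_stack([precip, prev]))
        print(sm.OLS(depth, X).fit().params.round(3))   # MLR baseline (mean fit)
        for q in (0.25, 0.5, 0.75):                     # quantile fits resist outliers
            print(q, sm.QuantReg(depth, X).fit(q=q).params.round(3))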

  5. An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet

    NASA Technical Reports Server (NTRS)

    Gordon, R. A.

    1980-01-01

    Brouwer's and Brouwer-Lyddane's use of the Von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A simple pseudo-phenomenological algorithm is introduced which synthesizes the modeling of drag effects accurately and economically, and which lends itself to efficient computer implementation. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for microcomputer ground-based or onboard predicted orbital representation. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities adapt favorably to, and are comparable with, results obtained using complex definitive Cowell method solutions on satellites experiencing significant drag effects.

  6. Properties of a Formal Method for Prediction of Emergent Behaviors in Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Autonomous intelligent swarms of satellites are being proposed for NASA missions that have complex behaviors and interactions. The emergent properties of swarms make these missions powerful but at the same time more difficult to design and to assure that proper behaviors will emerge. This paper gives the results of research into formal methods techniques for verification and validation of NASA swarm-based missions. Multiple formal methods were evaluated to determine their effectiveness in modeling and assuring the behavior of swarms of spacecraft, with the NASA ANTS mission used as an example of swarm intelligence to which to apply them. The paper presents the evaluation of these formal methods, gives partial specifications of the ANTS mission using four selected methods, and identifies the properties a formal method needs for effective specification and prediction of emergent behavior in swarm-based systems.

  7. Influence of Food on Paediatric Gastrointestinal Drug Absorption Following Oral Administration: A Review

    PubMed Central

    Batchelor, Hannah K.

    2015-01-01

    The objective of this paper was to review existing information regarding food effects on drug absorption within paediatric populations. Mechanisms that underpin food–drug interactions were examined to consider potential differences between adult and paediatric populations and to provide insights into how these may alter the pharmacokinetic profile in a child. Relevant literature was searched to retrieve information on food–drug interaction studies undertaken (i) on paediatric oral drug formulations and (ii) within paediatric populations. The applicability of existing methodology to predict food effects in adult populations was evaluated with respect to paediatric populations where clinical data were available. Several differences in physiology, anatomy and the composition of food consumed within a paediatric population are likely to lead to food–drug interactions that cannot be predicted based on adult studies. Existing methods to predict food effects cannot be directly extrapolated to allow predictions within paediatric populations. Development of systematic methods and guidelines is needed to address the general lack of information on examining food–drug interactions within paediatric populations. PMID:27417362

  8. Perinatal Health Belief Scales: A Cost Effective Technique for Predicting Prenatal Appointment Keeping Rates amongst Pregnant Teenagers.

    ERIC Educational Resources Information Center

    Wells, Robert D.; And Others

    Prenatal appointment keeping is an important predictor of birth outcomes, yet many pregnant adolescents miss an excessive number of appointments. Since effective strategies for increasing appointment keeping require costly staff time, methods to predict relative risk for noncompliance with appointments might help delineate a circumscribed…

  9. Evaluating the Effect of Educational Media Exposure on Aggression in Early Childhood

    ERIC Educational Resources Information Center

    Ostrov, Jamie M.; Gentile, Douglas A.; Mullins, Adam D.

    2013-01-01

    Preschool-aged children (M = 42.44 months-old, SD = 8.02) participated in a short-term longitudinal study investigating the effect of educational media exposure on social development (i.e., aggression and prosocial behavior) using multiple informants and methods. As predicted, educational media exposure significantly predicted increases in both…

  10. Mixed Model Methods for Genomic Prediction and Variance Component Estimation of Additive and Dominance Effects Using SNP Markers

    PubMed Central

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162

  11. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.
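
    The paper's CE/QM formulations and dominance terms are beyond a short sketch, but the additive core (VanRaden's genomic relationship matrix plus BLUP of genetic values under assumed variance components) fits in a few lines of numpy on simulated genotypes. The variance ratio and all data below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m = 200, 1000
        M = rng.binomial(2, 0.3, size=(n, m)).astype(float)   # 0/1/2 genotype codes
        p = M.mean(axis=0) / 2.0                              # allele frequencies
        Z = M - 2.0 * p                                       # centered genotypes
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))           # VanRaden (2008) G matrix

        beta = rng.normal(0, 0.05, m)                         # true marker effects
        u = Z @ beta                                          # true breeding values
        y = u + rng.normal(0, u.std(), n)                     # phenotypes, h2 ~ 0.5

        lam = 1.0                                             # assumed sigma_e^2 / sigma_u^2
        u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())  # additive GBLUP
        print(np.corrcoef(u, u_hat)[0, 1])                    # prediction accuracy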

  12. Predicting the Thermal Conductivity of AlSi/Polyester Abradable Coatings: Effects of the Numerical Method

    NASA Astrophysics Data System (ADS)

    Bolot, Rodolphe; Seichepine, Jean-Louis; Qiao, Jiang Hao; Coddet, Christian

    2011-01-01

    The final target of this study is to achieve a better understanding of the behavior of thermally sprayed abradable seals such as AlSi/polyester composites. These coatings are used as seals between the static and the rotating parts in aero-engines. The machinability of the composite coatings during the friction of the blades depends on their effective mechanical and thermal properties. In order to predict these properties from micrographs, numerical studies were performed with different software packages, such as OOF developed by NIST and TS2C developed at the UTBM. In 2008, differences were reported concerning predictions of effective thermal conductivities obtained with the two codes. In this article, particular attention was paid to the mathematical formulation of the problem. In particular, results obtained with a finite difference method using either a cell-centered approach or a nodal formulation explain the previously noted discrepancies. A comparison of the predictions of the computed effective thermal conductivities is thus proposed. This study is part of the NEWAC project, funded by the European Commission within the 6th RTD Framework Programme (FP6).

  13. Modified Displacement Transfer Functions for Deformed Shape Predictions of Slender Curved Structures with Varying Curvatures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran

    2014-01-01

    To eliminate the need to use finite-element modeling for structure shape predictions, a new method was invented that uses Displacement Transfer Functions to transform measured surface strains into deflections for mapping out overall structural deformed shapes. The Displacement Transfer Functions are expressed in terms of rectilinearly distributed surface strains and contain no material properties. This report applies the patented method to the shape predictions of non-symmetrically loaded slender curved structures with different curvatures up to a full circle. Because measured surface strains were not available, finite-element analysis was used to analytically generate the surface strains. Previously formulated straight-beam Displacement Transfer Functions were modified by introducing curvature-effect correction terms. Through single-point or dual-point collocations with finite-element-generated deflection curves, functional forms of the curvature-effect correction terms were empirically established. The resulting modified Displacement Transfer Functions can then provide quite accurate shape predictions. Also, the uniform straight-beam Displacement Transfer Function was applied to the shape predictions of a section-cut of a generic capsule (GC) outer curved sandwich wall. The resulting GC shape predictions are quite accurate in those regions where the radius of curvature does not change sharply.

  14. The DoE method as an efficient tool for modeling the behavior of monocrystalline Si-PV module

    NASA Astrophysics Data System (ADS)

    Kessaissia, Fatma Zohra; Zegaoui, Abdallah; Boutoubat, Mohamed; Allouache, Hadj; Aillerie, Michel; Charles, Jean-Pierre

    2018-05-01

    The objective of this paper is to apply the Design of Experiments (DoE) method to study and obtain a predictive model of any marketed monocrystalline photovoltaic (mc-PV) module. This technique yields a mathematical model that represents the predicted responses as a function of the input factors and experimental data, so the DoE model for characterizing and modeling mc-PV module behavior can be obtained by performing just a set of experimental trials. The DoE model of the mc-PV panel evaluates the predicted maximum power as a function of irradiation and temperature in a bounded domain of study for the inputs. For the mc-PV panel, predictive models for both one-level and two-level designs were developed, taking into account both the main effects and the interaction effects of the considered factors. The DoE method is then implemented in a code developed under Matlab software, which allows us to simulate, characterize, and validate the predictive model of the mc-PV panel. The calculated results were compared to the experimental data, errors were estimated, and the predictive models were validated by the response surface. Finally, we conclude that the predictive models reproduce the experimental trials with good accuracy.
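
    For a two-level, two-factor design, the predictive model with main and interaction effects reduces to a four-coefficient least-squares fit. The coded trial data below are hypothetical (the paper's measurements are not reproduced), and the sketch is in Python rather than the paper's Matlab.

        import numpy as np

        # coded factor levels -1/+1 for the four trials of a 2^2 design
        x1 = np.array([-1.0, 1.0, -1.0, 1.0])    # irradiance
        x2 = np.array([-1.0, -1.0, 1.0, 1.0])    # temperature
        P = np.array([40.0, 90.0, 35.0, 80.0])   # measured max power (W), hypothetical

        X = np.column_stack([np.ones(4), x1, x2, x1 * x2])  # [b0, b1, b2, b12]
        b, *_ = np.linalg.lstsq(X, P, rcond=None)
        print(dict(zip(["b0", "b1", "b2", "b12"], b.round(2))))

        def predict(u, v):
            """Predicted max power at coded irradiance u and temperature v."""
            return b[0] + b[1] * u + b[2] * v + b[3] * u * v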

  15. Improved Method for Prediction of Attainable Wing Leading-Edge Thrust

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; McElroy, Marcus O.; Lessard, Wendy B.; McCullers, L. Arnold

    1996-01-01

    Prediction of the loss of wing leading-edge thrust and the accompanying increase in drag due to lift, when flow is not completely attached, presents a difficult but commonly encountered problem. A method (called the previous method) for the prediction of attainable leading-edge thrust and the resultant effect on airplane aerodynamic performance has been in use for more than a decade. Recently, the method has been revised to enhance its applicability to current airplane design and evaluation problems. The improved method (called the present method) provides for a greater range of airfoil shapes from very sharp to very blunt leading edges. It is also based on a wider range of Reynolds numbers than was available for the previous method. The present method, when employed in computer codes for aerodynamic analysis, generally results in improved correlation with experimental wing-body axial-force data and provides reasonable estimates of the measured drag.

  16. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    PubMed

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, which creates a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as ANN is a valuable method for this purpose.

  17. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network

    PubMed Central

    Ghaderi, Forouzan; Ghaderi, Amir H.; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, which creates a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as ANN is a valuable method for this purpose. PMID:29188217
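
    A minimal scikit-learn sketch of the ANN side of the comparison, trained on a synthetic conductivity surface. The refrigerant data are not reproduced, and the (32, 32) architecture is an assumption standing in for the paper's observation that more hidden units improved accuracy.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        rho = rng.uniform(0.0, 12.0, 2000)              # reduced density
        T = rng.uniform(200.0, 400.0, 2000)             # temperature (K)
        k = 0.01 + 0.002 * T / 300 + 0.005 * rho**1.5   # toy conductivity surface
        X, y = np.column_stack([rho, T]), k + rng.normal(0, 1e-4, 2000)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32),
                                           max_iter=3000, random_state=4)).fit(X, y)
        print(model.predict([[8.0, 300.0]]))            # query in the high-density range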

  18. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA), the aim being to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
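
    The XEF and JCE criteria are the authors' own; as a stand-in that captures the same idea (score candidate lags by a nonlinearity-aware dependence measure rather than by linear correlation), here is a mutual-information lag ranking with scikit-learn on synthetic data where the target depends nonlinearly on lag 7.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(5)
        x = rng.normal(size=1200)
        y = np.tanh(np.roll(x, 7)) + 0.1 * rng.normal(size=1200)  # nonlinear, lag 7

        max_lag = 15
        lags = np.arange(1, max_lag + 1)
        # build a matrix of lagged copies; drop the first rows where roll wraps around
        X_lagged = np.column_stack([np.roll(x, L) for L in lags])[max_lag:]
        mi = mutual_info_regression(X_lagged, y[max_lag:], random_state=5)
        print(lags[np.argsort(mi)[::-1][:3]])   # top-ranked lags (lag 7 should lead)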

  19. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in the parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values and leads to improved forecast skill. Second, results with an ensemble prediction system emulator based on the ECHAM5 atmospheric GCM show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  20. Impact of fitting dominance and additive effects on accuracy of genomic prediction of breeding values in layers.

    PubMed

    Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M

    2016-10-01

    Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.

  1. A cost effective FBG-based security fence with fire alarm function

    NASA Astrophysics Data System (ADS)

    Wu, H. J.; Li, S. S.; Lu, X. L.; Wu, Y.; Rao, Y. J.

    2012-02-01

    A Fiber Bragg Grating (FBG) is sensitive to temperature as well as to strain, a cross-sensitivity that is usually avoided in most measurement applications. In this paper, however, the strain/temperature dual sensitivity is exploited to construct a special security fence with a second function of fire threat prediction. In an FBG-based fiber fence configuration, characteristic analysis and an identification method allow the system to intelligently distinguish the different effects of personal threats and fires from the different trends of their wavelength drifts. Thus, without any additional temperature sensing fittings or other integrated fire alarm systems, a normal perimeter security system can possess a second function of fire prediction: it can not only monitor intrusions induced by personal actions but also predict fire threats in advance. The experimental results show the effectiveness of the method.
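
    A toy sketch of the trend-discrimination idea: intrusion strain appears as a fast, jittery Bragg-wavelength shift, while fire heating appears as a slow, steady drift. The window length and both thresholds are invented for illustration and are not the paper's identification method.

        import numpy as np

        def classify_event(wavelength_pm, fs=100.0):
            """Classify a window of Bragg-wavelength samples (picometres)."""
            duration = len(wavelength_pm) / fs
            drift_rate = (wavelength_pm[-1] - wavelength_pm[0]) / duration
            fluctuation = np.std(np.diff(wavelength_pm))
            if fluctuation > 0.5:     # fast, jittery shift -> mechanical strain
                return "intrusion"
            if drift_rate > 5.0:      # slow steady rise -> heating
                return "fire threat"
            return "quiet"

        t = np.linspace(0, 5, 500)
        print(classify_event(3.0 * np.sin(2 * np.pi * 8 * t)))  # oscillatory -> intrusion
        print(classify_event(10.0 * t))                         # ramp -> fire threat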

  2. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    USGS Publications Warehouse

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
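
    As a sketch of the NN-SMS12L idea under stated assumptions (a synthetic donor record, and invented target-site statistics standing in for the 24 regional regressions), using pandas: standardize log streamflow by the donor's 12 monthly means and standard deviations, then invert with the ungaged site's statistics.

        import numpy as np
        import pandas as pd

        idx = pd.date_range("2000-01-01", periods=3650, freq="D")
        donor = pd.Series(np.exp(np.random.default_rng(6).normal(2.0, 0.5, 3650)),
                          index=idx)                 # hypothetical gaged streamflow

        logq = np.log(donor)
        month = logq.index.month
        mu = logq.groupby(month).transform("mean")   # 12 monthly means
        sd = logq.groupby(month).transform("std")    # 12 monthly standard deviations
        z = (logq - mu) / sd                         # standardized donor series

        # at the ungaged site, mu_u/sd_u would come from the regional regressions
        mu_u, sd_u = mu + 0.3, sd * 1.1              # hypothetical target statistics
        est = np.exp(z * sd_u + mu_u)                # predicted daily streamflow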

  3. A Noise Level Prediction Method Based on Electro-Mechanical Frequency Response Function for Capacitors

    PubMed Central

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate a great deal of audible noise, which can exceed 100 dB. Existing noise level prediction methods are not satisfactory. In this paper, a new noise level prediction method is proposed based on a frequency response function that considers both the electrical and the mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations is also measured in the laboratory, and the results are compared with the prediction. The comparison confirms that the noise prediction method is effective. PMID:24349105
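
    Reading the defining relation literally, a toy numpy sketch: the vibration spectrum is the product of the EMFRF and the spectrum of the squared capacitor voltage (squaring a 50 Hz voltage yields a 100 Hz excitation line). The single-resonance EMFRF below is invented for illustration; a real EMFRF comes from the impulse current experiment.

        import numpy as np

        fs, n = 10000, 10000
        t = np.arange(n) / fs
        u = 1.0 * np.sin(2 * np.pi * 50 * t)          # 50 Hz capacitor voltage
        U2 = np.fft.rfft(u**2)                        # squared voltage -> 100 Hz line

        f = np.fft.rfftfreq(n, 1 / fs)
        f0, zeta = 400.0, 0.05                        # toy mechanical resonance
        emfrf = 1.0 / (1 - (f / f0)**2 + 2j * zeta * f / f0)

        V = emfrf * U2                                # tank vibration spectrum
        vibration = np.fft.irfft(V, n)                # time-domain response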

  4. Motivation Classification and Grade Prediction for MOOCs Learners

    PubMed Central

    Xu, Bin; Yang, Dan

    2016-01-01

    While MOOCs offer educational data on a new scale, many educators see great potential in this big data, which includes detailed activity records of every learner. Learner behavior, such as whether a learner will drop out of the course, can be predicted. Providing an effective, economical, and scalable method to detect cheating on tests, such as the use of a surrogate exam-taker, is a challenging problem. In this paper, we present a grade prediction method that uses student activity features to predict whether a learner may obtain a certification if he/she takes a test. The method consists of two-step classification: motivation classification (MC) and grade classification (GC). The MC step divides all learners into three groups: certification earning, video watching, and course sampling. The GC step then predicts whether a certification-earning learner will or will not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and makes it possible to find a surrogate exam-taker. PMID:26884747

  5. Motivation Classification and Grade Prediction for MOOCs Learners.

    PubMed

    Xu, Bin; Yang, Dan

    2016-01-01

    While MOOCs offer educational data on a new scale, many educators see great potential in this big data, which includes detailed activity records of every learner. Learner behavior, such as whether a learner will drop out of the course, can be predicted. Providing an effective, economical, and scalable method to detect cheating on tests, such as the use of a surrogate exam-taker, is a challenging problem. In this paper, we present a grade prediction method that uses student activity features to predict whether a learner may obtain a certification if he/she takes a test. The method consists of two-step classification: motivation classification (MC) and grade classification (GC). The MC step divides all learners into three groups: certification earning, video watching, and course sampling. The GC step then predicts whether a certification-earning learner will or will not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and makes it possible to find a surrogate exam-taker.

  6. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    PubMed

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate a great deal of audible noise, which can exceed 100 dB. Existing noise level prediction methods are not satisfactory. In this paper, a new noise level prediction method is proposed based on a frequency response function that considers both the electrical and the mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations is also measured in the laboratory, and the results are compared with the prediction. The comparison confirms that the noise prediction method is effective.

  7. Bayesian model aggregation for ensemble-based estimates of protein pKa values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke J.; Hogan, Emilie A.; Pulsipher, Trenton C.

    2014-03-01

    This paper investigates an ensemble-based technique called Bayesian Model Averaging (BMA) to improve the performance of protein amino acid pKa predictions. Structure-based pKa calculations play an important role in the mechanistic interpretation of protein structure and are also used to determine a wide range of protein properties. A diverse set of methods currently exists for pKa prediction, ranging from empirical statistical models to ab initio quantum mechanical approaches. However, each of these methods is based on a set of assumptions that have inherent bias and sensitivities that can affect a model's accuracy and generalizability for pKa prediction in complicated biomolecular systems. We use BMA to combine eleven diverse prediction methods that each estimate pKa values of amino acids in staphylococcal nuclease. These methods are based on work conducted for the pKa Cooperative, and the pKa measurements are based on experimental work conducted by the García-Moreno lab. Our study demonstrates that the aggregated estimate obtained from BMA outperforms all individual prediction methods in our cross-validation study, with improvements of 40-70% over other method classes. This work illustrates a new possible mechanism for improving the accuracy of pKa prediction and lays the foundation for future work on aggregate models that balance computational cost with prediction accuracy.
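
    The paper's full BMA machinery is not reproduced here, but the core aggregation step can be sketched in a few lines of numpy: weight each predictor by its likelihood on calibration data (a Gaussian error model and the error scale are assumptions), then average new predictions with those weights. All numbers below are invented.

        import numpy as np

        measured = np.array([4.5, 6.1, 7.3, 5.0, 6.8])     # calibration pKa values
        preds = np.array([                                 # rows: three hypothetical methods
            [4.7, 6.0, 7.0, 5.2, 6.5],
            [4.0, 5.5, 8.1, 4.4, 7.5],
            [4.6, 6.3, 7.4, 4.9, 6.9],
        ])

        sigma = 0.5                                        # assumed error scale
        loglik = -0.5 * np.sum((preds - measured)**2, axis=1) / sigma**2
        w = np.exp(loglik - loglik.max())
        w /= w.sum()                                       # posterior model weights
        print(w.round(3))

        new_preds = np.array([5.1, 4.2, 5.0])              # each method on a new residue
        print(float(w @ new_preds))                        # BMA aggregate estimate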

  8. A research program to reduce interior noise in general aviation airplanes. [test methods and results

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Muirhead, V. U.; Smith, H. W.; Peschier, T. D.; Durenberger, D.; Vandam, K.; Shu, T. C.

    1977-01-01

    Analytical and semi-empirical methods for determining the transmission of sound through isolated panels and predicting panel transmission loss are described. Test results presented include the influence of plate stiffness and mass and the effects of pressurization and vibration damping materials on sound transmission characteristics. Measured and predicted results are presented in tables and graphs.

  9. Automatic Keyword Identification by Artificial Neural Networks Compared to Manual Identification by Users of Filtering Systems.

    ERIC Educational Resources Information Center

    Boger, Zvi; Kuflik, Tsvi; Shoval, Peretz; Shapira, Bracha

    2001-01-01

    Discussion of information filtering (IF) and information retrieval focuses on the use of an artificial neural network (ANN) as an alternative method for both IF and term selection and compares its effectiveness to that of traditional methods. Results show that the ANN relevance prediction outperforms the prediction of an IF system. (Author/LRW)

  10. Ultra-High Bypass Ratio Jet Noise

    NASA Technical Reports Server (NTRS)

    Low, John K. C.

    1994-01-01

    The jet noise from a 1/15 scale model of a Pratt and Whitney Advanced Ducted Propulsor (ADP) was measured in the United Technology Research Center anechoic research tunnel (ART) under a range of operating conditions. Conditions were chosen to match engine operating conditions. Data were obtained at static conditions and at wind tunnel Mach numbers of 0.2, 0.27, and 0.35 to simulate inflight effects on jet noise. Due to a temperature dependence of the secondary nozzle area, the model nozzle secondary to primary area ratio varied from 7.12 at 100 percent thrust to 7.39 at 30 percent thrust. The bypass ratio varied from 10.2 to 11.8 respectively. Comparison of the data with predictions using the current Society of Automotive Engineers (SAE) Jet Noise Prediction Method showed that the current prediction method overpredicted the ADP jet noise by 6 decibels. The data suggest that a simple method of subtracting 6 decibels from the SAE Coaxial Jet Noise Prediction for the merged and secondary flow source components would result in good agreement between predicted and measured levels. The simulated jet noise flight effects with wind tunnel Mach numbers up to 0.35 produced jet noise inflight noise reductions up to 12 decibels. The reductions in jet noise levels were across the entire jet noise spectra, suggesting that the inflight effects affected all source noise components.

  11. Joint L2,1 Norm and Fisher Discrimination Constrained Feature Selection for Rational Synthesis of Microporous Aluminophosphates.

    PubMed

    Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong

    2017-04-01

    Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method with joint l2,1-norm and Fisher discrimination constraints (JNFDC). In order to obtain a more effective feature subset, the proposed method proceeds in two steps. The first step ranks the features according to sparse and discriminative constraints. The second step establishes a predictive model with the ranked features and selects the most significant features according to their contribution to improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work to employ sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that the proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC obtains better predictive performance than some other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Epidemiologic research using probabilistic outcome definitions.

    PubMed

    Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S

    2015-01-01

    Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, whether using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and when it is not. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. This method has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios and is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
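
    A minimal simulation sketch of the POD idea, assuming scikit-learn: fit a logistic model of the true outcome on several imperfect algorithm flags using a gold-standard validation subset, multiply-impute outcomes from the fitted probabilities, and pool the risk-ratio estimates on the log scale. All data, flag error rates and the imputation count are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        n = 5000
        exposed = rng.binomial(1, 0.5, n)
        true_y = rng.binomial(1, np.where(exposed, 0.10, 0.05))   # true RR = 2
        flag1 = np.where(true_y, rng.binomial(1, 0.8, n), rng.binomial(1, 0.1, n))
        flag2 = np.where(true_y, rng.binomial(1, 0.7, n), rng.binomial(1, 0.05, n))
        X = np.column_stack([flag1, flag2])

        val = rng.random(n) < 0.2                                 # gold-standard subset
        prob = LogisticRegression().fit(X[val], true_y[val]).predict_proba(X)[:, 1]

        log_rrs = []
        for _ in range(20):                                       # m = 20 imputations
            y_imp = rng.binomial(1, prob)
            rr = y_imp[exposed == 1].mean() / y_imp[exposed == 0].mean()
            log_rrs.append(np.log(rr))
        print(np.exp(np.mean(log_rrs)))                           # pooled RR estimate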

  13. BiodMHC: an online server for the prediction of MHC class II-peptide binding affinity.

    PubMed

    Wang, Lian; Pan, Danling; Hu, Xihao; Xiao, Jinyu; Gao, Yangyang; Zhang, Huifang; Zhang, Yan; Liu, Juan; Zhu, Shanfeng

    2009-05-01

    Effective identification of major histocompatibility complex (MHC) molecules restricted peptides is a critical step in discovering immune epitopes. Although many online servers have been built to predict class II MHC-peptide binding affinity, they have been trained on different datasets, and thus fail in providing a unified comparison of various methods. In this paper, we present our implementation of seven popular predictive methods, namely SMM-align, ARB, SVR-pairwise, Gibbs sampler, ProPred, LP-top2, and MHCPred, on a single web server named BiodMHC (http://biod.whu.edu.cn/BiodMHC/index.html, the software is available upon request). Using a standard measure of AUC (Area Under the receiver operating characteristic Curves), we compare these methods by means of not only cross validation but also prediction on independent test datasets. We find that SMM-align, ProPred, SVR-pairwise, ARB, and Gibbs sampler are the five best-performing methods. For the binding affinity prediction of class II MHC-peptide, BiodMHC provides a convenient online platform for researchers to obtain binding information simultaneously using various methods.

  14. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for detecting different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.

  15. Stochastic model stationarization by eliminating the periodic term and its effect on time series prediction

    NASA Astrophysics Data System (ADS)

    Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan

    2017-04-01

    Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study: seasonal differencing, seasonal standardization and spectral analysis, each used to eliminate the periodic effect on time series stationarity. First, six time series, including 4 streamflow series and 2 water temperature series, are stationarized. The stochastic term of these series is subsequently modeled with ARIMA. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature, although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic-effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than the monthly streamflow: the ratio of the average stochastic term to the amplitude of the periodic term, obtained for monthly streamflow and monthly water temperature, was 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively.
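
    For a monthly series, two of the three stationarization options reduce to a few lines of pandas; the synthetic series below is only for illustration, and the spectral-analysis option is omitted from this sketch.

        import numpy as np
        import pandas as pd

        idx = pd.date_range("1990-01-01", periods=360, freq="MS")
        rng = np.random.default_rng(8)
        flow = 50 + 30 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 5, 360)
        s = pd.Series(flow, index=idx)

        diffed = s.diff(12).dropna()                  # seasonal differencing (lag 12)
        mu = s.groupby(s.index.month).transform("mean")
        sd = s.groupby(s.index.month).transform("std")
        standardized = (s - mu) / sd                  # seasonal standardization
        # either series can now be fit with a non-seasonal ARIMA model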

  16. Including non-additive genetic effects in Bayesian methods for the prediction of genetic values based on genome-wide markers

    PubMed Central

    2011-01-01

    Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519

  17. Free variable selection QSPR study to predict 19F chemical shifts of some fluorinated organic compounds using Random Forest and RBF-PLS methods

    NASA Astrophysics Data System (ADS)

    Goudarzi, Nasser

    2016-04-01

    In this work, two new and powerful chemometrics methods are applied to the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct the models that predict the 19F chemical shifts. No separate variable selection method was used in this study, since the RF method can serve as both a variable selection and a modeling technique. Effects of the important parameters affecting the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results obtained reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
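
    A minimal scikit-learn sketch of the RF side of the study, with random stand-ins for the descriptors and shifts; the paper's tuning parameters nt and m map onto n_estimators and max_features, and the built-in importances play the variable-selection role the abstract describes.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(9)
        X = rng.normal(size=(150, 40))                        # molecular descriptors
        shift = 20 * X[:, 3] - 10 * X[:, 7] + rng.normal(0, 1, 150)  # toy 19F shifts (ppm)

        rf = RandomForestRegressor(n_estimators=500,          # nt
                                   max_features=10,           # m, variables per split
                                   oob_score=True, random_state=9).fit(X, shift)
        print(rf.oob_score_)                                  # out-of-bag R^2
        print(np.argsort(rf.feature_importances_)[::-1][:5])  # most informative descriptors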

  18. Epileptic Seizures Prediction Using Machine Learning Methods

    PubMed Central

    Usman, Syed Muhammad

    2017-01-01

    Epileptic seizures occur due to disorders in brain functionality which can affect a patient's health. Prediction of epileptic seizures before onset is quite useful for preventing the seizure by medication. Machine learning techniques and computational methods are used for predicting epileptic seizures from electroencephalogram (EEG) signals. However, preprocessing of EEG signals for noise removal and feature extraction are two major issues that adversely affect both anticipation time and the true positive prediction rate. Therefore, we propose a model that provides reliable methods for both preprocessing and feature extraction. Our model predicts epileptic seizures sufficiently ahead of onset and provides a better true positive rate. We have applied empirical mode decomposition (EMD) for preprocessing and have extracted time- and frequency-domain features for training a prediction model. The proposed model detects the start of the preictal state, the state that begins a few minutes before seizure onset, with a higher true positive rate (92.23%) than traditional methods, a maximum anticipation time of 33 minutes, and an average prediction time of 23.6 minutes on the scalp EEG CHB-MIT dataset of 22 subjects. PMID:29410700
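
    A sketch of the preprocessing and feature-extraction stages, assuming the third-party PyEMD package (distributed as EMD-signal); the synthetic signal and the variance/dominant-frequency features are illustrative choices, not the paper's exact pipeline.

        import numpy as np
        from PyEMD import EMD

        fs = 256
        t = np.arange(0, 4, 1 / fs)                   # a 4 s EEG window
        eeg = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
               + 0.2 * np.random.default_rng(10).normal(size=t.size))

        imfs = EMD().emd(eeg)                         # intrinsic mode functions
        features = []
        for imf in imfs:
            spectrum = np.abs(np.fft.rfft(imf))
            peak_freq = np.fft.rfftfreq(imf.size, 1 / fs)[np.argmax(spectrum)]
            features += [imf.var(), peak_freq]        # variance + dominant frequency
        # 'features' would feed a classifier trained on preictal vs interictal windows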

  19. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  20. Research of Water Level Prediction for a Continuous Flood due to Typhoons Based on a Machine Learning Method

    NASA Astrophysics Data System (ADS)

    Nakatsugawa, M.; Kobayashi, Y.; Okazaki, R.; Taniguchi, Y.

    2017-12-01

    This research aims to improve the accuracy of water level prediction calculations for more effective river management. In August 2016, Hokkaido was visited by four typhoons whose heavy rainfall caused severe flooding. In the Tokoro river basin of Eastern Hokkaido, the water level (WL) at the Kamikawazoe gauging station, in the lower reaches, exceeded the design high-water level, and the water rose to the highest level on record. To predict such flood conditions and mitigate disaster damage, it is necessary to improve prediction accuracy as well as to prolong the lead time (LT) required for disaster mitigation measures such as flood-fighting activities and evacuation by residents; the river water level around the peak stage must be predicted earlier and more accurately. Previous research on WL prediction proposed a method in which the WL at the lower reaches is estimated from its correlation with the WL at the upper reaches (hereinafter: "the water level correlation method"). Additionally, a runoff model-based method has generally been used in which the discharge is estimated by feeding rainfall prediction data to a runoff model, such as a storage function model, and the WL is then estimated from that discharge using a WL-discharge rating curve (H-Q curve). In this research, an attempt was made to predict WL by applying the Random Forest (RF) method, a machine learning method that can estimate the contribution of explanatory variables. Furthermore, from a practical point of view, we investigated WL prediction based on a multiple correlation (MC) method using the explanatory variables with high contributions in the RF method, and we examined the proper selection of explanatory variables and the extension of the LT. The following results were found: 1) based on the RF method tuned by learning from previous floods, the WL for the abnormal flood of August 2016 was properly predicted with a lead time of 6 h; 2) based on the contributions of the explanatory variables, factors were selected for the MC method, and plausible prediction results were obtained.

  1. Genomic prediction using an iterative conditional expectation algorithm for a fast BayesC-like model.

    PubMed

    Dong, Linsong; Wang, Zhiyong

    2018-06-11

Genomic prediction is feasible for estimating genomic breeding values because of dense genome-wide markers and credible statistical methods, such as Genomic Best Linear Unbiased Prediction (GBLUP) and various Bayesian methods. Compared with GBLUP, Bayesian methods propose more flexible assumptions for the distributions of SNP effects. However, most Bayesian methods rely on Markov chain Monte Carlo (MCMC) algorithms, which poses computational efficiency challenges. Hence, some fast Bayesian approaches, such as fast BayesB (fBayesB), were proposed to speed up the calculation. This study proposed another fast Bayesian method, termed fast BayesC (fBayesC). The prior of fBayesC assumes that each SNP has a non-zero effect with probability γ, drawn from a normal density with a common variance. Simulated data from the QTLMAS XII workshop and actual data on large yellow croaker were used to compare the predictive results of fBayesB, fBayesC and (MCMC-based) BayesC. The results showed that when γ was set to a small value, such as 0.01 in the simulated data or 0.001 in the actual data, fBayesB and fBayesC yielded lower prediction accuracies (abilities) than BayesC. In the actual data, fBayesC yielded predictive abilities very similar to those of BayesC when γ ≥ 0.01. When γ = 0.01, fBayesB also yielded results similar to those of fBayesC and BayesC. However, fBayesB could not yield an explicit result when γ ≥ 0.1, whereas no such problem was observed for fBayesC. Moreover, the computational speed of fBayesC was significantly faster than that of BayesC, making fBayesC a promising method for genomic prediction.
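
    In the notation of the abstract, the stated prior on each SNP effect $\beta_j$ is a spike-and-slab mixture (a standard way of writing the assumption above; $\sigma^2$ is the common effect variance and $\delta_0$ a point mass at zero):

    $$\beta_j \sim \gamma\,\mathcal{N}(0,\sigma^2) + (1-\gamma)\,\delta_0,$$

    so a SNP carries a non-zero effect with probability $\gamma$; the fast algorithm avoids MCMC sampling over this mixture.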

  2. Predicting Welding Distortion in a Panel Structure with Longitudinal Stiffeners Using Inherent Deformations Obtained by Inverse Analysis Method

    PubMed Central

    Liang, Wei; Murakawa, Hidekazu

    2014-01-01

Welding-induced deformation not only negatively affects dimensional accuracy but also degrades the performance of the product. If welding deformation can be accurately predicted beforehand, the predictions will be helpful for finding effective methods to improve manufacturing accuracy. To date, two kinds of finite element method (FEM) can be used to simulate welding deformation. One is the thermal elastic-plastic FEM and the other is the elastic FEM based on inherent strain theory. The former can only be used to calculate welding deformation for small- or medium-scale welded structures because of its computational cost. The latter, on the other hand, is an effective method for estimating the total welding distortion of large and complex welded structures, even though it neglects the detailed welding process. When the elastic FEM is used to calculate the welding-induced deformation of a large structure, the inherent deformations of each typical joint must be obtained beforehand. In this paper, a new method based on inverse analysis was proposed to obtain the inherent deformations of weld joints. By introducing the inherent deformations obtained by the proposed method into the elastic FEM based on inherent strain theory, we predicted the welding deformation of a panel structure with two longitudinal stiffeners. In addition, experiments were carried out to verify the simulation results. PMID:25276856

  6. A review: applications of the phase field method in predicting microstructure and property evolution of irradiated nuclear materials

    DOE PAGES

    Li, Yulan; Hu, Shenyang; Sun, Xin; ...

    2017-04-14

Complex microstructure changes occur in nuclear fuel and structural materials due to the extreme environments of intense irradiation and high temperature. This paper evaluates the role of the phase field method in predicting the microstructure evolution of irradiated nuclear materials and the impact on their mechanical, thermal, and magnetic properties. The paper starts with an overview of the important physical mechanisms of defect evolution and the significant gaps in simulating microstructure evolution in irradiated nuclear materials. Then, the phase field method is introduced as a powerful and predictive tool, and its applications to microstructure and property evolution in irradiated nuclear materials are reviewed. The review shows that (1) phase field models can correctly describe important phenomena such as spatially dependent generation, migration, and recombination of defects, radiation-induced dissolution, the Soret effect, strong interfacial energy anisotropy, and elastic interaction; (2) the phase field method can qualitatively and quantitatively simulate two-dimensional and three-dimensional microstructure evolution, including radiation-induced segregation, second-phase nucleation, void migration, void and gas bubble superlattice formation, interstitial loop evolution, hydride formation, and grain growth; and (3) the phase field method correctly predicts the relationships between microstructures and properties. The final section is dedicated to a discussion of the strengths and limitations of the phase field method, as applied to irradiation effects in nuclear materials.
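
    To make the method concrete, here is a minimal, generic phase field sketch (a 2-D Allen-Cahn step with a double-well free energy, solved by explicit finite differences; all parameters are illustrative and not taken from the review):

    ```python
    import numpy as np

    # Order parameter eta on a periodic 2-D grid: eta ~ 1 in a second phase
    # (e.g. a void), eta ~ 0 in the matrix.
    n, dx, dt = 128, 1.0, 0.05
    L, kappa = 1.0, 2.0          # kinetic coefficient, gradient-energy coefficient
    rng = np.random.default_rng(1)
    eta = 0.5 + 0.05 * rng.standard_normal((n, n))

    def laplacian(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx ** 2

    # Allen-Cahn dynamics: d(eta)/dt = -L * (f'(eta) - kappa * lap(eta)),
    # with double-well bulk free energy f(eta) = eta^2 * (1 - eta)^2.
    for _ in range(2000):
        dfdeta = 2.0 * eta * (1.0 - eta) ** 2 - 2.0 * eta ** 2 * (1.0 - eta)
        eta += -dt * L * (dfdeta - kappa * laplacian(eta))
    ```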

  7. Realizing drug repositioning by adapting a recommendation system to handle the process.

    PubMed

    Ozsoy, Makbule Guclin; Özyer, Tansel; Polat, Faruk; Alhajj, Reda

    2018-04-12

Drug repositioning is the process of identifying new targets for known drugs. It can be used to overcome problems associated with traditional drug discovery by adapting existing drugs to treat newly discovered diseases, and it may thus reduce the risk, cost and time required to identify and verify new drugs. Drug repositioning has recently received increasing attention from industry and academia. To tackle this problem, researchers have applied many different computational methods and have used various features of drugs and diseases. In this study, we contribute to the ongoing research efforts by combining multiple features, namely chemical structures, protein interactions and side effects, to predict new indications for target drugs. To achieve this, we frame drug repositioning as a recommendation process, which leads to a new perspective on the problem. The recommendation method used is based on Pareto dominance and collaborative filtering; it can also integrate multiple data sources and multiple features. For the computational part, we applied several settings and compared their performance. Evaluation results show that the proposed method can achieve concentrated predictions with high precision, where nearly half of the predictions are true. Compared with other state-of-the-art methods described in the literature, the proposed method makes more correct predictions, as reflected in its higher precision. The reported results demonstrate the applicability and effectiveness of recommendation methods for drug repositioning.
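
    A minimal sketch of the Pareto-dominance neighbour selection underlying such a recommender (hypothetical similarity scores; the authors' exact scheme is not reproduced here): a candidate neighbour drug is kept only if no other drug is at least as similar on every feature and strictly more similar on at least one.

    ```python
    import numpy as np

    # Rows: candidate neighbour drugs; columns: similarity to the query drug
    # under three features (chemical, protein-interaction, side-effect).
    # Synthetic values for illustration.
    sims = np.array([
        [0.9, 0.4, 0.7],
        [0.8, 0.5, 0.6],
        [0.6, 0.9, 0.2],
        [0.5, 0.3, 0.1],
    ])

    def pareto_front(scores):
        """Indices of rows not dominated by any other row (higher is better)."""
        keep = []
        for i, s in enumerate(scores):
            dominated = any(
                np.all(t >= s) and np.any(t > s)
                for j, t in enumerate(scores) if j != i
            )
            if not dominated:
                keep.append(i)
        return keep

    neighbours = pareto_front(sims)   # here [0, 1, 2]; the last row is dominated
    # The known indications of these neighbours would then be aggregated
    # (collaborative filtering) to rank candidate new indications.
    ```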

  8. A numerical simulation strategy on occupant evacuation behaviors and casualty prediction in a building during earthquakes

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai

    2018-01-01

Casualty prediction for a building during earthquakes supports economic loss estimation in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties, few current studies consider occupant movements within the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The occupant evacuation simulation is verified against a recorded evacuation from a school classroom in the 2013 Ya'an earthquake in China. The occupant casualties in a building under earthquakes are evaluated by coupling a finite element simulation of the building collapse process, the occupant evacuation simulation, and casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake demonstrates the effect of occupant movements on casualty prediction.
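
    A toy cellular automata evacuation step in the spirit of the model described (greedy movement toward a single exit on a grid; the actual refined CA rules, building geometry and casualty criteria are not reproduced here):

    ```python
    import numpy as np

    # 0 = empty floor, 1 = occupant; the exit is the top-left cell.
    rng = np.random.default_rng(2)
    grid = (rng.random((10, 10)) < 0.2).astype(int)
    exit_cell = (0, 0)

    def step(grid):
        """Move each occupant one cell toward the exit if the cell is free."""
        new = grid.copy()
        occupants = np.argwhere(grid == 1)
        for r, c in occupants[rng.permutation(len(occupants))]:
            dr = np.sign(exit_cell[0] - r)
            dc = np.sign(exit_cell[1] - c)
            target = (r + dr, c + dc)
            if target == exit_cell:            # occupant leaves the building
                new[r, c] = 0
            elif new[target] == 0:             # move only into an empty cell
                new[target], new[r, c] = 1, 0
        return new

    for t in range(50):
        grid = step(grid)
    print("occupants remaining:", grid.sum())
    ```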

  9. Performance of Trajectory Models with Wind Uncertainty

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.

    2009-01-01

Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty uses a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread among the RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate that this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
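
    The core of a time-lagged ensemble spread estimate is simple: collect the forecasts from successive model initializations that are all valid at the same target time, and take their spread (a generic sketch; the array contents are hypothetical):

    ```python
    import numpy as np

    # forecasts[i, j] = wind speed (m/s) at grid point j from the forecast
    # initialized i hours ago, all valid at the same target time.
    forecasts = np.array([
        [12.1, 8.4, 15.0],   # initialized 1 h ago
        [11.6, 9.0, 14.2],   # initialized 2 h ago
        [13.0, 8.1, 16.1],   # initialized 3 h ago
    ])

    ensemble_mean = forecasts.mean(axis=0)
    spread = forecasts.std(axis=0, ddof=1)   # per-point uncertainty estimate
    print(ensemble_mean, spread)
    ```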

  10. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have conventionally been used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
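
    As a minimal example of the FOSM idea (a generic sketch with a hypothetical RUL function, not the paper's battery model): linearize RUL = g(x) about the mean of the uncertain inputs and propagate the input covariance through the gradient.

    ```python
    import numpy as np

    def rul(x):
        """Hypothetical RUL model: remaining capacity / degradation rate."""
        capacity, rate = x
        return capacity / rate

    mean = np.array([100.0, 2.0])     # mean capacity, mean degradation rate
    cov = np.diag([25.0, 0.04])       # input covariance (assumed independent)

    # Numerical gradient of g at the mean (central differences).
    eps = 1e-5
    grad = np.array([
        (rul(mean + eps * e) - rul(mean - eps * e)) / (2 * eps)
        for e in np.eye(2)
    ])

    rul_mean = rul(mean)              # first-order mean estimate: g(mean)
    rul_var = grad @ cov @ grad       # first-order variance estimate
    print(rul_mean, np.sqrt(rul_var))
    ```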

  11. Ensemble-based prediction of RNA secondary structures.

    PubMed

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

Accurate structure prediction methods play an important role in the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. First, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results of this evaluation clarify the performance relationship between ten well-known energy-based, pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Second, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than any of its component procedures. AveRNA improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
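
    One simple way to combine secondary structure predictors in the ensemble spirit described above (a hypothetical voting scheme, not AveRNA's actual aggregation rule) is weighted base-pair voting with a threshold that trades false positives against false negatives:

    ```python
    from collections import Counter

    def base_pairs(dot_bracket):
        """Extract base pairs (i, j) from a balanced dot-bracket string."""
        stack, pairs = [], set()
        for i, ch in enumerate(dot_bracket):
            if ch == '(':
                stack.append(i)
            elif ch == ')':
                pairs.add((stack.pop(), i))
        return pairs

    def vote_ensemble(predictions, threshold=0.5):
        """Keep base pairs predicted by at least `threshold` of the methods;
        raising the threshold lowers false positives at the cost of recall."""
        votes = Counter(p for pred in predictions for p in base_pairs(pred))
        return {p for p, v in votes.items() if v / len(predictions) >= threshold}

    # Three toy predictions for the same 8-nt sequence.
    preds = ["((....))", "((....))", ".(....)."]
    print(sorted(vote_ensemble(preds)))
    ```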

  12. Exact simulation of polarized light reflectance by particle deposits

    NASA Astrophysics Data System (ADS)

    Ramezan Pour, B.; Mackowski, D. W.

    2015-12-01

The use of polarimetric light reflection measurements as a means of identifying the physical and chemical characteristics of particulate materials relies on an accurate model for predicting the effects of particle size, shape, concentration, and refractive index on polarized reflection. This research examines two methods for predicting reflection from plane-parallel layers of wavelength-sized particles. The first method is based on an exact superposition solution of Maxwell's time-harmonic wave equations for a deposit of spherical particles exposed to a plane incident wave. We use a FORTRAN-90 implementation of this solution (the Multiple Sphere T Matrix (MSTM) code), coupled with parallel computational platforms, to directly simulate the reflection from particle layers. The second method is based upon the vector radiative transport equation (RTE). Mie theory is used in our RTE model to predict the extinction coefficient, albedo, and scattering phase function of the particles, and the solution of the RTE is obtained from the adding-doubling method applied to a plane-parallel configuration. Our results show that the MSTM and RTE predictions of the Mueller matrix elements converge when the particle volume fraction in the layer decreases below around five percent. At higher volume fractions the RTE can yield results that, depending on the particle size and refractive index, depart significantly from the exact predictions. The particle regimes that lead to dependent scattering effects, and the application of methods to correct the vector RTE for particle interaction, will be discussed.

  13. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    PubMed

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

It is important to predict incipient faults in transformer oil accurately so that maintenance can be performed correctly, reducing the maintenance cost and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions because each method is only suitable for certain conditions. Many previous works have reported the use of intelligent methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques had not been combined in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict transformer incipient faults. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. Comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of transformer fault types than the existing diagnosis method and previously reported works.
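
    A compact sketch of using PSO to train a small neural network classifier (standard PSO velocity/position updates; the gas features, fault labels and network size are placeholders, not the paper's configuration):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 5))                 # 5 dissolved-gas features (toy)
    y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy binary fault label

    def forward(w, X):
        """Tiny 5-4-1 network; w is a flat parameter vector of length 29."""
        W1, b1 = w[:20].reshape(5, 4), w[20:24]
        W2, b2 = w[24:28], w[28]
        h = np.tanh(X @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    def loss(w):
        return np.mean((forward(w, X) - y) ** 2)

    # PSO: v = w_inertia*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
    n_particles, dim = 30, 29
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for it in range(200):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("training MSE:", loss(gbest))
    ```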

  15. Validation of prediction models: examining temporal and geographic stability of baseline risk and estimated covariate effects

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2018-01-01

    Background Stability in baseline risk and estimated predictor effects both geographically and temporally is a desirable property of clinical prediction models. However, this issue has received little attention in the methodological literature. Our objective was to examine methods for assessing temporal and geographic heterogeneity in baseline risk and predictor effects in prediction models. Methods We studied 14,857 patients hospitalized with heart failure at 90 hospitals in Ontario, Canada, in two time periods. We focussed on geographic and temporal variation in baseline risk (intercept) and predictor effects (regression coefficients) of the EFFECT-HF mortality model for predicting 1-year mortality in patients hospitalized for heart failure. We used random effects logistic regression models for the 14,857 patients. Results The baseline risk of mortality displayed moderate geographic variation, with the hospital-specific probability of 1-year mortality for a reference patient lying between 0.168 and 0.290 for 95% of hospitals. Furthermore, the odds of death were 11% lower in the second period than in the first period. However, we found minimal geographic or temporal variation in predictor effects. Among 11 tests of differences in time for predictor variables, only one had a modestly significant P value (0.03). Conclusions This study illustrates how temporal and geographic heterogeneity of prediction models can be assessed in settings with a large sample of patients from a large number of centers at different time periods. PMID:29350215

  16. Prediction of Soil Deformation in Tunnelling Using Artificial Neural Networks.

    PubMed

    Lai, Jinxing; Qiu, Junling; Feng, Zhihua; Chen, Jianxun; Fan, Haobo

    2016-01-01

In the past few decades, artificial neural networks (ANNs), as a tool for analysing tough geotechnical problems, have been successfully applied to a number of engineering problems, including deformation due to tunnelling in various types of rock mass. Unlike classical regression methods, in which a certain form of the approximation function must be presumed, ANNs do not require complex constitutive models. Moreover, ANN-based prediction has proved to be one of the most effective ways to predict rock mass deformation, and ANNs are likely to become even more useful for the dynamic prediction of displacements in tunnelling, especially when combined with other research methods. In this paper, we summarise the state of the art and the future research challenges of ANNs for tunnel deformation prediction, and we also present application cases and improvements of ANN models. The presented ANN models, characterised by nonlinearity, high parallelism, fault tolerance, and learning and generalisation capability, can serve as a benchmark for effective prediction of tunnel deformation.

  17. A hybrid intelligent method for three-dimensional short-term prediction of dissolved oxygen content in aquaculture

    PubMed Central

    Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang

    2018-01-01

A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental for reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed, named the subtractive clustering (SC)-K-means-RBF model. In this modelling process, the K-means and subtractive clustering methods were employed to set the hyperparameters required by the RBF neural network model. Comparison of the predicted results with those of traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. PMID:29466394
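
    A minimal sketch of an RBF network whose centres come from K-means, in the spirit of the hybrid model above (the subtractive-clustering step and the actual 3-D aquaculture inputs are omitted; the data here are synthetic):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 10, size=(300, 2))          # e.g. (depth, hour) features
    y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.05, 300)

    # 1) K-means picks the RBF centres (subtractive clustering could seed k).
    k = 20
    centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

    # 2) Gaussian RBF features, then a linear readout fitted by ridge regression.
    def rbf_features(X, centres, gamma=0.5):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    model = Ridge(alpha=1e-3).fit(rbf_features(X, centres), y)
    print("train R^2:", model.score(rbf_features(X, centres), y))
    ```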

  19. Predicting Gene Structure Changes Resulting from Genetic Variants via Exon Definition Features.

    PubMed

    Majoros, William H; Holt, Carson; Campbell, Michael S; Ware, Doreen; Yandell, Mark; Reddy, Timothy E

    2018-04-25

Genetic variation that disrupts gene function by altering gene splicing between individuals can substantially influence traits and disease. In those cases, accurately predicting the effects of genetic variation on splicing can be highly valuable for investigating the mechanisms underlying those traits and diseases. While methods have been developed to generate high-quality computational predictions of gene structures in reference genomes, the same methods perform poorly when used to predict the potentially deleterious effects of genetic changes that alter gene splicing between individuals. Underlying that discrepancy in predictive ability are the common assumptions of reference gene-finding algorithms that genes are conserved, well formed, and produce functional proteins. We describe a probabilistic approach for predicting recent changes to gene structure that may or may not conserve function. The model is applicable to both coding and noncoding genes, and can be trained on existing gene annotations without requiring curated examples of aberrant splicing. We apply this model to the problem of predicting altered splicing patterns in the genomes of individual humans, and we demonstrate that performing gene-structure prediction without relying on conserved coding features is feasible. The model predicts an unexpected abundance of variants that create de novo splice sites, an observation supported by both simulations and empirical data from RNA-seq experiments. While these de novo splice variants are commonly misinterpreted by other tools as coding or noncoding variants of little or no effect, we find that in some cases they can have large effects on splicing activity and protein products, and we propose that they may commonly act as cryptic factors in disease. The software is available from geneprediction.org/SGRF. Contact: bmajoros@duke.edu. Supplementary information is available at Bioinformatics online.

  20. An Operating Method Using Prediction of Photovoltaic Power for a Photovoltaic-Diesel Hybrid Power Generation System

    NASA Astrophysics Data System (ADS)

    Yamamoto, Shigehiro; Sumi, Kazuyoshi; Nishikawa, Eiichi; Hashimoto, Takeshi

This paper describes a novel operating method that uses prediction of photovoltaic (PV) power for a photovoltaic-diesel hybrid power generation system. The system is composed of a PV array, a storage battery, a bi-directional inverter and a diesel engine generator (DG). The proposed method enables the system to reduce fuel consumption by using PV energy effectively, reducing the charge and discharge energy of the storage battery, and avoiding low-load operation of the DG. The PV power is predicted simply from a theoretical equation for solar radiation and the PV energy observed over a fixed interval before the prediction. The fuel consumption of the proposed method is compared with that of other methods in a simulation based on one year of measured PV power data from an actual PV generation system. The simulation results indicate that the fuel consumption of the proposed method is smaller than that of any of the other methods, and is close to that of ideal operation of the DG.

  1. Global vision of druggability issues: applications and perspectives.

    PubMed

    Abi Hussein, Hiba; Geneix, Colette; Petitjean, Michel; Borrel, Alexandre; Flatters, Delphine; Camproux, Anne-Claude

    2017-02-01

During the preliminary stage of a drug discovery project, the lack of druggability information and poor target selection are the main causes of frequent failures. Developing accurate computational druggability prediction methods is a requirement for prioritizing target selection, designing new drugs and avoiding side effects. In this review, we survey recently reported druggability prediction methods, mainly based on networks, statistical pocket druggability predictions and virtual screening. An application to a frequent mutation of the p53 tumor suppressor is presented, illustrating the complementarity of druggability prediction approaches, the remaining challenges and potential new drug development perspectives. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    NASA Astrophysics Data System (ADS)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals for monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. The performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin, located in Central Anatolia, Turkey. The finite-sample properties of the proposed method are further illustrated by an extensive simulation study. Our results reveal that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
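
    A minimal residual-based bootstrap for prediction intervals, sketched for an AR(1) model (a generic illustration of the technique rather than the authors' exact procedure; the PDSI-like series here is synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Synthetic stationary series standing in for monthly PDSI values.
    n, phi = 240, 0.7
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0, 1)

    # Fit AR(1) by least squares and collect centred residuals.
    phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    resid = y[1:] - phi_hat * y[:-1]
    resid -= resid.mean()

    # Bootstrap one-step-ahead forecasts by resampling residuals.
    B = 2000
    forecasts = phi_hat * y[-1] + rng.choice(resid, size=B, replace=True)
    lo, hi = np.percentile(forecasts, [2.5, 97.5])
    print(f"95% prediction interval: [{lo:.2f}, {hi:.2f}]")
    ```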

  3. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline.
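
    RR-BLUP, the best-performing method for grain yield above, is equivalent to ridge regression of phenotypes on a centred marker matrix. A minimal sketch with synthetic genotypes follows (the shrinkage parameter λ = σ²ₑ/σ²ᵤ is assumed known here, whereas in practice it comes from variance-component estimation):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_lines, n_markers = 300, 1000
    X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # allele counts 0/1/2
    col_means = X.mean(axis=0)
    Xc = X - col_means                                               # centred markers

    true_u = rng.normal(0, 0.05, n_markers)        # many small marker effects
    y = Xc @ true_u + rng.normal(0, 1, n_lines)    # phenotypes

    # RR-BLUP: u_hat = (X'X + lambda*I)^-1 X'y with lambda = sigma_e^2 / sigma_u^2.
    lam = 1.0 / 0.05 ** 2
    u_hat = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_markers), Xc.T @ y)

    # Predicted breeding values of unphenotyped lines.
    X_new = rng.integers(0, 3, size=(10, n_markers)).astype(float) - col_means
    gebv = X_new @ u_hat
    ```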

  5. Using Delaunay triangulation and Voronoi tessellation to predict the toxicities of binary mixtures containing hormetic compound

    NASA Astrophysics Data System (ADS)

    Qu, Rui; Liu, Shu-Shen; Zheng, Qiao-Feng; Li, Tong

    2017-03-01

Concentration addition (CA) was proposed as a reasonable default approach for the ecological risk assessment of chemical mixtures. However, CA cannot predict the toxicity of a mixture in some effect zones if not all components have definite effective concentrations at the given effect, as is the case when some components induce hormesis. In this paper, we developed a new method for predicting the toxicity of various types of binary mixtures: an interpolation method based on Delaunay triangulation (DT) and Voronoi tessellation (VT) together with a training set of direct equipartition ray design (EquRay) mixtures, abbreviated IDVequ. First, EquRay was employed to design the basic concentration compositions of five binary mixture rays. The toxic effects of the single components and the mixture rays at different times and various concentrations were determined by time-dependent microplate toxicity analysis. Second, the concentration-toxicity data of the pure components and the various mixture rays served as a training set. DT triangles and VT polygons were constructed from the concentration vertices of the training set, and the toxicities of unknown mixtures were predicted by linear interpolation and natural neighbor interpolation over those vertices. IDVequ successfully predicted the toxicities of various types of binary mixtures.
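
    The Delaunay-based linear interpolation step can be sketched with SciPy (synthetic concentration-toxicity training points; the EquRay design and the natural-neighbor variant are omitted):

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(7)
    # Training set: concentrations of the two components and observed toxicity.
    conc = rng.uniform(0, 1, size=(40, 2))                          # (c1, c2) pairs
    tox = 1 / (1 + np.exp(-(3 * conc[:, 0] + 2 * conc[:, 1] - 2)))  # toy effects

    # LinearNDInterpolator builds a Delaunay triangulation internally and
    # interpolates linearly within each triangle (NaN outside the hull).
    predict = LinearNDInterpolator(conc, tox)

    mixture = np.array([[0.3, 0.4]])                                # unknown mixture
    print("predicted toxicity:", predict(mixture))
    ```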

  7. Evaluation and prediction of long-term environmental effects of nonmetallic materials, second phase

    NASA Technical Reports Server (NTRS)

    1983-01-01

Changes in the functional properties of a number of nonmetallic materials were evaluated experimentally as a function of simulated space environments, and the data were used to develop models for accelerated test methods useful for predicting such behavioral changes. The effects of charged-particle irradiation on candidate space materials are evaluated.

  8. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison

    PubMed Central

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh

    2011-01-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659

  9. Prediction of hole expansion ratio for various steel sheets based on uniaxial tensile properties

    NASA Astrophysics Data System (ADS)

    Kim, Jae Hyung; Kwon, Young Jin; Lee, Taekyung; Lee, Kee-Ahn; Kim, Hyoung Seop; Lee, Chong Soo

    2018-01-01

Stretch-flangeability is one of the important formability parameters of thin steel sheets used in the automotive industry. There have been many attempts to predict the hole expansion ratio (HER), the typical measure of stretch-flangeability, from uniaxial tensile properties for convenience. This paper suggests a new approach that uses total elongation and average normal anisotropy to predict the HER of thin steel sheets. The method provides a good linear relationship between the HER of a machined hole and the predictive variables in a variety of materials with different microstructures obtained by different processing methods. The HER of a punched hole was also well predicted using a similar approach that reflects only the post-uniform portion of elongation. The physical meaning drawn from our approach successfully explains the poor HER of austenitic steels despite their considerable elongation. The proposed method for predicting HER is simple and cost-effective, so it will be useful in industry. In addition, the model provides a physical explanation of HER, so it will also be useful in academia.

  10. Fan Noise Prediction with Applications to Aircraft System Noise Assessment

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Envia, Edmane; Burley, Casey L.

    2009-01-01

    This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.

  11. Comparison of baseline removal methods for laser-induced breakdown spectroscopy of geological samples

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas

    2016-12-01

This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare the prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (airPLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly developed technique for custom baseline removal (custom BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts-per-million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to the best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable, and to empower multivariate techniques with the best possible input data, are shown to be well worth the slight increase in computation and complexity.
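
    Of the methods compared, asymmetric least squares (ALS) is easy to sketch: it fits a smooth baseline by penalized least squares in which points above the current fit (peaks) get a small weight and points below get a large one (an Eilers-style formulation; the smoothness and asymmetry parameters below are common defaults, not the paper's tuned values):

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
        """Asymmetric least squares baseline estimate for a 1-D spectrum y."""
        n = len(y)
        # Second-difference operator for the smoothness penalty.
        D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
        penalty = lam * (D @ D.T)
        w = np.ones(n)
        for _ in range(n_iter):
            W = sparse.diags(w)
            z = spsolve(sparse.csc_matrix(W + penalty), w * y)
            # Asymmetric reweighting: points above the fit get weight p.
            w = np.where(y > z, p, 1.0 - p)
        return z

    # Usage: corrected = spectrum - als_baseline(spectrum)
    ```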

  12. Genetic risk prediction using a spatial autoregressive model with adaptive lasso.

    PubMed

    Wen, Yalu; Shen, Xiaoxi; Lu, Qing

    2018-05-31

With rapidly evolving high-throughput technologies, studies are being initiated to accelerate the process toward precision medicine. The collection of vast amounts of sequencing data provides us with great opportunities to systematically study the role of a deep catalog of sequencing variants in risk prediction. Nevertheless, the massive amount of noise signals and the low frequencies of rare variants in sequencing data pose great analytical challenges for risk prediction modeling. Motivated by developments in spatial statistics, we propose a spatial autoregressive model with adaptive lasso (SARAL) for risk prediction modeling using high-dimensional sequencing data. SARAL is a set-based approach, and thus it reduces the data dimension and accumulates genetic effects within a single-nucleotide variant (SNV) set. Moreover, it allows different SNV sets to have various magnitudes and directions of effect sizes, which reflects the nature of complex diseases. With the adaptive lasso implemented, SARAL can shrink the effects of noise SNV sets to zero and thus further improve prediction accuracy. Through simulation studies, we demonstrate that, overall, SARAL is comparable to, if not better than, the genomic best linear unbiased prediction method. The method is further illustrated by an application to the sequencing data from the Alzheimer's Disease Neuroimaging Initiative. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Systematic drug repositioning for a wide range of diseases with integrative analyses of phenotypic and molecular data.

    PubMed

    Iwata, Hiroaki; Sawada, Ryusuke; Mizutani, Sayaka; Yamanishi, Yoshihiro

    2015-02-23

    Drug repositioning, or the application of known drugs to new indications, is a challenging issue in pharmaceutical science. In this study, we developed a new computational method to predict unknown drug indications for systematic drug repositioning in a framework of supervised network inference. We defined a descriptor for each drug-disease pair based on the phenotypic features of drugs (e.g., medicinal effects and side effects) and various molecular features of diseases (e.g., disease-causing genes, diagnostic markers, disease-related pathways, and environmental factors) and constructed a statistical model to predict new drug-disease associations for a wide range of diseases in the International Classification of Diseases. Our results show that the proposed method outperforms previous methods in terms of accuracy and applicability, and its performance does not depend on drug chemical structure similarity. Finally, we performed a comprehensive prediction of a drug-disease association network consisting of 2349 drugs and 858 diseases and described biologically meaningful examples of newly predicted drug indications for several types of cancers and nonhereditary diseases.

  14. Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction.

    PubMed

    Xu, Yonghui; Min, Huaqing; Wu, Qingyao; Song, Hengjie; Ye, Bicui

    2017-02-06

Multi-instance (MI) learning has been proven effective for genome-wide protein function prediction problems in which each training example is associated with multiple instances. Many studies in the literature have attempted to find an appropriate multi-instance learning (MIL) method for genome-wide protein function prediction under the usual assumption that the underlying distribution of the testing data (target domain, TD) is the same as that of the training data (source domain, SD). However, this assumption may be violated in practice. To tackle this problem, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution toward the target domain distribution by utilizing bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternating optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.

  15. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    PubMed

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, limitations remain in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that have previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.

  16. Prediction of unsteady separated flows on oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Mccroskey, W. J.

    1978-01-01

    Techniques for calculating high Reynolds number flow around an airfoil undergoing dynamic stall are reviewed. Emphasis is placed on predicting the values of lift, drag, and pitching moments. Methods discussed include: the discrete potential vortex method; the thin boundary layer method; strong interaction between inviscid and viscous flows; and solutions to the Navier-Stokes equations. Empirical methods for estimating unsteady airloads on oscillating airfoils are also described. These methods correlate force and moment data from wind tunnel tests to indicate the effects of various parameters, such as airfoil shape, Mach number, amplitude and frequency of sinusoidal oscillations, mean angle, and type of motion.

  17. System dynamic simulation: A new method in social impact assessment (SIA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karami, Shobeir, E-mail: shobeirkarami@gmail.com; Karami, Ezatollah, E-mail: ekarami@shirazu.ac.ir; Buys, Laurie, E-mail: l.buys@qut.edu.au

    Many complex social questions are difficult to address adequately with conventional methods and techniques, due to the complicated dynamics and hard-to-quantify social processes. Despite these difficulties, researchers and practitioners have attempted to use conventional methods not only in evaluative modes but also in predictive modes to inform decision making. The effectiveness of SIAs would be increased if they were used to support the project design processes. This requires deliberate use of lessons from retrospective assessments to inform predictive assessments. Social simulations may be a useful tool for developing a predictive SIA method. There have been limited attempts to develop computer simulations that allow social impacts to be explored and understood before implementing development projects. In light of this argument, this paper aims to introduce system dynamic (SD) simulation as a new predictive SIA method for large development projects. We propose the potential value of the SD approach to simulate social impacts of development projects. We use data from the SIA of the Gareh-Bygone floodwater spreading project to illustrate the potential of SD simulation in SIA. It was concluded that, in comparison to traditional SIA methods, SD simulation can integrate quantitative and qualitative inputs from different sources and methods and provides a more effective and dynamic assessment of social impacts for development projects. We recommend future research to investigate the full potential of SD in SIA by comparing different situations and scenarios.

  18. MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records.

    PubMed

    Huang, Zhengxing; Chan, Tak-Ming; Dong, Wei

    2017-02-01

    Major adverse cardiac events (MACE) of acute coronary syndrome (ACS) often occur suddenly, resulting in high mortality and morbidity. Recently, the rapid development of electronic medical records (EMR) provides the opportunity to utilize the potential of EMR to improve the performance of MACE prediction. In this study, we present a novel data-mining based approach specialized for MACE prediction from a large volume of EMR data. The proposed approach introduces a new classification algorithm that applies over-sampling to minority-class samples and under-sampling to majority-class samples, and integrates this resampling strategy into a boosting framework so that it can effectively handle the imbalance of MACE among ACS patients analogous to domain practice. At each iteration, the method learns a new and stronger MACE prediction model from a more difficult subset of EMR data, namely the ACS patient samples whose MACE status was wrongly predicted by the previous weak model. We verify the effectiveness of the proposed approach on a clinical dataset containing 2930 ACS patient samples with 268 feature types. While the imbalance ratio does not seem extreme (25.7%), MACE prediction targets pose a great challenge to traditional methods. Whereas these methods degenerate dramatically with increasing imbalance ratios, the performance of our approach for predicting MACE remains robust and reaches 0.672 in terms of AUC. On average, the proposed approach improves the performance of MACE prediction by 4.8%, 4.5%, 8.6% and 4.8% over the standard SVM, Adaboost, SMOTE, and the conventional GRACE risk scoring system, respectively. We consider that the proposed iterative boosting approach has demonstrated great potential to meet the challenge of MACE prediction for ACS patients using a large volume of EMR. Copyright © 2017 Elsevier Inc. All rights reserved.
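
    The paper's exact resampling ratios and weak learner are not stated in the abstract; the following is a minimal AdaBoost-style sketch of the general idea, assuming a shallow decision tree as the weak learner and weighted resampling to equalize class counts each round. All names are illustrative.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def boosted_resampling_fit(X, y, n_rounds=10, seed=0):
        """AdaBoost-style loop whose weak learners are trained on rebalanced
        resamples: each round draws equal numbers of minority and majority
        samples in proportion to the current boosting weights."""
        rng = np.random.default_rng(seed)
        w = np.full(len(y), 1.0 / len(y))
        n_per_class = min(np.bincount(y))
        learners, alphas = [], []
        for _ in range(n_rounds):
            idx = np.concatenate([
                rng.choice(np.flatnonzero(y == c), size=n_per_class,
                           p=w[y == c] / w[y == c].sum())
                for c in (0, 1)])
            stump = DecisionTreeClassifier(max_depth=2).fit(X[idx], y[idx])
            pred = stump.predict(X)
            err = w[pred != y].sum()
            if err == 0 or err >= 0.5:
                break
            alpha = 0.5 * np.log((1.0 - err) / err)
            w = w * np.exp(alpha * (pred != y))  # up-weight the mistakes
            w /= w.sum()
            learners.append(stump)
            alphas.append(alpha)
        return learners, alphas

    def boosted_predict(learners, alphas, X):
        votes = sum(a * (2 * m.predict(X) - 1) for m, a in zip(learners, alphas))
        return (votes > 0).astype(int)

    # Toy usage with roughly 25% positives, echoing the study's imbalance.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 6))
    y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0.7).astype(int)
    learners, alphas = boosted_resampling_fit(X, y)
    print("training accuracy:", (boosted_predict(learners, alphas, X) == y).mean())
    ```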

  19. Practical quantum mechanics-based fragment methods for predicting molecular crystal properties.

    PubMed

    Wen, Shuhao; Nanda, Kaushik; Huang, Yuanhang; Beran, Gregory J O

    2012-06-07

    Significant advances in fragment-based electronic structure methods have created a real alternative to force-field and density functional techniques in condensed-phase problems such as molecular crystals. This perspective article highlights some of the important challenges in modeling molecular crystals and discusses techniques for addressing them. First, we survey recent developments in fragment-based methods for molecular crystals. Second, we use examples from our own recent research on a fragment-based QM/MM method, the hybrid many-body interaction (HMBI) model, to analyze the physical requirements for a practical and effective molecular crystal model chemistry. We demonstrate that it is possible to predict molecular crystal lattice energies to within a couple of kJ mol⁻¹ and lattice parameters to within a few percent in small-molecule crystals. Fragment methods provide a systematically improvable approach to making predictions in the condensed phase, which is critical to making robust predictions regarding the subtle energy differences found in molecular crystals.
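
    The abstract names the HMBI model without writing its energy expression. For orientation, the standard fragment/many-body partition on which HMBI-type methods are built, treating monomers and short-range pairs quantum mechanically (QM) and the remaining interactions with a classical force field (MM), can be sketched as:

    ```latex
    E_{\text{HMBI}} \approx \sum_i E_i^{\text{QM}}
      + \sum_{ij\,\in\,\text{short range}} \Delta E_{ij}^{\text{QM}}
      + \sum_{ij\,\in\,\text{long range}} \Delta E_{ij}^{\text{MM}}
      + \Delta E_{\text{many-body}}^{\text{MM}},
    \qquad
    \Delta E_{ij} = E_{ij} - E_i - E_j .
    ```

    Here E_i is the energy of monomer i and ΔE_ij the pairwise interaction energy; the precise short-range/long-range partition is a modeling choice, and the authors' exact formulation may differ.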

  20. Predicting New Indications for Approved Drugs Using a Proteo-Chemometric Method

    PubMed Central

    Dakshanamurthy, Sivanesan; Issa, Naiem T; Assefnia, Shahin; Seshasayee, Ashwini; Peters, Oakland J; Madhavan, Subha; Uren, Aykut; Brown, Milton L; Byers, Stephen W

    2012-01-01

    The most effective way to move from target identification to the clinic is to identify already approved drugs with the potential for activating or inhibiting unintended targets (repurposing or repositioning). This is usually achieved by high throughput chemical screening, transcriptome matching or simple in silico ligand docking. We now describe a novel rapid computational proteo-chemometric method called “Train, Match, Fit, Streamline” (TMFS) to map new drug-target interaction space and predict new uses. The TMFS method combines shape, topology and chemical signatures, including docking score and functional contact points of the ligand, to predict potential drug-target interactions with remarkable accuracy. Using the TMFS method, we performed extensive molecular fit computations on 3,671 FDA approved drugs across 2,335 human protein crystal structures. The TMFS method predicts drug-target associations with 91% accuracy for the majority of drugs. Over 58% of the known best ligands for each target were correctly predicted as top ranked, followed by 66%, 76%, 84% and 91% for agents ranked in the top 10, 20, 30 and 40, respectively, out of all 3,671 drugs. Drugs ranked in the top 1–40 that have not been experimentally validated for a particular target now become candidates for repositioning. Furthermore, we used the TMFS method to discover that mebendazole, an anti-parasitic with recently discovered and unexpected anti-cancer properties, has the structural potential to inhibit VEGFR2. We confirmed experimentally that mebendazole inhibits VEGFR2 kinase activity as well as angiogenesis at doses comparable with its known effects on hookworm. TMFS also predicted, and we confirmed with surface plasmon resonance, that dimethyl celecoxib and the anti-inflammatory agent celecoxib can bind cadherin-11, an adhesion molecule important in rheumatoid arthritis and poor-prognosis malignancies for which no targeted therapies exist. We anticipate that expanding our TMFS method to the >27,000 clinically active agents available worldwide across all targets will be most useful in the repositioning of existing drugs for new therapeutic targets. PMID:22780961

  1. Reverse-engineering the genetic circuitry of a cancer cell with predicted intervention in chronic lymphocytic leukemia.

    PubMed

    Vallat, Laurent; Kemper, Corey A; Jung, Nicolas; Maumy-Bertrand, Myriam; Bertrand, Frédéric; Meyer, Nicolas; Pocheville, Arnaud; Fisher, John W; Gribben, John G; Bahram, Seiamak

    2013-01-08

    Cellular behavior is sustained by genetic programs that are progressively disrupted in pathological conditions, notably cancer. High-throughput gene expression profiling has been used to infer statistical models describing these cellular programs, and development is now needed to guide orientated modulation of these systems. Here we develop a regression-based model to reverse-engineer a temporal genetic program, based on relevant patterns of gene expression after cell stimulation. This method integrates the temporal dimension of biological rewiring of genetic programs and enables the prediction of the effect of targeted gene disruption at the system level. We tested the performance accuracy of this model on synthetic data before reverse-engineering the response of primary cancer cells to a proliferative (protumorigenic) stimulation in a multistate leukemia biological model (i.e., chronic lymphocytic leukemia). To validate the ability of our method to predict the effects of gene modulation on the global program, we performed an intervention experiment on a targeted gene. Comparison of the predicted and observed gene expression changes demonstrates the possibility of predicting the effects of a perturbation in a gene regulatory network, a first step toward an orientated intervention in a cancer cell genetic program.

  2. Aqueous solubility, effects of salts on aqueous solubility, and partitioning behavior of hexafluorobenzene: experimental results and COSMO-RS predictions.

    PubMed

    Schröder, Bernd; Freire, Mara G; Varanda, Fatima R; Marrucho, Isabel M; Santos, Luís M N B F; Coutinho, João A P

    2011-07-01

    The aqueous solubility of hexafluorobenzene has been determined, at 298.15 K, using a shake-flask method with a spectrophotometric quantification technique. Furthermore, the solubility of hexafluorobenzene in saline aqueous solutions, at distinct salt concentrations, has been measured. Both salting-in and salting-out effects were observed and found to be dependent on the nature of the cationic/anionic composition of the salt. COSMO-RS, the Conductor-like Screening Model for Real Solvents, has been used to predict the corresponding aqueous solubilities at conditions similar to those used experimentally. The prediction results showed that the COSMO-RS approach is suitable for the prediction of salting-in/-out effects. The salting-in/-out phenomena have been rationalized with the support of COSMO-RS σ-profiles. The prediction potential of COSMO-RS regarding aqueous solubilities and octanol-water partition coefficients has been compared with typically used QSPR-based methods. Until now, the absence of accurate solubility data for hexafluorobenzene has hampered the calculation of the respective partition coefficients. Combining available accurate vapor pressure data with the experimentally determined water solubility, a novel air-water partition coefficient has been derived. Copyright © 2011 Elsevier Ltd. All rights reserved.
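
    The salting-in/salting-out behavior measured here is conventionally quantified with the Setschenow relation, which the abstract does not state explicitly; it is included below as standard background rather than as the authors' analysis:

    ```latex
    \log_{10}\!\left(\frac{S_0}{S}\right) = k_s \, c_{\text{salt}} ,
    ```

    where S_0 and S are the solubilities in pure water and in the salt solution, c_salt is the molar salt concentration, and k_s is the salt-specific Setschenow constant (positive for salting-out, negative for salting-in).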

  3. Theoretical performance of foil journal bearings

    NASA Technical Reports Server (NTRS)

    Carpino, M.; Peng, J.-P.

    1991-01-01

    A modified forward iteration approach for the coupled solution of foil bearings is presented. The method is used to predict the steady state theoretical performance of a journal type gas bearing constructed from an inextensible shell supported by an elastic foundation. Bending effects are treated as negligible. Finite element methods are used to predict both the foil deflections and the pressure distribution in the gas film.

  4. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of the linear models is often unsatisfactory. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predicted results obtained from the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
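
    A GRNN is essentially Nadaraya-Watson kernel regression with a single smoothing parameter σ, which makes the method easy to sketch; the bandwidth value and the toy series below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.3):
        """General regression neural network: Nadaraya-Watson kernel regression
        in which each training sample acts as a pattern unit and sigma is the
        single smoothing parameter."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
        K = np.exp(-d2 / (2.0 * sigma ** 2))
        return (K @ y_train) / K.sum(axis=1)

    # Toy usage on a noisy periodic series standing in for LOD-change data.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 6.0, 80)[:, None]
    y = np.sin(t[:, 0]) + 0.1 * rng.normal(size=80)
    print(grnn_predict(t, y, np.array([[1.0], [2.5]])))
    ```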

  5. Developing and utilizing an Euler computational method for predicting the airframe/propulsion effects for an aft-mounted turboprop transport. Volume 1: Theory document

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Yu, N. Y.

    1991-01-01

    An Euler flow solver was developed for predicting the airframe/propulsion integration effects for an aft-mounted turboprop transport. This solver employs a highly efficient multigrid scheme, with a successive mesh-refinement procedure to accelerate the convergence of the solution. A new dissipation model was also implemented to render solutions that are grid insensitive. The propeller power effects are simulated by the actuator disk concept. An embedded flow solution method was developed for predicting the detailed flow characteristics in the local vicinity of an aft-mounted propfan engine in the presence of a flow field induced by a complete aircraft. Results from test case analysis are presented. A user's guide for execution of computer programs, including format of various input files, sample job decks, and sample input files, is provided in an accompanying volume.

  6. Toward Biopredictive Dissolution for Enteric Coated Dosage Forms.

    PubMed

    Al-Gousous, J; Amidon, G L; Langguth, P

    2016-06-06

    The aim of this work was to develop a phosphate buffer based dissolution method for enteric-coated formulations with improved biopredictivity for fasted conditions. Two commercially available enteric-coated aspirin products were used as model formulations (Aspirin Protect 300 mg, and Walgreens Aspirin 325 mg). The disintegration performance of these products in a physiological 8 mM pH 6.5 bicarbonate buffer (representing the conditions in the proximal small intestine) was used as a standard to optimize the employed phosphate buffer molarity. To account for the fact that a pH and buffer molarity gradient exists along the small intestine, the introduction of such a gradient was proposed for products with prolonged lag times (when the lag time leads to a release lower than 75% in the first hour post acid stage) in the proposed buffer. This would allow the method also to predict the performance of later-disintegrating products. Dissolution performance using the accordingly developed method was compared to that observed when using two well-established dissolution methods: the United States Pharmacopeia (USP) method and blank fasted state simulated intestinal fluid (FaSSIF). The resulting dissolution profiles were convoluted using GastroPlus software to obtain predicted pharmacokinetic profiles. A pharmacokinetic study on healthy human volunteers was performed to evaluate the predictions made by the different dissolution setups. The novel method provided the best prediction, by a relatively wide margin, for the difference between the lag times of the two tested formulations, indicating that it can predict the post-gastric-emptying onset of drug release with reasonable accuracy. Both the new and the blank FaSSIF methods showed potential for establishing an in vitro-in vivo correlation (IVIVC) concerning the prediction of Cmax and AUC0-24 (prediction errors not more than 20%). However, these predictions are strongly affected by the highly variable first-pass metabolism, necessitating the evaluation of an absorption rate metric that is more independent of the first-pass effect. The Cmax/AUC0-24 ratio was selected for this purpose. Regarding this metric's predictions, the new method provided very good prediction of the two products' performances relative to each other (only 1.05% prediction error), while its predictions for the individual products' values in absolute terms were borderline, narrowly missing the regulatory 20% prediction error limits (21.51% for Aspirin Protect and 22.58% for Walgreens Aspirin). The blank FaSSIF-based method provided good Cmax/AUC0-24 ratio prediction, in absolute terms, for Aspirin Protect (9.05% prediction error), but its prediction for Walgreens Aspirin (33.97% prediction error) was poor. Thus it gave practically the same average but much higher maximum prediction errors compared to the new method, and it was strongly overdiscriminating in predicting the products' performances relative to one another. The USP method, despite not being overdiscriminating, provided poor predictions of the individual products' Cmax/AUC0-24 ratios. This indicates that, overall, the new method offers improved biopredictivity compared to established methods.
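
    The "regulatory 20% prediction error limits" cited above refer to the percent prediction error conventionally used in IVIVC validation; its usual definition (not spelled out in the abstract) is:

    ```latex
    \%\text{PE} = \frac{\lvert \text{observed} - \text{predicted} \rvert}{\text{observed}} \times 100 .
    ```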

  7. Synergistic target combination prediction from curated signaling networks: Machine learning meets systems biology and pharmacology.

    PubMed

    Chua, Huey Eng; Bhowmick, Sourav S; Tucker-Kellogg, Lisa

    2017-10-01

    Given a signaling network, the target combination prediction problem aims to predict efficacious and safe target combinations for combination therapy. State-of-the-art in silico methods use Monte Carlo simulated annealing (mcsa) to modify a candidate solution stochastically, and use the Metropolis criterion to accept or reject the proposed modifications. However, such stochastic modifications ignore the impact of the choice of targets and their activities on the combination's therapeutic effect and off-target effects, which directly affect the solution quality. In this paper, we present mascot, a method that addresses this limitation by leveraging two additional heuristic criteria to minimize off-target effects and achieve synergy for candidate modification. Specifically, off-target effects measure the unintended response of a signaling network to the target combination and are often associated with toxicity. Synergy occurs when a pair of targets exerts effects that are greater than the sum of their individual effects, and is generally a beneficial strategy for maximizing effect while minimizing toxicity. mascot leverages a machine learning-based target prioritization method, which prioritizes potential targets in a given disease-associated network to select more effective targets (better therapeutic effect and/or lower off-target effects), and Loewe additivity theory from pharmacology, which assesses the non-additive effects in a combination drug treatment to select synergistic target activities. Our experimental study on two disease-related signaling networks demonstrates the superiority of mascot in comparison to existing approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
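
    Loewe additivity, the pharmacological reference model that mascot uses for synergy, is commonly expressed through the combination index; the abstract does not reproduce the formula, so the standard form is given here:

    ```latex
    CI = \frac{d_1}{D_1} + \frac{d_2}{D_2} ,
    ```

    where d_1 and d_2 are the doses of the two agents used in combination to reach a given effect, and D_1 and D_2 are the doses of each agent alone that produce the same effect; CI = 1 indicates additivity, CI < 1 synergy, and CI > 1 antagonism.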

  8. The Use of Shock Isolation mounts in Small High-Speed Craft to Protect Equipment from Wave Slam Effects

    DTIC Science & Technology

    2017-07-01

    Only table-of-contents fragments of this report are available: Mount Relative Displacement Calculations; Computational Methods; Relative Displacement and Mitigation Ratio; Predicted SDOF Responses; Figure 11, Predicted Relative Displacement for 3 Hz 30% Damped Mount.

  9. A literature review on fatigue and creep interaction

    NASA Technical Reports Server (NTRS)

    Chen, W. C.

    1978-01-01

    Life-time prediction methods, which are based on a number of empirical and phenomenological relationships, are presented. Three aspects are reviewed: effects of testing parameters on high temperature fatigue, life-time prediction, and high temperature fatigue crack growth.

  10. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J; Chung, J

    2015-06-15

    Purpose: To verify the doses delivered to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors with respect to the beam delivery and dose calculation techniques in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with tongue cancer were predicted according to the beam delivery methods [step-and-shoot (SS) and sliding window (SW)], the intensity levels for dose optimization, and the dose calculation algorithms. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), the analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose delivered to the pacemaker. Dose variations caused by the heterogeneous material properties of the pacemaker, and the effectiveness of the lead shield, were predicted by Acuros-XB. The dose prediction accuracy and the feasibility of the dose reduction strategy were verified against skin doses measured right above the pacemaker using MOSFET detectors during the radiation treatment. Results: Acuros-XB underestimated the skin doses and overestimated the lead-shield effect, although the dose disagreement was small. Dose prediction improved with higher intensity levels of dose optimization in IMRT. The dedicated tertiary lead sheet effectively reduced the pacemaker dose by up to 60%. Conclusion: The current SS technique delivered scattered doses below the recommended criteria; nevertheless, use of the lead sheet further reduced the scattered doses. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of the scattered doses to patients with implanted medical devices is required in order to design a proper dose reduction strategy.

  11. Electrode effects in dielectric spectroscopy of colloidal suspensions

    NASA Astrophysics Data System (ADS)

    Cirkel, P. A.; van der Ploeg, J. P. M.; Koper, G. J. M.

    1997-02-01

    We present a simple model to account for electrode polarization in colloidal suspensions. Apart from correctly predicting the ω^{-3/2} dependence of the dielectric permittivity at low frequencies ω, the model provides an explicit dependence of the effect on electrode spacing. The predictions are tested for the sodium bis(2-ethylhexyl) sulfosuccinate (AOT) water-in-oil microemulsion with iso-octane as continuous phase. In particular, the dependence of electrode polarization effects on electrode spacing has been measured and is found to be in accordance with the model prediction. Methods to reduce or account for electrode polarization are briefly discussed.

  12. NASA progress in aircraft noise prediction

    NASA Technical Reports Server (NTRS)

    Raney, J. P.; Padula, S. L.; Zorumski, W. E.

    1981-01-01

    Langley Research Center efforts to develop a methodology for predicting the effective perceived noise level (EPNL) produced by jet-powered CTOL aircraft to an accuracy of ±1.5 dB are summarized, with emphasis on the aircraft noise prediction program (ANOPP), which contains a complete set of prediction methods for CTOL aircraft including propulsion system noise sources, aerodynamic or airframe noise sources, forward speed effects, a layered atmospheric model with molecular absorption, ground impedance effects including excess ground attenuation, and a received noise contouring capability. The present state of ANOPP is described and its accuracy and applicability to the preliminary aircraft design process are assessed. Areas are indicated where further theoretical and experimental research on noise prediction is needed. Topics covered include the elements of the noise prediction problem which are incorporated in ANOPP, results of comparisons of ANOPP calculations with measured noise levels, and progress toward treating noise as a design constraint in aircraft system studies.

  13. A comparison of five methods to predict genomic breeding values of dairy bulls from genome-wide SNP markers

    PubMed Central

    2009-01-01

    Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI, for PPT only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree based predictions gave 1.05 - 1.34 times higher accuracies compared to predictions based on pedigree alone. Some methods have largely different computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods which use information from all SNP namely RR-BLUP, Bayes-R, PLSR and SVR generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
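
    RR-BLUP amounts to ridge regression of phenotypes on SNP genotype codes with a single shrinkage parameter; a minimal sketch under that textbook formulation is given below. The fixed λ stands in for the ratio of variance components that is normally estimated from the data, and all names and toy sizes are illustrative.

    ```python
    import numpy as np

    def rr_blup(X, y, lam):
        """Ridge-regression BLUP: shrink all marker effects equally by solving
        (Xc'Xc + lam*I) beta = Xc'(y - mean(y)) on centered genotype codes."""
        mu_x, mu_y = X.mean(axis=0), y.mean()
        Xc = X - mu_x
        beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]),
                               Xc.T @ (y - mu_y))
        predict = lambda X_new: mu_y + (X_new - mu_x) @ beta  # molecular breeding values
        return beta, predict

    # Toy data: 200 bulls x 500 SNPs coded 0/1/2, additive simulated trait.
    rng = np.random.default_rng(42)
    X = rng.integers(0, 3, size=(200, 500)).astype(float)
    y = X @ rng.normal(0.0, 0.05, size=500) + rng.normal(0.0, 1.0, size=200)
    beta, mbv = rr_blup(X, y, lam=500.0)
    print(np.corrcoef(mbv(X), y)[0, 1].round(3))
    ```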

  14. Discovering Synergistic Drug Combination from a Computational Perspective.

    PubMed

    Ding, Pingjian; Luo, Jiawei; Liang, Cheng; Xiao, Qiu; Cao, Buwen; Li, Guanghui

    2018-03-30

    Synergistic drug combinations play an important role in the treatment of complex diseases. The identification of effective drug combinations is vital to further reduce side effects and improve therapeutic efficiency. In previous years, in vitro methods have been the main route to discovering synergistic drug combinations. However, the in vitro approach is limited by its time and resource consumption. Therefore, with the rapid development of computational models and the explosive growth of large phenotypic datasets, computational methods for discovering synergistic drug combinations have become an efficient and promising tool that contributes to precision medicine. The key question for computational methods is how to construct the computational model, as different computational strategies yield different performance. In this review, recent advances in computational methods for predicting effective drug combinations are summarized from multiple aspects. First, various datasets utilized to discover synergistic drug combinations are summarized. Second, we discuss feature-based approaches, partitioning these methods into two classes: feature-based methods built on similarity measures, and feature-based methods built on machine learning. Third, we discuss network-based approaches for uncovering synergistic drug combinations. Finally, we analyze current computational methods for predicting effective drug combinations and discuss their prospects. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. Multimodal manifold-regularized transfer learning for MCI conversion prediction.

    PubMed

    Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang

    2015-12-01

    As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and also for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only samples from a target domain to train a classifier, in this paper we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, in which the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training the classifier. Furthermore, with the integration of a group sparsity constraint into the objective function, the proposed M2TL is capable of selecting informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which achieves a classification accuracy of 80.1% for MCI conversion prediction and significantly outperforms the state-of-the-art methods.
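
    The first component, the kernel-based maximum mean discrepancy (MMD), measures the distance between the auxiliary-domain and target-domain distributions as the distance between their kernel mean embeddings. A minimal empirical estimate with an RBF kernel is sketched below; the kernel choice and bandwidth are illustrative assumptions.

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def mmd2(X_aux, X_tgt, gamma=1.0):
        """Biased empirical estimate of the squared MMD between two samples."""
        Kaa = rbf_kernel(X_aux, X_aux, gamma).mean()
        Ktt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
        Kat = rbf_kernel(X_aux, X_tgt, gamma).mean()
        return Kaa + Ktt - 2.0 * Kat

    # Toy usage: two slightly shifted 4-dimensional samples.
    rng = np.random.default_rng(0)
    print(mmd2(rng.normal(0.0, 1.0, (50, 4)), rng.normal(0.3, 1.0, (60, 4))))
    ```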

  16. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset that covers four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first (sampling) step iteratively modifies the prior distribution of the minority and majority classes, and the second step applies a data cleaning method to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.

  17. Comparison of spatiotemporal prediction models of daily exposure of individuals to ambient nitrogen dioxide and ozone in Montreal, Canada.

    PubMed

    Buteau, Stephane; Hatzopoulou, Marianne; Crouse, Dan L; Smargiassi, Audrey; Burnett, Richard T; Logan, Travis; Cavellin, Laure Deville; Goldberg, Mark S

    2017-07-01

    In previous studies investigating the short-term health effects of ambient air pollution, the exposure metric that is often used is the daily average across monitors, thus assuming that all individuals have the same daily exposure. Studies that incorporate space-time exposures of individuals are essential to further our understanding of the short-term health effects of ambient air pollution. As part of a longitudinal cohort study of the acute effects of air pollution that incorporated subject-specific information and medical histories of subjects throughout the follow-up, the purpose of this study was to develop and compare different prediction models using data from fixed-site monitors and other monitoring campaigns to estimate daily, spatially-resolved concentrations of ozone (O3) and nitrogen dioxide (NO2) at participants' residences in Montreal, 1991-2002. We used the following methods to predict spatially-resolved daily concentrations of O3 and NO2 for each geographic region in Montreal (defined by three-character postal code areas): (1) assigning concentrations from the nearest monitor; (2) spatial interpolation using inverse-distance weighting; (3) back-extrapolation from a land-use regression model from a dense monitoring survey; and (4) a combination of a land-use and Bayesian maximum entropy model. We used a variety of indices of agreement to compare estimates of exposure assigned from the different methods, notably scatterplots of pairwise predictions, distributions of differences, and computation of the absolute-agreement intraclass correlation (ICC). For each pairwise prediction, we also produced maps of the ICCs by these regions indicating the spatial variability in the degree of agreement. We found some substantial differences in agreement across pairs of methods in daily mean predicted concentrations of O3 and NO2. On a given day and postal code area the difference in the concentration assigned could be as high as 131 ppb for O3 and 108 ppb for NO2. For both pollutants, better agreement was found between predictions from the nearest monitor and the inverse-distance weighting interpolation methods, with ICCs of 0.89 (95% confidence interval (CI): 0.89, 0.89) for O3 and 0.81 (95% CI: 0.80, 0.81) for NO2, respectively. For this pair of methods the maximum difference on a given day and postal code area was 36 ppb for O3 and 74 ppb for NO2. The back-extrapolation method showed a higher degree of disagreement with the nearest monitor approach, inverse-distance weighting interpolation, and the Bayesian maximum entropy model, which were strongly constrained by the sparse monitoring network. The maps showed that the patterns of agreement differed across the postal code areas and the variability depended on the pair of methods compared and the pollutants. For O3, but not NO2, postal areas showing greater disagreement were mostly located near the city centre and along highways, especially in maps involving the back-extrapolation method. In view of the substantial differences in daily concentrations of O3 and NO2 predicted by the different methods, we suggest that analyses of the health effects from air pollution should make use of multiple exposure assessment methods. Although we cannot make any recommendations as to which is the most valid method, models that make use of higher spatially resolved data, such as from dense exposure surveys or from high spatial resolution satellite data, likely provide the most valid estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
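
    Method (2) above, inverse-distance weighting, predicts the concentration at an unmonitored location as a distance-weighted average of the same day's monitor readings. A minimal sketch follows; the power parameter p = 2 is a common default rather than a value stated in the abstract.

    ```python
    import numpy as np

    def idw(monitor_xy, monitor_conc, query_xy, p=2.0):
        """Inverse-distance-weighted interpolation of one day's concentrations."""
        d = np.linalg.norm(monitor_xy[None, :, :] - query_xy[:, None, :], axis=-1)
        d = np.maximum(d, 1e-9)  # guard against zero distance at a monitor
        w = 1.0 / d ** p
        return (w * monitor_conc[None, :]).sum(axis=1) / w.sum(axis=1)

    # Toy usage: 3 monitors, 2 postal-code centroids (km coordinates, ppb).
    monitors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
    conc = np.array([30.0, 42.0, 25.0])
    queries = np.array([[1.0, 1.0], [4.0, 4.0]])
    print(idw(monitors, conc, queries))
    ```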

  18. In silico site-directed mutagenesis informs species-specific predictions of chemical susceptibility derived from the Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool

    EPA Science Inventory

    The Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool was developed to address needs for rapid, cost effective methods of species extrapolation of chemical susceptibility. Specifically, the SeqAPASS tool compares the primary sequence (Level 1), functiona...

  19. A Modified Kirchhoff plate theory for Free Vibration analysis of functionally graded material plates using meshfree method

    NASA Astrophysics Data System (ADS)

    Nguyen Van Do, Vuong

    2018-04-01

    In this paper, a modified Kirchhoff theory is presented for free vibration analysis of functionally graded material (FGM) plates based on a modified radial point interpolation method (RPIM). Shear deformation effects are taken into account in the modified theory to avoid the shear-locking phenomenon of thin plates. Owing to the proposed refined plate theory, the number of independent unknowns is reduced by one, leaving four degrees of freedom per node. The free vibration results computed by the modified RPIM are compared with analytical solutions to verify the effectiveness and accuracy of the developed mesh-free method. Detailed parametric studies are then conducted, covering the effects of thickness ratio, boundary conditions, and material inhomogeneity on sample problems of square plates. The results illustrate that the modified mesh-free RPIM closely reproduces the exact solutions and provides stable and accurate predictions in comparison with other published analyses.

  20. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods.

    PubMed

    Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming

    2014-12-01

    Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may enable more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including the genetic algorithm (GA), the simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized for selecting an important subset of features from a total of 16 clinical features. These feature selection methods were combined with a support vector machine (SVM) for developing predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation achieved average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can flag patients at high risk of recurrence, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
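
    The wrapper idea behind the paper's GA/SA feature selection (score each candidate feature subset by cross-validated SVM performance, keep the best) can be illustrated with a simple random-search stand-in; the GA and SA search operators themselves are omitted, and all names and settings are illustrative.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def wrapper_select(X, y, n_candidates=50, seed=0):
        """Random-search stand-in for a GA/SA wrapper: sample feature subsets,
        score each by 5-fold cross-validated SVM AUC, keep the best."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        best_mask, best_auc = None, -np.inf
        for _ in range(n_candidates):
            mask = rng.random(n_features) < 0.5
            if not mask.any():
                continue
            auc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y,
                                  cv=5, scoring="roc_auc").mean()
            if auc > best_auc:
                best_mask, best_auc = mask, auc
        return best_mask, best_auc

    # Toy usage: 83 patients x 16 clinical features, echoing the study's setting.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(83, 16))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=83) > 0).astype(int)
    mask, auc = wrapper_select(X, y)
    print("selected features:", mask.nonzero()[0], "CV AUC:", round(auc, 3))
    ```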

  1. Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2017-01-01

    An epistatic genetic architecture can have a significant impact on the prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits comprised of epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). The possible values for these factors, and the number of combinations of factor levels that influence the performance of GP methods, can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximizes the difference between prediction accuracies of best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, the heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method vs. the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when the genetic architecture consisted solely of additive effects and heritability was equal to one. PMID:28720710

  2. Modeling and Simulation of Nanoindentation

    NASA Astrophysics Data System (ADS)

    Huang, Sixie; Zhou, Caizhi

    2017-11-01

    Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  3. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from historical wind power records of an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted power values. The hybrid model combines the persistence method, MLR, and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. The WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA), and artificial neural network (ANN) models. Comparison of the results confirms the validity of the proposed hybrid model in terms of error and correlation coefficient and shows that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.
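
    The abstract does not spell out the MLR&LS combination, but its core step, a least-squares fit of a multiple linear regression of the next wind-power ratio on recent lagged ratios, can be sketched as follows; the lag order of 3 is an illustrative assumption.

    ```python
    import numpy as np

    def fit_lagged_mlr(ratios, n_lags=3):
        """Least-squares fit of r[t] on (1, r[t-1], ..., r[t-n_lags])."""
        cols = [np.ones(len(ratios) - n_lags)]
        for k in range(1, n_lags + 1):
            cols.append(ratios[n_lags - k:len(ratios) - k])  # column of r[t-k]
        X = np.column_stack(cols)
        y = ratios[n_lags:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def predict_next(ratios, coef, n_lags=3):
        x = np.concatenate([[1.0], ratios[-1:-n_lags - 1:-1]])  # 1, r[T], r[T-1], ...
        return float(x @ coef)

    # Toy usage on a synthetic wind-power ratio series in [0, 1].
    rng = np.random.default_rng(0)
    r = np.clip(0.5 + 0.3 * np.sin(np.arange(200) / 10.0)
                + 0.05 * rng.normal(size=200), 0.0, 1.0)
    coef = fit_lagged_mlr(r)
    print("next-step forecast:", round(predict_next(r, coef), 3))
    ```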

  4. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer algorithm of this procedure was written and is called DONBOL. A description of the computer program and a guide to its use is given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies which have the jet exhaust flow simulated by solid bodies. For nozzle configurations which have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. This method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.

  5. A novel principal component analysis for spatially misaligned multivariate air pollution data.

    PubMed

    Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A

    2017-01-01

    We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.

  6. Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors

    NASA Technical Reports Server (NTRS)

    VanOverbeke, Thomas J.

    1998-01-01

    The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the interaction of turbulence with combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics, and by using more accurate transport elements. The stochastic method is used together with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE), which solves for velocity, pressure, turbulent kinetic energy, and dissipation; the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays, and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, for which the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation; grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but it has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.

  7. Method effects: the problem with negatively versus positively keyed items.

    PubMed

    Lindwall, Magnus; Barkoukis, Vassilis; Grano, Caterina; Lucidi, Fabio; Raudsepp, Lennart; Liukkonen, Jarmo; Thøgersen-Ntoumani, Cecilie

    2012-01-01

    Using confirmatory factor analyses, we examined method effects on Rosenberg's Self-Esteem Scale (RSES; Rosenberg, 1965) in a sample of older European adults. Nine hundred forty-nine community-dwelling adults 60 years of age or older from 5 European countries completed the RSES as well as measures of depression and life satisfaction. The 2 models that had an acceptable fit with the data included method effects. The method effects were associated with both positively and negatively worded items. The method effects models were invariant across gender and age, but not across countries. Both depression and life satisfaction predicted method effects. Individuals with higher depression scores and lower life satisfaction scores were more likely to endorse negatively phrased items.

  8. Implementing a novel movement-based approach to inferring parturition and neonate caribou calf survival.

    PubMed

    Bonar, Maegwin; Ellington, E Hance; Lewis, Keith P; Vander Wal, Eric

    2018-01-01

    In ungulates, parturition is correlated with a reduction in movement rate. With advances in movement-based technologies comes an opportunity to develop new techniques to assess reproduction in wild ungulates that are less invasive and reduce biases. DeMars et al. (2013, Ecology and Evolution 3:4149-4160) proposed two promising new methods (individual- and population-based; the DeMars model) that use GPS inter-fix step length of adult female caribou (Rangifer tarandus caribou) to infer parturition and neonate survival. Our objective was to apply the DeMars model to caribou populations that may violate model assumptions for retrospective analysis of parturition and calf survival. We extended the use of the DeMars model after assigning parturition and calf mortality status by examining herd-wide distributions of parturition date, calf mortality date, and survival. We used the DeMars model to estimate parturition and calf mortality events and compared them with the known parturition and calf mortality events from collared adult females (n = 19). We also used the DeMars model to estimate parturition and calf mortality events for collared female caribou with unknown parturition and calf mortality events (n = 43), derived herd-wide estimates of calf survival as well as distributions of parturition and calf mortality dates, and compared them to herd-wide estimates generated from calves fitted with VHF collars (n = 134). For our data, the individual-based method was effective at predicting calf mortality but not parturition, whereas the population-based method was more effective at predicting parturition but not calf mortality. At the herd level, the predicted distributions of parturition date from both methods differed from each other and from the distribution derived from the parturition dates of VHF-collared calves (log-rank test: χ2 = 40.5, df = 2, p < 0.01). The predicted distributions of calf mortality dates from both methods were similar to the observed distribution derived from VHF-collared calves. Both methods underestimated herd-wide calf survival based on VHF-collared calves; however, a combination of the individual- and population-based methods produced herd-wide survival estimates similar to estimates generated from collared calves. The limitations we experienced when applying the DeMars model could result from shortcomings in our data that violate model assumptions. However, despite the differences in our caribou systems, with proper validation techniques the framework of the DeMars model is sufficient to make inferences on parturition and calf mortality.

  10. A feature-based approach to modeling protein-protein interaction hot spots.

    PubMed

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-05-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π-related interactions, especially π···π interactions.
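
    The pipeline described (decision-tree feature selection followed by an SVM classifier) is easy to sketch; the following toy Python example uses synthetic stand-in features rather than the study's 54 structural and sequence features.

        # Hypothetical sketch: rank features with a decision tree, keep the
        # best ones, then train an SVM hot-spot classifier. Data are synthetic.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 54))        # 200 interface residues, 54 features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)) > 1  # toy labels

        # feature selection via decision-tree importances
        tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
        top = np.argsort(tree.feature_importances_)[::-1][:10]

        # SVM on the selected feature subset
        clf = SVC(kernel="rbf", C=1.0).fit(X[:, top], y)
        print("training accuracy:", clf.score(X[:, top], y))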

  11. Development and validation of a general approach to predict and quantify the synergism of anti-cancer drugs using experimental design and artificial neural networks.

    PubMed

    Pivetta, Tiziana; Isaia, Francesco; Trudu, Federica; Pani, Alessandra; Manca, Matteo; Perra, Daniela; Amato, Filippo; Havel, Josef

    2013-10-15

    Combining two or more drugs in multidrug mixtures is a growing trend in the treatment of cancer. The goal is to search for a synergistic effect and thereby reduce the required dose and inhibit the development of resistance. An advanced model-free approach for data exploration and analysis, based on artificial neural networks (ANN) and experimental design, is proposed to predict and quantify the synergism of drugs. The proposed method non-linearly correlates the concentrations of drugs with the cytotoxicity of the mixture, making it possible to choose the optimal drug combination that gives the maximum synergism. The use of ANN allows for the prediction of the cytotoxicity of each combination of drugs in the chosen concentration interval. The method was validated by preparing and experimentally testing the combinations with the predicted highest synergistic effect. In all cases, the data predicted by the network were experimentally confirmed. The method was applied to several binary mixtures of cisplatin and [Cu(1,10-orthophenanthroline)2(H2O)](ClO4)2, Cu(1,10-orthophenanthroline)(H2O)2(ClO4)2 or [Cu(1,10-orthophenanthroline)2(imidazolidine-2-thione)](ClO4)2. The cytotoxicity of the two drugs, alone and in combination, was determined against human acute T-lymphoblastic leukemia cells (CCRF-CEM). For all systems, a synergistic effect was found for selected combinations.
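
    A minimal sketch of the surrogate-model idea, assuming a small feed-forward network maps two drug concentrations to cytotoxicity and is then scanned over a concentration grid; the training data and the additive no-interaction reference below are invented placeholders, not the study's assay data or its definition of synergism.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        conc = rng.uniform(0, 10, size=(120, 2))     # toy [drug A, drug B] doses
        cytotox = 5*conc[:, 0] + 4*conc[:, 1] + 2*conc.prod(axis=1)  # toy synergy

        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                           random_state=1).fit(conc, cytotox)

        grid = np.array([[a, b] for a in np.linspace(0, 10, 21)
                                 for b in np.linspace(0, 10, 21)])
        pred = net.predict(grid)
        additive = 5*grid[:, 0] + 4*grid[:, 1]       # toy no-interaction reference
        best = grid[np.argmax(pred - additive)]      # most synergistic mixture
        print("most synergistic combination (toy):", best)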

  12. Prediction of cloud condensation nuclei activity for organic compounds using functional group contribution methods

    DOE PAGES

    Petters, M. D.; Kreidenweis, S. M.; Ziemann, P. J.

    2016-01-19

    A wealth of recent laboratory and field experiments demonstrate that organic aerosol composition evolves with time in the atmosphere, leading to changes in the influence of the organic fraction on cloud condensation nuclei (CCN) spectra. There is a need for tools that can realistically represent the evolution of CCN activity to better predict indirect effects of organic aerosol on clouds and climate. This work describes a model to predict the CCN activity of organic compounds from functional group composition. Following previous methods in the literature, we test the ability of semi-empirical group contribution methods in Köhler theory to predict the effective hygroscopicity parameter, kappa. However, in our approach we also account for liquid–liquid phase boundaries to simulate phase-limited activation behavior. Model evaluation against a selected database of published laboratory measurements demonstrates that kappa can be predicted within a factor of 2. Simulation of homologous series is used to identify the relative effectiveness of different functional groups in increasing the CCN activity of weakly functionalized organic compounds. Hydroxyl, carboxyl, aldehyde, hydroperoxide, carbonyl, and ether moieties promote CCN activity while methylene and nitrate moieties inhibit CCN activity. Furthermore, the model can be incorporated into scale-bridging test beds such as the Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) to evaluate the evolution of kappa for a complex mix of organic compounds and to develop suitable parameterizations of CCN evolution for larger-scale models.
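
    The group-contribution idea can be illustrated with a toy calculation; the per-group kappa increments below are invented for illustration, whereas the paper fits its values to published CCN measurements.

        # Toy group-contribution estimate of the hygroscopicity parameter kappa.
        # All increment values are hypothetical placeholders.
        GROUP_DELTA_KAPPA = {
            "hydroxyl": +0.04, "carboxyl": +0.05, "aldehyde": +0.03,
            "hydroperoxide": +0.03, "carbonyl": +0.02, "ether": +0.02,
            "methylene": -0.01, "nitrate": -0.02,
        }

        def estimate_kappa(groups, base_kappa=0.0):
            """Sum functional-group increments onto a base kappa (illustrative)."""
            return base_kappa + sum(GROUP_DELTA_KAPPA[g] * n
                                    for g, n in groups.items())

        # e.g. a weakly functionalized compound:
        # 8 methylene, 2 hydroxyl, 1 carboxyl group
        print(estimate_kappa({"methylene": 8, "hydroxyl": 2, "carboxyl": 1}))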

  13. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
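
    A minimal sketch of delta-method error propagation through an inverted four-parameter logistic (4PL) standard curve, a common ELISA calibration form; the curve parameters and signal uncertainty are hypothetical.

        import numpy as np

        def conc_from_signal(y, a, b, c, d):
            """Invert the 4PL curve y = d + (a - d) / (1 + (x / c)**b)."""
            return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

        a, b, c, d = 2.0, 1.2, 50.0, 0.05   # hypothetical fitted 4PL parameters
        y, sigma_y = 1.0, 0.03              # observed signal and its std. dev.

        x_hat = conc_from_signal(y, a, b, c, d)
        # numerical derivative dx/dy for first-order error propagation
        eps = 1e-6
        dx_dy = (conc_from_signal(y + eps, a, b, c, d) - x_hat) / eps
        sigma_x = abs(dx_dy) * sigma_y
        print(f"concentration {x_hat:.2f} +/- {sigma_x:.2f}")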

  14. A novel method for structure-based prediction of ion channel conductance properties.

    PubMed Central

    Smart, O S; Breed, J; Smith, G R; Sansom, M S

    1997-01-01

    A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel as measured in the HOLE program with an Ohmic model of conductance. An empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate, on average, to within a factor of 1.62 of the true values. The predictive r2 was equal to 0.90, which is indicative of a good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions for the conductance of channels with known structure but without reported conductances are given. A modification of the procedure that calculates the expected effect of the addition of nonelectrolyte polymers on conductance is also set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
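
    The Ohmic model is simple enough to sketch: treat the pore as resistors in series along its axis using a HOLE-style radius profile, then apply a correction factor. The profile, conductivity, and correction below are illustrative, not the paper's calibrated values.

        import numpy as np

        kappa = 1.5                      # bulk conductivity, S/m (assumed ~150 mM KCl)
        z = np.linspace(0.0, 4e-9, 200)  # pore axis, 4 nm long
        r = 0.3e-9 + 0.2e-9 * np.sin(np.pi * z / z[-1])   # toy radius profile (m)

        dz = z[1] - z[0]
        resistance = np.sum(dz / (kappa * np.pi * r**2))  # resistors in series
        g_ohmic = 1.0 / resistance
        g_corrected = g_ohmic / 5.0      # placeholder empirical correction factor
        print(f"Ohmic estimate {g_ohmic*1e12:.0f} pS, "
              f"corrected {g_corrected*1e12:.0f} pS")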

  15. Extended Glauert tip correction to include vortex rollup effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maniaci, David; Schmitz, Sven

    Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. The results obtained serve as an exact correction function for the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.
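
    For context, the standard Prandtl/Glauert tip-loss factor that this work corrects can be computed directly; in the sketch below the blade count, radius, and inflow angle are chosen purely for illustration (the WindDVE-based correction itself is not reproduced).

        import numpy as np

        def prandtl_tip_loss(r, R=50.0, B=3, phi=np.deg2rad(7.0)):
            """F = (2/pi) * arccos(exp(-B (R - r) / (2 r sin(phi))))."""
            f = B * (R - r) / (2.0 * r * np.sin(phi))
            return (2.0 / np.pi) * np.arccos(np.exp(-f))

        # tip loss grows strongest near the blade tip (r -> R)
        for r in (25.0, 40.0, 48.0, 49.5):
            print(f"r = {r:5.1f} m  F = {prandtl_tip_loss(r):.3f}")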

  16. Extended Glauert tip correction to include vortex rollup effects

    DOE PAGES

    Maniaci, David; Schmitz, Sven

    2016-10-03

    Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. The results obtained serve as an exact correction function for the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.

  17. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique

    PubMed Central

    2014-01-01

    The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors utilize the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify protein-coding regions in DNA is rising steadily. Several techniques of DNA feature extraction spanning various fields have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. The proposed method was compared with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, on several genes from various organisms, and the results show that the proposed method offers a more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
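
    As a baseline for the period-3 feature the estimator exploits, one can project binary base-indicator sequences onto the DFT component at frequency 1/3 and sum the powers over a sliding window; this sketch shows only that baseline, not the paper's least-norm subspace estimator.

        import numpy as np

        def period3_power(dna, window=351):
            """Sliding-window DFT power at f = 1/3 summed over A, C, G, T."""
            dna = dna.upper()
            w = np.exp(-2j * np.pi / 3 * np.arange(window))  # DFT kernel at 1/3
            scores = []
            for start in range(0, len(dna) - window + 1):
                seg = dna[start:start + window]
                p = 0.0
                for base in "ACGT":
                    x = np.array([1.0 if ch == base else 0.0 for ch in seg])
                    p += abs(np.dot(x, w)) ** 2
                scores.append(p)
            return np.array(scores)

        seq = "ATG" + "GCT" * 200 + "TAA"   # toy sequence, strong 3-periodicity
        print(period3_power(seq, window=300).max())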

  18. Prioritization of candidate disease genes by combining topological similarity and semantic similarity.

    PubMed

    Liu, Bin; Jin, Min; Zeng, Pan

    2015-10-01

    The identification of gene-phenotype relationships is very important for the treatment of human diseases. Studies have shown that genes causing the same or similar phenotypes tend to interact with each other in a protein-protein interaction (PPI) network. Thus, many identification methods based on the PPI network model have achieved good results. However, in the PPI network, some interactions between the proteins encoded by a candidate gene and the proteins encoded by known disease genes are very weak. Therefore, some studies have combined the PPI network with other genomic information and reported good predictive performances. However, we believe that the results could be further improved. In this paper, we propose a new method that uses the semantic similarity between the candidate gene and known disease genes to set the initial probability vector of a random walk with restart algorithm in a human PPI network. The effectiveness of our method was demonstrated by leave-one-out cross-validation, and the experimental results indicated that our method outperformed other methods. Additionally, our method can predict new causative genes of multifactor diseases, including Parkinson's disease, breast cancer and obesity. The top predictions were good and consistent with the findings in the literature, which further illustrates the effectiveness of our method.
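
    A minimal random-walk-with-restart sketch on a toy PPI adjacency matrix, with the restart vector seeded by semantic similarity to known disease genes as the paper proposes; the matrix and similarity values are made up.

        import numpy as np

        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [1, 1, 0, 0, 1],
                      [0, 1, 0, 0, 1],
                      [0, 0, 1, 1, 0]], dtype=float)
        W = A / A.sum(axis=0)               # column-normalized transition matrix

        sem_sim = np.array([0.9, 0.1, 0.4, 0.0, 0.2])  # toy semantic similarities
        p0 = sem_sim / sem_sim.sum()        # similarity-seeded initial vector

        r, p = 0.7, p0.copy()               # r is the restart probability
        for _ in range(100):
            p_next = (1 - r) * W @ p + r * p0
            if np.abs(p_next - p).sum() < 1e-10:
                break
            p = p_next
        print("steady-state gene scores:", np.round(p, 3))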

  19. An overview of self-consistent methods for fiber-reinforced composites

    NASA Technical Reports Server (NTRS)

    Gramoll, Kurt C.; Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    The Walker et al. (1989) self-consistent method to predict both the elastic and the inelastic effective material properties of composites is examined and compared with the results of other self-consistent and elasticity-based solutions. The elastic part of their method is shown to be identical to other self-consistent methods for non-dilute reinforced composite materials, namely the Hill (1965), Budiansky (1965), and Nemat-Nasser et al. (1982) derivations. A simplified form of the non-dilute self-consistent method is also derived. The elastic effective material properties for fiber-reinforced material predicted by the Walker method were found to deviate from the elasticity solution for the ν₃₁, K₁₂, and μ₃₁ material properties (the fiber is in the 3 direction), especially at larger volume fractions. Also, the prediction for the transverse shear modulus, μ₁₂, exceeds one of the accepted Hashin bounds. Only the longitudinal elastic modulus E₃₃ agrees with the elasticity solution. The differences between the Walker and the elasticity solutions are primarily due to an assumption used in the derivation of the self-consistent method: the strain fields in the inclusions and the matrix are assumed to remain constant, which is not correct for a high concentration of inclusions.

  20. Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume

    NASA Astrophysics Data System (ADS)

    Xiao, Mengting; Li, Cheng

    2018-01-01

    Based on the realities of air cargo development, a multi-dimensional fuzzy regression method is used to determine the influencing factors; the three most important are GDP, total fixed-asset investment, and regular flight route mileage. Drawing on systems thinking and analogy methods, fuzzy numbers are combined with multiple regression to predict civil aviation cargo volume. Comparison with the 13th Five-Year Plan for China's Civil Aviation Development (2016-2020) shows that the method can effectively improve forecasting accuracy and reduce forecasting risk, demonstrating that the model is feasible for predicting civil aviation freight volume and has high practical significance and operability.

  1. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
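
    The predict-in-feature-space idea can be sketched with scikit-learn's KernelPCA, using its built-in pre-image approximation (a learned inverse map) in place of the paper's fixed-point iteration; the surface data and the trivial latent-space extrapolation predictor are placeholders.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(2)
        t = np.linspace(0, 8 * np.pi, 400)
        states = np.column_stack([np.sin(t + d) for d in np.linspace(0, 1, 50)])
        states += 0.01 * rng.normal(size=states.shape)  # toy high-dim. states

        kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.5,
                         fit_inverse_transform=True, alpha=0.1)
        Z = kpca.fit_transform(states)

        # toy latent predictor: linear extrapolation from the last two embeddings
        z_pred = Z[-1] + (Z[-1] - Z[-2])
        x_pred = kpca.inverse_transform(z_pred.reshape(1, -1))  # pre-image estimate
        print("predicted state shape:", x_pred.shape)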

  2. ProTox: a web server for the in silico prediction of rodent oral toxicity

    PubMed Central

    Drwal, Malgorzata N.; Banerjee, Priyanka; Dunkel, Mathias; Wettig, Martin R.; Preissner, Robert

    2014-01-01

    Animal trials are currently the major method for determining the possible toxic effects of drug candidates and cosmetics. In silico prediction methods represent an alternative approach and aim to rationalize the preclinical drug development, thus enabling the reduction of the associated time, costs and animal experiments. Here, we present ProTox, a web server for the prediction of rodent oral toxicity. The prediction method is based on the analysis of the similarity of compounds with known median lethal doses (LD50) and incorporates the identification of toxic fragments, therefore representing a novel approach in toxicity prediction. In addition, the web server includes an indication of possible toxicity targets which is based on an in-house collection of protein–ligand-based pharmacophore models (‘toxicophores’) for targets associated with adverse drug reactions. The ProTox web server is open to all users and can be accessed without registration at: http://tox.charite.de/tox. The only requirement for the prediction is the two-dimensional structure of the input compounds. All ProTox methods have been evaluated based on a diverse external validation set and displayed strong performance (sensitivity, specificity and precision of 76, 95 and 75%, respectively) and superiority over other toxicity prediction tools, indicating their possible applicability for other compound classes. PMID:24838562
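
    A toy nearest-neighbor sketch of the similarity-based LD50 idea: find the training compound most similar by Tanimoto coefficient on binary fragment fingerprints and borrow its LD50. The fingerprints and values are invented; ProTox's actual fragment and pharmacophore machinery is far richer.

        def tanimoto(a, b):
            """Tanimoto coefficient between two sets of fingerprint bits."""
            inter = len(a & b)
            return inter / float(len(a) + len(b) - inter) if (a or b) else 0.0

        train = {
            "cmpd_A": ({1, 4, 9, 17, 23}, 250.0),   # (bits, LD50 mg/kg, invented)
            "cmpd_B": ({2, 4, 8, 23, 31}, 1200.0),
            "cmpd_C": ({1, 4, 9, 23, 40}, 310.0),
        }

        query = {1, 4, 9, 23, 35}
        best = max(train, key=lambda k: tanimoto(query, train[k][0]))
        print("nearest:", best, "predicted LD50:", train[best][1])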

  3. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case, where they were equal. The failed region patterns between the models are similar; however, differences arise because element elimination relieves stress, causing probabilistic failed regions to continue to accumulate after the point at which no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.

  4. The Fatigue Approach to Vibration and Health: is it a Practical and Viable way of Predicting the Effects on People?

    NASA Astrophysics Data System (ADS)

    Sandover, J.

    1998-08-01

    The fatigue approach assumes that the vertebral end-plates are the weak link in the spine subjected to shock and vibration, and fail as a result of material fatigue. The theory assumes that end-plate damage leads to degeneration and pain in the lumbar spine. There is evidence for both the damage predicted and the fatigue mode of failure, so the approach may provide a basis for predictive methods for use in epidemiology and standards. An available data set from a variety of heavy vehicles in practical situations was used for predictions of spinal stress and fatigue life. Although there was some disparity between the predictive methods used, the more developed methods indicated fatigue lives that appeared reasonable, taking into account the vehicles tested and our knowledge of spinal degeneration. It is argued that the modelling and fatigue approaches combined offer a basis for estimating the effects of vibration and shock on health. Although the human variables are such that the approach, as yet, only offers rough estimates, it offers a good basis for understanding. The approach indicates that peak values are important and large peaks dominate risk. The method indicates that long-term r.m.s. methods probably underestimate the risk of injury. The BS 6841 Wb and ISO 2631 Wk weightings have shortcomings when used where peak values are important. A simple model may be more appropriate. The principle can be applied to continuous vibration as well as high acceleration events, so that one method can be applied universally to continuous vibrations, high acceleration events and mixtures of these. An endurance limit can be hypothesised and, if this limit is sufficiently high, then the need for many measurements can be reduced.

  5. Impacts of Earth rotation parameters on GNSS ultra-rapid orbit prediction: Derivation and real-time correction

    NASA Astrophysics Data System (ADS)

    Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto

    2017-12-01

    Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in ITRS for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (error related to ERP) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can optimize ultra-rapid orbit prediction.

  6. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest centers on patient-level predictor effects, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed in the standard way, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
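
    A small illustrative contrast on simulated clustered data, approximating the cluster effect with fixed cluster indicator variables rather than a true random intercept (a dedicated mixed-model package would fit the genuine random-intercept model); all data below are simulated.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n, k = 1600, 19
        cluster = rng.integers(0, k, n)
        u = rng.normal(0, 1.0, k)              # cluster effects (the ICC source)
        x = rng.normal(size=(n, 2))            # patient-level predictors
        logit = x @ np.array([0.8, -0.5]) + u[cluster]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))

        pooled = LogisticRegression(max_iter=2000).fit(x, y)
        dummies = np.eye(k)[cluster]           # cluster indicator columns
        X_c = np.hstack([x, dummies])
        clustered = LogisticRegression(max_iter=2000).fit(X_c, y)
        print("pooled accuracy:", pooled.score(x, y))
        print("with cluster intercepts:", clustered.score(X_c, y))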

  7. An Application of the Acoustic Similarity Law to the Numerical Analysis of Centrifugal Fan Noise

    NASA Astrophysics Data System (ADS)

    Jeon, Wan-Ho; Lee, Duck-Joo; Rhee, Huinam

    Centrifugal fans, which are frequently used in daily life and in various industries, often cause severe noise problems. Generally, centrifugal fan noise consists of tones at the blade passing frequency and its higher harmonics. These tonal sounds come from the interaction between the flow discharged from the impeller and the cutoff in the casing. Prediction of the noise from a centrifugal fan is increasingly necessary to optimize designs that meet both performance and noise criteria. However, only limited studies on noise prediction methods exist because of the difficulty of obtaining detailed information about the flow field and the casing's effect on noise radiation. This paper aims to investigate the noise generation mechanism of a centrifugal fan and to develop a prediction method for the unsteady flow and acoustic pressure fields. To this end, a numerical analysis method using the acoustic similarity law is proposed, and comparison of the predicted results with available experimental results verifies that the method captures the noise generation mechanism well.

  8. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    PubMed

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interactions among their secondary structures or domains. However, due to the limitations of experimental self-interaction detection, one major challenge in SIP prediction is how to exploit computational approaches for SIP detection based on evolutionary information contained in the protein sequence. In this work, we present a novel computational approach named WELM-LAG, which combines the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) to predict SIPs based on protein sequence. The major improvement of our method lies in presenting an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), and then employing a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on human and yeast datasets, respectively. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/.
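
    A minimal, unweighted extreme learning machine in NumPy illustrates the classifier family used: a random hidden layer followed by a single least-squares solve for the output weights. The paper's weighting scheme and PSSM-derived LAG features are not reproduced.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 20))               # stand-in feature vectors
        y = (X[:, 0] - X[:, 3] > 0).astype(float)    # toy SIP / non-SIP labels

        H_units = 64
        W = rng.normal(size=(20, H_units))           # random input weights
        b = rng.normal(size=H_units)                 # random biases
        H = np.tanh(X @ W + b)                       # hidden-layer activations

        beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights, one solve
        pred = (np.tanh(X @ W + b) @ beta) > 0.5
        print("training accuracy:", (pred == y.astype(bool)).mean())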

  9. Husbands’ and Wives’ Physical Activity and Depressive Symptoms: Longitudinal Findings from the Cardiovascular Health Study

    PubMed Central

    Monin, Joan K.; Levy, Becca; Chen, Baibing; Fried, Terri; Stahl, Sarah T.; Schulz, Richard; Doyle, Margaret; Kershaw, Trace

    2015-01-01

    Background When examining older adults’ health behaviors and psychological health it is important to consider the social context. Purpose To examine in older adult marriages whether each spouse’s physical activity predicted changes in their own (actor effects) and their partner’s (partner effects) depressive symptoms. Gender differences were also examined. Method Each spouse within 1,260 married couples (at baseline) in the Cardiovascular Health Study completed self-report measures at wave 1 (1989–1990), wave 3 (1992–1993), and wave 7 (1996–1997). Dyadic path analyses were performed. Results Husbands’ physical activity significantly predicted own decreased depressive symptoms (actor effect). For both spouses, own physical activity did not significantly predict the spouse’s depressive symptoms (partner effects). However, husbands’ physical activity and depressive symptoms predicted wives’ physical activity and depressive symptoms (partner effects), respectively. Depressive symptoms did not predict physical activity. Conclusion Findings suggest that husbands’ physical activity is particularly influential for older married couples’ psychological health. PMID:25868508

  10. Predicting missing links in complex networks based on common neighbors and distance

    PubMed Central

    Yang, Jinxuan; Zhang, Xiao-Dong

    2016-01-01

    Algorithms based on the common neighbors metric to predict missing links in complex networks are very popular, but most of these algorithms do not account for missing links between nodes with no common neighbors. In some cases, especially when node pairs have few common neighbors, these methods are not accurate enough to reconstruct networks. In this paper we propose a new algorithm based on common neighbors and distance to improve the accuracy of link prediction. Our proposed algorithm is remarkably effective at predicting missing links between nodes with no common neighbors and performs better than most existing methods on a variety of real-world networks without increasing complexity. PMID:27905526
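
    A toy scorer in the spirit described: common-neighbor count plus a distance term, so that node pairs with no common neighbors still receive a graded score. The weighting below is arbitrary, not the paper's formulation.

        import networkx as nx

        def cn_distance_score(G, u, v, alpha=1.0):
            """Common-neighbor count plus an inverse shortest-path term."""
            cn = len(list(nx.common_neighbors(G, u, v)))
            try:
                d = nx.shortest_path_length(G, u, v)
            except nx.NetworkXNoPath:
                return 0.0
            return cn + alpha / d

        G = nx.karate_club_graph()
        ranked = sorted(nx.non_edges(G),
                        key=lambda e: cn_distance_score(G, *e), reverse=True)
        print("top predicted links:", ranked[:5])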

  11. Modeling and evaluating of surface roughness prediction in micro-grinding on soda-lime glass considering tool characterization

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Gong, Yadong; Wang, Jinsheng

    2013-11-01

    Current research on micro-grinding mainly focuses on the optimal processing technology for different materials. However, the material removal mechanism in micro-grinding is the basis for achieving a high-quality processed surface. Therefore, a novel method for predicting surface roughness in micro-grinding of hard brittle materials, which accounts for the grain protrusion topography of the micro-grinding tool, is proposed in this paper. The differences in material removal mechanism between the conventional grinding process and the micro-grinding process are analyzed. Topography characterization has been performed on micro-grinding tools fabricated by electroplating. Models of grain density generation and grain interval are built, and a new model for predicting micro-grinding surface roughness is developed. To verify the precision and applicability of the proposed surface roughness prediction model, an orthogonal micro-grinding experiment on soda-lime glass is designed and conducted. A series of micro-machined surfaces of the brittle material with roughness from 78 nm to 0.98 μm is achieved. The experimental roughness results and the predicted roughness agree closely, and the component variable describing the size effects in the prediction model is calculated to be 1.5×10⁷ by an inverse method based on the experimental results. The proposed model builds a distribution set to account for grain densities at different protrusion heights. Finally, the micro-grinding tools used in the experiment are characterized based on this distribution set. The significant agreement between surface predictions from the proposed model and measurements from the experiments demonstrates the effectiveness of the model. The proposed method thus provides a significant theoretical and experimental reference for the material removal mechanism in micro-grinding of soda-lime glass.

  12. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    PubMed

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.
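
    A rough sketch of the two ingredients, assuming (i) markers are filtered by marginal correlation with the trait and (ii) the trait is regressed on principal components of the reduced marker matrix with an RBF-kernel RKHS regression; thresholds and data are synthetic, and this is not the authors' implementation.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(5)
        M = rng.integers(0, 3, size=(200, 1000)).astype(float)  # 200 lines, 1000 markers
        y = M[:, 10] * M[:, 40] + M[:, 99] + rng.normal(size=200)  # epistatic toy trait

        # supervised filtering: keep markers with the largest |correlation| with y
        corr = np.abs(np.corrcoef(M.T, y)[-1, :-1])
        keep = np.argsort(corr)[::-1][:100]

        # principal components of the reduced (centered) marker matrix
        Mc = M[:, keep] - M[:, keep].mean(axis=0)
        U, S, _ = np.linalg.svd(Mc, full_matrices=False)
        SPC = (U * S)[:, :10]                        # supervised principal components

        model = KernelRidge(kernel="rbf", gamma=0.1, alpha=1.0).fit(SPC, y)
        print("in-sample R^2:", model.score(SPC, y))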

  13. Nonparametric Method for Genomics-Based Prediction of Performance of Quantitative Traits Involving Epistasis in Plant Breeding

    PubMed Central

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H.

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression. PMID:23226325

  14. Bayesian additive decision trees of biomarker by treatment interactions for predictive biomarker detection and subgroup identification.

    PubMed

    Zhao, Yang; Zheng, Wei; Zhuo, Daisy Y; Lu, Yuefeng; Ma, Xiwen; Liu, Hengchang; Zeng, Zhen; Laird, Glen

    2017-10-11

    Personalized medicine, or tailored therapy, has been an active and important topic in recent medical research. Many methods have been proposed in the literature for predictive biomarker detection and subgroup identification. In this article, we propose a novel decision tree-based approach applicable in randomized clinical trials. We model the prognostic effects of the biomarkers using additive regression trees and the biomarker-by-treatment effect using a single regression tree. A Bayesian approach is utilized to periodically revise the split variables and the split rules of the decision trees, which provides a better overall fit. A Gibbs sampler is implemented in the MCMC procedure, which updates the prognostic trees and the interaction tree separately. We use the posterior distribution of the interaction tree to construct the predictive scores of the biomarkers and to identify the subgroup where the treatment is superior to the control. Numerical simulations show that our proposed method performs well under various settings compared to existing methods. We also demonstrate an application of our method in a real clinical trial.

  15. Ultrasonic Non-destructive Prediction of Spot Welding Shear Strength

    NASA Astrophysics Data System (ADS)

    Himawan, R.; Haryanto, M.; Subekti, R. M.; Sunaryo, G. R.

    2018-02-01

    To enhance the corrosion resistance of the ferritic steel in a reactor pressure vessel, stainless steel is used as a cladding. The bonding process between these two steels may result in inhomogeneities such as sub-clad cracks or unjoined regions. To ensure integrity, an effective inspection method is needed. Therefore, in this study, an ultrasonic testing experiment on two bonded plates was performed. The objective of this study is to develop an effective method for predicting the shear fracture load of the joint. For simplicity, the joint was modeled with two stainless steel plates joined by spot welding. Ultrasonic tests were performed using the contact method with a 5 MHz transducer 10 mm in diameter. The amplitude of the wave reflected from the intermediate layer was used as a quantitative parameter. The experimental results show that shear fracture load has a linear correlation with the amplitude of the reflected wave. Moreover, the amplitude of the reflected wave is also related to nugget diameter. It can be concluded that the ultrasonic contact method can be applied to predict shear fracture load.

  16. Prediction-based dynamic load-sharing heuristics

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.

    1993-01-01

    The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace-driven simulations, they are compared against random scheduling and two effective nonprediction-based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterparts.
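
    The core idea fits in a few lines: place each arriving process on the host with the lowest predicted outstanding load, where load is booked from per-process predictions rather than sampled from a rapidly changing system status. Hosts and costs below are invented.

        # Predicted outstanding demand per host, maintained from the predictions
        # of processes already assigned there (not from a sampled system state).
        hosts = {"h1": 0.2, "h2": 0.5, "h3": 0.1}

        def place(process_name, predicted_cost):
            best = min(hosts, key=hosts.get)   # least predicted-load host
            hosts[best] += predicted_cost      # book the predicted demand now
            return best

        for proc, cost in [("make", 0.4), ("sim", 0.7), ("edit", 0.1)]:
            print(proc, "->", place(proc, cost), hosts)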

  17. Semi-supervised protein subcellular localization.

    PubMed

    Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang

    2009-01-30

    Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate prediction of protein subcellular localization can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. However, a major drawback of these machine learning-based approaches is that a large amount of data must be labeled in order for the prediction system to learn a classifier with good generalization ability. In real-world cases, it is laborious, expensive and time-consuming to experimentally determine the subcellular localization of a protein and prepare instances of labeled data. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use far fewer labeled instances to construct a high-quality prediction model. We first construct an initial classifier using a small set of labeled examples, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our methods can effectively reduce the labeling workload by exploiting unlabeled data. Our method is shown to enhance the state-of-the-art prediction results of SVM classifiers by more than 10%.
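
    A minimal self-training loop of the kind this framework builds on: fit on the small labeled set, then repeatedly pseudo-label the most confident unlabeled instances and refit; features, labels, and the confidence threshold are synthetic choices.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        X = rng.normal(size=(500, 30))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy localization labels
        labeled = np.zeros(500, dtype=bool)
        labeled[:40] = True                        # only 40 labeled proteins

        for _ in range(10):
            clf = SVC(probability=True).fit(X[labeled], y[labeled])
            proba = clf.predict_proba(X[~labeled])
            conf = proba.max(axis=1)
            if conf.size == 0 or conf.max() < 0.95:
                break
            idx = np.where(~labeled)[0][conf > 0.95]   # confident pseudo-labels
            y[idx] = clf.predict(X[idx])
            labeled[idx] = True

        print("final labeled fraction:", labeled.mean())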

  18. Prediction and measurement of the electromagnetic environment of high-power medium-wave and short-wave broadcast antennas in far field.

    PubMed

    Tang, Zhanghong; Wang, Qun; Ji, Zhijiang; Shi, Meiwu; Hou, Guoyan; Tan, Danjun; Wang, Pengqi; Qiu, Xianbo

    2014-12-01

    With increasing city size, high-power electromagnetic radiation devices such as high-power medium-wave (MW) and short-wave (SW) antennas have inevitably come closer and closer to buildings, which has worsened indoor electromagnetic radiation pollution. To keep such radiation from exceeding the exposure limits set by national standards, it is necessary to predict and survey the electromagnetic radiation from MW and SW antennas before constructing buildings. In this paper, a modified prediction method for far-field electromagnetic radiation is proposed and successfully applied to predict the electromagnetic environment of an area close to a group of typical high-power MW and SW antennas. Unlike the simplified prediction method currently defined in the Radiation Protection Management Guidelines (HJ/T 10.3-1996), the new method makes use of more information, such as the antennas' radiation patterns, to predict the electromagnetic environment. It therefore improves the prediction accuracy significantly through its ability to resolve the field in different directions. At the end of this article, a comparison between predicted data and measured results is given to demonstrate the effectiveness of the proposed method.
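
    A back-of-the-envelope far-field estimate showing why direction-dependent antenna gain (the pattern information the modified method adds) matters, using the standard free-space relation E = sqrt(30 * P * G(theta)) / d; the transmitter power and pattern cuts are invented.

        import numpy as np

        def e_field(power_w, gain_dbi, distance_m):
            """Free-space far-field strength E = sqrt(30 P G) / d, in V/m."""
            g = 10 ** (gain_dbi / 10.0)
            return np.sqrt(30.0 * power_w * g) / distance_m

        P = 100e3                                 # 100 kW SW transmitter (assumed)
        for theta, g_dbi in [(0, 12.0), (30, 6.0), (60, -3.0)]:  # toy pattern cuts
            print(f"theta={theta:2d} deg  E = {e_field(P, g_dbi, 2000.0):.2f} V/m")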

  19. Comparison of in silico models for prediction of mutagenicity.

    PubMed

    Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas

    2013-01-01

    Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite; Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) Predictor; Derek; Toxicity Estimation Software Tool (T.E.S.T.); TOxicity Prediction by Komputer Assisted Technology (TOPKAT); Toxtree; CAESAR; and SARpy (SAR in python). In general, the results showed a high level of performance. To obtain a realistic estimate of the predictive ability, the results for chemicals inside and outside the training set of each model were considered. The effect of applicability domain tools (when available) on the prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.

  20. Prediction of Antimicrobial Peptides Based on Sequence Alignment and Feature Selection Methods

    PubMed Central

    Wang, Ping; Hu, Lele; Liu, Guiyou; Jiang, Nan; Chen, Xiaoyun; Xu, Jianyong; Zheng, Wen; Li, Li; Tan, Ming; Chen, Zugen; Song, Hui; Cai, Yu-Dong; Chou, Kuo-Chen

    2011-01-01

    Antimicrobial peptides (AMPs) represent a class of natural peptides that form a part of the innate immune system, and this kind of 'nature's antibiotics' is quite promising for solving the problem of increasing antibiotic resistance. In view of this, it is highly desirable to develop an effective computational method for accurately predicting novel AMPs, because it can provide more candidates and useful insights for drug design. In this study, a new method for predicting AMPs was implemented by integrating a sequence alignment method and a feature selection method. It was observed that the overall jackknife success rate of the new predictor on a newly constructed benchmark dataset was over 80.23%, and the Matthews correlation coefficient was 0.73, indicating good prediction. Moreover, an in-depth feature analysis indicates that the results are quite consistent with the previous knowledge that some amino acids are preferential in AMPs and that these amino acids do play an important role in antimicrobial activity. For the convenience of experimental scientists who want to use the prediction method without following the mathematical details, a user-friendly web-server is provided at http://amp.biosino.org/. PMID:21533231

  1. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  2. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
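
    A small illustration of the benchmarking workflow discussed: simulate genotypes and a trait with a chosen number of QTL, then report cross-validated accuracy (correlation of predicted and observed values) for a ridge model, which behaves like genomic BLUP; all settings are arbitrary.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(7)
        n, m, n_qtl = 400, 2000, 50
        G = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # SNP genotypes 0/1/2
        effects = np.zeros(m)
        effects[rng.choice(m, n_qtl, replace=False)] = rng.normal(size=n_qtl)
        y = G @ effects + rng.normal(scale=2.0, size=n)      # trait with noise

        pred = cross_val_predict(Ridge(alpha=100.0), G, y, cv=5)
        print("CV accuracy (r):", np.corrcoef(pred, y)[0, 1].round(3))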

  3. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals. PMID:23222650

  4. Improved LTVMPC design for steering control of autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Velhal, Shridhar; Thomas, Susy

    2017-01-01

    An improved linear time-varying model predictive control (LTVMPC) for steering control of an autonomous vehicle running on a slippery road is presented. The control strategy is designed such that the vehicle follows the predefined trajectory at the highest possible entry speed. In linear time-varying model predictive control, the nonlinear vehicle model is successively linearized at each sampling instant, and the resulting linear time-varying model is used to design an MPC that predicts over the future horizon. Incorporating the predicted input horizon in each successive linearization improves the effectiveness of the controller. Tracking performance using front-wheel steering and braking at all four wheels is presented to illustrate the effectiveness of the proposed method.
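    A minimal sketch of the successive-linearization step at the heart of such an LTVMPC, assuming a kinematic bicycle model with a hypothetical wheelbase and sampling time (the paper's actual vehicle model, cost function, and constraints are not reproduced here):

      import numpy as np

      # Kinematic bicycle model, an illustrative stand-in for the paper's vehicle model:
      # state x = [X, Y, heading], input u = [speed, steering angle], wheelbase L.
      def f(x, u, L=2.7, dt=0.05):
          X, Y, psi = x
          v, delta = u
          return np.array([X + dt * v * np.cos(psi),
                           Y + dt * v * np.sin(psi),
                           psi + dt * v * np.tan(delta) / L])

      def linearize(x0, u0, eps=1e-6):
          """Finite-difference Jacobians A = df/dx, B = df/du at (x0, u0)."""
          nx, nu = len(x0), len(u0)
          A, B = np.zeros((nx, nx)), np.zeros((nx, nu))
          fx = f(x0, u0)
          for i in range(nx):
              dx = np.zeros(nx); dx[i] = eps
              A[:, i] = (f(x0 + dx, u0) - fx) / eps
          for j in range(nu):
              du = np.zeros(nu); du[j] = eps
              B[:, j] = (f(x0, u0 + du) - fx) / eps
          return A, B

      # re-linearize at the current operating point before each MPC solve
      A, B = linearize(np.array([0.0, 0.0, 0.1]), np.array([15.0, 0.02]))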

  5. Use of moments of momentum to predict the crystal habit in potassium hydrogen phthalate

    NASA Technical Reports Server (NTRS)

    Barber, Patrick G.; Petty, John T.

    1990-01-01

    A relatively simple calculation of the moments of momentum predicts the morphological order of crystal faces for potassium hydrogen phthalate. The effects on the habit caused by the addition of monomeric, dimeric, and larger aggregates during crystal growth are considered. The first six of the seven observed crystal faces are predicted with this method.

  6. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    novel method of simultaneous real-time measurements of ice-nucleating particle concentrations and size-resolved chemical composition of individual...is to develop a practical predictive capability for visibility and weather effects of aerosol particles for the entire globe for timely use in...prediction follows that used in numerical weather prediction, namely real-time assessment for initialization of first-principles models. The Naval

  7. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
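    The following sketch illustrates only the leave-one-out RMSECV selection loop on synthetic data; the augmentation shown here (appending the channels least correlated with the analyte) is a crude stand-in for the ACLS construction of the augmented concentration matrix, not the authors' procedure:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(0)
      X = rng.normal(size=(30, 200))           # spectra (samples x channels), synthetic
      y = rng.normal(size=30)                  # analyte concentrations, synthetic

      def rmsecv(n_signals):
          # keep the n_signals channels least correlated with y as augmented columns
          corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
          idx = np.argsort(corr)[:n_signals]
          Xa = np.hstack([X, X[:, idx]])       # crude stand-in for the augmented matrix
          pred = cross_val_predict(LinearRegression(), Xa, y, cv=LeaveOneOut())
          return np.sqrt(np.mean((pred - y) ** 2))

      best = min(range(1, 11), key=rmsecv)     # pick the count minimizing RMSECV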

  8. Dynamic prediction in functional concurrent regression with an application to child growth.

    PubMed

    Leroux, Andrew; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-04-15

    In many studies, it is of interest to predict the future trajectory of subjects based on their historical data, referred to as dynamic prediction. Mixed effects models have traditionally been used for dynamic prediction. However, the commonly used random intercept and slope model is often not sufficiently flexible for modeling subject-specific trajectories. In addition, there may be useful exposures/predictors of interest that are measured concurrently with the outcome, complicating dynamic prediction. To address these problems, we propose a dynamic functional concurrent regression model to handle the case where both the functional response and the functional predictors are irregularly measured. Currently, such a model cannot be fit by existing software. We apply the model to dynamically predict children's length conditional on prior length, weight, and baseline covariates. Inference on model parameters and subject-specific trajectories is conducted using the mixed effects representation of the proposed model. An extensive simulation study shows that the dynamic functional regression model provides more accurate estimation and inference than existing methods. Methods are supported by fast, flexible, open source software that uses heavily tested smoothing techniques. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  9. GetReal in mathematical modelling: a review of studies predicting drug effectiveness in the real world.

    PubMed

    Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi

    2016-09-01

    The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE, EMBASE from inception to March 2014, the Cochrane Methodology Register, and websites of key journals and organisations and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.

  10. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. Principal Findings In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that had previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms can all be converted to linear models in form; based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
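    As a hedged illustration of a Jaccard-based, similarity-weighted profile scorer (a generic variant, not necessarily the paper's exact general weighted profile formula), on synthetic binary drug features F and known drug-ADR associations A:

      import numpy as np

      def jaccard(a, b):
          """Jaccard coefficient between two binary feature vectors."""
          inter = np.logical_and(a, b).sum()
          union = np.logical_or(a, b).sum()
          return inter / union if union else 0.0

      def weighted_profile_scores(F, A):
          """Score drug-ADR associations by similarity-weighted known profiles.

          F: (n_drugs, n_features) binary drug features (e.g., substructures)
          A: (n_drugs, n_adrs) binary known drug-ADR associations
          """
          n = F.shape[0]
          S = np.array([[jaccard(F[i], F[j]) for j in range(n)] for i in range(n)])
          np.fill_diagonal(S, 0.0)                        # exclude self-similarity
          W = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)
          return W @ A                                    # higher score = more likely ADR

      rng = np.random.default_rng(1)
      F = rng.integers(0, 2, size=(50, 100))              # synthetic substructure flags
      A = rng.integers(0, 2, size=(50, 30))               # synthetic known associations
      scores = weighted_profile_scores(F, A)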

  11. Measured and predicted rotor performance for the SERI advanced wind turbine blades

    NASA Astrophysics Data System (ADS)

    Tangler, J.; Smith, B.; Kelley, N.; Jager, D.

    1992-02-01

    Measured and predicted rotor performance for the Solar Energy Research Institute (SERI) advanced wind turbine blades were compared to assess the accuracy of predictions and to identify the sources of error affecting both predictions and measurements. An awareness of these sources of error contributes to improved prediction and measurement methods that will ultimately benefit future rotor design efforts. Propeller/vane anemometers were found to underestimate the wind speed in turbulent environments such as the San Gorgonio Pass wind farm area. Using sonic or cup anemometers, good agreement was achieved between predicted and measured power output for wind speeds up to 8 m/sec. At higher wind speeds an optimistic predicted power output and the occurrence of peak power at wind speeds lower than measurements resulted from the omission of turbulence and yaw error. In addition, accurate two-dimensional (2-D) airfoil data prior to stall and a post stall airfoil data synthesization method that reflects three-dimensional (3-D) effects were found to be essential for accurate performance prediction.

  12. Effect of accuracy of wind power prediction on power system operator

    NASA Technical Reports Server (NTRS)

    Schlueter, R. A.; Sigari, G.; Costi, T.

    1985-01-01

    This research project proposed a modified unit commitment that schedules connection and disconnection of generating units in response to load. A modified generation control is also proposed that controls steam units under automatic generation control, fast responding diesels, gas turbines and hydro units under a feedforward control, and wind turbine array output under a closed loop array control. This modified generation control and unit commitment require prediction of trend wind power variation one hour ahead and the prediction of error in this trend wind power prediction one half hour ahead. An improved meter for predicting trend wind speed variation is developed. Methods for accurately simulating the wind array power from a limited number of wind speed prediction records was developed. Finally, two methods for predicting the error in the trend wind power prediction were developed. This research provides a foundation for testing and evaluating the modified unit commitment and generation control that was developed to maintain operating reliability at a greatly reduced overall production cost for utilities with wind generation capacity.

  13. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture

    PubMed Central

    Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang

    2016-01-01

    The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to self-recharging and capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as the support vector machine (SVM) or Gaussian process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It can handle multimodality by fitting different segments of a trajectory with separate GPR models, such that the small differences among these segments can be revealed. The method is demonstrated to be effective by the excellent predictive results of experiments on two commercial rechargeable 18650-type Lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, the GPM can yield a predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176
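    A single-GPR baseline with a predictive interval, fitted to a synthetic capacity trace, gives the flavor of the approach (the mixture step that fits separate GPR models to different trajectory segments is omitted, and the end-of-life threshold is a hypothetical value):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      cycles = np.arange(100)[:, None]
      capacity = (2.0 * np.exp(-0.004 * cycles.ravel())
                  + 0.01 * np.random.default_rng(2).normal(size=100))  # synthetic fade

      gpr = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-3),
                                     normalize_y=True)
      gpr.fit(cycles[:60], capacity[:60])                # train on early degradation only
      mean, std = gpr.predict(np.arange(100, 160)[:, None], return_std=True)

      threshold = 1.4                                    # hypothetical end-of-life capacity
      # index into the forecast horizon where mean capacity first drops below threshold;
      # None if the threshold is not crossed within the horizon
      rul = next((i for i, m in enumerate(mean) if m < threshold), None)
      band = (mean - 1.96 * std, mean + 1.96 * std)      # 95% predictive interval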

  14. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution; hyperparameter tuning is a known obstacle for these networks, and we circumvent it by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
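    A minimal echo state network sketch on a toy series; the spectral radius, leak rate, and ridge penalty fixed below are exactly the kind of hyperparameters the talk proposes to tune with Bayesian optimization:

      import numpy as np

      rng = np.random.default_rng(3)
      n_res, n_in = 200, 1
      W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
      W = rng.normal(size=(n_res, n_res))
      W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # scale spectral radius below 1

      def run_reservoir(u_seq, leak=0.3):
          """Leaky-integrator reservoir states for a scalar input sequence."""
          x, states = np.zeros(n_res), []
          for u in u_seq:
              x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
              states.append(x.copy())
          return np.array(states)

      # train the linear readout by ridge regression on a toy series
      t = np.linspace(0, 20 * np.pi, 2000)
      series = np.sin(t) * np.cos(0.3 * t)
      X, Y = run_reservoir(series[:-1]), series[1:]
      ridge = 1e-6
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
      pred = X @ W_out                                   # one-step-ahead predictions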

  15. Effectiveness of Cooperative Learning Instructional Tools With Predict-Observe-Explain Strategy on the Topic of Cuboid and Cube Volume

    NASA Astrophysics Data System (ADS)

    Nurhuda; Lukito, A.; Masriyah

    2018-01-01

    This study aims to develop instructional tools and implement them to assess their effectiveness. The method used in this research followed Designing Effective Instruction, and an experimental two-group pretest-posttest design was conducted. The instructional tools developed implement a cooperative learning model with the predict-observe-explain (POE) strategy on the topic of cuboid and cube volume and consist of lesson plans, POE tasks, and tests. The instructional tools were of good quality by the criteria of validity, practicality, and effectiveness, and were very effective for teaching the volume of cuboids and cubes. The cooperative instructional tools with the POE strategy were of good quality because the teacher could easily implement the steps of learning, students could easily understand the material, and students’ learning outcomes reached classical completeness. Learning with these instructional tools was effective because the learning activities were appropriate and students were very active. Students’ learning outcomes reached classical completeness and were better than those of conventional learning. This study produced good instructional tools that are effective in learning and can therefore be used as an alternative for teaching the volume of cuboids and cubes.

  16. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    NASA Astrophysics Data System (ADS)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-02-01

    For a drug, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article at first simply introduced the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article was put on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  17. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    PubMed Central

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-01-01

    During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article at first simply introduced the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article was put on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future. PMID:29515993

  18. sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Heng; Ye, Hao; Ng, Hui Wen

    Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. Furthermore, this algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system.

  19. Correlation of Puma airloads: Lifting-line and wake calculation

    NASA Technical Reports Server (NTRS)

    Bousman, William G.; Young, Colin; Gilbert, Neil; Toulmay, Francois; Johnson, Wayne; Riley, M. J.

    1989-01-01

    A cooperative program undertaken by organizations in the United States, England, France, and Australia has assessed the strengths and weaknesses of four lifting-line/wake methods and three CFD methods by comparing their predictions with the data obtained in flight trials of a research Puma. The Puma was tested in two configurations: a mixed-bladed rotor with instrumented rectangular-tip blades, and a configuration with four identical swept-tip blades. The results of the lifting-line predictions are examined. The better lifting-line methods show good agreement with lift at the blade tip for the configuration with four swept tips; the moment is well predicted at 0.92 R, but the prediction deteriorates outboard. The predictions for the mixed-bladed rotor configuration range from fair to good. The lift prediction is better for the swept-tip blade than for the rectangular-tip blade, but the reasons for this cannot be determined because of the unmodeled effects of the mixed-bladed rotor.

  20. sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides

    PubMed Central

    Luo, Heng; Ye, Hao; Ng, Hui Wen; Sakkiah, Sugunadevi; Mendrick, Donna L.; Hong, Huixiao

    2016-01-01

    Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. This algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system. PMID:27558848

  1. Real-time prediction of unsteady flow based on POD reduced-order model and particle filter

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2016-04-01

    An integrated method consisting of a proper orthogonal decomposition (POD)-based reduced-order model (ROM) and a particle filter (PF) is proposed for real-time prediction of an unsteady flow field. The proposed method is validated using identical twin experiments of an unsteady flow field around a circular cylinder for Reynolds numbers of 100 and 1000. In this study, a PF is employed (ROM-PF) to modify the temporal coefficient of the ROM based on observation data because the prediction capability of the ROM alone is limited due to the stability issue. The proposed method reproduces the unsteady flow field several orders of magnitude faster than a reference numerical simulation based on Navier-Stokes equations. Furthermore, the effects of parameters, related to observation and simulation, on the prediction accuracy are studied. Most of the energy modes of the unsteady flow field are captured, and it is possible to stably predict the long-term evolution with ROM-PF.
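    The POD step of such a ROM can be sketched with a plain SVD of the mean-subtracted snapshot matrix; the particle-filter correction of the temporal coefficients is omitted, and the snapshot matrix below is a synthetic stand-in for flow-field data:

      import numpy as np

      rng = np.random.default_rng(4)
      snapshots = rng.normal(size=(5000, 200))   # (grid points x time steps), synthetic

      mean_flow = snapshots.mean(axis=1, keepdims=True)
      U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.99)) + 1 # modes keeping 99% of fluctuation energy
      modes = U[:, :r]                           # spatial POD modes
      coeffs = np.diag(s[:r]) @ Vt[:r]           # temporal coefficients a_k(t)

      # reduced-order reconstruction of one snapshot from r modes
      recon = mean_flow.ravel() + modes @ coeffs[:, 10]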

  2. sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides

    DOE PAGES

    Luo, Heng; Ye, Hao; Ng, Hui Wen; ...

    2016-08-25

    Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. Furthermore, this algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system.

  3. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation.

    PubMed

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-12-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13-24 h prediction tasks (MAPE = 31.47%). Copyright © 2017 Elsevier Ltd. All rights reserved.
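    A stripped-down LSTM regressor of this kind, on synthetic sliding windows and without the LSTME model's merging of meteorological and time-stamp auxiliary data, might look like the following:

      import numpy as np
      import tensorflow as tf

      rng = np.random.default_rng(5)
      X = rng.normal(size=(1000, 24, 8)).astype("float32")  # 24 h windows, 8 features
      y = rng.normal(size=(1000, 1)).astype("float32")      # next-hour PM2.5, synthetic

      model = tf.keras.Sequential([
          tf.keras.Input(shape=(24, 8)),
          tf.keras.layers.LSTM(64),                         # temporal feature extraction
          tf.keras.layers.Dense(32, activation="relu"),
          tf.keras.layers.Dense(1),                         # concentration regression head
      ])
      model.compile(optimizer="adam", loss="mae")
      model.fit(X, y, epochs=5, batch_size=32, verbose=0)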

  4. Stable Isotope Ratio and Elemental Profile Combined with Support Vector Machine for Provenance Discrimination of Oolong Tea (Wuyi-Rock Tea)

    PubMed Central

    Lou, Yun-xiao; Fu, Xian-shu; Yu, Xiao-ping; Zhang, Ya-fen

    2017-01-01

    This paper focused on an effective method to discriminate the geographical origin of Wuyi-Rock tea by the stable isotope ratio (SIR) and metallic element profiling (MEP) combined with support vector machine (SVM) analysis. Wuyi-Rock tea (n = 99) collected from nine producing areas and non-Wuyi-Rock tea (n = 33) from eleven nonproducing areas were analysed for SIR and MEP by established methods. The SVM model based on coupled data produced the best prediction accuracy (0.9773). This prediction shows that instrumental methods combined with a classification model can provide an effective and stable tool for provenance discrimination. Moreover, every feature variable in the stable isotope and metallic element data was ranked by its contribution to the model. The results show that δ²H, δ¹⁸O, Cs, Cu, Ca, and Rb contents are significant indications for provenance discrimination and not all of the metallic elements improve the prediction accuracy of the SVM model. PMID:28473941
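    A sketch of the classification setup on synthetic feature vectors (stable isotope ratios plus element contents), assuming a standard RBF-kernel SVM with scaling rather than the authors' exact model selection:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)
      X = rng.normal(size=(132, 8))             # e.g., two isotope ratios + six elements
      y = np.r_[np.ones(99), np.zeros(33)]      # Wuyi-Rock vs. non-Wuyi-Rock labels

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      acc = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated accuracy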

  5. Historical feature pattern extraction based network attack situation sensing algorithm.

    PubMed

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult to recognize and describe with traditional algorithms. To address these problems, estimating the parameters of a very long situation sequence is essential but difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between an occurred indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can continuously track and adapt to variation in the situation sequence.

  6. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    PubMed Central

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult to recognize and describe with traditional algorithms. To address these problems, estimating the parameters of a very long situation sequence is essential but difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between an occurred indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can continuously track and adapt to variation in the situation sequence. PMID:24892054

  7. Analytical method for predicting the pressure distribution about a nacelle at transonic speeds

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Merkle, C. L.; Heck, P. H.; Lahti, D. J.

    1973-01-01

    The formulation and development of a computer analysis for the calculation of streamlines and pressure distributions around two-dimensional (planar and axisymmetric) isolated nacelles at transonic speeds are described. The computerized flow field analysis is designed to predict the transonic flow around long and short high-bypass-ratio fan duct nacelles with inlet flows and with exhaust flows having appropriate aerothermodynamic properties. The flow field boundaries are located as far upstream and downstream as necessary to obtain minimum disturbances at the boundary. The far-field lateral flow field boundary is analytically defined to exactly represent free-flight conditions or solid wind tunnel wall effects. The inviscid solution technique is based on a Streamtube Curvature Analysis. The computer program utilizes an automatic grid refinement procedure and solves the flow field equations with a matrix relaxation technique. The boundary layer displacement effects and the onset of turbulent separation are included, based on the compressible turbulent boundary layer solution method of Stratford and Beavers and on the turbulent separation prediction method of Stratford.

  8. Boosting compound-protein interaction prediction by deep learning.

    PubMed

    Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng

    2016-11-01

    The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine-learning-based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim to improve the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Predicting protein function and other biomedical characteristics with heterogeneous ensembles

    PubMed Central

    Whalen, Sean; Pandey, Om Prakash

    2015-01-01

    Prediction problems in biomedical sciences, including protein function prediction (PFP), are generally quite difficult. This is due in part to incomplete knowledge of the cellular phenomenon of interest, the appropriateness and data quality of the variables and measurements used for prediction, as well as a lack of consensus regarding the ideal predictor for specific problems. In such scenarios, a powerful approach to improving prediction performance is to construct heterogeneous ensemble predictors that combine the output of diverse individual predictors that capture complementary aspects of the problems and/or datasets. In this paper, we demonstrate the potential of such heterogeneous ensembles, derived from stacking and ensemble selection methods, for addressing PFP and other similar biomedical prediction problems. Deeper analysis of these results shows that the superior predictive ability of these methods, especially stacking, can be attributed to their attention to the following aspects of the ensemble learning process: (i) better balance of diversity and performance, (ii) more effective calibration of outputs and (iii) more robust incorporation of additional base predictors. Finally, to make the effective application of heterogeneous ensembles to large complex datasets (big data) feasible, we present DataSink, a distributed ensemble learning framework, and demonstrate its sound scalability using the examined datasets. DataSink is publicly available from https://github.com/shwhalen/datasink. PMID:26342255
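    A small stacking sketch with scikit-learn illustrates the heterogeneous-ensemble idea; the base learners and meta-learner below are arbitrary illustrative choices, not those used in DataSink:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import StackingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=500, n_features=40, random_state=0)
      stack = StackingClassifier(
          estimators=[("rf", RandomForestClassifier(random_state=0)),
                      ("svm", SVC(probability=True, random_state=0))],
          final_estimator=LogisticRegression(),   # meta-learner calibrates base outputs
          cv=5,                                   # out-of-fold predictions for stacking
      )
      print(cross_val_score(stack, X, y, cv=5).mean())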

  10. In silico prediction of drug-induced myelotoxicity by using Naïve Bayes method.

    PubMed

    Zhang, Hui; Yu, Peng; Zhang, Teng-Guo; Kang, Yan-Li; Zhao, Xiao; Li, Yuan-Yuan; He, Jia-Hui; Zhang, Ji

    2015-11-01

    Drug-induced myelotoxicity usually leads to decreased production of platelets, red cells, and white cells. Thus, early identification and characterization of the myelotoxicity hazard in drug development is very necessary. The purpose of this investigation was to develop a prediction model of drug-induced myelotoxicity by using a Naïve Bayes classifier. For comparison, other prediction models based on support vector machine and single-hidden-layer feed-forward neural network methods were also established. Among all the prediction models, the Naïve Bayes classification model showed the best prediction performance, which offered an average overall prediction accuracy of [Formula: see text] for the training set and [Formula: see text] for the external test set. The significant contribution of this study is that we first developed a Naïve Bayes classification model of the drug-induced myelotoxicity adverse effect using a larger-scale dataset, which could be employed for the prediction of drug-induced myelotoxicity. In addition, several important molecular descriptors and substructures of myelotoxic compounds have been identified, which should be taken into consideration in the design of new candidate compounds to produce safer and more effective drugs, ultimately reducing the attrition rate in later stages of drug development.
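    The Naïve Bayes setup can be sketched as below, assuming binary molecular fingerprints as descriptors (synthetic here) and a Bernoulli event model; the paper's actual descriptor set and dataset are not reproduced:

      import numpy as np
      from sklearn.naive_bayes import BernoulliNB
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      X = rng.integers(0, 2, size=(600, 1024))   # binary molecular fingerprints, synthetic
      y = rng.integers(0, 2, size=600)           # 1 = myelotoxic, 0 = non-myelotoxic

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      nb = BernoulliNB().fit(X_tr, y_tr)         # per-bit likelihoods under class priors
      print(nb.score(X_te, y_te))                # external-test-style accuracy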

  11. A summary of wind power prediction methods

    NASA Astrophysics Data System (ADS)

    Wang, Yuqi

    2018-06-01

    The deterministic prediction of wind power, probability prediction, and the prediction of wind power ramp events are introduced in this paper. Deterministic prediction includes statistical-learning prediction based on historical data and physical-model prediction based on NWP data. Because wind power ramp events have a great impact on the power system, this paper also introduces their prediction. Finally, the evaluation indicators for all kinds of prediction are given. Wind power prediction can be a good solution to the adverse effects of wind power on the power system caused by its abrupt, intermittent, and fluctuating nature.

  12. Nonlinear-drifted Brownian motion with multiple hidden states for remaining useful life prediction of rechargeable batteries

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung

    2017-09-01

    Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. 26 sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
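    As a sketch of the model class described, assuming a power-law drift (one common nonlinear choice; the paper's exact drift function is not specified here), the degradation state and first hitting time can be written as

      X(t) = x_0 + \mu\, t^{b} + \sigma B(t), \qquad T = \inf\{\, t : X(t) \ge w \,\},

    and in the linear special case b = 1 the first hitting time of the threshold w follows the inverse Gaussian law

      T \sim \mathrm{IG}\!\left( \frac{w - x_0}{\mu},\ \frac{(w - x_0)^2}{\sigma^2} \right),

    which is why, beyond the drift coefficient \mu, the extra drift parameter b is the kind of additional hidden state that the proposed state-space formulation tracks.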

  13. Contribution of new technologies to characterization and prediction of adverse effects.

    PubMed

    Rouquié, David; Heneweer, Marjoke; Botham, Jane; Ketelslegers, Hans; Markell, Lauren; Pfister, Thomas; Steiling, Winfried; Strauss, Volker; Hennes, Christa

    2015-02-01

    Identification of the potential hazards of chemicals has traditionally relied on studies in laboratory animals where changes in clinical pathology and histopathology compared to untreated controls defined an adverse effect. In the past decades, increased consistency in the definition of adversity with chemically-induced effects in laboratory animals, as well as in the assessment of human relevance has been reached. More recently, a paradigm shift in toxicity testing has been proposed, mainly driven by concerns over animal welfare but also thanks to the development of new methods. Currently, in vitro approaches, toxicogenomic technologies and computational tools, are available to provide mechanistic insight in toxicological Mode of Action (MOA) of the adverse effects observed in laboratory animals. The vision described as Tox21c (Toxicity Testing in the 21st century) aims at predicting in vivo toxicity using a bottom-up-approach, starting with understanding of MOA based on in vitro data to ultimately predict adverse effects in humans. At present, a practical application of the Tox21c vision is still far away. While moving towards toxicity prediction based on in vitro data, a stepwise reduction of in vivo testing is foreseen by combining in vitro with in vivo tests. Furthermore, newly developed methods will also be increasingly applied, in conjunction with established methods in order to gain trust in these new methods. This confidence is based on a critical scientific prerequisite: the establishment of a causal link between data obtained with new technologies and adverse effects manifested in repeated-dose in vivo toxicity studies. It is proposed to apply the principles described in the WHO/IPCS framework of MOA to obtain this link. Finally, an international database of known MOAs obtained in laboratory animals using data-rich chemicals will facilitate regulatory acceptance and could further help in the validation of the toxicity pathway and adverse outcome pathway concepts.

  14. Numerical simulation of the change characteristics of power dissipation coefficient of Ti-24Al-15Nb alloy in hot deformation

    NASA Astrophysics Data System (ADS)

    Wang, Kelu; Li, Xin; Zhang, Xiaobo

    2018-03-01

    The power dissipation maps of Ti-24Al-15Nb alloy were constructed by using the compression test data. A method is proposed to predict the distribution and variation of the power dissipation coefficient in the hot forging process using both the dynamic material model and finite element simulation. Using the proposed method, the change characteristics of the power dissipation coefficient are simulated and predicted. The effectiveness of the proposed method was verified by comparing the simulation results with the physical experimental results.

  15. Completing sparse and disconnected protein-protein network by deep learning.

    PubMed

    Huang, Lei; Liao, Li; Wu, Cathy H

    2018-03-22

    Protein-protein interaction (PPI) prediction remains a central task in systems biology to achieve a better and holistic understanding of cellular and intracellular processes. Recently, an increasing number of computational methods have shifted from pair-wise prediction to network level prediction. Many of the existing network level methods predict PPIs under the assumption that the training network should be connected. However, this assumption greatly affects the prediction power and limits the application area because the current golden standard PPI networks are usually very sparse and disconnected. Therefore, how to effectively predict PPIs based on a training network that is sparse and disconnected remains a challenge. In this work, we developed a novel PPI prediction method based on deep learning neural network and regularized Laplacian kernel. We use a neural network with an autoencoder-like architecture to implicitly simulate the evolutionary processes of a PPI network. Neurons of the output layer correspond to proteins and are labeled with values (1 for interaction and 0 for otherwise) from the adjacency matrix of a sparse disconnected training PPI network. Unlike autoencoder, neurons at the input layer are given all zero input, reflecting an assumption of no a priori knowledge about PPIs, and hidden layers of smaller sizes mimic ancient interactome at different times during evolution. After the training step, an evolved PPI network whose rows are outputs of the neural network can be obtained. We then predict PPIs by applying the regularized Laplacian kernel to the transition matrix that is built upon the evolved PPI network. The results from cross-validation experiments show that the PPI prediction accuracies for yeast data and human data measured as AUC are increased by up to 8.4 and 14.9% respectively, as compared to the baseline. Moreover, the evolved PPI network can also help us leverage complementary information from the disconnected training network and multiple heterogeneous data sources. Tested by the yeast data with six heterogeneous feature kernels, the results show our method can further improve the prediction performance by up to 2%, which is very close to an upper bound that is obtained by an Approximate Bayesian Computation based sampling method. The proposed evolution deep neural network, coupled with regularized Laplacian kernel, is an effective tool in completing sparse and disconnected PPI networks and in facilitating integration of heterogeneous data sources.
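    The regularized Laplacian kernel step can be sketched in a few lines; the adjacency matrix below is a random stand-in for the evolved PPI network produced by the neural network, and alpha is a tunable diffusion parameter:

      import numpy as np

      def regularized_laplacian_kernel(A, alpha=0.5):
          """K = (I + alpha * L)^(-1) for a weighted adjacency matrix A."""
          D = np.diag(A.sum(axis=1))
          L = D - A                                  # graph Laplacian
          return np.linalg.inv(np.eye(len(A)) + alpha * L)

      rng = np.random.default_rng(8)
      A = rng.random((30, 30))
      A = (A + A.T) / 2                              # symmetric "evolved network" weights
      K = regularized_laplacian_kernel(A)
      scores = K @ A                                 # rank candidate interactions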

  16. High precision in protein contact prediction using fully convolutional neural networks and minimal sequence features.

    PubMed

    Jones, David T; Kandathil, Shaun M

    2018-04-26

    In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov.
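    Computing raw pair-frequency/covariance input features from an alignment can be sketched as follows (a simple 21-letter alphabet with gap character; DeepCov's exact preprocessing may differ):

      import numpy as np

      AA = "ACDEFGHIKLMNPQRSTVWY-"                  # 20 amino acids + gap
      def covariance_features(alignment):
          """Pairwise covariance of one-hot residue encodings from an alignment.

          alignment: list of equal-length sequences (strings)
          returns: (L, L, 21, 21) covariance array used as raw input features
          """
          idx = {a: i for i, a in enumerate(AA)}
          X = np.array([[idx.get(c, 20) for c in seq] for seq in alignment])
          N, L = X.shape
          onehot = np.eye(21)[X]                     # (N, L, 21)
          f1 = onehot.mean(axis=0)                   # single-site frequencies (L, 21)
          f2 = np.einsum("nia,njb->ijab", onehot, onehot) / N
          return f2 - np.einsum("ia,jb->ijab", f1, f1)

      cov = covariance_features(["MKV-LI", "MRVALI", "MKVSLI"])  # toy alignment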

  17. Examining empirical evidence of the effect of superfluidity on the fusion barrier

    NASA Astrophysics Data System (ADS)

    Scamps, Guillaume

    2018-04-01

    Background: Recent time-dependent Hartree-Fock-Bogoliubov (TDHFB) calculations predict that superfluidity enhances fluctuations of the fusion barrier. This effect is not fully understood and not yet experimentally revealed. Purpose: The goal of this study is to empirically investigate the effect of superfluidity on the distribution width of the fusion barrier. Method: Two new methods are proposed in the present study. First, the local regression method is introduced and used to determine the barrier distribution. The second method, which requires only the calculation of an integral of the cross section, is developed to determine accurately the fluctuations of the barrier. This integral method, showing the best performance, is systematically applied to 115 fusion reactions. Results: Fluctuations of the barrier for open-shell systems are, on average, larger than those for magic or semimagic nuclei. This is due to the deformation and the superfluidity. To disentangle these two effects, a comparison is made between the experimental width and the width estimated from a model that takes into account the tunneling, the deformation, and the vibration effect. This study reveals that superfluidity enhances the fusion barrier width. Conclusions: This analysis shows that the predicted effect of superfluidity on the width of the barrier is real and is of the order of 1 MeV.

  18. Prediction of phospholipidosis-inducing potential of drugs by in vitro biochemical and physicochemical assays followed by multivariate analysis.

    PubMed

    Kuroda, Yukihiro; Saito, Madoka

    2010-03-01

    An in vitro method to predict the phospholipidosis-inducing potential of cationic amphiphilic drugs (CADs) was developed using biochemical and physicochemical assays. The following parameters were applied to principal component analysis: the physicochemical parameters pKa and clogP, the dissociation constant of CADs from phospholipid, inhibition of enzymatic phospholipid degradation, and the metabolic stability of CADs. In the score plot, phospholipidosis-inducing drugs (amiodarone, propranolol, imipramine, chloroquine) were plotted locally, forming the subspace for positive CADs, while non-inducing drugs (chlorpromazine, chloramphenicol, disopyramide, lidocaine) were scattered outside the subspace, allowing a clear discrimination between both classes of CADs. CADs that often produce false results by conventional physicochemical or cell-based assay methods were accurately classified by our method. Basic and lipophilic disopyramide could be accurately predicted as a nonphospholipidogenic drug. Moreover, chlorpromazine, which is often falsely predicted as a phospholipidosis-inducing drug by in vitro methods, could be accurately classified. Because this method uses the pharmacokinetic parameters pKa, clogP, and metabolic stability, which are usually obtained in the early stages of drug development, it newly requires only two parameters: binding to phospholipid and inhibition of the lipid-degradation enzyme. Therefore, this method provides a cost-effective approach to predict the phospholipidosis-inducing potential of a drug. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
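    The multivariate-analysis step can be sketched with a standard PCA score computation over the five assay and physicochemical parameters per drug; the values below are synthetic placeholders, not measured data:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(9)
      # rows = drugs, columns = [pKa, clogP, Kd, enzyme inhibition, metabolic stability]
      params = rng.normal(size=(8, 5))
      labels = ["amiodarone", "propranolol", "imipramine", "chloroquine",
                "chlorpromazine", "chloramphenicol", "disopyramide", "lidocaine"]

      scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(params))
      for name, (pc1, pc2) in zip(labels, scores):
          print(f"{name:>15s}: PC1={pc1:+.2f} PC2={pc2:+.2f}")   # score-plot coordinates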

  19. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, that is, transforming the multi-location proteins to multiple proteins with single location, which does not take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method that ignores them, indicating that correlations among different subcellular locations really exist and contribute to improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed method achieves significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.

  20. Effects of Moisture and Particle Size on Quantitative Determination of Total Organic Carbon (TOC) in Soils Using Near-Infrared Spectroscopy.

    PubMed

    Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe

    2017-10-17

    Near-Infrared Spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of being applied in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied to a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 on partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has a minor influence on the quality of near-infrared spectroscopy (NIR) predictions, laying the ground for a direct and fast in situ application of the method. Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
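    The SNV plus second-derivative pretreatment feeding a PLS model can be sketched as below on synthetic spectra; the smoothing window length, polynomial order, and number of latent variables are illustrative assumptions:

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(10)
      spectra = rng.normal(size=(46, 700))          # NIR spectra, synthetic stand-in
      toc = rng.uniform(0.5, 3.0, size=46)          # reference TOC values, synthetic

      # SNV: center and scale each spectrum individually
      snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
      # DV2: smoothed second derivative along the wavelength axis
      dv2 = savgol_filter(snv, window_length=11, polyorder=2, deriv=2, axis=1)

      pls = PLSRegression(n_components=6)
      pred = cross_val_predict(pls, dv2, toc, cv=5).ravel()
      rmsecv = np.sqrt(np.mean((pred - toc) ** 2))  # cross-validated prediction error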

  1. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and the symbiotic dinoflagellates which reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results show that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability, like the equatorial Pacific, could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
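    The first, variability-based threshold can be sketched directly; the multiplier k is a tunable assumption, with the reported equivalence of a 1 °C anomaly to 1.73-2.94 standard deviations for most reefs suggesting its scale:

      import numpy as np

      def bleaching_threshold(monthly_sst, k=2.0):
          """Local threshold = mean of annual maxima + k standard deviations.

          monthly_sst: array of shape (n_years, 12); k is a tunable multiplier.
          """
          annual_max = monthly_sst.max(axis=1)
          return annual_max.mean() + k * annual_max.std(ddof=1)

      rng = np.random.default_rng(11)
      # synthetic 30-year monthly SST climatology with seasonal cycle and noise
      sst = 26 + 2 * np.sin(np.linspace(0, 2 * np.pi, 12)) + 0.4 * rng.normal(size=(30, 12))
      print(bleaching_threshold(sst))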

  2. Locally Weighted Learning Methods for Predicting Dose-Dependent Toxicity with Application to the Human Maximum Recommended Daily Dose

    DTIC Science & Technology

    2012-09-10

    Advanced Technology Research Center, U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702, United States. ABSTRACT: Toxicological ... species. Thus, it is more advantageous to predict the toxicological effects of a compound on humans directly from the human toxicological data of related ... compounds. However, many popular quantitative structure-activity relationship (QSAR) methods that build a single global model by fitting all training ...

  3. Control of Aerodynamic Flows. Delivery Order 0051: Transition Prediction Method Review Summary for the Rapid Assessment Tool for Transition Prediction (RATTraP)

    DTIC Science & Technology

    2005-06-15

    ... appropriate for control, and is therefore very useful for airfoil and wing design. However, Arnal (1994) and Schrauf (1994) review the different approaches ... evaluation of new airfoil shapes for wings, even in 3-D, in a comparative sense. In summary, carefully used LST is the method of choice for ...

  4. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    NASA Astrophysics Data System (ADS)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2018-06-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension of these methods to elasto-viscoplastic polycrystals, the development of efficient and robust Fourier solvers, and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.
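
    The full two-scale FE-FFT machinery is beyond a short example, but the fixed-point (basic) Fourier scheme at the core of such UC solvers can be illustrated on a toy 1D elastic two-phase bar; the reference-medium choice, grid, and moduli are illustrative, and the real method couples this solve to an FE macro-problem with elasto-viscoplastic constitutive laws.

```python
import numpy as np

def basic_scheme_1d(E, eps_macro=1e-3, tol=1e-10, max_iter=500):
    """Fixed-point FFT homogenization of a periodic 1D elastic bar.

    In 1D the Green operator of the reference medium E0 reduces to 1/E0 for
    every nonzero frequency, which keeps the update loop to a few lines.
    """
    E0 = 0.5 * (E.min() + E.max())             # reference medium
    eps = np.full(E.shape, eps_macro)          # uniform initial strain
    for _ in range(max_iter):
        tau_hat = np.fft.fft((E - E0) * eps)   # polarization stress
        eps_hat = -tau_hat / E0                # apply Green operator (k != 0)
        eps_hat[0] = eps_macro * E.size        # enforce the macroscopic strain
        eps_new = np.fft.ifft(eps_hat).real
        if np.linalg.norm(eps_new - eps) < tol * np.linalg.norm(eps):
            eps = eps_new
            break
        eps = eps_new
    return eps, (E * eps).mean() / eps_macro   # strain field, effective modulus

E = np.where(np.arange(64) < 32, 70e3, 210e3)  # two-phase bar, moduli in MPa
eps, E_eff = basic_scheme_1d(E)
print(f"effective modulus: {E_eff:.1f} MPa")   # converges to the harmonic mean
```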

  5. Data-driven modeling and predictive control for boiler-turbine unit using fuzzy clustering and subspace methods.

    PubMed

    Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y

    2014-05-01

    This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operating region and develop the structure of the fuzzy model. Then, by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    NASA Astrophysics Data System (ADS)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2017-09-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension of these methods to elasto-viscoplastic polycrystals, the development of efficient and robust Fourier solvers, and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.

  7. Aerodynamic characteristics of cruciform missiles at high angles of attack

    NASA Technical Reports Server (NTRS)

    Lesieutre, Daniel J.; Mendenhall, Michael R.; Nazario, Susana M.; Hemsch, Michael J.

    1987-01-01

    A prediction method for missile aerodynamic performance and preliminary design has been developed that utilizes a newly available systematic fin data base and an improved equivalent-angle-of-attack methodology. The method predicts total aerodynamic loads and individual fin forces and moments for body-tail (wing-body) and canard-body-tail configurations with cruciform fin arrangements. The data base and the prediction method are valid for angles of attack up to 45 deg, arbitrary roll angles, fin deflection angles between -40 deg and 40 deg, Mach numbers between 0.6 and 4.5, and fin aspect ratios between 0.25 and 4.0. The equivalent-angle-of-attack concept is employed to include the effects of vorticity and geometric scaling.

  8. Water hammer prediction and control: the Green's function method

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi

    2012-04-01

    Using the Green's function method, we show that the water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize the laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include an arbitrary initial condition and variable viscosity, and obtain its solution by the Green's function method. The characteristic WH behaviors predicted by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, after adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. An optimal WH control principle is thereby constructed and demonstrated.

  9. Physical optics for oven-plate scattering prediction

    NASA Technical Reports Server (NTRS)

    Baldauf, J.; Lambert, K.

    1991-01-01

    An oven assembly design is described, which will be used to determine the effects of temperature on the electrical properties of materials which are used as coatings for metal plates. Experimentally, these plates will be heated to a very high temperature in the oven assembly, and measured using a microwave reflectance measurement system developed for the NASA Lewis Research Center, Near-Field Facility. One unknown in this measurement is the effect that the oven assembly will have on the reflectance properties of the plate. Since the oven will be much larger than the plate, the effect could potentially be significant as the size of the plate becomes smaller. Therefore, it is necessary to predict the effect of the oven on the measurement of the plate. A method for predicting the oven effect is described, and the theoretical oven effect is compared to experimental results of the oven material. The computer code which is used to predict the oven effect is also described.

  10. Real-time flutter boundary prediction based on time series models

    NASA Astrophysics Data System (ADS)

    Gu, Wenjing; Zhou, Li

    2018-03-01

    For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, each with a corresponding stability criterion, are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each of which is considered stationary, and a traditional AR model is established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize signals measured in flutter tests with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary from data at lower speed levels. A comparison between the two methods is also given in this paper.
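
    A minimal sketch of the first approach: fit an AR model to each stationary window by least squares and track a stability margin. Jury's criterion is an algebraic test of the same root condition that is checked numerically here; the model order, window sizes, and synthetic signal are assumptions.

```python
import numpy as np

def fit_ar(signal, order=4):
    """Least-squares AR(p) fit: x[n] = a1*x[n-1] + ... + ap*x[n-p] + e[n]."""
    p = order
    X = np.column_stack([signal[p - k - 1:len(signal) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, signal[p:], rcond=None)
    return a

def stability_margin(a):
    """Distance of the largest characteristic root from the unit circle.

    The AR model is stable while all roots of z^p - a1*z^(p-1) - ... - ap lie
    inside the unit circle (the condition Jury's criterion tests); the margin
    approaching zero signals the flutter boundary.
    """
    roots = np.roots(np.concatenate(([1.0], -a)))
    return 1.0 - np.abs(roots).max()

# Sliding-window analysis of a (synthetic) response record
rng = np.random.default_rng(2)
signal = rng.normal(size=4000)
window, step = 512, 128
margins = [stability_margin(fit_ar(signal[i:i + window]))
           for i in range(0, len(signal) - window, step)]
print(f"minimum stability margin: {min(margins):.3f}")
```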

  11. Probabilistic Seeking Prediction in P2P VoD Systems

    NASA Astrophysics Data System (ADS)

    Wang, Weiwei; Xu, Tianyin; Gao, Yang; Lu, Sanglu

    In P2P VoD streaming systems, user behavior modeling is critical to help optimise user experience as well as system throughput. However, it remains a challenging task due to the dynamic characteristics of user viewing behavior. In this paper, we consider the problem of user seeking prediction, which is to predict the user's next seeking position so that the system can respond proactively. We present a novel method for solving this problem. In our method, frequent sequential pattern mining is first performed to extract abstract states which do not overlap and together cover the whole video file. After mapping the raw training dataset to state transitions according to the abstract states, we use a simple probabilistic contingency table to build the prediction model. We design an experiment on a synthetic P2P VoD dataset. The results demonstrate the effectiveness of our method.
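
    A minimal sketch of the contingency-table predictor, assuming the abstract states have already been extracted by pattern mining; the state IDs and transition log below are hypothetical.

```python
from collections import Counter, defaultdict

# Transition counts between abstract states (hypothetical state IDs); in the
# paper the states come from frequent sequential pattern mining over seek logs.
training_transitions = [(0, 1), (0, 1), (0, 3), (1, 2), (1, 2), (3, 0)]

table = defaultdict(Counter)
for src, dst in training_transitions:
    table[src][dst] += 1

def predict_next(state):
    """Most probable next seeking state from the contingency table."""
    counts = table[state]
    if not counts:
        return None                      # unseen state: no prediction
    dst, n = counts.most_common(1)[0]
    return dst, n / sum(counts.values()) # predicted state and its probability

print(predict_next(0))                   # -> (1, 0.666...)
```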

  12. The Effect of Basis Selection on Thermal-Acoustic Random Response Prediction Using Nonlinear Modal Simulation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2004-01-01

    The goal of this investigation is to further develop nonlinear modal numerical simulation methods for prediction of geometrically nonlinear response due to combined thermal-acoustic loadings. As with any such method, the accuracy of the solution is dictated by the selection of the modal basis, through which the nonlinear modal stiffness is determined. In this study, a suite of available bases is considered, including (i) bending modes only; (ii) coupled bending and companion modes; (iii) uncoupled bending and companion modes; and (iv) bending and membrane modes. Comparison of these solutions with numerical simulation in physical degrees-of-freedom indicates that inclusion of any membrane mode variant (ii-iv) in the basis affects the bending displacement and stress response predictions. The most significant effect is on the membrane displacement, where it is shown that only the type (iv) basis accurately predicts its behavior. Results are presented for beam and plate structures in the thermally pre-buckled regime.

  13. Differences in SEM-AVS and ERM-ERL predictions of sediment impacts from metals in two US Virgin Islands marinas.

    PubMed

    Hinkey, Lynne M; Zaidi, Baqar R

    2007-02-01

    Two US Virgin Islands marinas were examined for potential metal impacts by comparing sediment chemistry data with two sediment quality guideline (SQG) values: the ratio of simultaneously extractable metals to acid volatile sulfides (SEM-AVS), and the effects range-low and effects range-median (ERL-ERM) values. ERL-ERMs predicted that the marina/boatyard complex (IBY: 2118 microg/g dry weight total metals, two ERMs exceeded) would have greater impacts than the marina with no boatyard (CBM: 231 microg/g dry weight total metals, no ERMs exceeded). The SEM-AVS method predicted that IBY would have fewer effects, because its high AVS forms metal sulfide complexes that reduce trace metal bioavailability. These contradictory predictions demonstrate the importance of validating the results of either method against other toxicity measures before making any management or regulatory decisions regarding boating and marina impacts. This is especially important in non-temperate areas, where sediment quality guidelines have not been validated.

  14. Advancing Predictive Hepatotoxicity at the Intersection of Experimental, in Silico, and Artificial Intelligence Technologies.

    PubMed

    Fraser, Keith; Bruckner, Dylan M; Dordick, Jonathan S

    2018-06-18

    Adverse drug reactions, particularly those that result in drug-induced liver injury (DILI), are a major cause of drug failure in clinical trials and drug withdrawals. Hepatotoxicity-mediated drug attrition occurs despite substantial investments of time and money in developing cellular assays, animal models, and computational models to predict its occurrence in humans. Underperformance in predicting hepatotoxicity associated with drugs and drug candidates has been attributed to existing gaps in our understanding of the mechanisms involved in driving hepatic injury after these compounds perfuse and are metabolized by the liver. Herein we assess in vitro, in vivo (animal), and in silico strategies used to develop predictive DILI models. We address the effectiveness of several two- and three-dimensional in vitro cellular methods that are frequently employed in hepatotoxicity screens and how they can be used to predict DILI in humans. We also explore how humanized animal models can recapitulate human drug metabolic profiles and associated liver injury. Finally, we highlight the maturation of computational methods for predicting hepatotoxicity, the untapped potential of artificial intelligence for improving in silico DILI screens, and how knowledge acquired from these predictions can shape the refinement of experimental methods.

  15. A feasibility study for long-path multiple detection using a neural network

    NASA Technical Reports Server (NTRS)

    Feuerbacher, G. A.; Moebes, T. A.

    1994-01-01

    Least-squares inverse filters have found widespread use in the deconvolution of seismograms and the removal of multiples. The use of least-squares prediction filters with prediction distances greater than unity leads to the method of predictive deconvolution, which can be used for the removal of long-path multiples. The predictive technique allows one to control the length of the desired output wavelet by controlling the prediction distance, and hence to specify the desired degree of resolution. Events which are periodic within given repetition ranges can be attenuated selectively. The method is thus effective in the suppression of rather complex reverberation patterns. A back-propagation (BP) neural network is constructed to detect the first arrivals of the multiples and thereby aid in the more accurate determination of the prediction distance of the multiples. The neural detector is applied to synthetic reflection coefficients and synthetic seismic traces. The processing results show that the neural detector is accurate and should lead to an automated, fast method for determining prediction distances across vast amounts of data such as seismic field records. The neural network system used in this study was the NASA Software Technology Branch's NETS system.
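
    A sketch of gapped predictive deconvolution along these lines: the prediction distance sets which periodicities are removed. The filter design via Toeplitz normal equations is standard; the lag count, gap, prewhitening level, and synthetic multiple are assumptions, and the BP neural detector for picking the prediction distance is omitted.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, n_lags=20, gap=8, prewhiten=0.001):
    """Gapped predictive deconvolution for long-path multiple suppression.

    Solves the normal equations R f = g for a prediction filter f that
    predicts trace[t + gap] from the preceding n_lags samples; subtracting
    the prediction leaves the unpredictable (primary) part of the trace.
    """
    r = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    c = r[:n_lags].copy()
    c[0] *= 1.0 + prewhiten                   # prewhitening for stability
    f = solve_toeplitz(c, r[gap:gap + n_lags])
    predicted = np.convolve(trace, f)[:len(trace)]
    error = trace.copy()
    error[gap:] -= predicted[:-gap]           # remove the predictable part
    return error

rng = np.random.default_rng(3)
primary = rng.normal(size=500)
trace = primary.copy()
trace[100:] += 0.6 * primary[:-100]           # a period-100 multiple
out = predictive_decon(trace, n_lags=40, gap=90)
```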

  16. Predicting bioconcentration of chemicals into vegetation from soil or air using the molecular connectivity index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowdy, D.L.; McKone, T.E.; Hsieh, D.P.H.

    1995-12-31

    Bioconcentration factors (BCFs) are the ratio of the chemical concentration found in an exposed organism (in this case a plant) to the concentration in an air or soil exposure medium. The authors examine here the use of molecular connectivity indices (MCIs) as quantitative structure-activity relationships (QSARs) for predicting BCFs for organic chemicals between plants and air or soil. The authors compare the reliability of the octanol-air partition coefficient (Koa) to the MCI-based prediction method for predicting plant/air partition coefficients. The authors also compare the reliability of the octanol/water partition coefficient (Kow) to the MCI-based prediction method for predicting plant/soil partition coefficients. The results indicate that, relative to the use of Kow or Koa as predictors of BCFs, the MCI can substantially increase the reliability with which BCFs can be estimated. The authors find that the MCI provides a relatively precise and accurate method for predicting the potential biotransfer of a chemical from environmental media into plants. In addition, the MCI approach is much faster and more cost effective than direct measurements.
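
    The abstract does not state which connectivity indices were used; the widely used first-order index (a Randić-type sum over bonds) is sketched below on a hydrogen-suppressed molecular graph. Regressing such indices against measured log BCFs would be a separate linear fit.

```python
import numpy as np

def first_order_mci(adjacency):
    """First-order molecular connectivity index: sum over bonds of
    1/sqrt(delta_i * delta_j), where delta is the heavy-atom degree."""
    A = np.asarray(adjacency)
    deg = A.sum(axis=1)
    chi = 0.0
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            if A[i, j]:
                chi += 1.0 / np.sqrt(deg[i] * deg[j])
    return chi

# n-butane as a 4-atom path: C-C-C-C
butane = [[0, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [0, 0, 1, 0]]
print(f"chi(1) = {first_order_mci(butane):.3f}")   # 2/sqrt(2) + 1/2 = 1.914
```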

  17. Predicting the evolution of complex networks via similarity dynamics

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Chen, Leiting; Zhong, Linfeng; Xian, Xingping

    2017-01-01

    Almost all real-world networks are subject to constant evolution, and plenty of them have been investigated empirically to uncover the underlying evolution mechanism. However, the evolution prediction of dynamic networks still remains a challenging problem. The crux of this matter is to estimate the future network links of dynamic networks. This paper studies the evolution prediction of dynamic networks within the link prediction paradigm. To estimate the likelihood of the existence of links more accurately, an effective and robust similarity index is presented by exploiting network structure adaptively. Moreover, most existing link prediction methods do not make a clear distinction between future links and missing links. In order to predict the future links, the networks are regarded as dynamic systems in this paper, and a similarity updating method, the spatial-temporal position drift model, is developed to simulate the evolutionary dynamics of node similarity. The updated similarities are then used as input information for estimating the likelihood of future links. Extensive experiments on real-world networks suggest that the proposed similarity index performs better than baseline methods and that the position drift model performs well for evolution prediction in real-world evolving networks.
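
    The paper's adaptive index and drift model are not specified in the abstract, but the family of structural similarity indices it builds on can be illustrated with the standard resource-allocation index; the matrix names and toy graph are assumptions.

```python
import numpy as np

def resource_allocation(A):
    """Resource-allocation similarity for all node pairs: s_xy = sum over
    common neighbours z of 1/degree(z). A is a binary adjacency matrix."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    inv_deg = np.where(deg > 0, 1.0 / np.maximum(deg, 1), 0.0)
    S = A @ np.diag(inv_deg) @ A        # sums 1/deg(z) over shared neighbours z
    np.fill_diagonal(S, 0.0)
    return S

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
S = resource_allocation(A)
# rank non-edges by similarity as candidate future links
candidates = [(i, j, S[i, j]) for i in range(4) for j in range(i + 1, 4) if not A[i, j]]
print(sorted(candidates, key=lambda t: -t[2]))
```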

  18. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method

    PubMed Central

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-01-01

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model is developed, based on the radial basis function neural network (RBFNN) data fusion method and a least squares support vector machine (LSSVM) tuned by an improved particle swarm optimization (IPSO) algorithm. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for nonlinear dissolved oxygen content forecasting. In addition, the improved particle swarm optimization algorithm is developed to determine the optimal parameters of the LSSVM with high accuracy and generalizability. In this study, comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206

  19. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method.

    PubMed

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-06-08

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model is developed, based on the radial basis function neural network (RBFNN) data fusion method and a least squares support vector machine (LSSVM) tuned by an improved particle swarm optimization (IPSO) algorithm. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for nonlinear dissolved oxygen content forecasting. In addition, the improved particle swarm optimization algorithm is developed to determine the optimal parameters of the LSSVM with high accuracy and generalizability. In this study, comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds.

  20. The Prediction of the Gas Utilization Ratio Based on TS Fuzzy Neural Network and Particle Swarm Optimization

    PubMed Central

    Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong

    2018-01-01

    Gas utilization ratio (GUR) is an important indicator used to evaluate the energy consumption of blast furnaces (BFs). Currently, existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and particle swarm optimization (PSO). PSO is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. The paper also applies the box-plot method to eliminate abnormal values from the raw data during preprocessing; this handles data that do not follow a normal distribution, a common consequence of complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN achieves higher prediction accuracy than the TS-FNN model and the SVM model, and that the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control. PMID:29461469

  1. The Prediction of the Gas Utilization Ratio based on TS Fuzzy Neural Network and Particle Swarm Optimization.

    PubMed

    Zhang, Sen; Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong

    2018-02-20

    Gas utilization ratio (GUR) is an important indicator used to evaluate the energy consumption of blast furnaces (BFs). Currently, existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and particle swarm optimization (PSO). PSO is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. The paper also applies the box-plot method to eliminate abnormal values from the raw data during preprocessing; this handles data that do not follow a normal distribution, a common consequence of complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN achieves higher prediction accuracy than the TS-FNN model and the SVM model, and that the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control.
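
    The box-plot cleaning step described above is easy to make concrete; the fence multiplier 1.5 and the sample readings are assumptions.

```python
import numpy as np

def boxplot_filter(x, k=1.5):
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR], the box-plot fences
    used here to clean non-normal industrial data."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mask = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
    return x[mask], mask

raw = np.array([47.2, 46.8, 48.1, 47.5, 12.0, 47.9, 95.3, 48.4])  # GUR-like values
clean, kept = boxplot_filter(raw)
print(clean)   # the two abnormal readings are removed
```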

  2. Integrative genetic risk prediction using non-parametric empirical Bayes classification.

    PubMed

    Zhao, Sihai Dave

    2017-06-01

    Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.

  3. PSSP-RFE: accurate prediction of protein structural class by recursive feature extraction from PSI-BLAST profile, physical-chemical property and functional annotations.

    PubMed

    Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulating biological sequences, which prompts an imperative need for high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction has become an increasingly challenging task. Amongst homology-based approaches, the accuracy of protein structural class prediction is sufficiently high for high-similarity datasets, but still far from satisfactory for low-similarity datasets, i.e., below 40% pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high- and low-similarity datasets. This method is based on a Support Vector Machine (SVM) in conjunction with integrated features from the position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors by recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low-similarity datasets.
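
    A rough sketch of the SVM-RFE step using scikit-learn, with a synthetic stand-in for the integrated PSSM/PROFEAT/GO features; the feature counts, kernel settings, and 5-fold CV (the paper used jackknife tests) are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in for the integrated PSSM/PROFEAT/GO feature matrix
X, y = make_classification(n_samples=300, n_features=120, n_informative=15,
                           random_state=0)

# SVM-RFE: recursively drop the features with the smallest |weight| of a
# linear SVM, then feed the surviving features to the final SVM engine.
ranker = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=30, step=5)
X_sel = ranker.fit_transform(X, y)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
acc = cross_val_score(clf, X_sel, y, cv=5).mean()
print(f"CV accuracy with top-30 features: {acc:.3f}")
```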

  4. Rapid prediction of particulate, humus and resistant fractions of soil organic carbon in reforested lands using infrared spectroscopy.

    PubMed

    Madhavan, Dinesh B; Baldock, Jeff A; Read, Zoe J; Murphy, Simon C; Cunningham, Shaun C; Perring, Michael P; Herrmann, Tim; Lewis, Tom; Cavagnaro, Timothy R; England, Jacqueline R; Paul, Keryn I; Weston, Christopher J; Baker, Thomas G

    2017-05-15

    Reforestation of agricultural lands with mixed-species environmental plantings can effectively sequester C. While accurate and efficient methods for predicting soil organic C content and composition have recently been developed for soils under agricultural land uses, such methods under forested land uses are currently lacking. This study aimed to develop a method using infrared spectroscopy for accurately predicting total organic C (TOC) and its fractions (particulate, POC; humus, HOC; and resistant, ROC organic C) in soils under environmental plantings. Soils were collected from 117 paired agricultural-reforestation sites across Australia. TOC fractions were determined in a subset of 38 reforested soils using physical fractionation by automated wet-sieving and 13C nuclear magnetic resonance (NMR) spectroscopy. Mid- and near-infrared spectra (MNIRS, 6000-450 cm-1) were acquired from finely-ground soils from environmental plantings and agricultural land. Satisfactory prediction models based on MNIRS and partial least squares regression (PLSR) were developed for TOC and its fractions. Leave-one-out cross-validations of MNIRS-PLSR models indicated accurate predictions (R2 > 0.90, negligible bias, ratio of performance to deviation > 3) and fraction-specific functional group contributions to beta coefficients in the models. TOC and its fractions were predicted using the cross-validated models and soil spectra for 3109 reforested and agricultural soils. The reliability of predictions determined using k-nearest neighbour score distance indicated that >80% of predictions were within the satisfactory inlier limit. The study demonstrated the utility of infrared spectroscopy (MNIRS-PLSR) to rapidly and economically determine TOC and its fractions and thereby accurately describe the effects of land use change such as reforestation on agricultural soils. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. An efficient link prediction index for complex military organization

    NASA Astrophysics Data System (ADS)

    Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing

    2017-03-01

    Quality of information is crucial for decision-makers to judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. Missing-link prediction methods and spurious-link identification algorithms can be applied if the complex military organization is modeled as a complex network in which nodes represent functional units and edges denote communication links. Traditional link prediction methods usually work well on homogeneous networks, but few do on heterogeneous ones, and a military network is a typical heterogeneous network with different types of nodes and edges. In this paper, we propose a combined link prediction index considering both node-type effects and nodes' structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on real military network data. We also investigate the algorithms' robustness in a noisy environment, and find that mistaken information is more misleading than incomplete information in military settings, which differs from recommendation systems; our method maintains the best performance under conditions of small noise. Since real military network intelligence must be carefully checked first due to its significance, and link prediction methods are adopted to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, as the FINC-E model used here to describe complex military organizations also suits many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for tasks like detecting underground relationships between terrorists and predicting potential business markets for decision-makers.

  6. Predicting disulfide connectivity from protein sequence using multiple sequence feature vectors and secondary structure.

    PubMed

    Song, Jiangning; Yuan, Zheng; Tan, Hao; Huber, Thomas; Burrage, Kevin

    2007-12-01

    Disulfide bonds are primary covalent crosslinks between two cysteine residues in proteins that play critical roles in stabilizing protein structures; they are commonly found in extracytoplasmic or secreted proteins. In protein folding prediction, the localization of disulfide bonds can greatly reduce the search in conformational space. Therefore, there is a great need to develop computational methods capable of accurately predicting disulfide connectivity patterns in proteins, which could have potentially important applications. We have developed a novel method to predict disulfide connectivity patterns from the protein primary sequence, using a support vector regression (SVR) approach based on multiple sequence feature vectors and secondary structure predicted by the PSIPRED program. The results indicate that our method achieves prediction accuracies of 74.4% and 77.9% at the protein and cysteine-pair levels, respectively, when averaged over proteins with two to five disulfide bridges using 4-fold cross-validation on a well-defined non-homologous dataset. We assessed the effects of different sequence encoding schemes on the prediction performance for disulfide connectivity. It is shown that a sequence encoding scheme based on multiple sequence feature vectors coupled with predicted secondary structure can significantly improve the prediction accuracy, enabling our method to outperform most other currently available predictors. Our work provides a complementary approach to current algorithms that should be useful in computationally assigning disulfide connectivity patterns and should help in the annotation of protein sequences generated by large-scale whole-genome projects. The prediction web server and Supplementary Material are accessible at http://foo.maths.uq.edu.au/~huber/disulfide

  7. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis.

    PubMed

    You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing

    2013-01-01

    Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. When applied to the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation, and that PCA-EELM runs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance and less time.
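
    A compact sketch of the PCA-EELM idea: project features with PCA, train several extreme learning machines (random hidden layer, ridge-regularized readout), and combine them by majority vote; the dataset, layer size, and ensemble size are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

class ELM:
    """Minimal extreme learning machine: random hidden layer, ridge readout."""
    def __init__(self, n_hidden=200, alpha=1e-2, seed=0):
        self.n_hidden, self.alpha = n_hidden, alpha
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # ridge-regularized least squares for output weights, labels in {-1, +1}
        self.beta = np.linalg.solve(H.T @ H + self.alpha * np.eye(self.n_hidden),
                                    H.T @ (2.0 * y - 1.0))
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta > 0).astype(int)

X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           random_state=4)
X_pca = PCA(n_components=20).fit_transform(X)   # discriminative reduced features
votes = np.stack([ELM(seed=s).fit(X_pca[:400], y[:400]).predict(X_pca[400:])
                  for s in range(9)])
y_hat = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote of nine ELMs
print(f"held-out accuracy: {(y_hat == y[400:]).mean():.3f}")
```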

  8. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  9. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    2014-04-01

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  10. The momentum transfer of incompressible turbulent separated flow due to cavities with steps

    NASA Technical Reports Server (NTRS)

    White, R. E.; Norton, D. J.

    1977-01-01

    An experimental study was conducted using a plate test bed with a turbulent boundary layer to determine the momentum transfer to the faces of step/cavity combinations on the plate. Experimental data were obtained from configurations including an isolated configuration and an array of blocks in tile patterns. A momentum transfer correlation model of pressure forces on an isolated step/cavity was developed from the experimental results to relate flow and geometry parameters. The experiments reveal that isolated step/cavity excrescences do not have a unique and unifying parameter group, due in part to cavity depth effects and in part to width-parameter scale effects. Drag predictions for tile patterns by an empirical kinetic-pressure method match the experimental results well. Trends were not, however, predicted by a method based on variable-roughness-density phenomenology.

  11. How reliable are ligand-centric methods for Target Fishing?

    NASA Astrophysics Data System (ADS)

    Peon, Antonio; Dang, Cuong; Ballester, Pedro

    2016-04-01

    Computational methods for Target Fishing (TF), also known as Target Prediction or Polypharmacology Prediction, can be used to discover new targets for small-molecule drugs. This may result in repositioning the drug in a new indication or improving our current understanding of its efficacy and side effects. While there is a substantial body of research on TF methods, there is still a need to improve their validation, which is often limited to a small part of the available targets and not easily interpretable by the user. Here we discuss how target-centric TF methods are inherently limited by the number of targets that they can possibly predict (this number is by construction much larger in ligand-centric techniques). We also propose a new benchmark to validate TF methods, which is particularly suited to analysing how predictive performance varies with the query molecule. On average over approved drugs, we estimate that only five predicted targets would have to be tested to find two true targets with submicromolar potency (strong variability in performance is, however, observed). In addition, we find that an approved drug currently has an average of eight known targets, which reinforces the notion that polypharmacology is a common and pronounced phenomenon. Furthermore, with the assistance of a control group of randomly selected molecules, we show that the targets of approved drugs are generally harder to predict.

  12. In Silico Prediction of Organ Level Toxicity: Linking Chemistry to Adverse Effects

    PubMed Central

    Cronin, Mark T.D.; Enoch, Steven J.; Mellor, Claire L.; Przybylak, Katarzyna R.; Richarz, Andrea-Nicole; Madden, Judith C.

    2017-01-01

    In silico methods to predict toxicity include the use of (Quantitative) Structure-Activity Relationships ((Q)SARs) as well as grouping (category formation) allowing for read-across. A challenging area for in silico modelling is the prediction of chronic toxicity and the No Observed (Adverse) Effect Level (NO(A)EL) in particular. A proposed solution to the prediction of chronic toxicity is to consider organ level effects, as opposed to modelling the NO(A)EL itself. This review has focussed on the use of structural alerts to identify potential liver toxicants. In silico profilers, or groups of structural alerts, have been developed based on mechanisms of action and informed by current knowledge of Adverse Outcome Pathways. These profilers are robust and can be coded computationally to allow for prediction. However, they do not cover all mechanisms or modes of liver toxicity and recommendations for the improvement of these approaches are given. PMID:28744348

  13. In Silico Prediction of Organ Level Toxicity: Linking Chemistry to Adverse Effects.

    PubMed

    Cronin, Mark T D; Enoch, Steven J; Mellor, Claire L; Przybylak, Katarzyna R; Richarz, Andrea-Nicole; Madden, Judith C

    2017-07-01

    In silico methods to predict toxicity include the use of (Quantitative) Structure-Activity Relationships ((Q)SARs) as well as grouping (category formation) allowing for read-across. A challenging area for in silico modelling is the prediction of chronic toxicity and the No Observed (Adverse) Effect Level (NO(A)EL) in particular. A proposed solution to the prediction of chronic toxicity is to consider organ level effects, as opposed to modelling the NO(A)EL itself. This review has focussed on the use of structural alerts to identify potential liver toxicants. In silico profilers, or groups of structural alerts, have been developed based on mechanisms of action and informed by current knowledge of Adverse Outcome Pathways. These profilers are robust and can be coded computationally to allow for prediction. However, they do not cover all mechanisms or modes of liver toxicity and recommendations for the improvement of these approaches are given.
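
    A structural-alert profiler reduces, at its simplest, to substructure matching; the sketch below assumes RDKit is available and uses two illustrative alerts, not the published hepatotoxicity profilers.

```python
# A minimal structural-alert screen in the spirit of the profilers described
# above; the alert names and SMARTS patterns are illustrative assumptions.
from rdkit import Chem

ALERTS = {
    "nitroaromatic": "c1ccccc1[N+](=O)[O-]",
    "Michael acceptor (enone)": "C=CC(=O)",
}

def screen(smiles):
    """Return the list of alert names matched by a molecule, or None if the
    SMILES string cannot be parsed."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return [name for name, smarts in ALERTS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

for smi in ["c1ccccc1[N+](=O)[O-]", "CCO"]:   # nitrobenzene, ethanol
    print(smi, "->", screen(smi))
```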

  14. NetMHCIIpan-2.0 - Improved pan-specific HLA-DR predictions using a novel concurrent alignment and weight optimization training procedure.

    PubMed

    Nielsen, Morten; Justesen, Sune; Lund, Ole; Lundegaard, Claus; Buus, Søren

    2010-11-13

    Binding of peptides to Major Histocompatibility class II (MHC-II) molecules plays a central role in governing responses of the adaptive immune system. MHC-II molecules sample peptides from the extracellular space, allowing the immune system to detect the presence of foreign microbes from this compartment. Predicting which peptides bind to an MHC-II molecule is therefore of pivotal importance for understanding the immune response and its effect on host-pathogen interactions. The experimental cost associated with characterizing the binding motif of an MHC-II molecule is significant, and large efforts have therefore been devoted to developing accurate computational methods capable of predicting this binding event. Prediction of peptide binding to MHC-II is complicated by the open binding cleft of the MHC-II molecule, which allows binding of peptides extending out of the binding groove. Moreover, the genes encoding the MHC molecules are immensely diverse, leading to a large set of different MHC molecules, each potentially binding a unique set of peptides. Characterizing each MHC-II molecule using peptide-screening binding assays is hence not a viable option. Here, we present an MHC-II binding prediction algorithm aimed at dealing with these challenges. The method is a pan-specific version of the earlier published allele-specific NN-align algorithm and does not require any pre-alignment of the input data. This allows the method to benefit also from information from alleles covered by limited binding data. The method is evaluated on a large and diverse set of benchmark data and is shown to significantly outperform state-of-the-art MHC-II prediction methods. In particular, the method boosts the performance for alleles characterized by limited binding data, where conventional allele-specific methods tend to achieve poor prediction accuracy. The method thus shows great potential for efficiently boosting the accuracy of MHC-II binding prediction, as accurate predictions can be obtained for novel alleles at highly reduced experimental cost. Pan-specific binding predictions can be obtained for all alleles with known protein sequence, and the method benefits from training data even for alleles where only a few binders are known. The method and benchmark data are available at http://www.cbs.dtu.dk/services/NetMHCIIpan-2.0.

  15. A comparison of optimisation methods and knee joint degrees of freedom on muscle force predictions during single-leg hop landings.

    PubMed

    Mokhtarzadeh, Hossein; Perraton, Luke; Fok, Laurence; Muñoz, Mario A; Clark, Ross; Pivonka, Peter; Bryant, Adam L

    2014-09-22

    The aim of this paper was to compare the effect of different optimisation methods and different knee joint degrees of freedom (DOF) on muscle force predictions during a single-legged hop. Nineteen subjects performed single-legged hopping manoeuvres and subject-specific musculoskeletal models were developed to predict muscle forces during the movement. Muscle forces were predicted using static optimisation (SO) and computed muscle control (CMC) methods using either 1 or 3 DOF knee joint models. All sagittal and transverse plane joint angles calculated using inverse kinematics or CMC in a 1 DOF or 3 DOF knee were well-matched (RMS error < 3°). Biarticular muscles (hamstrings, rectus femoris and gastrocnemius) showed more differences in muscle force profiles when comparing between the different muscle prediction approaches, with these muscles showing larger time delays in many of the comparisons. The muscle force magnitudes of vasti, gluteus maximus and gluteus medius were not greatly influenced by the choice of muscle force prediction method, with low normalised root mean squared errors (<48%) observed in most comparisons. We conclude that SO and CMC can be used to predict lower-limb muscle co-contraction during hopping movements. However, care must be taken in interpreting the magnitude of force predicted in the biarticular muscles and the soleus, especially when using a 1 DOF knee. Despite this limitation, given that SO is a more robust and computationally efficient method for predicting muscle forces than CMC, we suggest that SO can be used in conjunction with musculoskeletal models that have a 1 or 3 DOF knee joint to study the relative differences and the role of muscles during hopping activities in future studies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Predicting Heart Rate at the Ventilatory Threshold for Aerobic Exercise Prescription in Persons With Chronic Stroke.

    PubMed

    Boyne, Pierce; Buhr, Sarah; Rockwell, Bradley; Khoury, Jane; Carl, Daniel; Gerson, Myron; Kissela, Brett; Dunning, Kari

    2015-10-01

    Treadmill aerobic exercise improves gait, aerobic capacity, and cardiovascular health after stroke, but a lack of specificity in current guidelines could lead to underdosing or overdosing of aerobic intensity. The ventilatory threshold (VT) has been recommended as an optimal, specific starting point for continuous aerobic exercise. However, VT measurement is not available in clinical stroke settings. Therefore, the purpose of this study was to identify an accurate method to predict heart rate at the VT (HRVT) for use as a surrogate for VT. A cross-sectional design was employed. Using symptom-limited graded exercise test (GXT) data from 17 subjects more than 6 months poststroke, prediction methods for HRVT were derived by traditional target HR calculations (percentage of HRpeak achieved during GXT, percentage of peak HR reserve [HRRpeak], percentage of age-predicted maximal HR, and percentage of age-predicted maximal HR reserve) and by regression analysis. The validity of the prediction methods was then tested among 8 additional subjects. All prediction methods were validated by the second sample, so data were pooled to calculate refined prediction equations. HRVT was accurately predicted by 80% HRpeak (R, 0.62; standard deviation of error [SDerror], 7 bpm), 62% HRRpeak (R, 0.66; SDerror, 7 bpm), and regression models that included HRpeak (R, 0.62-0.75; SDerror, 5-6 bpm). Derived regression equations, 80% HRpeak and 62% HRRpeak, provide a specific target intensity for initial aerobic exercise prescription that should minimize underdosing and overdosing for persons with chronic stroke. The specificity of these methods may lead to more efficient and effective treatment for poststroke deconditioning.Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A114).
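
    The two validated target calculations are simple enough to state directly; the patient values in the example are hypothetical.

```python
def hrvt_from_peak(hr_peak):
    """80% of GXT peak heart rate (the 80% HRpeak method)."""
    return 0.80 * hr_peak

def hrvt_from_hrr(hr_peak, hr_rest):
    """Karvonen-style 62% of peak heart rate reserve (the 62% HRRpeak method)."""
    return hr_rest + 0.62 * (hr_peak - hr_rest)

# Hypothetical poststroke patient: peak HR 150 bpm, resting HR 72 bpm
print(hrvt_from_peak(150))        # 120.0 bpm
print(hrvt_from_hrr(150, 72))     # ~120.4 bpm
```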

  17. Effectively parameterizing dissipative particle dynamics using COSMO-SAC: A partition coefficient study

    NASA Astrophysics Data System (ADS)

    Saathoff, Jonathan

    2018-04-01

    Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena in complex mixtures and macromolecules. Methods that quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map predicted activity coefficients derived from COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite-dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods that mitigate much of their deleterious impact, including a novel approach to remove errors caused by bonding DPD beads together. Using these methods, logarithmic hexane/water partition coefficients were calculated for 61 molecules. The root mean-squared error of these calculations, with respect to the final mapping procedure, was 0.14, a very low value. Cognizance of the above limitations can greatly enhance the predictive power of DPD.

  18. Analysis and prediction of operating vehicle load effects on Highway bridges under the weight charge policy

    NASA Astrophysics Data System (ADS)

    Huang, Haiyun; Zhang, Junping; Li, Yonghe

    2018-05-01

    Under the weight charge policy, weigh-in-motion data were collected at a toll station on the Jing-Zhu Expressway, and a statistical analysis of the vehicle load data was carried out. To calculate the operating vehicle load effects on bridges, random traffic flows were generated by the Monte Carlo method and applied using the influence-line loading method, yielding the maximum bending moment effects of simply supported beams. The extreme value type I distribution and the normal distribution were used to model the distribution of the maximum bending moment effect. By extrapolation with the Rice formula and the extreme value type I distribution, predicted values of the maximum load effects were obtained. By comparison with the vehicle load effect according to the current specification, some references are provided for the management of operating vehicles and the revision of the bridge specifications.
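
    A toy version of the simulation-plus-extrapolation chain: generate random load streams, take block maxima of a bending-moment effect, and fit the extreme value type I (Gumbel) distribution; the load model, the influence-line peak, and the return period are assumptions.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(5)

def daily_max_moment(n_vehicles=2000, span=30.0):
    """Hypothetical daily maximum midspan bending moment of a simply supported
    beam under a stream of single-axle loads; for a unit point load, the
    midspan-moment influence line peaks at span/4."""
    weights = rng.lognormal(mean=4.0, sigma=0.5, size=n_vehicles)  # kN, assumed
    return weights.max() * span / 4.0          # worst single placement, kN*m

daily_maxima = np.array([daily_max_moment() for _ in range(365)])

# Fit the extreme value type I (Gumbel) distribution to the daily maxima and
# extrapolate to a 100-year return value
loc, scale = gumbel_r.fit(daily_maxima)
m_100yr = gumbel_r.ppf(1.0 - 1.0 / (100 * 365), loc=loc, scale=scale)
print(f"predicted 100-year moment: {m_100yr:.0f} kN*m")
```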

  19. Smart Aquifer Characterisation validated using Information Theory and Cost benefit analysis

    NASA Astrophysics Data System (ADS)

    Moore, Catherine

    2016-04-01

    The field data acquisition required to characterise aquifer systems is time-consuming and expensive. Decisions regarding field testing, the type of field measurements to make, and the spatial and temporal resolution of measurements have significant cost repercussions and affect the accuracy of the predictive simulations that rely on them. The Smart Aquifer Characterisation (SAC) research programme (New Zealand (NZ)) addresses this issue by assembling and validating a suite of innovative methods for characterising groundwater systems at the large, regional and national scales. The primary outcome is a suite of cost-effective tools and procedures provided to resource managers to advance the understanding and management of groundwater systems, and thereby assist decision makers and communities in the management of their groundwater resources, including the setting of land use limits that protect fresh water flows and quality and the ecosystems dependent on that fresh water. The programme has focused on novel investigation approaches, including the use of geophysics, satellite remote sensing, temperature sensing and age dating. The SMART (Save Money And Reduce Time) aspect of the programme emphasises techniques that use these passive, cost-effective data sources to characterise groundwater systems at both the aquifer and the national scale by: • determination of aquifer hydraulic properties; • determination of aquifer dimensions; • quantification of fluxes between groundwater and surface water; • groundwater age dating. These methods allow either a lower-cost estimate of these properties and fluxes, or a greater spatial and temporal coverage for the same cost. To demonstrate the cost effectiveness of the methods, a 'data worth' analysis is undertaken. The data worth method quantifies the utility of observation data in terms of how much it reduces the uncertainty of model parameters and of the decision-focussed predictions which depend on these parameters. Such decision-focussed predictions can include many aspects of system behaviour which underpin management decisions, e.g. drawdown of groundwater levels, salt water intrusion, stream depletion, or wetland water level. The value of a data type or an observation location (e.g. remote sensing data (Westerhoff 2015) or a distributed temperature sensing measurement) is greater the more it enhances the certainty with which the model is able to predict such environmental behaviour. By comparing the difference in predictive uncertainty with or without such data, the value of potential observations is assessed; this can be achieved rapidly using linear predictive uncertainty analysis methods (Moore 2005; Moore and Doherty 2005), as sketched below. By assessing the tension between the cost of data acquisition and the predictive accuracy achieved by gathering these observations in a Pareto analysis, the relative cost effectiveness of these novel methods can be compared with more traditional measurements (e.g. bore logs, aquifer pumping tests, and simultaneous stream loss gaugings) for a suite of pertinent groundwater management decisions (Wallis et al. 2014). This comparison illuminates those field data acquisition methods which offer the best value for the specific issues managers face in any region, and also indicates the diminishing returns of increasingly large and expensive data sets. References: Wallis I, Moore C, Post V, Wolf L, Martens E, Prommer H (2014). Using predictive uncertainty analysis to optimise tracer test design and data acquisition. Journal of Hydrology 515, 191-204. Moore C (2005). The use of regularized inversion in groundwater model calibration and prediction uncertainty analysis. PhD thesis, The University of Queensland, Australia. Moore C, Doherty J (2005). Role of the calibration process in reducing model predictive error. Water Resources Research 41(5), W05050. Westerhoff RS (2015). Using uncertainty of Penman and Penman-Monteith methods in combined satellite and ground-based evapotranspiration estimates. Remote Sensing of Environment 169, 102-112.
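
    The data worth calculation described above can be illustrated with a linear (first-order, second-moment) uncertainty sketch: the predictive variance is computed from a prior parameter covariance with and without a candidate observation set, and the difference is the worth of that data. The sketch below uses synthetic matrices; it is not the SAC implementation.

    ```python
    import numpy as np

    def predictive_variance(J, C_p, C_e, y):
        """Linear (FOSM) predictive variance after assimilating observations.

        J   : (n_obs, n_par) Jacobian of observations w.r.t. parameters
        C_p : (n_par, n_par) prior parameter covariance
        C_e : (n_obs, n_obs) observation noise covariance
        y   : (n_par,) sensitivity of the prediction to the parameters
        """
        if J is None or J.size == 0:        # no data: prior predictive variance
            C_post = C_p
        else:
            S = J @ C_p @ J.T + C_e         # innovation covariance
            C_post = C_p - C_p @ J.T @ np.linalg.solve(S, J @ C_p)
        return y @ C_post @ y

    rng = np.random.default_rng(0)
    n_par = 20
    C_p = np.eye(n_par)                     # prior parameter uncertainty
    y = rng.normal(size=n_par)              # prediction sensitivities
    J = rng.normal(size=(5, n_par))         # 5 candidate observations
    C_e = 0.1 * np.eye(5)                   # observation noise

    var_without = predictive_variance(None, C_p, None, y)
    var_with = predictive_variance(J, C_p, C_e, y)
    print("data worth (variance reduction):", var_without - var_with)
    ```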

  20. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters whose uncertainty is not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
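
    A minimal sketch of the nondeterministic approach: a fixed model parameter is replaced by a probability distribution and propagated through the model by Monte Carlo sampling, giving both an expected spectrum and an uncertainty band. The one-parameter "model" below is a hypothetical stand-in, not the broadband shock-noise model itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def spectral_level(freq, p):
        # hypothetical one-parameter spectrum model (placeholder)
        return 10.0 * np.exp(-((np.log(freq) - p) ** 2))

    freq = np.logspace(2, 4, 50)
    p_samples = rng.normal(loc=8.0, scale=0.3, size=2000)  # parameter distribution
    levels = np.array([spectral_level(freq, p) for p in p_samples])

    mean_level = levels.mean(axis=0)                    # expected spectrum
    band = np.percentile(levels, [2.5, 97.5], axis=0)   # 95% uncertainty band
    print("peak predicted level:", mean_level.max())
    ```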

  1. A feature-based approach to modeling protein–protein interaction hot spots

    PubMed Central

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-01-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π-related interactions, especially π···π interactions. PMID:19273533
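
    A hedged sketch of the two-stage scheme described above: rank candidate interface features with a decision tree, keep the top few, then train an SVM on the reduced set. Data, labels and the number of retained features are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 54))     # 54 candidate interface features
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=300) > 0).astype(int)  # hot spot?

    # Stage 1: rank features with a decision tree and keep the top 3
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
    top = np.argsort(tree.feature_importances_)[::-1][:3]
    X_sel = X[:, top]

    # Stage 2: SVM on the reduced feature set
    svm = SVC(kernel="rbf", C=1.0)
    print("CV accuracy:", cross_val_score(svm, X_sel, y, cv=5).mean())
    ```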

  2. Do causal concentration-response functions exist? A critical review of associational and causal relations between fine particulate matter and mortality.

    PubMed

    Cox, Louis Anthony Tony

    2017-08-01

    Concentration-response (C-R) functions relating concentrations of pollutants in ambient air to mortality risks or other adverse health effects provide the basis for many public health risk assessments, benefits estimates for clean air regulations, and recommendations for revisions to existing air quality standards. The assumption that C-R functions relating levels of exposure and levels of response estimated from historical data usefully predict how future changes in concentrations would change risks has seldom been carefully tested. This paper critically reviews literature on C-R functions for fine particulate matter (PM2.5) and mortality risks. We find that most of them describe historical associations rather than valid causal models for predicting effects of interventions that change concentrations. The few papers that explicitly attempt to model causality rely on unverified modeling assumptions, casting doubt on their predictions about effects of interventions. A large literature on modern causal inference algorithms for observational data has been little used in C-R modeling. Applying these methods to publicly available data from Boston and the South Coast Air Quality Management District around Los Angeles shows that C-R functions estimated for one do not hold for the other. Changes in month-specific PM2.5 concentrations from one year to the next do not help to predict corresponding changes in average elderly mortality rates in either location. Thus, the assumption that estimated C-R relations predict effects of pollution-reducing interventions may not be true. Better causal modeling methods are needed to better predict how reducing air pollution would affect public health.

  3. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11

    PubMed Central

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-01-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
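
    The integration step can be pictured as score fusion: each quality assessment method scores every model in the pool, scores are standardised per method, and the average standardized score ranks the models. This is an illustrative consensus sketch, not the MULTICOM pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    scores = rng.uniform(size=(14, 50))   # 14 QA methods x 50 candidate models

    # z-normalise per method so differently scaled scores are comparable
    z = (scores - scores.mean(axis=1, keepdims=True)) / scores.std(axis=1, keepdims=True)
    consensus = z.mean(axis=0)            # average standardized score per model
    best_model = int(np.argmax(consensus))
    print("selected model index:", best_model)
    ```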

  4. Effect of tensile mean stress on fatigue behavior of single-crystal and directionally solidified superalloys

    NASA Technical Reports Server (NTRS)

    Kalluri, Sreeramesh; Mcgaw, Michael A.

    1992-01-01

    Two nickel-base superalloys, single-crystal PWA 1480 and directionally solidified MAR-M 246 + Hf, were studied in view of the potential usage of the former and the existing usage of the latter as blade materials for the turbomachinery of the Space Shuttle main engine. The baseline zero mean stress (ZMS) fatigue life (FL) behavior of these superalloys was established, and then the effect of tensile mean stress (TMS) on their FL behavior was characterized. A stress-range-based FL prediction approach was used to characterize both the ZMS and TMS fatigue data. In the past, several researchers have developed methods to account for the detrimental effect of tensile mean stress on the FL of polycrystalline engineering alloys. These methods were applied to characterize the TMS fatigue data of single-crystal PWA 1480 and directionally solidified MAR-M 246 + Hf and were found to be unsatisfactory. Therefore, a method of accounting for the TMS effect on FL, based on a technique proposed by Heidmann and Manson, was developed to characterize the TMS fatigue data of these superalloys. Details of this method and its relationship to the conventionally used mean stress methods in FL prediction are discussed.
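
    For reference, the conventional polycrystalline-alloy corrections of the kind the abstract evaluates express the mean-stress effect as an equivalent fully reversed stress amplitude; three common forms are sketched below with illustrative numbers (not PWA 1480 or MAR-M 246 + Hf data).

    ```python
    import math

    def goodman(sa, sm, su):
        """Goodman equivalent amplitude; su = ultimate tensile strength (MPa)."""
        return sa / (1.0 - sm / su)

    def morrow(sa, sm, sf):
        """Morrow correction; sf = fatigue strength coefficient (MPa)."""
        return sa / (1.0 - sm / sf)

    def swt(sa, sm):
        """Smith-Watson-Topper: sqrt(sigma_max * sigma_amplitude)."""
        return math.sqrt((sa + sm) * sa)

    # stress amplitude 400 MPa with 200 MPa tensile mean stress (illustrative)
    print(goodman(400.0, 200.0, 1200.0), morrow(400.0, 200.0, 1600.0), swt(400.0, 200.0))
    ```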

  5. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time

    PubMed Central

    Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz

    2011-01-01

    Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732
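
    The leave-one-trial-out idea can be sketched as follows: a linear prediction of the true-endpoint effect from the surrogate effect is refitted with each historical trial held out, and the spread of the held-out errors estimates the random extrapolation error. The data and the simple linear model below are simplifications of the paper's mixture/principal-stratification options.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    surrogate = rng.normal(0.3, 0.2, size=10)                  # 10 historical trials
    true_eff = 0.8 * surrogate + rng.normal(0, 0.05, size=10)  # true-endpoint effects

    errors = []
    for i in range(len(surrogate)):
        keep = np.arange(len(surrogate)) != i
        b, a = np.polyfit(surrogate[keep], true_eff[keep], 1)  # slope, intercept
        errors.append(true_eff[i] - (a + b * surrogate[i]))    # held-out error
    print("extrapolation error s.d.:", np.std(errors, ddof=1))
    ```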

  6. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    PubMed

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
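
    A toy interpolated Markov model conveys the core idea: conditional probabilities of several orders are mixed, so variable-length word statistics of known CRMs can be captured even where high-order contexts are sparse. Real IMMs learn context-dependent interpolation weights; fixed weights and a pseudocount are used here for brevity.

    ```python
    from collections import defaultdict

    K = 3                                                # maximum Markov order
    counts = [defaultdict(int) for _ in range(K + 1)]    # counts[k][(context, base)]
    totals = [defaultdict(int) for _ in range(K + 1)]    # totals[k][context]

    def train(seq):
        for i in range(len(seq)):
            for k in range(K + 1):
                if i >= k:
                    ctx = seq[i - k:i]
                    counts[k][(ctx, seq[i])] += 1
                    totals[k][ctx] += 1

    def prob(ctx, base, weights=(0.1, 0.2, 0.3, 0.4)):
        # mixture over orders 0..K with add-one smoothing (4-letter alphabet)
        p = 0.0
        for k in range(K + 1):
            c = ctx[len(ctx) - k:]
            p += weights[k] * (counts[k][(c, base)] + 1) / (totals[k][c] + 4)
        return p

    train("ACGTACGTTTGACGT")
    print(prob("ACG", "T"))   # likelihood of T after context ACG
    ```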

  7. Toward Fully in Silico Melting Point Prediction Using Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y; Maginn, EJ

    2013-03-01

    Melting point is one of the most fundamental and practically important properties of a compound. Molecular simulation methods have been developed for the accurate computation of melting points. However, all of these methods need an experimental crystal structure as input, which means that such calculations are not really predictive, since the melting point can be measured easily in experiments once a crystal structure is known. On the other hand, crystal structure prediction (CSP) has become an active field and significant progress has been made, although challenges still exist. One of the main challenges is the existence of many crystal structures (polymorphs) that are very close in energy. Thermal effects and kinetic factors make the situation even more complicated, such that it is still not trivial to predict experimental crystal structures. In this work, we exploit the fact that free energy differences are often small between crystal structures. We show that accurate melting point predictions can be made by using a reasonable crystal structure from CSP as a starting point for a free energy-based melting point calculation. The key is that most crystal structures predicted by CSP have free energies that are close to that of the experimental structure. The proposed method was tested on two rigid molecules, and the results suggest that a fully in silico melting point prediction method is possible.
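
    The free-energy route to the melting point reduces to locating the temperature at which the solid and liquid free-energy curves cross. The sketch below uses synthetic polynomial G(T) curves; in practice each curve comes from the free-energy calculations the abstract describes.

    ```python
    import numpy as np

    T = np.linspace(250.0, 450.0, 201)
    G_solid = -0.050 * T - 1.0e-4 * T**2     # placeholder free-energy curves
    G_liquid = -0.015 * T - 2.0e-4 * T**2

    dG = G_liquid - G_solid                  # > 0 where the solid is stable
    i = np.argmax(np.sign(dG[:-1]) != np.sign(dG[1:]))       # bracketing interval
    Tm = T[i] - dG[i] * (T[i + 1] - T[i]) / (dG[i + 1] - dG[i])  # linear interpolation
    print("predicted melting point:", Tm)
    ```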

  8. Computational modeling of membrane proteins

    PubMed Central

    Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.

    2014-01-01

    The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the Protein Data Bank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688

  9. ProTox: a web server for the in silico prediction of rodent oral toxicity.

    PubMed

    Drwal, Malgorzata N; Banerjee, Priyanka; Dunkel, Mathias; Wettig, Martin R; Preissner, Robert

    2014-07-01

    Animal trials are currently the major method for determining the possible toxic effects of drug candidates and cosmetics. In silico prediction methods represent an alternative approach and aim to rationalize the preclinical drug development, thus enabling the reduction of the associated time, costs and animal experiments. Here, we present ProTox, a web server for the prediction of rodent oral toxicity. The prediction method is based on the analysis of the similarity of compounds with known median lethal doses (LD50) and incorporates the identification of toxic fragments, therefore representing a novel approach in toxicity prediction. In addition, the web server includes an indication of possible toxicity targets which is based on an in-house collection of protein-ligand-based pharmacophore models ('toxicophores') for targets associated with adverse drug reactions. The ProTox web server is open to all users and can be accessed without registration at: http://tox.charite.de/tox. The only requirement for the prediction is the two-dimensional structure of the input compounds. All ProTox methods have been evaluated based on a diverse external validation set and displayed strong performance (sensitivity, specificity and precision of 76, 95 and 75%, respectively) and superiority over other toxicity prediction tools, indicating their possible applicability for other compound classes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
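
    The similarity-based core of such a prediction can be sketched as a similarity-weighted average of known LD50 values, with Tanimoto similarity on binary fragment fingerprints. Fingerprints and LD50 values below are illustrative and the fragment identifiers hypothetical; this is not the ProTox code.

    ```python
    def tanimoto(a, b):
        """Tanimoto similarity of two fingerprint sets."""
        return len(a & b) / len(a | b) if a | b else 0.0

    known = [  # (fragment-fingerprint set, LD50 mg/kg) -- illustrative values only
        ({1, 4, 7, 9}, 320.0),
        ({1, 4, 8}, 250.0),
        ({2, 5, 9}, 1200.0),
    ]

    def predict_ld50(query, k=2):
        # similarity-weighted average over the k most similar known compounds
        sims = sorted(((tanimoto(query, fp), ld) for fp, ld in known), reverse=True)[:k]
        w = sum(s for s, _ in sims) or 1.0
        return sum(s * ld for s, ld in sims) / w

    print(predict_ld50({1, 4, 9}))
    ```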

  10. RFDT: A Rotation Forest-based Predictor for Predicting Drug-Target Interactions Using Drug Structure and Protein Sequence Information.

    PubMed

    Wang, Lei; You, Zhu-Hong; Chen, Xing; Yan, Xin; Liu, Gang; Zhang, Wei

    2018-01-01

    Identification of interactions between drugs and target proteins plays an important role in discovering new drug candidates. However, identifying drug-target interactions experimentally remains extremely time-consuming, expensive and challenging, even nowadays. Therefore, it is urgent to develop new computational methods to predict potential drug-target interactions (DTI). In this article, a novel computational model is developed for predicting potential drug-target interactions under the theory that each drug-target interaction pair can be represented by the structural properties of the drug and evolutionary information derived from the protein. Specifically, the protein sequences are encoded as a Position-Specific Scoring Matrix (PSSM) descriptor, which contains biological evolutionary information, and the drug molecules are encoded as a fingerprint feature vector, which represents the existence of certain functional groups or fragments. Four benchmark datasets involving enzymes, ion channels, GPCRs and nuclear receptors are independently used for establishing predictive models with the Rotation Forest (RF) model. The proposed method achieved prediction accuracies of 91.3%, 89.1%, 84.1% and 71.1% for the four datasets, respectively. In order to make our method more persuasive, we compared our classifier with the state-of-the-art Support Vector Machine (SVM) classifier. We also compared the proposed method with other excellent methods. Experimental results demonstrate that the proposed method is effective in the prediction of DTI, and can provide assistance for new drug research and development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
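
    A sketch of the pair representation: a protein descriptor (standing in for PSSM-derived features) is concatenated with a drug fingerprint and passed to an ensemble classifier. A random forest is used as a stand-in because scikit-learn provides no Rotation Forest; data are synthetic.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)
    protein_feats = rng.normal(size=(200, 400))      # stand-in for PSSM features
    drug_fps = rng.integers(0, 2, size=(200, 256))   # binary fingerprints
    X = np.hstack([protein_feats, drug_fps])         # one row per drug-target pair
    y = rng.integers(0, 2, size=200)                 # interaction labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.predict_proba(X[:3])[:, 1])            # interaction probabilities
    ```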

  11. Comparison of human skin irritation patch test data with in vitro skin irritation assays and animal data.

    PubMed

    Jírová, Dagmar; Basketter, David; Liebsch, Manfred; Bendová, Hana; Kejlová, Kristina; Marriott, Marie; Kandárová, Helena

    2010-02-01

    Efforts to replace the rabbit skin irritation test have been underway for many years, encouraged by the EU Cosmetics Directive and REACH. Recently various in vitro tests have been developed, evaluated and validated. A key difficulty in confirming the validity of in vitro methods is that animal data are scarce and of limited utility for prediction of human effects, which adversely impacts their acceptance. This study examines whether in vivo or in vitro data most accurately predicted human effects. Using the 4-hr human patch test (HPT) we examined a number of chemicals whose EU classification of skin irritancy is known to be borderline, or where in vitro methods provided conflicting results. Of the 16 chemicals classified as irritants in the rabbit, only five substances were found to be significantly irritating to human skin. Concordance of the rabbit test with the 4-hr HPT was only 56%, whereas concordance of human epidermis models with human data was 76% (EpiDerm) and 70% (EPISKIN). The results confirm observations that rabbits overpredict skin effects in humans. Therefore, when validating in vitro methods, all available information, including human data, should be taken into account before making conclusions about their predictive capacity.

  12. Prediction of Protein Configurational Entropy (Popcoen).

    PubMed

    Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel

    2018-03-13

    A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast, compared to previous approaches, because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect prediction. Integrating the predicted entropy can yield a significant accuracy increase as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/ .

  13. Durability predictions of adhesively bonded composite structures using accelerated characterization methods

    NASA Technical Reports Server (NTRS)

    Brinson, H. F.

    1985-01-01

    The utilization of adhesive bonding for composite structures is briefly assessed. The need for a method to determine damage initiation and propagation for such joints is outlined. Methods currently in use to analyze both adhesive joints and fiber reinforced plastics are mentioned, and it is indicated that all of these methods require as input the mechanical properties of the polymeric adhesive and the composite matrix material. The mechanical properties of polymers are indicated to be viscoelastic and sensitive to environmental effects. A method to analytically characterize environmentally dependent linear and nonlinear viscoelastic properties is given. It is indicated that the methodology can be used to extrapolate short-term data to long-term design lifetimes; that is, the method can be used for long-term durability predictions. Experimental results for neat adhesive resins, polymers used as composite matrices, and unidirectional composite laminates are given. The data are fitted well by the analytical durability methodology. Finally, suggestions are outlined for the development of an analytical methodology for the durability prediction of adhesively bonded composite structures.
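
    Time-temperature superposition is a standard ingredient of such accelerated characterization: short-term tests at elevated temperature are shifted along the log-time axis to build a long-term master curve. The WLF constants below are the common "universal" values, used here purely for illustration.

    ```python
    def wlf_shift(T, Tref, C1=17.4, C2=51.6):
        """log10 of the WLF shift factor a_T."""
        return -C1 * (T - Tref) / (C2 + (T - Tref))

    T_test, T_ref = 80.0, 25.0          # accelerated test vs service temperature (C)
    log_aT = wlf_shift(T_test, T_ref)
    t_short = 1.0e4                     # seconds of testing at 80 C
    t_equiv = t_short / 10 ** log_aT    # equivalent time at 25 C on the master curve
    print(f"log10(a_T) = {log_aT:.2f}; {t_short:.0e} s at 80 C ~ {t_equiv:.2e} s at 25 C")
    ```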

  14. Genomic selection of agronomic traits in hybrid rice using an NCII population.

    PubMed

    Xu, Yang; Wang, Xin; Ding, Xiaowen; Zheng, Xingfei; Yang, Zefeng; Xu, Chenwu; Hu, Zhongli

    2018-05-10

    Hybrid breeding is an effective tool to improve yield in rice, while parental selection remains the key and difficult issue. Genomic selection (GS) provides opportunities to predict the performance of hybrids before phenotypes are measured. However, the application of GS is influenced by several genetic and statistical factors. Here, we used a rice North Carolina II (NC II) population constructed by crossing 115 rice varieties with five male sterile lines as a model to evaluate the effects of statistical method, heritability, marker density and training population size on prediction of hybrid performance. From the comparison of six GS methods, we found that predictabilities for different methods are significantly different, with genomic best linear unbiased prediction (GBLUP) and the least absolute shrinkage and selection operator (LASSO) being the best, and support vector machine (SVM) and partial least squares (PLS) being the worst. Marker density has less influence on predicting rice hybrid performance than the size of the training population. Additionally, we used the 575 (115 × 5) hybrids as a training population to predict eight agronomic traits of all hybrids derived from the 120 (115 + 5) rice varieties each mated with 3023 rice accessions from the 3000 rice genomes project (3K RGP). Of the 362,760 potential hybrids, selection of the top 100 predicted hybrids would lead to 35.5%, 23.25%, 30.21%, 42.87%, 61.80%, 75.83%, 19.24% and 36.12% increases in grain yield per plant, thousand-grain weight, panicle number per plant, plant height, secondary branch number, grain number per panicle, panicle length and primary branch number, respectively. This study evaluated the factors affecting predictability for hybrid prediction and demonstrated the implementation of GS to predict hybrid performance in rice. Our results suggest that GS could enable the rapid selection of superior hybrids, thus increasing the efficiency of rice hybrid breeding.
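
    A minimal GBLUP sketch, assuming VanRaden-style marker centring: a genomic relationship matrix is built from the genotypes and the mixed-model prediction is solved as kernel ridge regression. Sizes, genotypes and phenotypes are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_train, n_test, m = 500, 50, 1000
    M = rng.integers(0, 3, size=(n_train + n_test, m)).astype(float)  # 0/1/2 genotypes
    W = M - M.mean(axis=0)                 # centre markers (VanRaden-style)
    G = W @ W.T / m                        # genomic relationship matrix

    y = rng.normal(size=n_train)           # training phenotypes
    lam = 1.0                              # residual-to-genetic variance ratio
    K = G[:n_train, :n_train]
    alpha = np.linalg.solve(K + lam * np.eye(n_train), y - y.mean())
    y_hat = y.mean() + G[n_train:, :n_train] @ alpha   # GEBVs for untested hybrids
    print(y_hat[:5])
    ```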

  15. Estimation of body density based on hydrostatic weighing without head submersion in young Japanese adults.

    PubMed

    Demura, S; Sato, S; Kitabayashi, T

    2006-06-01

    This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R2 = 0.737, adjusted R2 = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R2 = 0.833, adjusted R2 = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.
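
    The reported equations can be applied directly; the functions below transcribe them with the variables and units exactly as stated in the abstract (note the female equation takes body mass in grams as printed). The corrected hydrostatic weight HWwithHS ≈ HWwithoutHS − D then enters the standard underwater-weighing density formula.

    ```python
    def D_male(head_length, head_circ, head_breadth, head_thickness):
        """Correction term D (g); head dimensions in cm, as stated in the abstract."""
        return (-164.12 * head_length - 125.81 * head_circ
                - 111.03 * head_breadth + 100.66 * head_thickness + 6488.63)

    def D_female(head_circ, body_mass, head_length, height):
        """Correction term D (g); head/height in cm, body mass in g as printed."""
        return (-156.03 * head_circ - 14.03 * body_mass
                - 38.45 * head_length - 8.87 * height + 7852.45)

    print(D_male(19.0, 57.0, 15.5, 22.0))   # illustrative inputs only
    ```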

  16. RT DDA: A hybrid method for predicting the scattering properties by densely packed media

    NASA Astrophysics Data System (ADS)

    Ramezan Pour, B.; Mackowski, D.

    2017-12-01

    The most accurate approaches to predicting the scattering properties of particulate media are based on exact solutions of Maxwell's equations (MEs), such as the T-matrix and discrete dipole methods. Applying these techniques to optically thick targets is a challenging problem due to the large-scale computations involved, and they are usually replaced by phenomenological radiative transfer (RT) methods. On the other hand, the RT technique is of questionable validity in media with large particle packing densities. In recent works, we used numerically exact ME solvers to examine the effects of particle concentration on the polarized reflection properties of plane parallel random media. The simulations were performed for plane parallel layers of wavelength-sized spherical particles, and results were compared with RT predictions. We have shown that RT results monotonically converge to the exact solution as the particle volume fraction becomes smaller, with a nearly perfect fit for packing densities of 2%-5%. This study describes a hybrid technique composed of exact and numerical scalar RT methods. The exact methodology in this work is the plane parallel discrete dipole approximation, whereas the numerical method is based on the adding and doubling method. This approach not only decreases the computational time, owing to the RT component, but also includes interference and multiple scattering effects, so it may be applicable to large particle density conditions.
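
    The doubling half of the adding-and-doubling method has a compact scalar form: starting from the reflectance and transmittance of an optically thin slab, two identical layers are combined by summing the inter-layer bounces as a geometric series, doubling the thickness at each step. Angular dependence is omitted in this sketch.

    ```python
    def double_layer(r, t):
        """Combine two identical layers with reflectance r and transmittance t."""
        denom = 1.0 - r * r        # geometric series of inter-layer bounces
        return r + t * r * t / denom, t * t / denom

    r, t = 0.001, 0.95             # optically thin starting layer (illustrative)
    for _ in range(10):            # 2**10 times the initial thickness
        r, t = double_layer(r, t)
    print(f"layer reflectance {r:.3f}, transmittance {t:.3f}")
    ```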

  17. Effect of genotyped cows in the reference population on the genomic evaluation of Holstein cattle.

    PubMed

    Uemoto, Y; Osawa, T; Saburi, J

    2017-03-01

    This study evaluated the dependence of reliability and prediction bias on the prediction method, the contribution of including animals (bulls or cows), and the genetic relatedness, when including genotyped cows in the progeny-tested bull reference population. We performed genomic evaluation using a Japanese Holstein population, and assessed the accuracy of genomic enhanced breeding value (GEBV) for three production traits and 13 linear conformation traits. A total of 4564 animals for production traits and 4172 animals for conformation traits were genotyped using Illumina BovineSNP50 array. Single- and multi-step methods were compared for predicting GEBV in genotyped bull-only and genotyped bull-cow reference populations. No large differences in realized reliability and regression coefficient were found between the two reference populations; however, a slight difference was found between the two methods for production traits. The accuracy of GEBV determined by single-step method increased slightly when genotyped cows were included in the bull reference population, but decreased slightly by multi-step method. A validation study was used to evaluate the accuracy of GEBV when 800 additional genotyped bulls (POPbull) or cows (POPcow) were included in the base reference population composed of 2000 genotyped bulls. The realized reliabilities of POPbull were higher than those of POPcow for all traits. For the gain of realized reliability over the base reference population, the average ratios of POPbull gain to POPcow gain for production traits and conformation traits were 2.6 and 7.2, respectively, and the ratios depended on heritabilities of the traits. For regression coefficient, no large differences were found between the results for POPbull and POPcow. Another validation study was performed to investigate the effect of genetic relatedness between cows and bulls in the reference and test populations. The effect of genetic relationship among bulls in the reference population was also assessed. The results showed that it is important to account for relatedness among bulls in the reference population. Our studies indicate that the prediction method, the contribution ratio of including animals, and genetic relatedness could affect the prediction accuracy in genomic evaluation of Holstein cattle, when including genotyped cows in the reference population.

  18. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    PubMed

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing trend of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been pursued by researchers in recent decades. This problem directly affects operation of the power network, and, due to the high volatility of the signal, an accurate prediction model is required. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in the EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) tuned by an intelligent algorithm. The effectiveness of the proposed approach has been verified over a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
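
    A pared-down decomposition-plus-forecast-engine sketch, assuming the third-party PyEMD (EMD-signal) package for standard EMD: the signal is split into intrinsic mode functions, one SVR per mode is fitted on lagged values, and the one-step-ahead mode forecasts are summed. The paper's IEMD interpolation, feature selection and intelligent SVR tuning are omitted.

    ```python
    import numpy as np
    from PyEMD import EMD                  # assumes the PyEMD (EMD-signal) package
    from sklearn.svm import SVR

    t = np.arange(600)
    power = (np.sin(2 * np.pi * t / 24) + 0.3 * np.sin(2 * np.pi * t / 6)
             + 0.1 * np.random.default_rng(6).normal(size=t.size))  # synthetic PV signal

    lags = 24
    forecast = 0.0
    for imf in EMD().emd(power):           # one SVR per intrinsic mode function
        X = np.array([imf[i - lags:i] for i in range(lags, len(imf))])
        y = imf[lags:]
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        forecast += model.predict(imf[-lags:].reshape(1, -1))[0]  # next-step mode value
    print("one-step-ahead PV forecast:", forecast)
    ```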

  19. Molecular activity prediction by means of supervised subspace projection based ensembles of classifiers.

    PubMed

    Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á

    2018-03-01

    This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
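
    The construction can be sketched with LDA standing in for the paper's NDA/HDA projections: each ensemble member learns a supervised projection from a bootstrap sample, trains a base classifier in the projected space, and the members vote. Data are synthetic.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(7)
    X = rng.normal(size=(400, 30))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    members = []
    for _ in range(25):
        idx = rng.integers(0, len(X), len(X))          # bootstrap sample
        proj = LinearDiscriminantAnalysis(n_components=1).fit(X[idx], y[idx])
        clf = DecisionTreeClassifier(max_depth=3).fit(proj.transform(X[idx]), y[idx])
        members.append((proj, clf))

    votes = sum(clf.predict(proj.transform(X)) for proj, clf in members)
    y_hat = (votes > len(members) / 2).astype(int)     # majority vote
    print("training accuracy:", (y_hat == y).mean())
    ```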

  20. Predicting hot spots in protein interfaces based on protrusion index, pseudo hydrophobicity and electron-ion interaction pseudopotential features

    PubMed Central

    Xia, Junfeng; Yue, Zhenyu; Di, Yunqiang; Zhu, Xiaolei; Zheng, Chun-Hou

    2016-01-01

    The identification of hot spots, a small subset of protein interfaces that accounts for the majority of binding free energy, is becoming more important for research on drug design and cancer development. Based on our previous methods (APIS and KFC2), here we propose a novel hot spot prediction method. For each interface residue, we first constructed a wide variety of 108 sequence, structural, and neighborhood features to characterize potential hot spot residues, including conventional ones and a new one (pseudo hydrophobicity) exploited in this study. We then selected the 3 top-ranking features that contribute the most to the classification by a two-step feature selection process consisting of the minimal-redundancy-maximal-relevance (mRMR) algorithm and an exhaustive search method. We used support vector machines to build our final prediction model. When testing our model on an independent test set, our method showed the highest F1-score of 0.70 and MCC of 0.46 compared with the existing state-of-the-art hot spot prediction methods. Our results indicate that these features are more effective than the conventional features considered previously, and that the combination of our and traditional features may support the creation of a discriminative feature set for efficient prediction of hot spots in protein interfaces. PMID:26934646

  1. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D equations for isentropic and normal-shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted shock locations can be determined. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure-threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement, which makes it potentially useful for unstart control applications.
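
    A sketch of the quasi-1D logic under illustrative geometry: the normal-shock pressure-ratio relation is inverted for the pre-shock Mach number matching the measured rise, and the isentropic area-Mach relation then locates where the weakly converging effective duct reaches that Mach number.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    g = 1.4  # ratio of specific heats

    def shock_mach(p_ratio):
        """Pre-shock Mach number from p2/p1 = 1 + 2g/(g+1) * (M^2 - 1)."""
        return np.sqrt((p_ratio - 1.0) * (g + 1.0) / (2.0 * g) + 1.0)

    def area_ratio(M):
        """Isentropic A/A* as a function of Mach number."""
        return (1.0 / M) * ((2.0 / (g + 1.0))
                            * (1.0 + 0.5 * (g - 1.0) * M * M)) ** ((g + 1.0) / (2.0 * (g - 1.0)))

    M_in, L = 2.2, 1.0                    # inlet Mach number, isolator length (m)
    A = lambda x: 1.0 - 0.02 * x / L      # weakly converging effective area (illustrative)

    M_shock = shock_mach(p_ratio=5.45)    # from the measured pressure rise
    # Local A/A* is fixed by the inlet state: A(x)/A* = A(x) * area_ratio(M_in) / A(0)
    x_shock = brentq(lambda x: A(x) * area_ratio(M_in) - area_ratio(M_shock), 0.0, L)
    print(f"predicted leading-shock location: x = {x_shock:.2f} m")
    ```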

  2. Inductive matrix completion for predicting gene-disease associations.

    PubMed

    Natarajan, Nagarajan; Dhillon, Inderjit S

    2014-06-15

    Most existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to a one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes not previously linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature. Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
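
    The objective can be sketched as factoring the association matrix through the feature spaces, M ≈ X W Hᵀ Yᵀ, so that an unseen disease is scored purely through its feature vector. The gradient-descent loop below treats all entries as observed and uses synthetic data; it illustrates the objective, not the paper's solver.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_g, n_d, f_g, f_d, k = 100, 40, 20, 15, 5
    X = rng.normal(size=(n_g, f_g))             # gene features
    Y = rng.normal(size=(n_d, f_d))             # disease features
    M = (rng.uniform(size=(n_g, n_d)) < 0.05).astype(float)  # known associations

    W = 0.01 * rng.normal(size=(f_g, k))
    H = 0.01 * rng.normal(size=(f_d, k))
    lam, lr = 0.1, 1e-3
    for _ in range(500):
        R = X @ W @ (Y @ H).T - M                # residual of the reconstruction
        W -= lr * (X.T @ R @ (Y @ H) + lam * W)  # gradient of 0.5*||R||^2 + ridge
        H -= lr * (Y.T @ R.T @ (X @ W) + lam * H)

    scores = X @ W @ (Y @ H).T                   # scores for every gene-disease pair
    ```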

  3. Early Diagnosis and Intervention Strategies for Post-Traumatic Heterotopic Ossification in Severely Injured Extremities

    DTIC Science & Technology

    2013-10-01

    study will recruit wounded warriors with severe extremity trauma, which places them at high risk for heterotopic ossification (HO); bone formation at...involved in HO; 2) to define accurate and practical methods to predict where HO will develop; and 3) to define potential therapies for prevention or...elicit HO. These tools also need to provide effective methods for early diagnosis or risk assessment (prediction) so that therapies for prevention or

  4. Three-dimensional effects on pure tone fan noise due to inflow distortion. [rotor blade noise prediction

    NASA Technical Reports Server (NTRS)

    Kobayashi, H.

    1978-01-01

    Two-dimensional, quasi-three-dimensional and three-dimensional theories for the prediction of pure tone fan noise due to the interaction of inflow distortion with a subsonic annular blade row were studied with the aid of an unsteady three-dimensional lifting surface theory. The effects of compact and noncompact source distributions on pure tone fan noise in an annular cascade were investigated. Numerical results show that the strip theory and quasi-three-dimensional theory are reasonably adequate for fan noise prediction. The quasi-three-dimensional method is more accurate for acoustic power and modal structure prediction, with an acoustic power estimation error of about ±2 dB.

  5. Framework for making better predictions by directly estimating variables' predictivity.

    PubMed

    Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa

    2016-12-13

    We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
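
    The naive estimator the authors reject can be made concrete for discrete variables: partition the sample by the joint values of the variable set and predict the majority class within each cell. The toy example below shows the inflated bias, since adding useless noise variables enlarges the partition and raises the apparent rate.

    ```python
    import random
    from collections import Counter, defaultdict

    def naive_prediction_rate(rows, labels):
        """Majority-class rate within each cell of the variable-set partition."""
        cells = defaultdict(Counter)
        for row, lab in zip(rows, labels):
            cells[tuple(row)][lab] += 1
        correct = sum(max(c.values()) for c in cells.values())
        return correct / len(labels)

    random.seed(0)
    n = 200
    informative = [[random.randint(0, 1)] for _ in range(n)]
    labels = [r[0] if random.random() < 0.8 else 1 - r[0] for r in informative]
    noisy = [r + [random.randint(0, 1) for _ in range(6)] for r in informative]

    print(naive_prediction_rate(informative, labels))  # near the true rate of 0.8
    print(naive_prediction_rate(noisy, labels))        # inflated by noise variables
    ```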

  6. Framework for making better predictions by directly estimating variables’ predictivity

    PubMed Central

    Chernoff, Herman; Lo, Shaw-Hwa

    2016-01-01

    We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic’s predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired. PMID:27911830

  7. Symmetric airfoil geometry effects on leading edge noise.

    PubMed

    Gill, James; Zhang, X; Joseph, P

    2013-10-01

    Computational aeroacoustic methods are applied to the modeling of noise due to interactions between gusts and the leading edge of real symmetric airfoils. Single frequency harmonic gusts are interacted with various airfoil geometries at zero angle of attack. The effects of airfoil thickness and leading edge radius on noise are investigated systematically and independently for the first time, at higher frequencies than previously used in computational methods. Increases in both leading edge radius and thickness are found to reduce the predicted noise. This noise reduction effect becomes greater with increasing frequency and Mach number. The dominant noise reduction mechanism for airfoils with real geometry is found to be related to the leading edge stagnation region. It is shown that accurate leading edge noise predictions can be made when assuming an inviscid meanflow, but that it is not valid to assume a uniform meanflow. Analytic flat plate predictions are found to over-predict the noise due to a NACA 0002 airfoil by up to 3 dB at high frequencies. The accuracy of analytic flat plate solutions can be expected to decrease with increasing airfoil thickness, leading edge radius, gust frequency, and Mach number.

  8. Evaluating the evaluation of cancer driver genes

    PubMed Central

    Tokheim, Collin J.; Papadopoulos, Nickolas; Kinzler, Kenneth W.; Vogelstein, Bert; Karchin, Rachel

    2016-01-01

    Sequencing has identified millions of somatic mutations in human cancers, but distinguishing cancer driver genes remains a major challenge. Numerous methods have been developed to identify driver genes, but evaluation of the performance of these methods is hindered by the lack of a gold standard, that is, bona fide driver gene mutations. Here, we establish an evaluation framework that can be applied to driver gene prediction methods. We used this framework to compare the performance of eight such methods. One of these methods, described here, incorporated a machine-learning–based ratiometric approach. We show that the driver genes predicted by each of the eight methods vary widely. Moreover, the P values reported by several of the methods were inconsistent with the uniform values expected, thus calling into question the assumptions that were used to generate them. Finally, we evaluated the potential effects of unexplained variability in mutation rates on false-positive driver gene predictions. Our analysis points to the strengths and weaknesses of each of the currently available methods and offers guidance for improving them in the future. PMID:27911828

  9. Comparison of forward flight effects theory of A. Michalke and U. Michel with measured data

    NASA Technical Reports Server (NTRS)

    Rawls, J. W., Jr.

    1983-01-01

    The scaling laws of A. Michalke and U. Michel predict the flyover noise of a single-stream, shock-free circular jet from static data or static predictions. The theory is based on a far-field solution to Lighthill's equation and includes density terms which are important for heated jets. This theory is compared with measured data using two static jet noise prediction methods. The comparisons indicate the theory yields good results when the static noise levels are accurately predicted.

  10. Streamflow hindcasting in European river basins via multi-parametric ensemble of the mesoscale hydrologic model (mHM)

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation at high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of catchments, DHM is still subject to uncertainties inherently arising from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to the practical difficulty of specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated into DA. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes. Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
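
    A minimal bootstrap particle filter update conveys the assimilation step: each particle carries model states (and, in the multi-parametric ensemble, its own global parameter set), weights follow the likelihood of the observed discharge, and particles are resampled. The hydrologic model and rating below are placeholders, and the lag treatment is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_particles = 100
    states = rng.normal(1.0, 0.2, size=n_particles)   # e.g. storage per particle

    def model_step(s):
        # placeholder hydrologic model with small process noise
        return 0.95 * s + rng.normal(0.0, 0.02, size=s.shape)

    def simulated_discharge(s):
        return 2.0 * s                                # placeholder rating relation

    q_obs, sigma_obs = 2.1, 0.1                       # observed discharge and its error
    states = model_step(states)
    w = np.exp(-0.5 * ((simulated_discharge(states) - q_obs) / sigma_obs) ** 2)
    w /= w.sum()                                      # normalized likelihood weights
    idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
    states = states[idx]
    ```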

  11. A reduced-order integral formulation to account for the finite size effect of isotropic square panels using the transfer matrix method.

    PubMed

    Bonfiglio, Paolo; Pompoli, Francesco; Lionti, Riccardo

    2016-04-01

    The transfer matrix method is a well-established prediction tool for the simulation of sound transmission loss and the sound absorption coefficient of flat multilayer systems. Much research has been dedicated to enhancing the accuracy of the method by introducing a finite size effect of the structure to be simulated. The aim of this paper is to present a reduced-order integral formulation to predict radiation efficiency and radiation impedance for a panel with equal lateral dimensions. The results are presented and discussed for different materials in terms of radiation efficiency, sound transmission loss, and the sound absorption coefficient. Finally, the application of the proposed methodology for rectangular multilayer systems is also investigated and validated against experimental data.
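
    For context, the baseline (laterally infinite) transfer-matrix calculation for a rigid-backed equivalent-fluid layer at normal incidence is sketched below; the paper's contribution replaces the infinite-size radiation assumption with a finite-size radiation impedance. Material values are illustrative Delany-Bazley-like numbers under an exp(jwt) convention.

    ```python
    import numpy as np

    rho0, c0 = 1.213, 342.2            # air density (kg/m^3), speed of sound (m/s)
    d = 0.05                           # layer thickness (m)
    f = np.linspace(100.0, 4000.0, 200)
    omega = 2.0 * np.pi * f

    Zc = rho0 * c0 * (1.5 - 0.7j)      # characteristic impedance (illustrative)
    kc = omega / c0 * (1.7 - 1.0j)     # complex wavenumber (illustrative)

    # Fluid-layer transfer matrix: [[cos(kd), 1j*Zc*sin(kd)], [1j*sin(kd)/Zc, cos(kd)]];
    # with a rigid backing the surface impedance is Zs = T11 / T21 = -1j*Zc/tan(kc*d).
    T11 = np.cos(kc * d)
    T21 = 1j * np.sin(kc * d) / Zc
    Zs = T11 / T21

    alpha = 1.0 - np.abs((Zs - rho0 * c0) / (Zs + rho0 * c0)) ** 2  # normal incidence
    ```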

  12. Two-Step Screening for Depressive Symptoms and Prediction of Mortality in Patients With Heart Failure.

    PubMed

    Lee, Kyoung Suk; Moser, Debra K; Pelter, Michele; Biddle, Martha J; Dracup, Kathleen

    2017-05-01

    Comorbid depression in patients with heart failure is associated with increased risk for death. In order to effectively identify depressed patients with cardiac disease, the American Heart Association suggests a 2-step screening method: administering the 2-item Patient Health Questionnaire first and then the 9-item Patient Health Questionnaire. However, whether the 2-step method is better for predicting poor prognosis in heart failure than is either the 2-item or the 9-item tool alone is not known. To determine whether the 2-step method is better than either the 2-item or the 9-item questionnaire alone for predicting all-cause mortality in heart failure. During a 2-year period, 562 patients with heart failure were assessed for depression by using the 2-step method. With the 2-step method, results are considered positive if patients endorse either depressed mood or anhedonia on the 2-item screen and have scores of 10 or higher on the 9-item screen. Screening results with the 2-step method were not associated with all-cause mortality. Patients with scores positive for depression on either the 2-item or 9-item screen alone had 53% and 60% greater risk, respectively, for all-cause death than did patients with scores negative for depression after adjustments for covariates (hazard ratio, 1.530; 95% CI, 1.029-2.274 for the 2-item screen; hazard ratio, 1.603; 95% CI, 1.079-2.383 for the 9-item screen). The 2-step method has no clear advantages compared with the 2-item screen alone or the 9-item screen alone for predicting adverse prognostic effects of depressive symptoms in heart failure. ©2017 American Association of Critical-Care Nurses.

  13. A time-domain method for prediction of noise radiated from supersonic rotating sources in a moving medium

    NASA Astrophysics Data System (ADS)

    Huang, Zhongjie; Siozos-Rousoulis, Leonidas; De Troyer, Tim; Ghorbaniasl, Ghader

    2018-02-01

    This paper presents a time-domain method for noise prediction of supersonic rotating sources in a moving medium. The proposed approach can be interpreted as an extensive time-domain solution for the convected permeable Ffowcs Williams and Hawkings equation, which is capable of avoiding the Doppler singularity. The solution requires special treatment for construction of the emission surface. The derived formula can explicitly and efficiently account for subsonic uniform constant flow effects on radiated noise. Implementation of the methodology is realized through the Isom thickness noise case and high-speed impulsive noise prediction from helicopter rotors.

  14. [Application of exponential smoothing method in prediction and warning of epidemic mumps].

    PubMed

    Shi, Yun-ping; Ma, Jia-qi

    2010-06-01

    To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to establish an exponential smoothing model for prediction. The epidemic mumps in 2008 were predicted and warned against by calculating a 7-day moving summation, removing the effect of weekends from the daily reported mumps cases during 2005-2008, and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: warning sensitivity was 76.92%, specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
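
    A sketch of such a warning pipeline using statsmodels' Holt-Winters implementation: a 7-day moving summation smooths weekend effects, the model is fitted to the historical series, and days whose counts exceed the forecast by a chosen (illustrative) margin raise a warning. Counts are synthetic.

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(10)
    days = 4 * 365
    cases = rng.poisson(20 + 5 * np.sin(2 * np.pi * np.arange(days) / 365.0))

    weekly = np.convolve(cases, np.ones(7), mode="valid")   # 7-day moving summation
    train, test = weekly[:-30], weekly[-30:]

    fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                               seasonal_periods=365).fit()
    forecast = fit.forecast(30)
    alerts = test > forecast * 1.2    # warn when observed exceeds forecast by 20%
    print("warning days:", int(alerts.sum()))
    ```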

  15. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which operates depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  16. Lung nodule malignancy prediction using multi-task convolutional neural network

    NASA Astrophysics Data System (ADS)

    Li, Xiuli; Kao, Yueying; Shen, Wei; Li, Xiang; Xie, Guotong

    2017-03-01

    In this paper, we investigated the problem of diagnostic lung nodule malignancy prediction using thoracic computed tomography (CT) screening. Unlike most existing studies, which classify nodules into two types, benign and malignant, we interpreted nodule malignancy prediction as a regression problem and predict a continuous malignancy level. We proposed a joint multi-task learning algorithm using a convolutional neural network (CNN) to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. We trained a CNN regression model to predict nodule malignancy, and designed a multi-task learning mechanism to simultaneously share knowledge among 9 different nodule characteristics (subtlety, calcification, sphericity, margin, lobulation, spiculation, texture, diameter and malignancy), improving the final prediction. Each CNN generates characteristic-specific feature representations, and multi-task learning is then applied to the features to predict the corresponding likelihood for each characteristic. We evaluated the proposed method on 2620 nodule CT scans from the LIDC-IDRI dataset with a 5-fold cross-validation strategy. The multi-task CNN regression achieved an RMSE of 0.830 and a mapped classification accuracy of 83.03%, versus an RMSE of 0.894 and a mapped classification accuracy of 74.9% for single-task regression. Experiments show that the proposed method predicts the lung nodule malignancy likelihood effectively and outperforms the state-of-the-art methods. The learning framework could easily be applied to other anomaly likelihood prediction problems, such as skin cancer and breast cancer, and demonstrates the potential of our method to assist radiologists in nodule staging assessment and individual therapeutic planning.
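
    A pared-down PyTorch version of the shared-trunk, multi-head idea: one convolutional trunk extracts nodule features and a small regression head per characteristic predicts its continuous rating, with a joint loss so the tasks share knowledge. Layer sizes and data are illustrative, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class MultiTaskNoduleNet(nn.Module):
        def __init__(self, n_tasks=9):
            super().__init__()
            # shared convolutional trunk
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
            )
            # one regression head per nodule characteristic
            self.heads = nn.ModuleList(
                nn.Sequential(nn.Linear(32 * 16, 64), nn.ReLU(), nn.Linear(64, 1))
                for _ in range(n_tasks)
            )

        def forward(self, x):
            z = self.trunk(x)
            return torch.cat([h(z) for h in self.heads], dim=1)  # (batch, n_tasks)

    model = MultiTaskNoduleNet()
    patches = torch.randn(8, 1, 64, 64)        # synthetic nodule patches
    targets = torch.rand(8, 9) * 5             # ratings on an illustrative 0-5 scale
    loss = nn.functional.mse_loss(model(patches), targets)  # joint loss over tasks
    loss.backward()
    ```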

  17. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing (HPC) systems could provide huge gains in the resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than performed at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates them using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale HPC platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established good predictors of failure in these systems. Our work in the OVIS project aims to discover such predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is: how good or useful are these predictions? We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
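
    The back-of-envelope model below sketches, in Python, one way to pose the paper's cost-benefit question: compare expected lost compute under periodic checkpointing against prediction-driven checkpointing with an imperfect predictor. The failure model and every number are hypothetical placeholders, not figures from the paper.

    ```python
    # Back-of-envelope cost-benefit of an imperfect failure predictor. The model
    # is deliberately simple: one failure per MTTF window, no restart cost, and
    # expected lost work equal to half the relevant checkpoint interval.
    mttf_hours       = 24.0   # mean time to failure
    checkpoint_cost  = 0.1    # hours of lost compute per checkpoint write
    ckpt_interval    = 1.0    # hours between periodic checkpoints
    recall           = 0.7    # fraction of failures the predictor catches
    false_alarms_day = 2.0    # unneeded predictor-triggered checkpoints per day
    fallback         = 6.0    # sparse fallback checkpoint interval (hours)

    # Baseline: periodic checkpoint writes plus half an interval of lost work.
    periodic = mttf_hours / ckpt_interval * checkpoint_cost + ckpt_interval / 2

    # Predictor-driven: caught failures cost one triggered checkpoint; missed
    # failures roll back to the last sparse fallback checkpoint; false alarms
    # and fallback writes add overhead.
    predicted = (recall * checkpoint_cost
                 + (1 - recall) * fallback / 2
                 + false_alarms_day * (mttf_hours / 24.0) * checkpoint_cost
                 + mttf_hours / fallback * checkpoint_cost)

    print(f"lost hours per failure window: periodic={periodic:.2f}, "
          f"predictor={predicted:.2f}")
    ```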

  18. SU-G-JeP4-02: An Investigation of Respiratory Surrogate Motion Data Requirements for Multiple-Step Ahead Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Ren, L; Yin, F

    Purpose: Respiratory-gated radiotherapy and dynamic tracking employ real-time imaging and surrogate motion-monitoring methods that predict tumor motion in advance of real time. This study investigated the effect of respiratory motion data length on the prediction accuracy of tumor motion. Methods: Predictions generated by the algorithm were validated against a one-dimensional surrogate signal of amplitude versus time. Prediction consists of three major components: extracting the top-ranked subcomponents of the training data that match the last respiratory cycle; calculating weighting factors from the best-matched subcomponents; and fusing the data following the best-matched subcomponents, with their respective weighting factors, to form predictions. Predictions for one respiratory cycle (~3-6 seconds) were assessed using 351 patient datasets from the respiratory management device. Performance was evaluated by the correlation coefficient and root mean square error (RMSE) between the prediction and the final respiratory cycle. Results: The respiratory prediction results fell into two classes: for 70 cycles or fewer, the best predictions were obtained using relative weighting, while data with more than 70 cycles were predicted similarly well using relative and derivative relative weighting. For 70 respiratory cycles or fewer, the average correlation between prediction and final respiratory cycle was 0.9999±0.0001, 0.9999±0.0001, 0.9988±0.0003, 0.9985±0.0023, and 0.9981±0.0023, with RMSE values of 0.0091±0.0030, 0.0091±0.0030, 0.0305±0.0051, 0.0299±0.0259, and 0.0299±0.0259 for the equal, relative, pattern, derivative equal, and derivative relative weighting methods, respectively; the total number of best predictions for each method was 37, 65, 20, 22, and 22, respectively. For data with more than 70 cycles, the average correlation was 0.9999±0.0001, 0.9999±0.0001, 0.9988±0.0004, 0.9988±0.0020, and 0.9988±0.0020, with RMSE values of 0.0081±0.0031, 0.0082±0.0033, 0.0306±0.0056, 0.0218±0.0222, and 0.0218±0.0222, respectively; the total number of best predictions for each method was 24, 44, 42, 30, and 45, respectively. Conclusion: The prediction algorithms are effective in estimating surrogate motion in advance. These results indicate an advantage in using relative prediction for shorter data and either relative or derivative relative prediction for longer data.
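
    A minimal Python sketch of the template-matching idea described above (find training subsequences similar to the last cycle, weight them by match quality, and fuse the samples that follow each match) is given below; the window lengths, inverse-distance weighting scheme, and synthetic breathing trace are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def predict_next_cycle(signal, cycle_len, horizon, k=3):
        """Template-matching prediction: locate the k training subsequences
        closest to the last cycle, then fuse the samples that follow each
        match, weighted by match quality (closer match -> larger weight)."""
        query = signal[-cycle_len:]
        candidates, dists = [], []
        for start in range(len(signal) - cycle_len - horizon):
            window = signal[start:start + cycle_len]
            candidates.append(signal[start + cycle_len:start + cycle_len + horizon])
            dists.append(np.linalg.norm(window - query))
        order = np.argsort(dists)[:k]
        weights = 1.0 / (np.array(dists)[order] + 1e-9)
        weights /= weights.sum()
        return np.average(np.array(candidates)[order], axis=0, weights=weights)

    # Hypothetical surrogate trace: noisy breathing at ~0.25 Hz, 25 Hz sampling,
    # so one respiratory cycle spans about 100 samples.
    t = np.arange(0, 60, 0.04)
    trace = (np.sin(2 * np.pi * 0.25 * t)
             + 0.05 * np.random.default_rng(2).normal(size=t.size))
    pred = predict_next_cycle(trace, cycle_len=100, horizon=100)
    print("first predicted samples:", pred[:5])
    ```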

  19. An integrated Navier-Stokes - full potential - free wake method for rotor flows

    NASA Astrophysics Data System (ADS)

    Berkman, Mert Enis

    1998-12-01

    The strong wake shed from rotary wings interacts with almost all components of the aircraft and alters the flow field, causing performance and noise problems. Understanding and modeling the behavior of this wake, and its effect on the aerodynamics and acoustics of helicopters, have remained challenges. This vortex wake and its effects must be accurately accounted for in any technique that aims to predict the rotor flow field and performance. In this study, an advanced and efficient computational technique is developed for predicting three-dimensional unsteady viscous flows over isolated helicopter rotors in hover and in forward flight. In this hybrid technique, the advantages of several existing methods are combined so that rotor flows can be studied accurately and efficiently with a single numerical method. The flow field is viewed in three parts: (i) an inner zone surrounding each blade, where the wake and viscous effects are numerically captured; (ii) an outer zone away from the blades, where the wake is modeled; and (iii) a Lagrangian wake, which induces wake effects in the outer zone. The technique was implemented in a flow solver and compared with experimental data for hovering and advancing rotors, including a two-bladed rotor, the UH-60A rotor, and a tapered-tip rotor. Detailed surface pressure, integrated thrust and torque, sectional thrust, and tip vortex position predictions compared favorably with experimental data. Results indicated that the hybrid solver provided accurate flow details and performance information at typically one-half to one-eighth the cost of full Navier-Stokes methods.
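
    To make the Lagrangian-wake coupling concrete, the Python sketch below evaluates the textbook Biot-Savart induction of straight vortex-filament segments, the basic operation by which a free wake imposes its effect on an outer flow zone. The segment geometry, circulation, and viscous-core cutoff are hypothetical, and the actual hybrid solver is far more elaborate.

    ```python
    import numpy as np

    def induced_velocity(point, filaments, circulations, core=0.05):
        """Biot-Savart induction from straight vortex-filament segments.
        `filaments` is an (N, 2, 3) array of segment endpoints; `core` is a
        viscous-core cutoff that regularizes the kernel near the filament."""
        v = np.zeros(3)
        for (a, b), gamma in zip(filaments, circulations):
            r1, r2 = point - a, point - b
            cross = np.cross(r1, r2)
            norm2 = np.dot(cross, cross) + core**2
            seg = b - a
            v += (gamma / (4 * np.pi)) * cross / norm2 * (
                np.dot(seg, r1) / np.linalg.norm(r1)
                - np.dot(seg, r2) / np.linalg.norm(r2))
        return v

    # Hypothetical tip-vortex segment shed behind a blade.
    filaments = np.array([[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
    print(induced_velocity(np.array([0.5, 0.5, 0.0]), filaments, circulations=[1.0]))
    ```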

  20. An approximate theoretical method for modeling the static thrust performance of non-axisymmetric two-dimensional convergent-divergent nozzles. M.S. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.

    1995-01-01

    An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
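
    The quasi-one-dimensional theory underlying the thermodynamic part of such a method can be sketched briefly: given a nozzle expansion ratio and nozzle pressure ratio (NPR), isentropic relations give the exit Mach number and pressure, and the gross thrust follows from the momentum and pressure-mismatch terms. The Python sketch below assumes a fully flowing, shock-free divergent section, and the expansion ratio and NPRs are hypothetical; it is not NPAC, which additionally models boundary layers and angularity.

    ```python
    import numpy as np

    G = 1.4  # ratio of specific heats for air

    def area_ratio(M, g=G):
        """Isentropic A/A* as a function of Mach number."""
        return (1.0 / M) * ((2 / (g + 1))
                            * (1 + (g - 1) / 2 * M**2)) ** ((g + 1) / (2 * (g - 1)))

    def exit_mach(ar, g=G):
        """Supersonic exit Mach for a given expansion ratio, by bisection."""
        lo, hi = 1.0001, 10.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if area_ratio(mid, g) < ar:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def gross_thrust_ratio(npr, ar, g=G):
        """Ideal 1-D gross thrust (momentum + pressure-mismatch term) for a
        fully flowing C-D nozzle, nondimensionalized by pt * A*."""
        Me = exit_mach(ar, g)
        pe_pt = (1 + (g - 1) / 2 * Me**2) ** (-g / (g - 1))  # exit/total pressure
        pa_pt = 1.0 / npr                                    # ambient/total pressure
        return g * Me**2 * pe_pt * ar + (pe_pt - pa_pt) * ar

    # Hypothetical expansion ratio Ae/At = 1.8; NPR ~ 8.87 is near its design point.
    for npr in (4.0, 8.87, 12.0):
        print(f"NPR={npr:5.2f}  F/(pt*A*) = {gross_thrust_ratio(npr, 1.8):.3f}")
    ```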
