Sample records for predicted results based

  1. Empirical predictions of hypervelocity impact damage to the space station

    NASA Technical Reports Server (NTRS)

    Rule, W. K.; Hayashida, K. B.

    1991-01-01

    A family of user-friendly, DOS PC based, Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple style bumper and the pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impacts on spacecraft using light gas guns on Earth. A module of the program facilitates the creation of the data base of experimental results that are used by the damage prediction modules of the code. The user has the choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall. One prediction module is based on fitting low order polynomials through subsets of the experimental data. Another prediction module fits functions based on nondimensional parameters through the data. The last prediction technique is a unique approach that is based on weighting the experimental data according to the distance from the design point.
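The distance-weighting technique described in the last prediction module can be sketched as a minimal inverse-distance-weighted interpolation over experimental data points (the data values and the exact weighting scheme are hypothetical; the abstract does not specify them):

```python
import numpy as np

def idw_predict(X, y, x_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction: experimental points closer
    to the design point receive larger weights."""
    d = np.linalg.norm(X - x_query, axis=1)
    if np.any(d < eps):                      # design point matches a test exactly
        return float(y[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.dot(w, y) / w.sum())

# Hypothetical impact-test data: (projectile diameter mm, velocity km/s) -> hole diameter mm
X = np.array([[3.0, 5.0], [4.0, 6.0], [5.0, 7.0]])
y = np.array([6.1, 8.4, 11.0])
pred = idw_predict(X, y, np.array([4.0, 6.0]))   # query coincides with a data point
```

A query between data points returns a weighted blend of the nearby experimental results, which is the behavior the abstract's "weighting according to distance from the design point" describes.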

  2. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that its computational requirements increase with the length of the prediction horizon. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
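The core step of deriving a predictive model directly from input-output data can be sketched as a least-squares ARX fit (a simplified stand-in; the paper's multirate receding-horizon formulation is more elaborate):

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j],
    estimated directly from input-output data with no physical model."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.r_[[y[k - i] for i in range(1, na + 1)],
                          [u[k - j] for j in range(1, nb + 1)]])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# Data generated by a known first-order system: y[k] = 0.5*y[k-1] + 1.0*u[k-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k - 1] + u[k - 1]
theta = fit_arx(u, y, na=1, nb=1)   # recovers the coefficients [0.5, 1.0]
```

With noiseless data the fit recovers the true coefficients exactly; the fitted model can then drive a multi-step-ahead prediction inside a receding-horizon cost.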

  3. An object programming based environment for protein secondary structure prediction.

    PubMed

    Giacomini, M; Ruggiero, C; Sacile, R

    1996-01-01

    The most frequently used methods for protein secondary structure prediction are empirical statistical methods and rule based methods. A consensus system based on object-oriented programming is presented, which integrates the two approaches with the aim of improving the prediction quality. This system uses an object-oriented knowledge representation based on the concepts of conformation, residue and protein, where the conformation class is the basis, the residue class derives from it and the protein class derives from the residue class. The system has been tested with satisfactory results on several proteins of the Brookhaven Protein Data Bank. Its results have been compared with the results of the most widely used prediction methods, and they show a higher prediction capability and greater stability. Moreover, the system itself provides an index of the reliability of its current prediction. This system can also be regarded as a basis structure for programs of this kind.

  4. Aggregation Trade Offs in Family Based Recommendations

    NASA Astrophysics Data System (ADS)

    Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac

    Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fallback when personalized recommendations cannot be generated. In this work we present initial results of a study that exploits the browsing logs of real families of users gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.
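The two aggregation strategies being compared can be sketched side by side. The rating matrix and the stand-in predictor (fill unrated items with a profile's mean) are illustrative assumptions, not the study's actual algorithm:

```python
import numpy as np

# Hypothetical rating matrix: rows = family members, columns = items; NaN = unrated
R = np.array([[5.0, np.nan, 3.0],
              [4.0, 2.0,    np.nan],
              [np.nan, 1.0, 4.0]])

def predict(profile):
    """Stand-in predictor: fill unrated items with the profile's mean rating."""
    filled = profile.copy()
    filled[np.isnan(filled)] = np.nanmean(profile)
    return filled

# Strategy 1 (aggregated group models): merge profiles first, then predict
group_model = np.nanmean(R, axis=0)
pred_model_first = predict(group_model)

# Strategy 2 (aggregated predictions): predict per member, then merge predictions
individual_preds = np.array([predict(row) for row in R])
pred_predict_first = np.nanmean(individual_preds, axis=0)
```

Even with this toy predictor the two strategies give different item scores, which is why the choice of aggregation point matters in the study's comparison.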

  5. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is the key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-time prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for the SCB short-time prediction experiment, and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.

  6. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Stringent DDI-based Prediction of H. sapiens-M. tuberculosis H37Rv Protein-Protein Interactions

    PubMed Central

    2013-01-01

    Background: H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are very important information to illuminate the infection mechanism of M. tuberculosis H37Rv. But current H. sapiens-M. tuberculosis H37Rv PPI data are very scarce. This seriously limits the study of the interaction between this important pathogen and its host H. sapiens. Computational prediction of H. sapiens-M. tuberculosis H37Rv PPIs is an important strategy to fill in the gap. Domain-domain interaction (DDI) based prediction is one of the frequently used computational approaches in predicting both intra-species and inter-species PPIs. However, the performance of DDI-based host-pathogen PPI prediction has been rather limited. Results: We develop a stringent DDI-based prediction approach with emphasis on (i) differences between the specific domain sequences on annotated regions of proteins under the same domain ID and (ii) calculation of the interaction strength of predicted PPIs based on the interacting residues in their interaction interfaces. We compare our stringent DDI-based approach to a conventional DDI-based approach for predicting PPIs based on gold standard intra-species PPIs and coherent informative Gene Ontology terms assessment. The assessment results show that our stringent DDI-based approach achieves much better performance in predicting PPIs than the conventional approach. Using our stringent DDI-based approach, we have predicted a small set of reliable H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. We also analyze the H. sapiens-M. tuberculosis H37Rv PPIs predicted by our stringent DDI-based approach using cellular compartment distribution analysis, functional category enrichment analysis and pathway enrichment analysis. The analyses support the validity of our prediction result. Also, based on an analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent DDI-based approach, we have discovered some important properties of domains involved in host-pathogen PPIs. We find that both host and pathogen proteins involved in host-pathogen PPIs tend to have more domains than proteins involved in intra-species PPIs, and these domains have more interaction partners than domains on proteins involved in intra-species PPI. Conclusions: The stringent DDI-based prediction approach reported in this work provides a stringent strategy for predicting host-pathogen PPIs. It also performs better than a conventional DDI-based approach in predicting PPIs. We have predicted a small set of accurate H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. PMID:24564941
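The conventional DDI-based prediction the paper improves upon can be sketched in a few lines: a host-pathogen protein pair is predicted to interact if any of their domain pairs appears in a set of known domain-domain interactions. All identifiers below are hypothetical, and the paper's stringent variant additionally checks domain sequences and interface residues, which this sketch omits:

```python
# Known interacting domain pairs (hypothetical Pfam-style IDs)
ddi = {("PF00001", "PF00010"), ("PF00002", "PF00020")}
host_domains = {"H1": {"PF00001", "PF00003"}, "H2": {"PF00004"}}
pathogen_domains = {"P1": {"PF00010"}, "P2": {"PF00020"}}

def predict_ppi(h, p):
    """Conventional DDI rule: predict an interaction if any domain pair
    between the two proteins is a known DDI (in either orientation)."""
    return any((dh, dp) in ddi or (dp, dh) in ddi
               for dh in host_domains[h] for dp in pathogen_domains[p])

pairs = [(h, p) for h in host_domains for p in pathogen_domains if predict_ppi(h, p)]
```

The stringent approach then filters such candidate pairs by sequence-level agreement and scores them by interaction strength over interface residues.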

  8. MLIBlast: A program to empirically predict hypervelocity impact damage to the Space Station

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1991-01-01

    MLIBlast is described, which consists of a number of DOS PC based Microsoft BASIC program modules written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple style bumper and a pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impact on spacecraft. One module of MLIBlast facilitates creation of the data base of experimental results that is used by the damage prediction modules of the code. The user has a choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall.

  9. Polar body based aneuploidy screening is poorly predictive of embryo ploidy and reproductive potential.

    PubMed

    Salvaggio, C N; Forman, E J; Garnsey, H M; Treff, N R; Scott, R T

    2014-09-01

    Polar body biopsy represents one possible solution to performing comprehensive chromosome screening (CCS). This study adds to what is known about the predictive value of polar body based testing for the genetic status of the resulting embryo, but more importantly, provides the first evaluation of the predictive value for actual clinical outcomes after embryo transfer. SNP array was performed on the first polar body, the second polar body, and either a blastomere or trophectoderm biopsy, or the entire arrested embryo. Concordance of the polar body-based prediction with the observed diagnoses in the embryos was assessed. In addition, the predictive value of the polar body-based diagnosis for the specific clinical outcome of transferred embryos was evaluated through the use of DNA fingerprinting to track individual embryos. There were 459 embryos analyzed from 96 patients with a mean maternal age of 35.3 years. The polar body-based predictive value for the embryo-based diagnosis was 70.3%. The blastocyst implantation predictive value of a euploid trophectoderm was higher than from euploid polar bodies (51% versus 40%). The cleavage stage embryo implantation predictive value of a euploid blastomere was also higher than from euploid polar bodies (31% versus 22%). Polar body based aneuploidy screening results were less predictive of actual clinical outcomes than direct embryo assessment and may not be adequate to improve sustained implantation rates. In nearly one-third of cases the polar body based analysis failed to predict the ploidy of the embryo. This imprecision may hinder efforts for polar body based CCS to improve IVF clinical outcomes.
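The predictive values reported above reduce to simple conditional proportions; a minimal sketch with hypothetical counts (not the study's raw data):

```python
def predictive_value(true_positive, false_positive):
    """Fraction of positive calls that are correct (positive predictive value)."""
    return true_positive / (true_positive + false_positive)

# Hypothetical illustration: of 100 embryos called euploid by polar body
# analysis, suppose 70 were truly euploid in the direct embryo assessment.
ppv = predictive_value(70, 30)   # 0.70, comparable in form to the 70.3% reported
```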

  10. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.

  11. A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm

    PubMed Central

    Chen, Jui-Le; Yang, Chu-Sing

    2013-01-01

    The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although the computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction algorithm. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864

  12. Prediction-based dynamic load-sharing heuristics

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.

    1993-01-01

    The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
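The essence of a prediction-based centralized heuristic can be sketched as a greedy placement that uses each process's predicted resource demand rather than only the current system state. The process names and demand values are hypothetical, and the paper's heuristics are more sophisticated than this sketch:

```python
def place(processes, n_nodes):
    """Greedy centralized heuristic: send each process to the node with the
    smallest accumulated *predicted* demand, instead of reacting to a
    rapidly changing snapshot of current load."""
    load = [0.0] * n_nodes
    assignment = []
    for name, predicted_demand in processes:
        node = min(range(n_nodes), key=lambda i: load[i])
        load[node] += predicted_demand
        assignment.append((name, node))
    return assignment, load

# Hypothetical predicted demands (e.g. from a pattern-recognition predictor)
procs = [("compile", 4.0), ("sim", 9.0), ("edit", 1.0), ("render", 6.0)]
assignment, load = place(procs, 2)
```

Because placement anticipates each process's future demand, long-running jobs are spread out before they actually load their node, which is the advantage the abstract attributes to prediction-based heuristics.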

  13. Prediction of protein secondary structure content for the twilight zone sequences.

    PubMed

    Homaeian, Leila; Kurgan, Lukasz A; Ruan, Jishou; Cios, Krzysztof J; Chen, Ke

    2007-11-15

    Secondary protein structure carries information about local structural arrangements, which include three major conformations: alpha-helices, beta-strands, and coils. A significant majority of successful methods for prediction of the secondary structure is based on multiple sequence alignment. However, multiple alignment fails to provide accurate results when a sequence comes from the twilight zone, that is, it is characterized by low (<30%) homology. To this end, we propose a novel method for prediction of secondary structure content through comprehensive sequence representation, called PSSC-core. The method uses a multiple linear regression model and introduces a comprehensive feature-based sequence representation to predict the amounts of helices and strands for sequences from the twilight zone. The PSSC-core method was tested and compared with two other state-of-the-art prediction methods on a set of 2187 twilight zone sequences. The results indicate that our method provides better predictions for both helix and strand content. The PSSC-core is shown to provide statistically significantly better results when compared with the competing methods, reducing the prediction error by 5-7% for helix and 7-9% for strand content predictions. The proposed feature-based sequence representation uses a comprehensive set of physicochemical properties that are custom-designed for each of the helix and strand content predictions. It includes composition and composition moment vectors, frequency of tetra-peptides associated with helical and strand conformations, various property-based groups like exchange groups, chemical groups of the side chains and hydrophobic group, auto-correlations based on hydrophobicity, side-chain masses, hydropathy, and conformational patterns for beta-sheets. The PSSC-core method provides an alternative for predicting the secondary structure content that can be used to validate and constrain results of other structure prediction methods. At the same time, it also provides useful insight into design of successful protein sequence representations that can be used in developing new methods related to prediction of different aspects of the secondary protein structure. (c) 2007 Wiley-Liss, Inc.
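The core of the approach, multiple linear regression from sequence-derived features to secondary structure content, can be sketched using amino-acid composition as the only feature set. The sequences and helix-content values are invented for illustration; the actual PSSC-core representation is far richer:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid composition vector of a sequence."""
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

# Hypothetical training data: sequences with known helix content
train_seqs = ["AAAALLLLEEEE", "GGGGPPPPSSSS", "AALLEEGGPPSS", "LLLLAAAAKKKK"]
helix_content = np.array([0.8, 0.1, 0.45, 0.75])

# Multiple linear regression: features -> helix content (with intercept)
X = np.c_[np.vstack([composition(s) for s in train_seqs]), np.ones(len(train_seqs))]
coef, *_ = np.linalg.lstsq(X, helix_content, rcond=None)

def predict_helix(seq):
    return float(np.r_[composition(seq), 1.0] @ coef)
```

A separate regression with its own custom feature set would be fitted for strand content, mirroring the paper's per-target feature design.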

  14. Effects of urban microcellular environments on ray-tracing-based coverage predictions.

    PubMed

    Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing

    2016-09-01

    The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach of studying wave-propagation characteristics. Under urban microcellular environments, the RT method highly depends on detailed environmental information. The aim of this paper is to provide help in selecting the appropriate level of accuracy required in building databases to achieve good tradeoffs between database costs and prediction accuracy. After familiarization with the operating procedures of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometry information (building layout) on RT-based coverage prediction, two different artificial erroneous maps are generated based on the original digital map, and systematic analysis is performed by comparing the predictions with the erroneous maps and measurements or the predictions with the original digital map. To make the conclusion more persuasive, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the electrical parameters' effect on the accuracy of the predicted results of the RT model, the dielectric constant and conductivity of building materials are set with different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.

  15. Predicting longshore gradients in longshore transport: the CERC formula compared to Delft3D

    USGS Publications Warehouse

    List, Jeffrey H.; Hanes, Daniel M.; Ruggiero, Peter

    2007-01-01

    The prediction of longshore transport gradients is critical for forecasting shoreline change. We employ simple test cases consisting of shoreface pits at varying distances from the shoreline to compare the longshore transport gradients predicted by the CERC formula against results derived from the process-based model Delft3D. Results show that while in some cases the two approaches give very similar results, in many cases the results diverge greatly. Although neither approach is validated with field data here, the Delft3D-based transport gradients provide much more consistent predictions of erosional and accretionary zones as the pit location varies across the shoreface.
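The CERC formula referenced above estimates the longshore sediment transport rate from breaking-wave height and angle, with the familiar proportionality Q ∝ H_b^(5/2) sin(2α_b). A minimal sketch of its common form follows; the coefficient and sediment constants are representative textbook values, not values from this paper:

```python
import math

def cerc_transport(Hb, alpha_b_deg, K=0.39):
    """CERC formula for longshore transport rate (m^3/s):
    Q = K * rho * sqrt(g / gamma_b) / (16 * (rho_s - rho) * (1 - n))
        * Hb**2.5 * sin(2 * alpha_b)
    Constants below are representative values for quartz sand in seawater."""
    rho, rho_s = 1025.0, 2650.0       # water / sediment density, kg/m^3
    n, gamma_b, g = 0.4, 0.78, 9.81   # porosity, breaker index, gravity
    coef = K * rho * math.sqrt(g / gamma_b) / (16.0 * (rho_s - rho) * (1.0 - n))
    return coef * Hb**2.5 * math.sin(math.radians(2.0 * alpha_b_deg))

Q = cerc_transport(Hb=1.5, alpha_b_deg=10.0)   # transport for a 1.5 m breaker at 10 deg
```

Longshore *gradients* in Q, the quantity compared against Delft3D in the paper, come from evaluating such a formula at successive alongshore positions and differencing.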

  16. The prediction of drug metabolism, tissue distribution, and bioavailability of 50 structurally diverse compounds in rat using mechanism-based absorption, distribution, and metabolism prediction tools.

    PubMed

    De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Gilissen, Ron A; Mackie, Claire E; Nijsen, Marjoleen J

    2007-04-01

    The aim of this study was to assess a physiologically based modeling approach for predicting drug metabolism, tissue distribution, and bioavailability in rat for a structurally diverse set of neutral and moderate-to-strong basic compounds (n = 50). Hepatic blood clearance (CL(h)) was projected using microsomal data and shown to be well predicted, irrespective of the type of hepatic extraction model (80% within 2-fold). Best predictions of CL(h) were obtained disregarding both plasma and microsomal protein binding, whereas strong bias was seen using either blood binding only or both plasma and microsomal protein binding. Two mechanistic tissue composition-based equations were evaluated for predicting volume of distribution (V(dss)) and tissue-to-plasma partitioning (P(tp)). A first approach, which accounted for ionic interactions with acidic phospholipids, resulted in accurate predictions of V(dss) (80% within 2-fold). In contrast, a second approach, which disregarded ionic interactions, was a poor predictor of V(dss) (60% within 2-fold). The first approach also yielded accurate predictions of P(tp) in muscle, heart, and kidney (80% within 3-fold), whereas in lung, liver, and brain, predictions ranged from 47% to 62% within 3-fold. Using the second approach, P(tp) prediction accuracy in muscle, heart, and kidney was on average 70% within 3-fold, and ranged from 24% to 54% in all other tissues. Combining all methods for predicting V(dss) and CL(h) resulted in accurate predictions of the in vivo half-life (70% within 2-fold). Oral bioavailability was well predicted using CL(h) data and Gastroplus Software (80% within 2-fold). These results illustrate that physiologically based prediction tools can provide accurate predictions of rat pharmacokinetics.
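The hepatic extraction models used to project CL(h) from microsomal data include the well-stirred model, whose standard form can be sketched as follows (parameter values are representative assumptions, not the study's data):

```python
def well_stirred_clh(Qh, fu, CLint):
    """Well-stirred hepatic model: CLh = Qh * fu * CLint / (Qh + fu * CLint).
    Qh: hepatic blood flow, fu: unbound fraction, CLint: intrinsic clearance
    (all in consistent units, e.g. mL/min/kg)."""
    return Qh * fu * CLint / (Qh + fu * CLint)

# Representative rat hepatic blood flow (assumed): Qh ~ 70 mL/min/kg
cl_low  = well_stirred_clh(Qh=70.0, fu=0.1, CLint=50.0)    # low-extraction compound
cl_high = well_stirred_clh(Qh=70.0, fu=0.9, CLint=5000.0)  # CLh approaches Qh
```

The study's finding that disregarding plasma and microsomal binding gave the best CL(h) predictions corresponds to setting fu = 1 in this expression.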

  17. Stringent DDI-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions.

    PubMed

    Zhou, Hufeng; Rezaei, Javad; Hugo, Willy; Gao, Shangzhi; Jin, Jingjing; Fan, Mengyuan; Yong, Chern-Han; Wozniak, Michal; Wong, Limsoon

    2013-01-01

    H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are very important information to illuminate the infection mechanism of M. tuberculosis H37Rv. But current H. sapiens-M. tuberculosis H37Rv PPI data are very scarce. This seriously limits the study of the interaction between this important pathogen and its host H. sapiens. Computational prediction of H. sapiens-M. tuberculosis H37Rv PPIs is an important strategy to fill in the gap. Domain-domain interaction (DDI) based prediction is one of the frequently used computational approaches in predicting both intra-species and inter-species PPIs. However, the performance of DDI-based host-pathogen PPI prediction has been rather limited. We develop a stringent DDI-based prediction approach with emphasis on (i) differences between the specific domain sequences on annotated regions of proteins under the same domain ID and (ii) calculation of the interaction strength of predicted PPIs based on the interacting residues in their interaction interfaces. We compare our stringent DDI-based approach to a conventional DDI-based approach for predicting PPIs based on gold standard intra-species PPIs and coherent informative Gene Ontology terms assessment. The assessment results show that our stringent DDI-based approach achieves much better performance in predicting PPIs than the conventional approach. Using our stringent DDI-based approach, we have predicted a small set of reliable H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. We also analyze the H. sapiens-M. tuberculosis H37Rv PPIs predicted by our stringent DDI-based approach using cellular compartment distribution analysis, functional category enrichment analysis and pathway enrichment analysis. The analyses support the validity of our prediction result. Also, based on an analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent DDI-based approach, we have discovered some important properties of domains involved in host-pathogen PPIs. We find that both host and pathogen proteins involved in host-pathogen PPIs tend to have more domains than proteins involved in intra-species PPIs, and these domains have more interaction partners than domains on proteins involved in intra-species PPI. The stringent DDI-based prediction approach reported in this work provides a stringent strategy for predicting host-pathogen PPIs. It also performs better than a conventional DDI-based approach in predicting PPIs. We have predicted a small set of accurate H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies.

  18. Prediction of laser cutting heat affected zone by extreme learning machine

    NASA Astrophysics Data System (ADS)

    Anicic, Obrad; Jović, Srđan; Skrijelj, Hivzo; Nedić, Bogdan

    2017-01-01

    The heat affected zone (HAZ) of the laser cutting process develops based on a combination of different factors. In this investigation the HAZ forecasting, based on the different laser cutting parameters, was analyzed. The main goal was to predict the HAZ according to three inputs. The purpose of this research was to develop and apply the Extreme Learning Machine (ELM) to predict the HAZ. The ELM results were compared with genetic programming (GP) and artificial neural network (ANN). The reliability of the computational models was assessed based on simulation results and by using several statistical indicators. Based upon simulation results, it was demonstrated that ELM can be utilized effectively in applications of HAZ forecasting.
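The ELM idea is compact enough to sketch directly: hidden-layer weights are drawn at random and frozen, and only the linear output layer is solved, in one shot, via the pseudoinverse. The regression target below is a smooth toy function standing in for HAZ versus cutting parameters; network sizes are illustrative:

```python
import numpy as np

class ELM:
    """Extreme Learning Machine for regression: random input weights are
    fixed; only the linear output layer is solved via pseudoinverse."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y   # one-shot linear solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Illustrative use: learn a smooth 1-D mapping (stand-in for HAZ vs. parameters)
X = np.linspace(0, 1, 100).reshape(-1, 1)
y = np.sin(4 * X[:, 0])
model = ELM(n_hidden=40).fit(X, y)
err = float(np.mean((model.predict(X) - y) ** 2))
```

The absence of iterative backpropagation is what makes ELM training fast, which is part of its appeal for forecasting applications like the one above.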

  19. Thin-slice vision: inference of confidence measure from perceptual video quality

    NASA Astrophysics Data System (ADS)

    Hameed, Abdul; Balas, Benjamin; Dai, Rui

    2016-11-01

    There has been considerable research on thin-slice judgments, but no study has demonstrated the predictive validity of confidence measures when assessors watch videos acquired from communication systems, in which the perceptual quality of videos could be degraded by limited bandwidth and unreliable network conditions. This paper studies the relationship between high-level thin-slice judgments of human behavior and factors that contribute to perceptual video quality. Based on a large number of subjective test results, it has been found that the confidence of a single individual present in all the videos, called speaker's confidence (SC), could be predicted by a list of features that contribute to perceptual video quality. Two prediction models, one based on artificial neural network and the other based on a decision tree, were built to predict SC. Experimental results have shown that both prediction models can result in high correlation measures.

  20. Visual form predictions facilitate auditory processing at the N1.

    PubMed

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  1. Model predictive control design for polytopic uncertain systems by synthesising multi-step prediction scenarios

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue

    2018-01-01

    A common objective of model predictive control (MPC) design is a large initial feasible region, a low online computational burden, and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. So by replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improvement of the existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.

  2. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    PubMed Central

    Zhang, Hongpo

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks have high variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. The network depth is determined autonomously using the reconstruction error, and unsupervised training is combined with supervised fine-tuning, which ensures prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI repository. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, with variances of 5.78 and 4.46. PMID:29854369

  3. Group-regularized individual prediction: theory and application to pain.

    PubMed

    Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D

    2017-01-15

    Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker, in this case the Neurologic Pain Signature (NPS), improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
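The inverse-variance weighting that motivates GRIP can be illustrated with a toy calculation; the numbers below are made up, and the paper's full estimator involves cross-validated variance estimates rather than known variances.

```python
def grip_combine(pop_pred, ind_pred, pop_var, ind_var):
    """Precision-weighted mix of a population-level biomarker prediction
    and an individual, cross-validated prediction: each source is
    weighted by the inverse of its error variance."""
    w_pop = (1.0 / pop_var) / (1.0 / pop_var + 1.0 / ind_var)
    return w_pop * pop_pred + (1.0 - w_pop) * ind_pred

# Toy numbers (assumed): a noisy idiographic map and a more stable
# population map such as the NPS.
combined = grip_combine(pop_pred=4.0, ind_pred=6.0, pop_var=1.0, ind_var=3.0)
print(combined)  # weight on the population map is 0.75, so 4.5
```

As the individual's data quality improves (its variance shrinks), the weight shifts automatically toward the idiographic map, which is the behavior the abstract describes.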

  4. Prediction of hot regions in protein-protein interaction by combining density-based incremental clustering with feature-based classification.

    PubMed

    Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan

    2015-06-01

    Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Meta-path based heterogeneous combat network link prediction

    NASA Astrophysics Data System (ADS)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informatized warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all six defined types. Next, an HCN link prediction model based on meta-path features is built to predict all types of links of the HCN simultaneously. Then, a solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results converge or a maximum number of iterations is reached. Finally, numerical experiments on a dataset from a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to that of the baseline methods.

  6. A link prediction approach to cancer drug sensitivity prediction.

    PubMed

    Turki, Turki; Wei, Zhi

    2017-10-03

    Predicting the response to a drug for cancer patients based on genomic information is an important problem in modern clinical oncology. The problem arises in part because many available drug sensitivity prediction algorithms do not consider better-quality cancer cell lines or the adoption of new feature representations, both of which lead to more accurate prediction of drug responses. By accurately predicting drug responses to cancer, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity prediction as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC), with statistically significant improvements. We also propose a link prediction approach to obtain a new feature representation. Compared with an existing approach, the results show that incorporating the new feature representation into the link prediction algorithms significantly improves performance.

  7. Predicting plant biomass accumulation from image-derived parameters

    PubMed Central

    Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian

    2018-01-01

    Abstract Background Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide a great potential to gain more valuable information than traditionally destructive methods. Predicting plant biomass is regarded as a key purpose for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559

  8. Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.

    PubMed

    Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam

    2015-06-22

    A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.

  9. The syntactic complexity of Russian relative clauses

    PubMed Central

    Fedorenko, Evelina; Gibson, Edward

    2012-01-01

    Although syntactic complexity has been investigated across dozens of studies, the available data still greatly underdetermine relevant theories of processing difficulty. Memory-based and expectation-based theories make opposite predictions regarding fine-grained time course of processing difficulty in syntactically constrained contexts, and each class of theory receives support from results on some constructions in some languages. Here we report four self-paced reading experiments on the online comprehension of Russian relative clauses together with related corpus studies, taking advantage of Russian’s flexible word order to disentangle predictions of competing theories. We find support for key predictions of memory-based theories in reading times at RC verbs, and for key predictions of expectation-based theories in processing difficulty at RC-initial accusative noun phrase (NP) objects, which corpus data suggest should be highly unexpected. These results suggest that a complete theory of syntactic complexity must integrate insights from both expectation-based and memory-based theories. PMID:24711687

  10. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    NASA Astrophysics Data System (ADS)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    High temperature low cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high temperature low cycle fatigue life prediction model for the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation, so a certain prediction error is unavoidable with that method. To address this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83 times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving better life predictions with a smaller standard deviation and scatter band. For both methods, the predictions for TC4 are better than those for TC11.
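The Manson-Coffin fitting step the abstract describes amounts to a linear regression in log-log coordinates; the sketch below uses synthetic strain-life data (not the TC4/TC11 measurements), and the paper's exponential-function replacement would follow the same fit-and-invert workflow with a different model form.

```python
import numpy as np

# Synthetic strain-life data generated from the Manson-Coffin form
# d_ep/2 = ef * (2Nf)^c with ef = 0.05, c = -0.6 (NOT the TC4/TC11 data).
two_Nf = np.array([1e2, 1e3, 1e4, 1e5])        # reversals to failure
eps_p = 0.05 * two_Nf ** (-0.6)                # plastic strain amplitude

# The power law is linear in log-log coordinates, so the fatigue
# ductility coefficient and exponent come from a linear fit.
c, log_ef = np.polyfit(np.log(two_Nf), np.log(eps_p), 1)
ef = np.exp(log_ef)

def predict_life(strain_amplitude):
    """Invert d_ep/2 = ef * (2Nf)^c for the number of reversals."""
    return (strain_amplitude / ef) ** (1.0 / c)
```

Real strain-life data scatter around this line, which is exactly the deviation from log-log linearity that motivates the paper's exponential-function alternative.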

  11. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    PubMed

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise was performed for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models. Prediction bases were considered within the periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (the last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected by the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to the GoF-optimal strategy was using a prediction base of the last 5 years. The GoF-optimal approach can be used as a selection criterion for finding an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
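A simplified stand-in for the GoF-optimal idea can be sketched as follows: fit a trend to each candidate base length, score the fit, and forecast from the best-scoring base. The least-squares scoring and toy data below are assumptions; the paper uses Poisson models, COV, and DR rather than this criterion.

```python
import numpy as np

def gof_optimal_base(years, counts, candidate_bases=(5, 10, 15, 20)):
    """For each candidate base, fit a log-linear trend to the last `b`
    years, score it by mean squared log-residual, and forecast 5 years
    ahead from the best-scoring base."""
    best = None
    for b in candidate_bases:
        y = years[-b:].astype(float)
        c = np.log(counts[-b:])
        slope, intercept = np.polyfit(y, c, 1)
        score = np.mean((c - (slope * y + intercept)) ** 2)
        if best is None or score < best[0]:
            best = (score, b, slope, intercept)
    _, b, slope, intercept = best
    future = years[-1] + np.arange(1, 6)
    return b, np.exp(slope * future + intercept)

years = np.arange(1980, 2008)
counts = 1000.0 * np.exp(0.02 * (years - 1980))   # toy exponential trend
base, forecast = gof_optimal_base(years, counts)
```

On real registry data the candidate bases disagree, and the point of the GoF-optimal method is precisely to arbitrate among them with a formal goodness-of-fit statistic.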

  12. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    NASA Astrophysics Data System (ADS)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposed a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), the hierarchy of which is a directed acyclic graph that is more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
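The hierarchy-constraint adjustment of the second phase can be illustrated by capping each node's predicted probability at the minimum of its parents' probabilities in topological order; the tiny DAG below is invented, and the Bayesian-network interaction of the first phase is omitted.

```python
def enforce_hierarchy(scores, parents):
    """Second-phase adjustment: cap each node's probability at the
    minimum of its parents' (already fixed) probabilities.
    scores: {node: raw probability}; parents: {node: [parent nodes]},
    where every listed parent also appears in scores (DAG assumed)."""
    ordered, done = [], set()
    while len(done) < len(scores):            # simple topological sort
        for n in scores:
            if n not in done and all(p in done for p in parents.get(n, [])):
                ordered.append(n)
                done.add(n)
    fixed = {}
    for n in ordered:
        cap = min((fixed[p] for p in parents.get(n, [])), default=1.0)
        fixed[n] = min(scores[n], cap)
    return fixed

# Invented GO-style toy DAG: "leaf" is annotated under both "a" and "b".
raw = {"root": 0.9, "a": 0.95, "b": 0.4, "leaf": 0.7}
par = {"a": ["root"], "b": ["root"], "leaf": ["a", "b"]}
```

Because GO is a DAG rather than a tree, a node can inherit caps from several parents at once, which is the situation the multiple-parent "leaf" node exercises here.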

  13. StruLocPred: structure-based protein subcellular localisation prediction using multi-class support vector machine.

    PubMed

    Zhou, Wengang; Dickerson, Julie A

    2012-01-01

    Knowledge of protein subcellular locations can help decipher a protein's biological function. This work proposes three new features: a sequence-based feature, Hybrid Amino Acid Pair (HAAP), and two structure-based features, Secondary Structural Element Composition (SSEC) and solvent accessibility state frequency. A multi-class Support Vector Machine is developed to predict the locations. Testing on two established data sets yields better prediction accuracies than the best available systems, and comparisons with existing methods show results comparable to ESLPred2. When StruLocPred is applied to the entire Arabidopsis proteome, over 77% of proteins with known locations match the prediction results. An implementation of this system is at http://wgzhou.ece.iastate.edu/StruLocPred/.

  14. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances ultimately become an economical way of dealing with new substances during acquisition of the required data. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, and the related information, that can make predictions about substances expected to be regulated. We used the models to predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm the applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368

  15. [Research on engine remaining useful life prediction based on oil spectrum analysis and particle filtering].

    PubMed

    Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song

    2013-09-01

    Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of finding the optimal maintenance strategy for a machine system. Because of the complexity of machine systems, the health state degradation process cannot be simply characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering in dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting based on SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is updated according to the estimate of the system's posterior probability, and a multi-step ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecasted by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The results fully show the superiority and effectiveness of the proposed approach.
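A minimal bootstrap particle filter conveys the predict-weight-resample cycle behind such RUL estimation; the linear degradation model, noise levels, and failure threshold below are invented stand-ins for real SOA data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented degradation model: wear grows roughly linearly and is observed
# through noisy spectrometric readings; failure occurs at a threshold.
T, N, threshold = 40, 500, 10.0
truth = 0.2 * np.arange(T) + rng.normal(0, 0.05, T)
obs = truth + rng.normal(0, 0.3, T)

particles = np.zeros(N)              # wear-state hypotheses
rates = rng.uniform(0.05, 0.5, N)    # unknown degradation rate, per particle
for z in obs:
    particles += rates + rng.normal(0, 0.05, N)      # predict
    w = np.exp(-0.5 * ((z - particles) / 0.3) ** 2)  # weight by likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # resample
    particles, rates = particles[idx], rates[idx]

# Multi-step-ahead projection: expected steps until the threshold.
rul = np.mean(np.maximum(threshold - particles, 0.0) / rates)
```

Unlike a Kalman filter, the particle cloud needs no linearity or Gaussianity assumption, which is the advantage the abstract emphasizes.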

  16. Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach

    PubMed Central

    Cheein, Fernando Auat; Scaglia, Gustavo

    2012-01-01

    Stationary range laser sensors for intruder monitoring, restricted space violation detection and workspace determination are extensively used in risky environments. In this work we present a linear approach for predicting the presence of moving agents before they trespass a laser-based restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes our proposal suitable for embedded applications. In the experimental results presented herein (carried out in different scenarios), our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and statistical analyses showing the performance of our proposal are included in this work.
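The Taylor-expansion idea can be sketched as finite-difference estimates of velocity and acceleration from recent range readings, extrapolated a short horizon ahead; the 1-D geometry, sampling period, and zone boundary below are illustrative assumptions.

```python
def predict_trespass(positions, dt, horizon, boundary):
    """positions: last three range readings (distance to the laser).
    Estimate velocity and acceleration by finite differences and check
    whether the second-order Taylor extrapolation crosses the boundary."""
    p0, p1, p2 = positions[-3], positions[-2], positions[-1]
    v = (p2 - p1) / dt                        # first derivative
    a = (p2 - 2.0 * p1 + p0) / dt ** 2        # second derivative
    predicted = p2 + v * horizon + 0.5 * a * horizon ** 2
    return predicted <= boundary

# Object closing in at 10 m/s toward a restricted zone 2 m from the laser.
print(predict_trespass([6.0, 5.0, 4.0], dt=0.1, horizon=0.3, boundary=2.0))
```

Everything here is a fixed linear combination of recent samples, which is what makes this style of predictor cheap enough for embedded hardware.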

  17. Experimental evaluation of models for predicting Cherenkov light intensities from short-cooled nuclear fuel assemblies

    NASA Astrophysics Data System (ADS)

    Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.

    2018-02-01

    The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available: (1) a two-parameter model, based on total burnup and cooling time, previously used by safeguards inspectors; (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories; and (3) the latter gamma-spectrum-based model with an addition accounting for contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory than the two-parameter model, lowering the standard deviation between relative measured and predicted intensities from 15.2% to 8.1% and 7.8%, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways to address such differences are discussed, which may allow for even higher prediction capabilities.
Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, being a 30 % discrepancy between measured and predicted intensities, while some false detection occurs with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30 % discrepancy limit could potentially be lowered.

  18. Machine learning-based methods for prediction of linear B-cell epitopes.

    PubMed

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction helps immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those on linear B-cell epitope prediction. It should be noted that combining selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed from most comparison results that the kernel method of the support vector machine (SVM) classifier outperforms other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. A linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is also illustrated in detail.

  19. Model-based predictions for dopamine.

    PubMed

    Langdon, Angela J; Sharpe, Melissa J; Schoenbaum, Geoffrey; Niv, Yael

    2018-04-01

    Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning. Copyright © 2017. Published by Elsevier Ltd.

  20. Comparison of in silico models for prediction of mutagenicity.

    PubMed

    Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas

    2013-01-01

    Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite; Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) Predictor; Derek; Toxicity Estimation Software Tool (T.E.S.T.); TOxicity Prediction by Komputer Assisted Technology (TOPKAT); Toxtree; CAESAR; and SARpy (SAR in python). In general, the results showed a high level of performance. To obtain a realistic estimate of predictive ability, the results for chemicals inside and outside the training set of each model were considered. The effect of applicability domain tools (when available) on prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and combinations of both methods. Models based on statistical QSAR methods gave better results.

  1. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of photovoltaic (PV) systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, time series data of the output power on a similar day are built at 15-minute intervals. Second, these time series data are decomposed by EMD into a series of components at different scales, including intrinsic mode function components IMFn and a trend component Res. A corresponding SVM prediction model is established for each IMF component and for the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed to obtain the predicted output power of the grid-connected PV system. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization.
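The decompose, model-per-component, recombine pipeline can be sketched with simple stand-ins: a causal moving average splits off a trend (playing the role of Res), and a least-squares autoregressive model replaces each ABC-optimized SVM; true EMD and SVM fitting are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented PV-like series: daily-cycle oscillation on a slow ramp.
t = np.arange(200)
power = 50.0 + 0.1 * t + 5.0 * np.sin(2 * np.pi * t / 24) \
    + rng.normal(0, 0.5, 200)

def causal_ma(x, k):
    """Causal moving average: slow-trend component (the Res stand-in)."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return np.array([(c[i + 1] - c[max(0, i - k + 1)])
                     / (i + 1 - max(0, i - k + 1)) for i in range(len(x))])

def ar_forecast(series, order=4):
    """Least-squares AR(order) model; one-step-ahead prediction."""
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
    return series[-order:] @ coef

trend = causal_ma(power, 24)          # slow component
fast = power - trend                  # oscillatory component (IMF stand-in)
prediction = ar_forecast(trend) + ar_forecast(fast)
```

The structural point carries over to the paper's method: each component is more regular than the raw series, so a simple per-component model plus recombination can beat one model fit to the raw signal.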

  2. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    PubMed Central

    2017-01-01

    To predict the output power of photovoltaic (PV) systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, time series data of the output power on a similar day are built at 15-minute intervals. Second, these time series data are decomposed by EMD into a series of components at different scales, including intrinsic mode function components IMFn and a trend component Res. A corresponding SVM prediction model is established for each IMF component and for the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed to obtain the predicted output power of the grid-connected PV system. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803

  3. Research on cross-project software defect prediction based on transfer learning

    NASA Astrophysics Data System (ADS)

    Chen, Ya; Ding, Xiaoming

    2018-04-01

    Cross-project software defect prediction faces two challenges: distribution differences between the source-project and target-project datasets, and class imbalance within the datasets. To address them, a cross-project software defect prediction method based on transfer learning, named NTrA, is proposed. First, the class imbalance in the source-project data is resolved with the Augmented Neighborhood Cleaning Algorithm. Second, the data gravity method assigns different weights to source-project instances based on their attribute similarity to the target-project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted on NASA and SOFTLAB data from the published PROMISE repository. The results show that the method achieves good recall and F-measure values and good overall prediction results.
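
    As a dependency-free illustration of the instance-weighting step, the sketch below assigns "data gravity" weights to source-project rows from how many of their attribute values fall within the target project's per-attribute range; the weighting formula and all names are assumptions for illustration, not necessarily the paper's exact definition.

```python
def gravity_weights(source, target):
    """Weight each source-project instance by its attribute similarity to the
    target project: instances whose feature values lie inside the target's
    per-attribute [min, max] range on more attributes attract more weight."""
    dim = len(target[0])
    lows = [min(row[j] for row in target) for j in range(dim)]
    highs = [max(row[j] for row in target) for j in range(dim)]
    weights = []
    for row in source:
        # s = number of attributes of this row that match the target's range.
        s = sum(1 for j in range(dim) if lows[j] <= row[j] <= highs[j])
        weights.append(s / float((dim - s + 1) ** 2))
    return weights

# A row similar to the target data gets a high weight; a distant row gets zero.
weights = gravity_weights(source=[[0.5, 0.5], [5.0, 5.0]],
                          target=[[0.0, 0.0], [1.0, 1.0]])
```

    These weights would then initialize the per-instance weights that a boosting-based transfer learner re-adjusts during training.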

  4. Thermal sensation prediction by soft computing methodology.

    PubMed

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

    Thermal comfort in open urban areas is a very important factor from an environmental point of view, so demands for suitable thermal comfort need to be met during urban planning and design. Thermal comfort can be modeled from climatic parameters and other factors, but these factors vary throughout the year and across each day, so an algorithm is needed to predict thermal comfort from the input variables. The prediction results could be used to plan when urban areas are used. Since this is a highly nonlinear task, this investigation applied a soft computing methodology to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) to forecast physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models. Based on the results, ELM can be used effectively in forecasting PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
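
    An ELM is simple enough to sketch in full: a random, fixed hidden layer plus output weights fitted by ridge-regularised least squares. The code below is a generic ELM sketch fitting a smooth one-dimensional toy target as a stand-in for PET regression; the hidden size, ridge value, and the small Gaussian-elimination helper are illustrative choices, not the paper's configuration.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, hidden=20, ridge=1e-3, seed=1):
    """Extreme learning machine: random tanh hidden layer, output weights by
    ridge-regularised least squares (solving (H'H + ridge*I) beta = H'y)."""
    rng = random.Random(seed)
    dim = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    feats = lambda x: [math.tanh(sum(w[j] * x[j] for j in range(dim)) + bi)
                       for w, bi in zip(W, b)]
    H = [feats(x) for x in X]
    HtH = [[sum(H[r][i] * H[r][j] for r in range(len(H)))
            + (ridge if i == j else 0.0) for j in range(hidden)]
           for i in range(hidden)]
    Hty = [sum(H[r][i] * y[r] for r in range(len(H))) for i in range(hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(f * bt for f, bt in zip(feats(x), beta))

# Fit a smooth 1-D target as a stand-in for PET prediction from weather inputs.
train_x = [[i / 20.0] for i in range(21)]
train_y = [math.sin(3.0 * x[0]) for x in train_x]
model = elm_train(train_x, train_y)
```

    Because only the output weights are fitted, training reduces to one linear solve, which is what makes ELMs attractive for fast forecasting tasks like this.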

  5. Prediction versus aetiology: common pitfalls and how to avoid them.

    PubMed

    van Diepen, Merel; Ramspek, Chava L; Jager, Kitty J; Zoccali, Carmine; Dekker, Friedo W

    2017-04-01

    Prediction research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both prediction and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of results are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, prediction aims at accurately predicting the risk of an outcome using multiple predictors collectively, where the final prediction model is usually based on statistically significant, but not necessarily causal, associations in the data at hand. In both scientific and clinical practice, however, the two are often confused, resulting in poor-quality publications with limited interpretability and applicability. A major problem is the frequently encountered aetiological interpretation of prediction results, where individual variables in a prediction model are attributed causal meaning. This article stresses the differences in use and interpretation of aetiological and prediction studies, and gives examples of common pitfalls. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  6. MicroRNAfold: pre-microRNA secondary structure prediction based on modified NCM model with thermodynamics-based scoring strategy.

    PubMed

    Han, Dianwei; Zhang, Jun; Tang, Guiliang

    2012-01-01

    An accurate prediction of the pre-microRNA secondary structure is important in miRNA informatics. Based on a recently proposed model, nucleotide cyclic motifs (NCM), to predict RNA secondary structure, we propose and implement a Modified NCM (MNCM) model with a physics-based scoring strategy to tackle the problem of pre-microRNA folding. Our microRNAfold is implemented using a global optimal algorithm based on the bottom-up local optimal solutions. Our experimental results show that microRNAfold outperforms the current leading prediction tools in terms of True Negative rate, False Negative rate, Specificity, and Matthews coefficient ratio.

  7. Path Loss Prediction Formula in Urban Area for the Fourth-Generation Mobile Communication Systems

    NASA Astrophysics Data System (ADS)

    Kitao, Koshiro; Ichitsubo, Shinichi

    A site-general prediction formula is created based on measurement results in an urban area in Japan, assuming that the prediction frequency range required for Fourth-Generation (4G) mobile communication systems is from 3 to 6 GHz, the distance range is from 0.1 to 3 km, and the base station (BS) height range is from 10 to 100 m. Based on the measurement results, the path loss (dB) is found to be proportional to the logarithm of the distance (m), the logarithm of the BS height (m), and the logarithm of the frequency (GHz). Furthermore, we examine the extension of existing formulae such as the Okumura-Hata, Walfisch-Ikegami, and Sakagami formulae for 4G systems and propose a prediction formula based on the Extended Sakagami formula.
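
    The abstract describes a log-linear model of the form PL = a*log10(d) + b*log10(h_bs) + c*log10(f) + k. The sketch below encodes exactly that shape; the coefficient values are placeholders for illustration, NOT the paper's fitted values.

```python
import math

def path_loss_db(distance_m, bs_height_m, freq_ghz,
                 a=30.0, b=-10.0, c=20.0, k=40.0):
    """Site-general log-linear path loss model of the form described in the
    abstract. Coefficients a, b, c, k are hypothetical placeholders: loss
    grows with distance and frequency, and shrinks with BS height (b < 0)."""
    return (a * math.log10(distance_m)
            + b * math.log10(bs_height_m)
            + c * math.log10(freq_ghz)
            + k)

# Example query within the stated validity ranges (0.1-3 km, 10-100 m, 3-6 GHz).
loss = path_loss_db(distance_m=500, bs_height_m=50, freq_ghz=5)
```

    Fitting such a formula to measurements is then an ordinary least-squares problem in (a, b, c, k) over the log-transformed predictors.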

  8. SVM and SVM Ensembles in Breast Cancer Prediction.

    PubMed

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
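
    The bagging scheme compared in this abstract is independent of the base learner: train each member on a bootstrap resample and predict by majority vote. To keep the sketch dependency-free, the base learner below is a decision stump rather than an SVM; the data, names, and all parameters are illustrative assumptions.

```python
import random

def train_stump(data):
    """Best single-feature threshold classifier (stand-in for an SVM base
    learner). data is a list of (features, label) with labels in {-1, +1}."""
    best = None
    dim = len(data[0][0])
    for j in range(dim):
        for x, _ in data:
            t = x[j]
            for sign in (1, -1):
                acc = sum(1 for xi, yi in data
                          if (1 if sign * (xi[j] - t) >= 0 else -1) == yi)
                if best is None or acc > best[0]:
                    best = (acc, j, t, sign)
    _, j, t, sign = best
    return lambda x: 1 if sign * (x[j] - t) >= 0 else -1

def bagged_ensemble(data, n_models=11, seed=0):
    """Bagging: train each base learner on a bootstrap resample of the
    training set and predict by majority vote over the ensemble."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]
        models.append(train_stump(boot))
    return lambda x: 1 if sum(m(x) for m in models) >= 0 else -1

# Tiny separable toy set standing in for a (scaled) breast cancer dataset.
data = [([0.0, 0.2], -1), ([0.1, 0.1], -1), ([0.2, 0.0], -1),
        ([0.8, 1.0], 1), ([0.9, 0.9], 1), ([1.0, 0.8], 1)]
clf = bagged_ensemble(data)
```

    Boosting differs only in how members are trained: sequentially, with instance weights increased on examples the previous members misclassified.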

  9. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed Central

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807

  10. Struct2Net: a web service to predict protein–protein interactions using a structure-based approach

    PubMed Central

    Singh, Rohit; Park, Daniel; Xu, Jinbo; Hosur, Raghavendra; Berger, Bonnie

    2010-01-01

    Struct2Net is a web server for predicting interactions between arbitrary protein pairs using a structure-based approach. Prediction of protein–protein interactions (PPIs) is a central area of interest and successful prediction would provide leads for experiments and drug design; however, the experimental coverage of the PPI interactome remains inadequate. We believe that Struct2Net is the first community-wide resource to provide structure-based PPI predictions that go beyond homology modeling. Also, most web-resources for predicting PPIs currently rely on functional genomic data (e.g. GO annotation, gene expression, cellular localization, etc.). Our structure-based approach is independent of such methods and only requires the sequence information of the proteins being queried. The web service allows multiple querying options, aimed at maximizing flexibility. For the most commonly studied organisms (fly, human and yeast), predictions have been pre-computed and can be retrieved almost instantaneously. For proteins from other species, users have the option of getting a quick-but-approximate result (using orthology over pre-computed results) or having a full-blown computation performed. The web service is freely available at http://struct2net.csail.mit.edu. PMID:20513650

  11. SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal

    PubMed Central

    Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.

    2017-01-01

    Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758

  12. Short-term solar flare prediction using image-case-based reasoning

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Fu; Li, Fei; Zhang, Huai-Peng; Yu, Da-Ren

    2017-10-01

    Solar flares strongly influence space weather and human activities, and their prediction is highly complex. Existing solutions, such as data-based and model-based approaches, share a common shortcoming: the lack of human engagement in the forecasting process. An image-case-based reasoning method is introduced to address this. The image case library is composed of SOHO/MDI longitudinal magnetograms, from which the maximum horizontal gradient, the length of the neutral line and the number of singular points are extracted for retrieving similar image cases. Genetic optimization algorithms are employed to optimize the weights assigned to image features and the number of similar image cases retrieved. The similar image cases, together with the prediction derived from them by majority voting, are shown to the forecaster so that his/her experience can be integrated into the final prediction. Experimental results demonstrate that the case-based reasoning approach performs slightly better than other methods and is more effective when forecasts are refined by humans.
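
    The retrieval-and-vote core of case-based reasoning can be sketched in a few lines: score stored cases by weighted feature similarity, take the k nearest, and vote. The feature tuple follows the three features named in the abstract, but the library values, weights, and k below are invented for illustration (the paper tunes the weights and k with a genetic algorithm).

```python
def retrieve_and_vote(case_library, query, weights, k=3):
    """Case-based reasoning core: retrieve the k stored cases most similar to
    the query under per-feature weights, then forecast by majority vote.
    Each case is (features, label)."""
    def dist(a, b):
        return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))
    ranked = sorted(case_library, key=lambda case: dist(case[0], query))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count), ranked[:k]

# Each case: ((max horizontal gradient, neutral-line length, singular points),
#             flare within forecast window?)  -- values are hypothetical.
library = [((0.9, 0.8, 7), True), ((0.8, 0.9, 6), True),
           ((0.2, 0.1, 1), False), ((0.1, 0.2, 0), False),
           ((0.85, 0.7, 5), True)]
flare, similar = retrieve_and_vote(library, (0.88, 0.75, 6),
                                   weights=(1.0, 1.0, 0.1))
```

    Returning the retrieved cases alongside the vote is what enables the human-in-the-loop step: the forecaster can inspect the matched magnetograms before accepting or overriding the prediction.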

  13. Emerging trend prediction in biomedical literature.

    PubMed

    Moerchen, Fabian; Fradkin, Dmitriy; Dejori, Mathaeus; Wachmann, Bernd

    2008-11-06

    We present a study on how to predict new emerging trends in the biomedical domain based on textual data. We thereby propose a way of anticipating the transformation of arbitrary information into ground truth knowledge by predicting the inclusion of new terms into the MeSH ontology. We also discuss the preparation of a dataset for the evaluation of emerging trend prediction algorithms that is based on PubMed abstracts and related MeSH terms. The results suggest that early prediction of emerging trends is possible.

  14. Comparing niche- and process-based models to reduce prediction uncertainty in species range shifts under climate change.

    PubMed

    Morin, Xavier; Thuiller, Wilfried

    2009-05-01

    Obtaining reliable predictions of species range shifts under climate change is a crucial challenge for ecologists and stakeholders. At the continental scale, niche-based models have been widely used in the last 10 years to predict the potential impacts of climate change on species distributions all over the world, although these models do not include any mechanistic relationships. In contrast, species-specific, process-based predictions remain scarce at the continental scale. This is regrettable because to secure relevant and accurate predictions it is always desirable to compare predictions derived from different kinds of models applied independently to the same set of species and using the same raw data. Here we compare predictions of range shifts under climate change scenarios for 2100 derived from niche-based models with those of a process-based model for 15 North American boreal and temperate tree species. A general pattern emerged from our comparisons: niche-based models tend to predict a stronger level of extinction and a greater proportion of colonization than the process-based model. This result likely arises because niche-based models do not take phenotypic plasticity and local adaptation into account. Nevertheless, as the two kinds of models rely on different assumptions, their complementarity is revealed by common findings. Both modeling approaches highlight a major potential limitation on species tracking their climatic niche because of migration constraints and identify similar zones where species extirpation is likely. Such convergent predictions from models built on very different principles provide a useful way to offset uncertainties at the continental scale. This study shows that the use in concert of both approaches with their own caveats and advantages is crucial to obtain more robust results and that comparisons among models are needed in the near future to gain accuracy regarding predictions of range shifts under climate change.

  15. Predicting drug hydrolysis based on moisture uptake in various packaging designs.

    PubMed

    Naversnik, Klemen; Bohanec, Simona

    2008-12-18

    An attempt was made to predict the stability of a moisture sensitive drug product based on the knowledge of the dependence of the degradation rate on tablet moisture. The moisture increase inside a HDPE bottle with the drug formulation was simulated with the sorption-desorption moisture transfer model, which, in turn, allowed an accurate prediction of the drug degradation kinetics. The stability prediction, obtained by computer simulation, was made in a considerably shorter time frame and required little resources compared to a conventional stability study. The prediction was finally upgraded to a stochastic Monte Carlo simulation, which allowed quantitative incorporation of uncertainty, stemming from various sources. The resulting distribution of the outcome of interest (amount of degradation product at expiry) is a comprehensive way of communicating the result along with its uncertainty, superior to single-value results or confidence intervals.
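
    The stochastic upgrade described above can be sketched as a plain Monte Carlo loop: sample uncertain inputs, propagate them through the moisture/degradation model, and read percentiles off the outcome distribution. The rate model and every parameter below are invented for illustration; the paper's sorption-desorption model is more elaborate.

```python
import random

def simulate_degradation(n_runs=10000, months=36, seed=0):
    """Monte Carlo sketch: propagate uncertainty in a moisture-dependent
    degradation rate to the amount of degradant (%) at expiry, returning
    the median and 95th percentile of the simulated distribution."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        # Uncertain monthly moisture gain (%/month) and rate sensitivity;
        # both distributions are hypothetical placeholders.
        moisture_gain = rng.gauss(0.05, 0.01)
        k_per_moisture = rng.gauss(0.002, 0.0004)
        moisture, degradant = 2.0, 0.0   # initial % moisture, % degradant
        for _ in range(months):
            moisture += moisture_gain            # bottle slowly takes up water
            degradant += k_per_moisture * moisture  # hydrolysis rate tracks moisture
        outcomes.append(degradant)
    outcomes.sort()
    return outcomes[len(outcomes) // 2], outcomes[int(0.95 * len(outcomes))]

median, p95 = simulate_degradation()
```

    Reporting the full outcome distribution (or its upper percentiles) at expiry is exactly the "result along with its uncertainty" the abstract argues is superior to a single-value prediction.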

  16. Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad

    2016-02-01

    Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on its prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique to learn models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.

  17. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    NASA Technical Reports Server (NTRS)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 million per mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat-plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
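
    The flat-plate extrapolation method mentioned above builds component profile drag from a turbulent skin-friction law, a form factor, and a wetted-area ratio. The sketch below uses the standard Prandtl-Schlichting flat-plate relation Cf = 0.455 / (log10 Re)^2.58 as the skin-friction basis; the C-141A study's exact correlation and the example form factor and area ratio are assumptions here.

```python
import math

def flat_plate_cf(reynolds):
    """Prandtl-Schlichting turbulent flat-plate skin-friction coefficient,
    the kind of relation used in flat-plate drag extrapolation methods."""
    return 0.455 / math.log10(reynolds) ** 2.58

def component_profile_drag(reynolds, form_factor, swet_over_sref):
    """Component profile drag increment: Cf scaled by a shape (form) factor
    and the component's wetted area over the reference area."""
    return flat_plate_cf(reynolds) * form_factor * swet_over_sref

# Illustrative component at the tunnel Reynolds number quoted in the abstract;
# form_factor and swet_over_sref are made-up example values.
cd0 = component_profile_drag(reynolds=3.05e6, form_factor=1.25,
                             swet_over_sref=2.0)
```

    Summing such increments over all components, each at its own Reynolds number, gives the extrapolated minimum profile drag that the study compares against flight data.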

  18. Drug Distribution. Part 1. Models to Predict Membrane Partitioning.

    PubMed

    Nagar, Swati; Korzekwa, Ken

    2017-03-01

    Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.

  19. When high working memory capacity is and is not beneficial for predicting nonlinear processes.

    PubMed

    Fischer, Helen; Holt, Daniel V

    2017-04-01

    Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.

  20. Determining Cutoff Point of Ensemble Trees Based on Sample Size in Predicting Clinical Dose with DNA Microarray Data.

    PubMed

    Yılmaz Isıkhan, Selen; Karabulut, Erdem; Alpar, Celal Reha

    2016-01-01

    Background/Aim. Evaluating the success of dose prediction based on genetic or clinical data has substantially advanced recently. The aim of this study is to predict various clinical dose values from DNA gene expression datasets using data mining techniques. Materials and Methods. Eleven real gene expression datasets containing dose values were included. First, important genes for dose prediction were selected using iterative sure independence screening. Then, the performances of regression trees (RTs), support vector regression (SVR), RT bagging, SVR bagging, and RT boosting were examined. Results. The results demonstrated that a regression-based feature selection method substantially reduced the number of irrelevant genes from raw datasets. Overall, the best prediction performance in nine of 11 datasets was achieved using SVR; the second most accurate performance was provided using a gradient-boosting machine (GBM). Conclusion. Analysis of various dose values based on microarray gene expression data identified common genes found in our study and the referenced studies. According to our findings, SVR and GBM can be good predictors of dose-gene datasets. Another result of the study was to identify the sample size of n = 25 as a cutoff point for RT bagging to outperform a single RT.

  1. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    PubMed

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.

  2. Stringent homology-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions

    PubMed Central

    2014-01-01

    Background H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are essential for understanding the infection mechanism of the formidable pathogen M. tuberculosis H37Rv. Computational prediction is an important strategy to fill the gap in experimental H. sapiens-M. tuberculosis H37Rv PPI data. Homology-based prediction is frequently used in predicting both intra-species and inter-species PPIs. However, some limitations are not properly resolved in several published works that predict eukaryote-prokaryote inter-species PPIs using intra-species template PPIs. Results We develop a stringent homology-based prediction approach by taking into account (i) differences between eukaryotic and prokaryotic proteins and (ii) differences between inter-species and intra-species PPI interfaces. We compare our stringent homology-based approach to a conventional homology-based approach for predicting host-pathogen PPIs, based on cellular compartment distribution analysis, disease gene list enrichment analysis, pathway enrichment analysis and functional category enrichment analysis. These analyses support the validity of our prediction result, and clearly show that our approach has better performance in predicting H. sapiens-M. tuberculosis H37Rv PPIs. Using our stringent homology-based approach, we have predicted a set of highly plausible H. sapiens-M. tuberculosis H37Rv PPIs which might be useful for many of related studies. Based on our analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent homology-based approach, we have discovered several interesting properties which are reported here for the first time. We find that both host proteins and pathogen proteins involved in the host-pathogen PPIs tend to be hubs in their own intra-species PPI network. Also, both host and pathogen proteins involved in host-pathogen PPIs tend to have longer primary sequence, tend to have more domains, tend to be more hydrophilic, etc. 
And the protein domains from both host and pathogen proteins involved in host-pathogen PPIs tend to have lower charge, and tend to be more hydrophilic. Conclusions Our stringent homology-based prediction approach provides a better strategy in predicting PPIs between eukaryotic hosts and prokaryotic pathogens than a conventional homology-based approach. The properties we have observed from the predicted H. sapiens-M. tuberculosis H37Rv PPI network are useful for understanding inter-species host-pathogen PPI networks and provide novel insights for host-pathogen interaction studies. Reviewers This article was reviewed by Michael Gromiha, Narayanaswamy Srinivasan and Thomas Dandekar. PMID:24708540

  3. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    PubMed Central

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing.
In addition to raw/coded health data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046

  4. Biomarkers for predicting type 2 diabetes development—Can metabolomics improve on existing biomarkers?

    PubMed Central

    Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B.; Bergström, Göran

    2017-01-01

    Aim The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Methods Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Results Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics had the largest areas under the curve (AUC) (0.794 (95% confidence interval [0.738–0.850]) and 0.808 [0.749–0.867] respectively), with the standalone metabolomics model based on nine fasting plasma markers having a lower predictive power (0.657 [0.577–0.736]). Prediction based on non-blood based measures was 0.638 [0.565–0.711]. Conclusions Established measures of T2D risk remain the best predictor of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model. PMID:28692646
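
    The AUC values compared throughout this abstract can be computed directly from risk scores and outcomes via the rank-sum (Mann-Whitney) identity: the AUC is the probability that a randomly chosen case scores above a randomly chosen non-case, counting ties as one half. This is the standard method; the toy labels and scores below are illustrative.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    P(score of a random positive > score of a random negative),
    with ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: mostly higher risk scores for the incident T2D cases.
auc = roc_auc([1, 1, 0, 0, 1, 0], [0.9, 0.7, 0.4, 0.65, 0.6, 0.5])
```

    Comparing two models then amounts to computing this statistic for each model's scores on the same case-control sample, as the study does for the established-marker, metabolomics, and combined models.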

  5. A data-driven SVR model for long-term runoff prediction and uncertainty analysis based on the Bayesian framework

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun

    2017-06-01

Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is built in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood-season, and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP not only quantifies the prediction uncertainty with a confidence interval but also provides a more accurate single-value prediction than the initial SVR forecast. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
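A minimal sketch of the HUP post-processing step, under a simplifying linear-Gaussian assumption: calibrate a linear map from raw forecasts to observations, then issue a corrected point prediction with an interval from the residual spread. The calibration pairs are toy numbers; the paper's actual HUP and SVR implementations are not reproduced here.

```python
import math

def hup_linear(forecasts, observed):
    """Fit observed = a + b*forecast by least squares; return (a, b, residual std)."""
    n = len(forecasts)
    mx = sum(forecasts) / n
    my = sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in forecasts)
    sxy = sum((x - mx) * (y - my) for x, y in zip(forecasts, observed))
    b = sxy / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(forecasts, observed)]
    sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return a, b, sigma

def hup_predict(a, b, sigma, new_forecast, z=1.645):
    """Corrected point prediction plus an approximate 90% interval."""
    mean = a + b * new_forecast
    return mean, (mean - z * sigma, mean + z * sigma)

# Toy calibration pairs: (raw model forecast, observed runoff)
f = [100.0, 150.0, 200.0, 250.0, 300.0]
o = [110.0, 155.0, 210.0, 260.0, 295.0]
a, b, sigma = hup_linear(f, o)
point, interval = hup_predict(a, b, sigma, 220.0)
```

This captures the two outputs the abstract attributes to the HUP: a confidence interval around the forecast and a recalibrated single value.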

  6. Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Li, Jinquan

    2018-01-01

The viscosities of binary Sn-based lead-free solder alloys were calculated by combining a prediction model with the Miedema model. A viscosity factor was proposed, and the relationship between viscosity and surface tension was analyzed as well. The results show that the viscosities of Sn-based lead-free solders obtained from the prediction model agree very well with reported values. The viscosity factor is determined by three physical parameters: atomic volume, electron density, and electronegativity. In addition, an apparent correlation between the surface tension and viscosity of binary Sn-based Pb-free solders was obtained based on the prediction model.

  7. Grid Quality and Resolution Issues from the Drag Prediction Workshop Series

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Brodersen, Olaf P.; Eisfeld, Bernhard; Wahls, Richard A.; Morrison, Joseph H.; Zickuhr, Tom; Levy, David; hide

    2008-01-01

The Drag Prediction Workshop (DPW) series, held over the last six years and sponsored by the AIAA Applied Aerodynamics Committee, has been extremely useful in providing an assessment of the state of the art in computationally based aerodynamic drag prediction. An emerging consensus from the three workshops has been the identification of spatial discretization errors as a dominant error source in absolute as well as incremental drag prediction. This paper provides an overview of the collective experience from the workshop series regarding the effect of grid-related issues on overall drag prediction accuracy. Examples based on workshop results are used to illustrate the effect of grid resolution and grid quality on drag prediction, and grid convergence behavior is examined in detail. For fully attached flows, various accurate and successful workshop results are demonstrated, while anomalous behavior is identified for a number of cases involving substantial regions of separated flow. Based on collective workshop experience, recommendations are given for improvements in mesh generation technology that have the potential to advance the state of the art of aerodynamic drag prediction.

  8. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    PubMed

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

Subjective decisions about the thematic and spatial resolutions used to characterize environmental heterogeneity may affect the characterization of spatial pattern and the simulation of the occurrence and rate of ecological processes, and in turn, model-based predictions of tree species distribution. This study therefore quantified the importance of thematic and spatial resolutions, and their interaction, in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) responded differently to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different combinations of thematic resolution (number of land types) and spatial resolution, and then statistically examined the differences in species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits also affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seed dispersal distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution, and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  9. Thematic and Spatial Resolutions Affect Model-Based Predictions of Tree Species Distribution

    PubMed Central

    Liang, Yu; He, Hong S.; Fraser, Jacob S.; Wu, ZhiWei

    2013-01-01

Subjective decisions about the thematic and spatial resolutions used to characterize environmental heterogeneity may affect the characterization of spatial pattern and the simulation of the occurrence and rate of ecological processes, and in turn, model-based predictions of tree species distribution. This study therefore quantified the importance of thematic and spatial resolutions, and their interaction, in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) responded differently to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different combinations of thematic resolution (number of land types) and spatial resolution, and then statistically examined the differences in species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits also affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seed dispersal distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution, and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution. PMID:23861828

  10. Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site

    PubMed Central

    Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.

    2016-01-01

Carbon dioxide (CO2) storage in geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes in the zone overlying the caprock, also known as the Above Zone Monitoring Interval (AZMI). A data-driven arbitrary Polynomial Chaos (aPC) expansion is then used to quantify the uncertainty in the above-zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of the different parameters for the pressure prediction. The results indicate that pressure predictions can be uncertain locally around the leakage zones, and that the degree of uncertainty depends on the quality of the site-specific information available for analysis. The results underscore the need for site-specific data to predict the risks associated with storage activities efficiently. The presented approach can provide a basis for the optimized design of pressure-based monitoring networks at carbon storage sites. PMID:27996043
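The Sobol sensitivity step can be illustrated with a plain Monte Carlo pick-freeze estimator on a toy two-parameter model. The study instead derives the indices from the aPC surrogate, which is far cheaper per evaluation; the model and all numbers below are illustrative only.

```python
import random

random.seed(0)

def model(x1, x2):
    # Hypothetical stand-in for the pressure response: x1 dominates by design.
    return 4.0 * x1 + 0.5 * x2

N = 20000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

yA = [model(*a) for a in A]
f0 = sum(yA) / N
var = sum((y - f0) ** 2 for y in yA) / N

def first_order(i):
    # Pick-freeze: keep input i from sample A, redraw the rest from sample B.
    s = 0.0
    for a, b in zip(A, B):
        mixed = tuple(a[j] if j == i else b[j] for j in range(2))
        s += model(*a) * (model(*mixed) - model(*b))
    return (s / N) / var

S1, S2 = first_order(0), first_order(1)  # fractions of output variance
```

For this linear model the exact indices are about 0.98 and 0.02, so the estimator correctly flags x1 as the dominant parameter, which is the kind of ranking the study extracts for its geologic parameters.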

  11. Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site

    NASA Astrophysics Data System (ADS)

    Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.

    2016-12-01

Carbon dioxide (CO2) storage in geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes in the zone overlying the caprock, also known as the Above Zone Monitoring Interval (AZMI). A data-driven arbitrary Polynomial Chaos (aPC) expansion is then used to quantify the uncertainty in the above-zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of the different parameters for the pressure prediction. The results indicate that pressure predictions can be uncertain locally around the leakage zones, and that the degree of uncertainty depends on the quality of the site-specific information available for analysis. The results underscore the need for site-specific data to predict the risks associated with storage activities efficiently. The presented approach can provide a basis for the optimized design of pressure-based monitoring networks at carbon storage sites.

  12. Multiaxial Fatigue Life Prediction Based on Nonlinear Continuum Damage Mechanics and Critical Plane Method

    NASA Astrophysics Data System (ADS)

    Wu, Z. R.; Li, X.; Fang, L.; Song, Y. D.

    2018-04-01

A new multiaxial fatigue life prediction model is proposed in this paper. The concepts of nonlinear continuum damage mechanics and critical plane criteria are incorporated in the proposed model. A shear strain-based damage control parameter was chosen to account for multiaxial fatigue damage under constant amplitude loading. Fatigue tests were conducted on nickel-based superalloy GH4169 tubular specimens at 400 °C under proportional and nonproportional loading. The proposed method was checked against the multiaxial fatigue test data of GH4169. Most of the prediction results fall within a factor-of-two scatter band of the test results.

  13. In silico prediction of pharmaceutical degradation pathways: a benchmarking study.

    PubMed

    Kleinman, Mark H; Baertschi, Steven W; Alsante, Karen M; Reid, Darren L; Mowery, Mark D; Shimanovich, Roman; Foti, Chris; Smith, William K; Reynolds, Dan W; Nefliu, Marcela; Ott, Martin A

    2014-11-03

    Zeneth is a new software application capable of predicting degradation products derived from small molecule active pharmaceutical ingredients. This study was aimed at understanding the current status of Zeneth's predictive capabilities and assessing gaps in predictivity. Using data from 27 small molecule drug substances from five pharmaceutical companies, the evolution of Zeneth predictions through knowledge base development since 2009 was evaluated. The experimentally observed degradation products from forced degradation, accelerated, and long-term stability studies were compared to Zeneth predictions. Steady progress in predictive performance was observed as the knowledge bases grew and were refined. Over the course of the development covered within this evaluation, the ability of Zeneth to predict experimentally observed degradants increased from 31% to 54%. In particular, gaps in predictivity were noted in the areas of epimerizations, N-dealkylation of N-alkylheteroaromatic compounds, photochemical decarboxylations, and electrocyclic reactions. The results of this study show that knowledge base development efforts have increased the ability of Zeneth to predict relevant degradation products and aid pharmaceutical research. This study has also provided valuable information to help guide further improvements to Zeneth and its knowledge base.

  14. Genetic-based prediction of disease traits: prediction is very difficult, especially about the future†

    PubMed Central

    Schrodi, Steven J.; Mukherjee, Shubhabrata; Shan, Ying; Tromp, Gerard; Sninsky, John J.; Callear, Amy P.; Carter, Tonia C.; Ye, Zhan; Haines, Jonathan L.; Brilliant, Murray H.; Crane, Paul K.; Smelser, Diane T.; Elston, Robert C.; Weeks, Daniel E.

    2014-01-01

    Translation of results from genetic findings to inform medical practice is a highly anticipated goal of human genetics. The aim of this paper is to review and discuss the role of genetics in medically-relevant prediction. Germline genetics presages disease onset and therefore can contribute prognostic signals that augment laboratory tests and clinical features. As such, the impact of genetic-based predictive models on clinical decisions and therapy choice could be profound. However, given that (i) medical traits result from a complex interplay between genetic and environmental factors, (ii) the underlying genetic architectures for susceptibility to common diseases are not well-understood, and (iii) replicable susceptibility alleles, in combination, account for only a moderate amount of disease heritability, there are substantial challenges to constructing and implementing genetic risk prediction models with high utility. In spite of these challenges, concerted progress has continued in this area with an ongoing accumulation of studies that identify disease predisposing genotypes. Several statistical approaches with the aim of predicting disease have been published. Here we summarize the current state of disease susceptibility mapping and pharmacogenetics efforts for risk prediction, describe methods used to construct and evaluate genetic-based predictive models, and discuss applications. PMID:24917882

  15. Analysis of temporal transcription expression profiles reveal links between protein function and developmental stages of Drosophila melanogaster.

    PubMed

    Wan, Cen; Lees, Jonathan G; Minneci, Federico; Orengo, Christine A; Jones, David T

    2017-10-01

    Accurate gene or protein function prediction is a key challenge in the post-genome era. Most current methods perform well on molecular function prediction, but struggle to provide useful annotations relating to biological process functions due to the limited power of sequence-based features in that functional domain. In this work, we systematically evaluate the predictive power of temporal transcription expression profiles for protein function prediction in Drosophila melanogaster. Our results show significantly better performance on predicting protein function when transcription expression profile-based features are integrated with sequence-derived features, compared with the sequence-derived features alone. We also observe that the combination of expression-based and sequence-based features leads to further improvement of accuracy on predicting all three domains of gene function. Based on the optimal feature combinations, we then propose a novel multi-classifier-based function prediction method for Drosophila melanogaster proteins, FFPred-fly+. Interpreting our machine learning models also allows us to identify some of the underlying links between biological processes and developmental stages of Drosophila melanogaster.

  16. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

Traditional models of evaporation-process parameters suffer from large prediction errors because the parameters are continuous and cumulative. On this basis, an adaptive particle swarm process neural network forecasting method is proposed for the parameters, with an autoregressive moving average (ARMA) error-correction procedure added to compensate the neural network's predictions and improve prediction accuracy. Validation against production data from the evaporation process of an alumina plant shows that, compared with the traditional model, the new model's prediction accuracy is greatly improved, and it can be used to predict the dynamics of the components of sodium aluminate solution during evaporation.
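The ARMA error-compensation idea, reduced to a single AR(1) correction of the base model's residuals, can be sketched as follows. The residual values are illustrative; the paper uses a full ARMA model on a process neural network forecast.

```python
def ar1_coeff(errors):
    """Least-squares AR(1) coefficient of a residual series (no intercept)."""
    num = sum(errors[i] * errors[i - 1] for i in range(1, len(errors)))
    den = sum(e * e for e in errors[:-1])
    return num / den

# Recent residuals (observed - base prediction); they decay but persist,
# which is the structure the error-compensation step exploits.
base_errors = [0.8, 0.64, 0.52, 0.41, 0.33]
phi = ar1_coeff(base_errors)

next_base = 10.0                               # base model's next forecast
corrected = next_base + phi * base_errors[-1]  # compensated prediction
```

Because the residuals are positively autocorrelated, the correction shifts the next forecast toward where past errors suggest the observation will fall.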

  17. Prediction of welding shrinkage deformation of bridge steel box girder based on wavelet neural network

    NASA Astrophysics Data System (ADS)

    Tao, Yulong; Miao, Yunshui; Han, Jiaqi; Yan, Feiyun

    2018-05-01

To address the low accuracy of traditional forecasting methods such as linear regression, this paper presents a wavelet neural network method for predicting the welding shrinkage deformation of bridge steel box girders. Compared with traditional forecasting methods, this scheme has better local characteristics and learning ability, which greatly improves deformation prediction. A case study shows that the wavelet neural network predicts girder deformation more accurately than a BP neural network and meets the practical demands of engineering design.

  18. The wind power prediction research based on mind evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Ling; Zhao, Xinjian; Ji, Tianming; Miao, Jingwen; Cui, Haina

    2018-04-01

When wind power is connected to the power grid, its fluctuating, intermittent, and random character affects the stability of the power system. Wind power prediction can guarantee power quality and reduce the operating cost of the power system. Several traditional wind power prediction methods have limitations, so a wind power prediction method based on the Mind Evolutionary Algorithm (MEA) is put forward and a prediction model is provided. The experimental results demonstrate that the MEA performs efficiently in terms of wind power prediction. The MEA method has broad prospects for engineering application.

  19. Application of two direct runoff prediction methods in Puerto Rico

    USGS Publications Warehouse

    Sepulveda, N.

    1997-01-01

    Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs compared to measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the resulting hydraulic routing flow equation from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak although the much faster unit hydrograph method also yields reasonable results.
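The first method's core operation, convolving the excess-rainfall hyetograph with the unit hydrograph, can be sketched with toy ordinates (the units and values below are illustrative, not from the Puerto Rico basins):

```python
def convolve_uh(excess, uh):
    """Direct-runoff ordinates Q[n] = sum over m of P[m] * U[n-m]."""
    n_out = len(excess) + len(uh) - 1
    q = [0.0] * n_out
    for m, p in enumerate(excess):
        for k, u in enumerate(uh):
            q[m + k] += p * u
    return q

excess = [0.5, 1.2, 0.3]        # excess rainfall depth per interval (cm)
uh = [0.0, 2.0, 5.0, 3.0, 1.0]  # unit-hydrograph ordinates (m^3/s per cm)
runoff = convolve_uh(excess, uh)
```

By linearity, the total runoff volume equals the total excess depth times the unit-hydrograph volume, which is the mass balance the Green-Ampt calibration in the abstract enforces.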

  20. To Communicate or Not to Communicate: Factors Predicting Passengers' Intentions to Ask a Driver to Stop Text Messaging While Driving.

    PubMed

    Wang, Xiao

    2016-01-01

Interpersonal communication is important in health campaigns. This research examined factors associated with passengers' intentions to ask a driver who is their friend to stop texting while driving. Based on survey data collected from 546 college students, results showed that students' attitudes toward communicating about no texting while driving were predicted by their utilitarian (i.e., safety), value-expressive, and ego-defensive motivations, in addition to being predicted by self-efficacy and norms. Additional results revealed that empathic concern was correlated with the value-expressive motivation and anticipated guilt. Anticipated guilt, together with attitudes, norms, and efficacy, predicted communication intentions. The results suggest that including attitude functions (motivations) in the reasoned action model can help propose and test theory-based predictions about interpersonal communication and health behaviors.

  1. Development of a percutaneous penetration predictive model by SR-FTIR.

    PubMed

    Jungman, E; Laugel, C; Rutledge, D N; Dumas, P; Baillet-Guffroy, A

    2013-01-30

This work focused on developing a new evaluation criterion for percutaneous penetration, complementing Log Pow and MW, based on high-spatial-resolution Fourier transform infrared (FTIR) microspectroscopy with a synchrotron source (SR-FTIR). Classic Franz cell experiments were run, and after 22 h the distribution of each molecule in skin was determined either by HPLC or by SR-FTIR. HPLC data served as the reference. The HPLC and SR-FTIR results were compared, and a new predictive criterion based on the SR-FTIR results, named S(index), was derived using a multi-block data analysis technique (ComDim). A predictive cartography of the distribution of molecules in the skin was built and compared to the OECD predictive cartography. The new S(index) criterion and the cartography based on the SR-FTIR/HPLC results provide relevant information for risk analysis regarding the prediction of percutaneous penetration and could be used to build a new mathematical model. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Evaluation of Machine Learning and Rules-Based Approaches for Predicting Antimicrobial Resistance Profiles in Gram-negative Bacilli from Whole Genome Sequence Data.

    PubMed

    Pesesky, Mitchell W; Hussain, Tahir; Wallace, Meghan; Patel, Sanket; Andleeb, Saadia; Burnham, Carey-Ann D; Dantas, Gautam

    2016-01-01

The time-to-result for culture-based microorganism recovery and phenotypic antimicrobial susceptibility testing necessitates initial use of empiric (frequently broad-spectrum) antimicrobial therapy. If the empiric therapy is not optimal, this can lead to adverse patient outcomes and contribute to increasing antibiotic resistance in pathogens. New, more rapid technologies are emerging to meet this need. Many of these are based on identifying resistance genes, rather than directly assaying resistance phenotypes, and thus require interpretation to translate the genotype into treatment recommendations. These interpretations, like other parts of clinical diagnostic workflows, are likely to be increasingly automated in the future. We set out to evaluate the two major approaches that could be amenable to automation pipelines: rules-based methods and machine learning methods. The rules-based algorithm makes predictions based upon current, curated knowledge of Enterobacteriaceae resistance genes. The machine-learning algorithm predicts resistance and susceptibility based on a model built from a training set of variably resistant isolates. As our test set, we used whole genome sequence data from 78 clinical Enterobacteriaceae isolates, previously identified to represent a variety of phenotypes, from fully-susceptible to pan-resistant strains for the antibiotics tested. We tested three antibiotic resistance determinant databases for their utility in identifying the complete resistome for each isolate. The predictions of the rules-based and machine learning algorithms for these isolates were compared to results of phenotype-based diagnostics. The rules-based and machine-learning predictions achieved 89.0% and 90.3% agreement, respectively, with standard-of-care phenotypic diagnostics across twelve antibiotic agents from six major antibiotic classes. Several sources of disagreement between the algorithms were identified. 
Novel variants of known resistance factors and incomplete genome assembly confounded the rules-based algorithm, resulting in predictions based on gene family, rather than on knowledge of the specific variant found. Low-frequency resistance caused errors in the machine-learning algorithm because those genes were not seen or seen infrequently in the test set. We also identified an example of variability in the phenotype-based results that led to disagreement with both genotype-based methods. Genotype-based antimicrobial susceptibility testing shows great promise as a diagnostic tool, and we outline specific research goals to further refine this methodology.
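A toy version of the rules-based genotype-to-phenotype step, with an agreement score against phenotypic results, might look like the following. The gene-to-drug rules here are illustrative stand-ins, not the curated Enterobacteriaceae knowledge base evaluated in the study.

```python
# Illustrative rules: detected gene -> resistance ("R") calls per antibiotic.
RULES = {
    "blaKPC": {"meropenem": "R", "ampicillin": "R"},
    "blaTEM-1": {"ampicillin": "R"},
}

def predict(genes, antibiotics):
    """Start from susceptible ("S") and apply every matching rule."""
    calls = {ab: "S" for ab in antibiotics}
    for g in genes:
        for ab, call in RULES.get(g, {}).items():
            if ab in calls:
                calls[ab] = call
    return calls

def agreement(pred, phenotype):
    """Fraction of antibiotics where genotype call matches phenotype."""
    hits = sum(pred[ab] == phenotype[ab] for ab in phenotype)
    return hits / len(phenotype)

pred = predict(["blaTEM-1"], ["ampicillin", "meropenem"])
score = agreement(pred, {"ampicillin": "R", "meropenem": "S"})
```

The failure modes the abstract describes map directly onto this sketch: a novel gene variant absent from `RULES` silently yields "S", which is why curation depth drives the rules-based accuracy.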

  3. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    NASA Astrophysics Data System (ADS)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  4. A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi

    2007-01-01

    A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on using two or fewer benchmark cases at or near the trajectory peak heating condition, yielded results to within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack, on convective and radiative heating rates and loads, pressure, and shear-stress levels.
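A hedged sketch of the scaling idea: one benchmark heating solution is scaled along the trajectory with a stagnation-point correlation. The classic q ~ sqrt(rho) * V^3 form is assumed here for illustration; the paper's actual non-dimensionalization parameter may differ, and all numbers are invented.

```python
def scaled_heating(q_bench, rho_bench, v_bench, rho, v):
    """Scale a benchmark heating rate to another trajectory point
    using a stagnation-point-style correlation q ~ sqrt(rho) * V**3."""
    ref = rho_bench ** 0.5 * v_bench ** 3
    return q_bench * (rho ** 0.5 * v ** 3) / ref

# Benchmark near peak heating (100 W/cm^2 at rho = 1e-4 kg/m^3, V = 7 km/s),
# then the same distribution scaled to a later, slower and denser point.
q_peak = scaled_heating(100.0, 1e-4, 7000.0, 1e-4, 7000.0)
q_late = scaled_heating(100.0, 1e-4, 7000.0, 4e-4, 3500.0)
```

Because the normalized heat-shield distribution is assumed insensitive to altitude and velocity, one such scalar scaling per time step regenerates the whole transient environment from a single benchmark solution.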

  5. SCPRED: Accurate prediction of protein structural class for sequences of twilight-zone similarity with predicting sequences

    PubMed Central

    Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke

    2008-01-01

Background Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds, and thus determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences falls in the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for the prediction. Results SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machine, logistic regression, and ensemble-of-classifiers predictors. Conclusion SCPRED can accurately find similar structures for sequences that share low identity with the sequences used for the prediction. 
The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that SCPRED's predictions can be successfully used as a post-processing filter to improve the performance of modern fold classification methods. PMID:18452616
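Features of the kind SCPRED extracts from a predicted secondary-structure string (H = helix, E = strand, C = coil) can be sketched as content fractions plus a segment-length measure. The exact eight features used by SCPRED differ; this is only an illustrative reduction.

```python
def ss_content(ss):
    """Content fractions of H/E/C plus a normalized longest-helix feature."""
    n = len(ss)
    feats = {s: ss.count(s) / n for s in "HEC"}
    # Longest helix run, normalized, as a crude arrangement feature:
    # treat E like C so only contiguous H segments survive the split.
    longest = max((len(run) for run in ss.replace("E", "C").split("C") if run),
                  default=0)
    feats["maxH"] = longest / n
    return feats

f = ss_content("HHHHCCEEEECCHHH")  # hypothetical PSI-PRED-style prediction
```

Such low-dimensional vectors are then what a support vector machine separates into the four SCOP structural classes (all-alpha, all-beta, alpha/beta, alpha+beta).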

  6. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    PubMed Central

    Li, Xiaoqing; Wang, Yu

    2018-01-01

Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. 
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing technology. PMID:29351254

  7. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    PubMed

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average (ARIMA) model, and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data are pre-processed using the Kalman filter to reduce noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step grows; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model achieves superior prediction accuracy because it captures partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for early warning in bridge health monitoring systems based on sensor data.
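
As a rough illustration of the denoise-then-predict idea in this record, here is a minimal sketch: a scalar Kalman filter followed by an AR(1) forecast. The noise parameters and the toy deformation series are hypothetical, and the GARCH variance-correction stage is omitted, so this is not the authors' Kalman-ARIMA-GARCH implementation, only the shape of the pipeline.

```python
def kalman_denoise(zs, q=1e-4, r=0.25):
    """Scalar Kalman filter: smooth a noisy deformation series.
    q is assumed process noise, r assumed measurement noise."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q                  # predict step: grow uncertainty
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

def ar1_forecast(xs, steps=5):
    """Fit AR(1) by least squares and forecast `steps` ahead
    (a stand-in for the full ARIMA stage)."""
    pairs = list(zip(xs[:-1], xs[1:]))
    num = sum(a * b for a, b in pairs)
    den = sum(a * a for a, _ in pairs) or 1.0
    phi = num / den
    preds, last = [], xs[-1]
    for _ in range(steps):
        last = phi * last
        preds.append(last)
    return preds

# Made-up noisy deformation readings: a slow drift plus alternating noise.
noisy = [0.1 * t + (0.3 if t % 2 else -0.3) for t in range(50)]
smooth = kalman_denoise(noisy)
print(ar1_forecast(smooth, steps=3))
```

Multi-step predictions come from iterating the one-step model, which is why the abstract's mean absolute error grows with the prediction step.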

  8. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    DTIC Science & Technology

    2012-09-01

Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due...future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of...uncertainty are misrepresenting the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in

  9. Solar Atmosphere to Earth's Surface: Long Lead Time dB/dt Predictions with the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.

    2017-12-01

The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known if this nascent capability can translate to skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first method is an empirical, observationally based system for estimating the plasma characteristics. The magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a cylindrical flux rope geometry locally around Earth's trajectory. The remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward modelling. The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfvén Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1 AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long lead time predictions of dB/dt are compared to model results that are driven by L1 solar wind observations. Both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes.
Metrics are calculated to examine how the simulated solar wind drivers impact forecast skill. These results illustrate the current state of long-lead-time forecasting and the promise of this technology for operational use.

  10. MLITemp: A computer program to predict the thermal effects associated with hypervelocity impact damage to space station MLI

    NASA Technical Reports Server (NTRS)

    Rule, W. K.; Giridharan, V.

    1991-01-01

A family of user-friendly, DOS PC based, Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft is described. Spacecraft wall temperatures and condensate formation are also predicted. The spacecraft wall configuration is assumed to consist of multilayered insulation (MLI) placed between a Whipple style bumper and the pressure wall. Impact damage predictions are based on data sets of experimental results obtained from simulating debris impacts on spacecraft using light gas guns on Earth. A module of the program facilitates the creation of the database of experimental results that is used by the damage prediction modules to predict damage to the bumper, the MLI, and the pressure wall. A finite difference technique is used to predict temperature distributions in the pressure wall, the MLI, and the bumper. Condensate layer thickness is predicted for the case where the pressure wall temperature drops below the dew point temperature of the spacecraft atmosphere.

  11. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

PM2.5 is the main influencing factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed from a qualitative point of view based on mathematical models and simulation. A comprehensive forecasting model (CFM) was then developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods with weights derived from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model; the results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability, offering a new prediction method for the air quality forecasting field. PMID:26110332

  12. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.

    PubMed

    Liu, Dong-jun; Li, Li

    2015-06-23

PM2.5 is the main influencing factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed from a qualitative point of view based on mathematical models and simulation. A comprehensive forecasting model (CFM) was then developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods with weights derived from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model; the results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability, offering a new prediction method for the air quality forecasting field.
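
The entropy-weighting step can be sketched as follows. This is a generic formulation of the Entropy Weighting Method as commonly used in combination forecasting, not the authors' code; the error values and forecasts are made up. Each model's relative-error distribution is scored by its normalized entropy, and less dispersed (lower-entropy) error series receive larger weights.

```python
import math

def entropy_weights(error_matrix):
    """Entropy Weighting Method (generic sketch). Each row is one
    model's absolute forecast errors over the same time points."""
    n = len(error_matrix[0])
    divergences = []
    for errs in error_matrix:
        total = sum(abs(e) for e in errs) or 1e-12
        p = [abs(e) / total for e in errs]
        h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1.0 - h)      # lower entropy -> more weight
    s = sum(divergences) or 1e-12
    return [d / s for d in divergences]

# Hypothetical daily errors for three models (ARIMA, ANN, ESM):
errors = [[0.2, 0.1, 0.4], [0.5, 0.9, 0.4], [0.3, 0.3, 0.2]]
weights = entropy_weights(errors)

# Hypothetical next-day PM2.5 forecasts from the three models:
forecasts = [30.0, 35.0, 32.0]
combined = sum(w * f for w, f in zip(weights, forecasts))
print(weights, combined)
```

Because the weights are non-negative and sum to one, the combined forecast is a convex combination of the single-model forecasts, which is why it "balances the deviations" of the individual methods.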

  13. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    NASA Astrophysics Data System (ADS)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The experimental results show significant differences in the behavior of the plant components, mainly in terms of energy use, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria for selecting an appropriate stochastic predictive controller.

  14. The influence of weather on migraine – are migraine attacks predictable?

    PubMed Central

    Hoffmann, Jan; Schirra, Tonio; Lo, Hendra; Neeb, Lars; Reuter, Uwe; Martus, Peter

    2015-01-01

Objective: The study aimed at elucidating a potential correlation between specific meteorological variables and the prevalence and intensity of migraine attacks, as well as exploring a potential individual predictability of a migraine attack based on meteorological variables and their changes. Methods: Attack prevalence and intensity of 100 migraineurs were correlated with atmospheric pressure, relative air humidity, and ambient temperature in 4-h intervals over 12 consecutive months. For each correlation, meteorological parameters at the time of the migraine attack as well as their variation within the preceding 24 h were analyzed. For migraineurs showing a positive correlation, logistic regression analysis was used to assess the predictability of a migraine attack based on meteorological information. Results: In a subgroup of migraineurs, a significant weather sensitivity could be observed. In contrast, pooled analysis of all patients did not reveal a significant association. An individual prediction of a migraine attack based on meteorological data was not possible, mainly as a result of the small prevalence of attacks. Interpretation: The results suggest that only a subgroup of migraineurs is sensitive to specific weather conditions. Our findings may provide an explanation as to why previous studies, which commonly rely on a pooled analysis, show inconclusive results. The lack of individual attack predictability indicates that the use of preventive measures based on meteorological conditions is not feasible. PMID:25642431

  15. Automated Clinical Assessment from Smart home-based Behavior Data

    PubMed Central

    Dawadi, Prafulla Nath; Cook, Diane Joyce; Schmitter-Edgecombe, Maureen

    2016-01-01

    Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behaviour in the home and predicting standard clinical assessment scores of the residents. To accomplish this goal, we propose a Clinical Assessment using Activity Behavior (CAAB) approach to model a smart home resident’s daily behavior and predict the corresponding standard clinical assessment scores. CAAB uses statistical features that describe characteristics of a resident’s daily activity performance to train machine learning algorithms that predict the clinical assessment scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years using prediction and classification-based experiments. In the prediction-based experiments, we obtain a statistically significant correlation (r = 0.72) between CAAB-predicted and clinician-provided cognitive assessment scores and a statistically significant correlation (r = 0.45) between CAAB-predicted and clinician-provided mobility scores. Similarly, for the classification-based experiments, we find CAAB has a classification accuracy of 72% while classifying cognitive assessment scores and 76% while classifying mobility scores. These prediction and classification results suggest that it is feasible to predict standard clinical scores using smart home sensor data and learning-based data analysis. PMID:26292348
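
The headline numbers in this record (r = 0.72 and r = 0.45) are Pearson correlations between predicted and clinician-provided scores. The metric itself is easy to state in code; this is a generic implementation with toy score values, not tied to the CAAB feature set.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy assessment scores (hypothetical values, for illustration only):
predicted = [24.0, 26.5, 21.0, 28.0]
clinician = [25.0, 27.0, 22.5, 27.5]
print(round(pearson_r(predicted, clinician), 3))
```

A value near 1 means the predicted scores track the clinician-provided scores closely; the paper's r = 0.72 for cognition indicates a strong but imperfect linear relationship.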

  16. Research on light rail electric load forecasting based on ARMA model

    NASA Astrophysics Data System (ADS)

    Huang, Yifan

    2018-04-01

The article compares a variety of time series models in light of the characteristics of power load forecasting. A load forecasting model based on the ARMA model is then established and applied to a light rail system. The prediction results show that the accuracy of the model is high.

  17. Prediction of physical workload in reduced gravity environments

    NASA Technical Reports Server (NTRS)

    Goldberg, Joseph H.

    1987-01-01

The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, are described. The methodology is based on a validated model for predicting energy expenditures in Earth-based industrial jobs and relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. The model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. Future model development requirements were also discussed.

  18. Predicting links based on knowledge dissemination in complex network

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Jia, Yifan

    2017-04-01

Link prediction is the task of mining the missing links in networks or predicting the next vertex pair to be connected by a link. A lot of link prediction methods were inspired by evolutionary processes of networks. In this paper, a new mechanism for the formation of complex networks called knowledge dissemination (KD) is proposed with the assumption of knowledge disseminating through the paths of a network. Accordingly, a new link prediction method, knowledge dissemination based link prediction (KDLP), is proposed to test KD. KDLP characterizes vertex similarity based on knowledge quantity (KQ), which measures the importance of a vertex through the H-index. Extensive numerical simulations on six real-world networks demonstrate that KDLP is a strong link prediction method which performs at a higher prediction accuracy than four well-known similarity measures including common neighbors, local path index, average commute time and matrix forest index. Furthermore, based on the common conclusion that an excellent link prediction method reveals a good evolving mechanism, the experiment results suggest that KD is a considerable network evolving mechanism for the formation of complex networks.
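
The knowledge-quantity idea rests on the H-index. A minimal sketch follows, assuming that a vertex's KQ is the H-index of its neighbors' degrees; that is one plausible reading of the abstract, not necessarily the paper's exact definition, and the toy network is made up.

```python
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    vals = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(vals, start=1):
        if v >= i:
            h = i
        else:
            break
    return h

# Toy undirected network as an adjacency dict:
graph = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}

def knowledge_quantity(g, v):
    """KQ sketch: H-index of the degrees of v's neighbors."""
    return h_index([len(g[u]) for u in g[v]])

print(knowledge_quantity(graph, 0))
```

A vertex similarity for link prediction could then weight common neighbors by their KQ, so that well-connected "knowledgeable" intermediaries count for more.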

  19. Conformal Prediction Based on K-Nearest Neighbors for Discrimination of Ginsengs by a Home-Made Electronic Nose

    PubMed Central

    Sun, Xiyang; Miao, Jiacheng; Wang, You; Luo, Zhiyuan; Li, Guang

    2017-01-01

An estimate of the reliability of predictions in electronic nose applications is essential, but has not received enough attention. In this work, an algorithm framework called conformal prediction is introduced for discriminating different kinds of ginsengs with a home-made electronic nose instrument. A nonconformity measure based on k-nearest neighbors (KNN) is implemented as the underlying algorithm of conformal prediction. In offline mode, the conformal predictor achieves a classification rate of 84.44% based on 1NN and 80.63% based on 3NN, which is better than that of simple KNN. In addition, it provides an estimate of reliability for each prediction. In online mode, the validity of predictions is guaranteed, meaning that the error rate of region predictions never exceeds the significance level set by the user. The potential of this framework for detecting borderline examples and outliers in E-nose applications is also investigated. The results show that conformal prediction is a promising framework for enabling electronic noses to make predictions with reliability and validity. PMID:28805721
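
A transductive conformal predictor with a 1-NN nonconformity measure can be sketched in a few lines. This is a generic toy version on 1-D features with made-up sensor values, not the home-made E-nose pipeline; the p-value logic is the standard conformal one.

```python
def nonconformity(i, X, y):
    """1-NN score: distance to the nearest same-label point divided by
    distance to the nearest different-label point (larger = stranger)."""
    same = [abs(X[i] - X[j]) for j in range(len(X))
            if j != i and y[j] == y[i]]
    diff = [abs(X[i] - X[j]) for j in range(len(X)) if y[j] != y[i]]
    return (min(same) if same else float("inf")) / (min(diff) or 1e-12)

def p_value(X, y, x_new, label):
    """Fraction of examples at least as nonconforming as the new one,
    after tentatively assigning it `label`."""
    Xa, ya = X + [x_new], y + [label]
    scores = [nonconformity(i, Xa, ya) for i in range(len(Xa))]
    return sum(s >= scores[-1] for s in scores) / len(scores)

# Toy 1-D sensor feature for two ginseng kinds:
X = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
y = ["A", "A", "A", "B", "B", "B"]

# Region prediction at significance level 0.2: keep labels with p > 0.2.
region = [lab for lab in ("A", "B") if p_value(X, y, 1.1, lab) > 0.2]
print(region)
```

The validity guarantee quoted in the abstract is exactly this region-prediction property: over exchangeable data, the true label falls outside the region at most a fraction epsilon of the time.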

  20. Plausibility and parameter sensitivity of micro-finite element-based joint load prediction at the proximal femur.

    PubMed

    Synek, Alexander; Pahr, Dieter H

    2018-06-01

A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). The parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.

  1. Ternary isocratic mobile phase optimization utilizing resolution Design Space based on retention time and peak width modeling.

    PubMed

    Kawabe, Takefumi; Tomitsuka, Toshiaki; Kajiro, Toshi; Kishi, Naoyuki; Toyo'oka, Toshimasa

    2013-01-18

An optimization procedure for ternary isocratic mobile phase composition in HPLC methods, using a statistical prediction model and a visualization technique, is described. In this report, two prediction models were first evaluated to obtain reliable prediction results. The retention time prediction model was constructed by modifying established retention models for ternary solvent strength changes. An excellent correlation between observed and predicted retention times was obtained for various kinds of pharmaceutical compounds by multiple regression modeling of the solvent strength parameters. The prediction model for peak width at half height employed polynomial fitting of the retention time, because a linear relationship between peak width at half height and retention time was not obtained even after accounting for the contribution of the extra-column effect based on a moment method. Accurate predictions were obtained with this model, with correlation coefficients between observed and predicted peak widths at half height mostly above 0.99. A procedure to visualize a resolution Design Space was then developed as the second challenge. An artificial neural network was used to link the ternary solvent strength parameters directly to the predicted resolution, determined from the accurate predictions of retention time and peak width at half height, and to visualize appropriate ternary mobile phase compositions as the region with resolution above 1.5 on the contour profile. Using mixtures of similar pharmaceutical compounds in case studies, we verified that the procedure can find the optimal range of conditions. Observed chromatographic results under the optimal conditions mostly matched the predictions, and the average difference between observed and predicted resolution was approximately 0.3. This means that sufficient prediction accuracy could be achieved by the proposed procedure. Consequently, a procedure to search the optimal range of ternary solvent strength for achieving an appropriate separation is provided, using the resolution Design Space based on accurate prediction. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Predictive control strategies for wind turbine system based on permanent magnet synchronous generator.

    PubMed

    Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba

    2016-05-01

In this paper, Model Predictive Control (MPC) and Dead-beat predictive control strategies are proposed for the control of a PMSG-based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It selects the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameter variations. The Dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero steady-state error. The optimum voltage vector is then applied through the Space Vector Modulation (SVM) technique. The main advantages of the Dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of the back-to-back converter in a PMSG-based wind turbine system. Simulation results (under the Matlab-Simulink environment) and experimental results (on a developed prototyping platform) are presented in order to show the performance of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
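
For a single first-order discrete plant, the dead-beat idea reduces to one line: pick the input that makes the model's one-step prediction hit the reference exactly. The scalar model and its coefficients below are assumed for illustration; they are not the back-to-back converter dynamics from the paper.

```python
# Assumed discrete plant: x[k+1] = a*x[k] + b*u[k]  (toy coefficients)
a, b = 0.9, 0.5

def deadbeat(x, r):
    """Dead-beat law: choose u so the model predicts x[k+1] == r,
    i.e. zero steady-state error in a single step."""
    return (r - a * x) / b

x, r = 0.0, 1.0
for _ in range(3):
    x = a * x + b * deadbeat(x, r)
print(x)  # the state reaches the reference after one step and stays there
```

In contrast, the MPC variant would evaluate a cost function over the finite set of converter voltage vectors and apply the minimizer, trading the fixed switching frequency of dead-beat + SVM for direct cost shaping.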

  3. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.

  4. Biological Networks for Predicting Chemical Hepatocarcinogenicity Using Gene Expression Data from Treated Mice and Relevance across Human and Rat Species

    PubMed Central

    Thomas, Reuben; Thomas, Russell S.; Auerbach, Scott S.; Portier, Christopher J.

    2013-01-01

Background: Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Objectives: To develop a molecular pathway-based prediction model of long-term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Methods: Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Results: Using 5-fold cross validation, the developed prediction model had reasonable predictive performance, with the area under the receiver operating characteristic curve (AUC) equal to 0.66. The developed prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both species (AUC of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Conclusions: Pathway-based prediction models estimated from subchronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species. PMID:23737943
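
The AUC figures quoted above are rank statistics. A generic implementation via the Mann-Whitney identity is shown below, with toy labels and scores unrelated to the paper's data.

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive is scored above a randomly chosen negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positives, two negatives, one ranking error.
print(auc([1, 1, 0, 0], [0.9, 0.3, 0.35, 0.2]))
```

An AUC of 0.66, as in the mouse cross-validation, therefore means a randomly chosen carcinogen outranks a randomly chosen non-carcinogen about two times in three.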

  5. Integrated detection of fractures and caves in carbonate fractured-vuggy reservoirs based on seismic data and well data

    NASA Astrophysics Data System (ADS)

    Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao

    2018-04-01

Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fractures, meso-scale fractures, macro-scale fractures and caves. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuthal anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant algorithms are used to predict the distribution of macro-scale fractures. We also constrain these results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in the Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach explains the spatial distribution of carbonate reservoirs well. It can solve the problem of non-uniqueness and improve fracture prediction accuracy.

  6. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    NASA Astrophysics Data System (ADS)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), both considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances alone.

  7. Estimation of relative effectiveness of phylogenetic programs by machine learning.

    PubMed

    Krivozubov, Mikhail; Goebels, Florian; Spirin, Sergei

    2014-04-01

    Reconstruction of the phylogeny of a protein family from a sequence alignment can produce results of varying quality. Our goal is to predict the quality of phylogeny reconstruction based on features that can be extracted from the input alignment. We used the Fitch-Margoliash (FM) method of phylogeny reconstruction and a random forest as the predictor. For training and testing the predictor, alignments of orthologous series (OS) were used, for which the result of phylogeny reconstruction can be evaluated by comparison with the trees of the corresponding organisms. Our results show that the quality of phylogeny reconstruction can be predicted with more than 80% precision. We also tried to predict which phylogeny reconstruction method, FM or UPGMA, is better for a particular alignment. With the feature set used, among alignments for which the obtained predictor predicts a better performance of UPGMA, 56% really do give a better result with UPGMA. Taking into account that UPGMA performs better for only 34% of alignments in our testing set, this result demonstrates that it is possible in principle to predict the better phylogeny reconstruction method from features of a sequence alignment.

  8. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error introduced by a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A corresponding state-of-charge calculation approach is also developed. A multi-constraint power capability prediction is then developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by simulated terminal voltage results at different temperatures, and the effectiveness of the designed observer is demonstrated under various test conditions. Additionally, power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
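
    The core of the variable-capacitance idea can be sketched as follows: capacitance is interpolated piecewise-linearly in voltage, and state of charge follows from integrating C(v) rather than assuming Q = C·V. The breakpoints and capacitance values below are made up for illustration; they are not the paper's identified parameters.

    ```python
    # Hypothetical piecewise-linear capacitance curve: (voltage V, capacitance F).
    BREAKPOINTS = [(0.0, 100.0), (1.0, 120.0), (2.0, 150.0), (2.7, 160.0)]

    def capacitance(v):
        # Linear interpolation between breakpoints; clamp at the ends.
        if v <= BREAKPOINTS[0][0]:
            return BREAKPOINTS[0][1]
        for (v0, c0), (v1, c1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
            if v <= v1:
                return c0 + (c1 - c0) * (v - v0) / (v1 - v0)
        return BREAKPOINTS[-1][1]

    def stored_charge(v, n=1000):
        # Q(v) = integral of C(u) du from 0 to v (trapezoid rule, n panels).
        h = v / n
        total = 0.5 * (capacitance(0.0) + capacitance(v))
        total += sum(capacitance(k * h) for k in range(1, n))
        return total * h

    def soc(v, v_max=2.7):
        # State of charge as the fraction of the maximum storable charge.
        return stored_charge(v) / stored_charge(v_max)

    print(round(soc(1.35), 3))
    ```

    With a constant-capacitance model the same SOC would be simply v/v_max; the integral form is what lets the model track the voltage-dependent behavior the abstract describes.
    
    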

  9. Multi-agent cooperation rescue algorithm based on influence degree and state prediction

    NASA Astrophysics Data System (ADS)

    Zheng, Yanbin; Ma, Guangfu; Wang, Linlin; Xi, Pengxue

    2018-04-01

    Aiming at multi-agent cooperative rescue in disasters, a multi-agent cooperative rescue algorithm based on influence degree and state prediction is proposed. First, based on the influence of information in the scene on the collaborative task, an influence degree function is used to filter the information. Second, the selected information is used to predict the state of the system and agent behavior. Finally, according to the prediction results, the cooperative behavior of the agents is guided, improving the efficiency of individual collaboration. Simulation results show that this algorithm can effectively solve the multi-agent cooperative rescue problem and ensure efficient completion of the task.

  10. Riometer based Neural Network Prediction of Kp

    NASA Astrophysics Data System (ADS)

    Arnason, K. M.; Spanswick, E.; Chaddock, D.; Tabrizi, A. F.; Behjat, L.

    2017-12-01

    The Canadian Geospace Observatory Riometer Array is a network of 11 wide-beam riometers deployed across Central and Northern Canada. The geographic coverage of the network affords a near continent-scale view of high-energy (>30 keV) electron precipitation at a very coarse spatial resolution. In this paper we present the first results from a neural network based analysis of riometer data. Trained on decades of riometer data, the neural network is tuned to predict a simple index of global geomagnetic activity (Kp) based solely on the information provided by high-energy electron precipitation over Canada. We present results from various training configurations and discuss the applicability of this technique for short-term prediction of geomagnetic activity.

  11. A general framework for multivariate multi-index drought prediction based on Multivariate Ensemble Streamflow Prediction (MESP)

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.

    2016-08-01

    Drought is among the costliest natural hazards worldwide, and extreme drought events in recent years have caused huge losses to various sectors. Drought prediction is therefore critically important for providing early warning information to aid decision making. Due to the complicated nature of drought, it has been recognized that a univariate drought indicator may not be sufficient for drought characterization, and hence multivariate drought indices have been developed for drought monitoring. Alongside the substantial effort in drought monitoring with multivariate drought indices, it is equally important to develop drought prediction methods based on multivariate drought indices that integrate drought information from various sources. This study proposes a general framework for multivariate multi-index drought prediction that is capable of integrating complementary prediction skills from multiple drought indices. Multivariate Ensemble Streamflow Prediction (MESP) is employed to sample from historical records to obtain statistical predictions of multiple variables, which are then used as inputs to achieve multivariate prediction. The framework is illustrated with a linearly combined drought index (LDI), a commonly used multivariate drought index, based on climate division data in California and New York in the United States, which have different seasonality of precipitation. The predictive skill of LDI (represented by persistence) is assessed by comparison with the univariate drought index; results show that the LDI prediction skill is less affected by seasonality than the meteorological drought prediction based on SPI. Prediction results from the case study show that the proposed multivariate drought prediction outperforms the persistence prediction, implying a satisfactory performance of multivariate drought prediction.
The proposed method would be useful for drought prediction to integrate drought information from various sources for early drought warning.
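
    The MESP sampling step can be sketched minimally: historical traces of the relevant variables are treated as equally likely ensemble members for the coming season, and each member is mapped through a linearly combined drought index (LDI). The anomaly data, weights, and ensemble size below are invented for illustration.

    ```python
    import random

    # year -> (precip anomaly, streamflow anomaly, soil-moisture anomaly);
    # hypothetical standardized values standing in for historical records.
    history = {
        2001: (-1.2, -0.8, -0.5), 2002: (0.3, 0.1, 0.4),
        2003: (-0.4, -0.6, -0.2), 2004: (1.1, 0.9, 0.7),
        2005: (-0.9, -1.1, -0.8),
    }

    def ldi(p, q, s, w=(0.4, 0.4, 0.2)):
        # Linearly combined drought index: weighted sum of standardized inputs.
        return w[0] * p + w[1] * q + w[2] * s

    def ensemble_forecast(history, n_members=5, seed=0):
        # Ensemble-streamflow-style prediction: each member is one
        # resampled historical trace, giving a distribution of LDI values.
        rng = random.Random(seed)
        members = [rng.choice(list(history.values())) for _ in range(n_members)]
        return [ldi(*m) for m in members]

    forecast = ensemble_forecast(history)
    print(sorted(forecast))  # ensemble spread of the predicted LDI
    ```

    The spread of the ensemble is what provides the probabilistic early-warning information; in the paper's framework the resampling respects seasonality rather than drawing uniformly as in this sketch.
    
    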

  12. A class-based link prediction using Distance Dependent Chinese Restaurant Process

    NASA Astrophysics Data System (ADS)

    Andalib, Azam; Babamir, Seyed Morteza

    2016-08-01

    One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network, such as shortest paths and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
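
    The DDCRP prior at the heart of such a model is simple to state: node i "sits with" node j with probability proportional to a decay function of their distance, and with itself with probability proportional to a concentration parameter alpha. The exponential decay and alpha value below are illustrative choices, not the paper's.

    ```python
    import math

    def ddcrp_link_probs(distances_from_i, alpha=1.0, decay=lambda d: math.exp(-d)):
        # distances_from_i[j] = e.g. shortest-path distance from node i to node j.
        # Closer nodes receive higher prior probability of a link assignment.
        weights = {j: decay(d) for j, d in distances_from_i.items()}
        weights["self"] = alpha  # probability mass for a self-link
        total = sum(weights.values())
        return {j: w / total for j, w in weights.items()}

    probs = ddcrp_link_probs({"a": 1, "b": 2, "c": 4})
    print(probs)
    ```

    A Gibbs sampler like the one the paper proposes would repeatedly resample these assignments conditioned on the observed edges; the prior above is the distance-dependent ingredient that injects topology into the posterior.
    
    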

  13. An auxiliary optimization method for complex public transit route network based on link prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by missing (new) link prediction and spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route networks (PTRN) based on link prediction. First, link prediction as applied to PTRN is described, and, based on a review of previous studies, a summary set of indices and their algorithms is collected for the link prediction experiment. Second, by analyzing the topological properties of Jinan’s PTRN, established by the Space R method, we find that it is a typical small-world network with a relatively large average clustering coefficient, which indicates that structural-similarity-based link prediction will perform well in this network. Then, based on the link prediction experiment over the summary index set, the three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. The link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction; this conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on missing (new) link prediction and spurious existing link identification, we propose optimization schemes that can be used not only to optimize the current PTRN but also to evaluate PTRN planning.
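
    Structural-similarity indices of the kind collected in such experiments are easy to illustrate. Below are two classics, common neighbours and resource allocation, scored over a toy graph standing in for a transit-route network (the graph and the choice of these two indices are illustrative, not from the paper).

    ```python
    # Adjacency sets of a small undirected toy network.
    graph = {
        "A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
        "D": {"A", "C", "E"}, "E": {"D"},
    }

    def common_neighbours(g, x, y):
        return len(g[x] & g[y])

    def resource_allocation(g, x, y):
        # Sum of 1/degree over shared neighbours: low-degree shared
        # neighbours contribute more ("rare resources").
        return sum(1.0 / len(g[z]) for z in g[x] & g[y])

    # Score all non-adjacent pairs; the highest-scoring pair is the most
    # plausible missing (new) link.
    pairs = [(x, y) for x in graph for y in graph if x < y and y not in graph[x]]
    ranked = sorted(pairs, key=lambda p: resource_allocation(graph, *p), reverse=True)
    print(ranked[0])
    ```

    The same scores, computed over existing edges instead of absent ones, flag low-scoring edges as candidate spurious links, which is the second half of the auxiliary optimization idea.
    
    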

  14. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships

    PubMed Central

    2010-01-01

    Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245

  15. Fatigue criterion to system design, life and reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.

    1985-01-01

    A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach combines the computed lives of the elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing life and component or structural renewal rates to be predicted with reasonable statistical certainty.
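
    The Lundberg-Palmgren/Weibull combination of elemental lives into a system life can be sketched in one formula: for elements with 90%-survival lives L_i and Weibull slope e, the system life satisfies 1/L_sys^e = Σ 1/L_i^e (strict series survival). The element lives and slope below are hypothetical values for illustration.

    ```python
    def system_life(element_lives, weibull_slope=1.5):
        # Strict series system: the system survives only if every stressed
        # element survives, so survival probabilities multiply, giving
        # 1/L_sys**e = sum(1/L_i**e) for a common Weibull slope e.
        return sum(L ** -weibull_slope for L in element_lives) ** (-1.0 / weibull_slope)

    lives = [5000.0, 8000.0, 12000.0]  # element L10 lives in hours (hypothetical)
    L_sys = system_life(lives)
    print(round(L_sys, 1))  # system life is shorter than the weakest element's
    ```

    The key qualitative consequence, visible in the output, is that adding stressed volume always shortens predicted system life, which is what drives the design trade-offs the methodology supports.
    
    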

  16. Finite element based model predictive control for active vibration suppression of a one-link flexible manipulator.

    PubMed

    Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan

    2014-09-01

    This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators. In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with using a standard predictive control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Prediction model of dissolved oxygen in ponds based on ELM neural network

    NASA Astrophysics Data System (ADS)

    Li, Xinfei; Ai, Jiaoyan; Lin, Chunhuan; Guan, Haibin

    2018-02-01

    Dissolved oxygen in ponds is affected by many factors, and its distribution is unbalanced. In this paper, in order to improve the imbalance of the dissolved oxygen distribution more effectively, a dissolved oxygen prediction model based on the Extreme Learning Machine (ELM) algorithm is established, building on the method of improving the dissolved oxygen distribution by artificial push flow. Lake Jing of Guangxi University was selected as the experimental area. Using the model to predict the dissolved oxygen concentration for pumps at different voltages, the results show that the ELM prediction accuracy is higher than that of the BP algorithm, with mean square error MSE_ELM = 0.0394 and correlation coefficient R_ELM = 0.9823. The prediction results for the 24 V pump push flow show that the discrete prediction curve approximates the measured values well. The model can provide a basis for decisions on artificially improving the dissolved oxygen distribution.

  18. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode the FL and NN parameters as floating-point numbers to shorten the string length. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.

  19. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    PubMed Central

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode the FL and NN parameters as floating-point numbers to shorten the string length. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism. PMID:23864830

  20. Developing symptom-based predictive models of endometriosis as a clinical screening tool: results from a multicenter study

    PubMed Central

    Nnoaham, Kelechi E.; Hummelshoj, Lone; Kennedy, Stephen H.; Jenkinson, Crispin; Zondervan, Krina T.

    2012-01-01

    Objective To generate and validate symptom-based models to predict endometriosis among symptomatic women prior to undergoing their first laparoscopy. Design Prospective, observational, two-phase study, in which women completed a 25-item questionnaire prior to surgery. Setting Nineteen hospitals in 13 countries. Patient(s) Symptomatic women (n = 1,396) scheduled for laparoscopy without a previous surgical diagnosis of endometriosis. Intervention(s) None. Main Outcome Measure(s) Sensitivity and specificity of endometriosis diagnosis predicted by symptoms and patient characteristics from optimal models developed using multiple logistic regression analyses in one data set (phase I), and independently validated in a second data set (phase II) by receiver operating characteristic (ROC) curve analysis. Result(s) Three hundred sixty (46.7%) women in phase I and 364 (58.2%) in phase II were diagnosed with endometriosis at laparoscopy. Menstrual dyschezia (pain on opening bowels) and a history of benign ovarian cysts most strongly predicted both any and stage III and IV endometriosis in both phases. Prediction of any-stage endometriosis, although improved by ultrasound scan evidence of cyst/nodules, was relatively poor (area under the curve [AUC] = 68.3). Stage III and IV disease was predicted with good accuracy (AUC = 84.9, sensitivity of 82.3% and specificity 75.8% at an optimal cut-off of 0.24). Conclusion(s) Our symptom-based models predict any-stage endometriosis relatively poorly and stage III and IV disease with good accuracy. Predictive tools based on such models could help to prioritize women for surgical investigation in clinical practice and thus contribute to reducing time to diagnosis. We invite other researchers to validate the key models in additional populations. PMID:22657249
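
    The AUC figures reported above have a direct probabilistic reading: the AUC is the probability that a randomly chosen case scores higher than a randomly chosen control under the model. A minimal sketch of that computation, with made-up example scores rather than the study's data:

    ```python
    def auc(case_scores, control_scores):
        # Probability that a random case outranks a random control,
        # counting ties as half a win (equivalent to the ROC area).
        wins = sum((c > d) + 0.5 * (c == d)
                   for c in case_scores for d in control_scores)
        return wins / (len(case_scores) * len(control_scores))

    cases = [0.9, 0.8, 0.7, 0.6, 0.35]    # hypothetical model scores, disease present
    controls = [0.5, 0.4, 0.3, 0.3, 0.2]  # hypothetical model scores, disease absent

    print(auc(cases, controls))
    ```

    On this scale the study's 84.9 for stage III/IV disease corresponds to an AUC of 0.849; choosing the reported cut-off of 0.24 then trades sensitivity against specificity along the same curve.
    
    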

  1. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

    To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. First, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and the SOC at every prediction point. In addition, a discrete wavelet transform is employed to capture statistical information about the past dynamics of the input current, which is used to predict future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimates, and that the predicted RDT can help alleviate range anxiety.
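
    The final step of such a framework reduces to charge bookkeeping: given the estimated SOC, the usable capacity, and a predicted current profile, integrate the predicted current until the charge budget is exhausted. This back-of-the-envelope sketch omits the model, UKF, and wavelet stages entirely; all numbers are illustrative.

    ```python
    def remaining_dischargeable_time(soc, capacity_ah, predicted_current_a, dt_s=1.0):
        # Charge budget in ampere-seconds; walk the predicted current
        # profile (holding its last value) until the budget runs out.
        budget = soc * capacity_ah * 3600.0
        t, i = 0.0, 0
        while budget > 0:
            amps = predicted_current_a[min(i, len(predicted_current_a) - 1)]
            budget -= amps * dt_s
            t += dt_s
            i += 1
        return t

    # 50% SOC, 2 Ah cell, constant 1 A predicted load -> about one hour.
    print(remaining_dischargeable_time(0.5, 2.0, [1.0]))
    ```

    In the paper the accuracy of this step hinges on the UKF-estimated SOC and the wavelet-based current forecast; with a naive constant-current assumption the RDT degrades exactly as the real load deviates from it.
    
    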

  2. Density-driven transport of gas phase chemicals in unsaturated soils

    NASA Astrophysics Data System (ADS)

    Fen, Chiu-Shia; Sun, Yong-tai; Cheng, Yuen; Chen, Yuanchin; Yang, Whaiwan; Pan, Changtai

    2018-01-01

    Variations of gas phase density are responsible for advective and diffusive transport of organic vapors in unsaturated soils. Laboratory experiments were conducted to explore dense gas transport (sulfur hexafluoride, SF6) from sources of different density through a nitrogen-gas-filled dry soil column. Gas pressures and SF6 densities in the transient state were measured along the soil column for three transport configurations (horizontal, vertically upward, and vertically downward). These measurements and others reported in the literature were compared with simulation results obtained from two models based on different diffusion approaches: the dusty gas model (DGM) equations and a Fickian-type molar-fraction-based diffusion expression. The results show that the DGM-based and Fickian-based models predicted similar dense gas density profiles, which matched the measured data well for horizontal transport at low to high source densities, although the predicted pressure variations in the soil column were opposite to the measurements. The pressure evolutions predicted by both models were similar in trend to the measured ones for vertical transport. However, differences between the dense gas densities predicted by the DGM-based and Fickian-based models were discernible for vertically upward transport even at low source densities, and the DGM-based predictions matched the measured data better than the Fickian results did. For vertically downward transport, the dense gas densities predicted by both models did not differ greatly from our experimental measurements, but were substantially greater than the observations reported in the literature, especially at high source densities. Further research will be necessary to explore the factors affecting downward transport of dense gas in soil columns. 
    Using the measured data to compute the flux components of SF6 showed that the magnitudes of the diffusive flux component based on Fickian-type diffusion expressions in terms of molar concentration, molar fraction, and mass density fraction gradients were almost the same. However, they were more than 24% greater than the result computed with the mass fraction gradient, and more than twice the DGM-based result. As a consequence, the DGM-based total flux of SF6 was much smaller in magnitude than the Fickian result, not only for horizontal transport (diffusion-dominated) but also for vertical transport (advection and diffusion) of dense gas. In particular, the Fickian-based total flux was more than twice the magnitude of the DGM result for vertically upward transport of dense gas.

  3. Neural mechanisms of rhythm-based temporal prediction: Delta phase-locking reflects temporal predictability but not rhythmic entrainment.

    PubMed

    Breska, Assaf; Deouell, Leon Y

    2017-02-01

    Predicting the timing of upcoming events enables efficient resource allocation and action preparation. Rhythmic streams, such as music, speech, and biological motion, constitute a pervasive source for temporal predictions. Widely accepted entrainment theories postulate that rhythm-based predictions are mediated by synchronizing low-frequency neural oscillations to the rhythm, as indicated by increased phase concentration (PC) of low-frequency neural activity for rhythmic compared to random streams. However, we show here that PC enhancement in scalp recordings is not specific to rhythms but is observed to the same extent in less periodic streams if they enable memory-based prediction. This is inconsistent with the predictions of a computational entrainment model of stronger PC for rhythmic streams. Anticipatory change in alpha activity and facilitation of electroencephalogram (EEG) manifestations of response selection are also comparable between rhythm- and memory-based predictions. However, rhythmic sequences uniquely result in obligatory depression of preparation-related premotor brain activity when an on-beat event is omitted, even when it is strategically beneficial to maintain preparation, leading to larger behavioral costs for violation of prediction. Thus, while our findings undermine the validity of PC as a sign of rhythmic entrainment, they constitute the first electrophysiological dissociation, to our knowledge, between mechanisms of rhythmic predictions and of memory-based predictions: the former obligatorily lead to resonance-like preparation patterns (that are in line with entrainment), while the latter allow flexible resource allocation in time regardless of periodicity in the input. Taken together, they delineate the neural mechanisms of three distinct modes of preparation: continuous vigilance, interval-timing-based prediction and rhythm-based prediction.

  4. Neural mechanisms of rhythm-based temporal prediction: Delta phase-locking reflects temporal predictability but not rhythmic entrainment

    PubMed Central

    Deouell, Leon Y.

    2017-01-01

    Predicting the timing of upcoming events enables efficient resource allocation and action preparation. Rhythmic streams, such as music, speech, and biological motion, constitute a pervasive source for temporal predictions. Widely accepted entrainment theories postulate that rhythm-based predictions are mediated by synchronizing low-frequency neural oscillations to the rhythm, as indicated by increased phase concentration (PC) of low-frequency neural activity for rhythmic compared to random streams. However, we show here that PC enhancement in scalp recordings is not specific to rhythms but is observed to the same extent in less periodic streams if they enable memory-based prediction. This is inconsistent with the predictions of a computational entrainment model of stronger PC for rhythmic streams. Anticipatory change in alpha activity and facilitation of electroencephalogram (EEG) manifestations of response selection are also comparable between rhythm- and memory-based predictions. However, rhythmic sequences uniquely result in obligatory depression of preparation-related premotor brain activity when an on-beat event is omitted, even when it is strategically beneficial to maintain preparation, leading to larger behavioral costs for violation of prediction. Thus, while our findings undermine the validity of PC as a sign of rhythmic entrainment, they constitute the first electrophysiological dissociation, to our knowledge, between mechanisms of rhythmic predictions and of memory-based predictions: the former obligatorily lead to resonance-like preparation patterns (that are in line with entrainment), while the latter allow flexible resource allocation in time regardless of periodicity in the input. Taken together, they delineate the neural mechanisms of three distinct modes of preparation: continuous vigilance, interval-timing-based prediction and rhythm-based prediction. PMID:28187128

  5. Hierarchical prediction of industrial water demand based on refined Laspeyres decomposition analysis.

    PubMed

    Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang

    2017-12-01

    A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonic and exclusive trends in the output and technology hierarchies. Building on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM(1,1)). The comparison shows that the results of the two approaches were essentially identical, differing by less than 10%. Taking Tianjin, China, as a case study, and using data from 2003-2012, this study predicts that industrial water demand will increase continuously, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020, and 2025, respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
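
    The benchmark grey model, usually written GM(1,1), fits an exponential to the accumulated series and differences it back for forecasts. A minimal sketch, using invented demand figures rather than Tianjin's data:

    ```python
    import math

    def gm11_forecast(x0, steps=1):
        n = len(x0)
        x1 = [sum(x0[: k + 1]) for k in range(n)]             # accumulated series
        z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
        y = x0[1:]
        # Least squares for x0(k) = -a*z(k) + b (2x2 normal equations).
        szz = sum(v * v for v in z); sz = sum(z); sy = sum(y)
        szy = sum(v * w for v, w in zip(z, y)); m = n - 1
        det = szz * m - sz * sz
        a = -(m * szy - sz * sy) / det
        b = (szz * sy - sz * szy) / det
        def x1_hat(k):  # fitted accumulated value at 0-based index k
            return (x0[0] - b / a) * math.exp(-a * k) + b / a
        # Forecasts are first differences of the fitted accumulated series.
        return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

    demand = [100.0, 108.0, 117.0, 126.0, 137.0]  # hypothetical annual demand
    print([round(v, 1) for v in gm11_forecast(demand, steps=2)])
    ```

    Grey models need only a handful of observations, which is why they are a common comparison baseline for short water-demand series like the 2003-2012 record used here.
    
    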

  6. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    PubMed

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility to three methods for predicting protein interactions based on similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it neither improves these methods, nor the original mirrortree method, in predicting other types of interactions. That improvement comes at no cost in terms of applicability since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.
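
    The underlying mirrortree score is the correlation between the inter-ortholog distance matrices of the two families; restricting the alignment columns to predicted-accessible positions changes the matrices, not the score itself. A toy sketch with invented distances:

    ```python
    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def mirrortree_score(dist_a, dist_b):
        # Flatten the upper triangles of the two distance matrices
        # (same species order in both) and correlate them.
        n = len(dist_a)
        xs = [dist_a[i][j] for i in range(n) for j in range(i + 1, n)]
        ys = [dist_b[i][j] for i in range(n) for j in range(i + 1, n)]
        return pearson(xs, ys)

    # Hypothetical evolutionary distances between orthologs of families
    # A and B across four species.
    fam_a = [[0, 1, 4, 5], [1, 0, 3, 4], [4, 3, 0, 2], [5, 4, 2, 0]]
    fam_b = [[0, 2, 7, 9], [2, 0, 6, 8], [7, 6, 0, 3], [9, 8, 3, 0]]

    print(round(mirrortree_score(fam_a, fam_b), 3))
    ```

    A high score is taken as evidence of co-evolution and hence of interaction; the paper's finding is that computing the distances from surface-exposed positions sharpens this signal for direct physical interactions.
    
    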

  7. Comparison of Prediction Model for Cardiovascular Autonomic Dysfunction Using Artificial Neural Network and Logistic Regression Analysis

    PubMed Central

    Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo

    2013-01-01

    Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) models for predicting cardiovascular autonomic (CA) dysfunction in the general population, and to compare the two prediction models. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. The performances of the prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating characteristic curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found when comparing the sensitivity, specificity, and predictive values of the two models. Conclusion Prediction models for CA dysfunction were developed using ANN and LR; both are effective tools for developing prediction models based on our dataset. PMID:23940593

  8. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results depends heavily on the accuracy of the statistical parameters involved in the classifiers, and these cannot be reliably estimated from only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would be of great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920

  9. A micromechanics-based strength prediction methodology for notched metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1992-01-01

An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and post-fatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  10. A micromechanics-based strength prediction methodology for notched metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1993-01-01

An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  11. Consistent prediction of GO protein localization.

    PubMed

    Spetale, Flavio E; Arce, Debora; Krsticevic, Flavia; Bulacio, Pilar; Tapia, Elizabeth

    2018-05-17

The GO-Cellular Component (GO-CC) ontology provides a controlled vocabulary for the consistent description of the subcellular compartments or macromolecular complexes where proteins may act. Current machine learning-based methods used for the automated GO-CC annotation of proteins suffer from the inconsistency of individual GO-CC term predictions. Here, we present FGGA-CC+, a class of hierarchical graph-based classifiers for the consistent GO-CC annotation of protein coding genes at the subcellular compartment or macromolecular complex levels. Aiming to boost the accuracy of GO-CC predictions, we make use of the protein localization knowledge in the GO-Biological Process (GO-BP) annotations. As a result, FGGA-CC+ classifiers are built from annotation data in both the GO-CC and GO-BP ontologies. Due to their graph-based design, FGGA-CC+ classifiers are fully interpretable and their predictions amenable to expert analysis. Promising results on protein annotation data from five model organisms were obtained. Additionally, successful validation results in the annotation of a challenging subset of tandem duplicated genes in the tomato non-model organism were accomplished. Overall, these results suggest that FGGA-CC+ classifiers can indeed be useful for satisfying the huge demand for GO-CC annotation arising from ubiquitous high-throughput sequencing and proteomic projects.
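The consistency problem such hierarchical classifiers address can be illustrated with the true-path rule: a predicted GO term implies all of its ancestors. The toy hierarchy and the simple upward-closure repair below are illustrative only, not the FGGA-CC+ algorithm:

```python
def consistent(preds, parents):
    """Enforce the true-path rule: if a GO term is predicted, all of its
    ancestors must be predicted too.  `parents` maps term -> list of
    parent terms; returns the upward closure of the predicted set."""
    closed = set(preds)
    stack = list(preds)
    while stack:
        term = stack.pop()
        for parent in parents.get(term, []):
            if parent not in closed:
                closed.add(parent)
                stack.append(parent)
    return closed

# Hypothetical fragment of a cellular-component hierarchy.
parents = {
    "mitochondrial matrix": ["mitochondrion"],
    "mitochondrion": ["cytoplasm"],
    "cytoplasm": ["cell"],
}
raw = {"mitochondrial matrix"}      # raw per-term classifier output
closure = consistent(raw, parents)
```

A raw prediction of "mitochondrial matrix" without "mitochondrion" would be exactly the kind of inconsistency the record criticizes; the closure repairs it.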

  12. Application of a GIS-/remote sensing-based approach for predicting groundwater potential zones using a multi-criteria data mining methodology.

    PubMed

    Mogaji, Kehinde Anthony; Lim, Hwee San

    2017-07-01

This study integrates the application of Dempster-Shafer-driven evidential belief function (DS-EBF) methodology with remote sensing and geographic information system techniques to analyze surface and subsurface data sets for the spatial prediction of groundwater potential in Perak Province, Malaysia. The study used additional data obtained from the records of the groundwater yield rate of approximately 28 bore well locations. The processed surface and subsurface data produced sets of groundwater potential conditioning factors (GPCFs) from which multiple surface hydrologic and subsurface hydrogeologic parameter thematic maps were generated. The bore well location inventories were partitioned randomly into a ratio of 70% (19 wells) for model training to 30% (9 wells) for model testing. Applying the DS-EBF relationship model algorithms to the surface- and subsurface-based GPCF thematic maps and the bore well locations produced two groundwater potential prediction (GPP) maps based on surface hydrologic and subsurface hydrogeologic characteristics, which established that more than 60% of the study area falls within the moderate-high groundwater potential zones and less than 35% within the low potential zones. The estimated uncertainty values, within the range of 0 to 17% for the predicted potential zones, were quantified using the uncertainty algorithm of the model. The validation results of the GPP maps using the relative operating characteristic curve method yielded 80 and 68% success rates and 89 and 53% prediction rates for the subsurface hydrogeologic factor (SUHF)- and surface hydrologic factor (SHF)-based GPP maps, respectively. The study results revealed that the SUHF-based GPP map delineated groundwater potential zones more accurately than the SHF-based GPP map. However, significant information on the low degree of uncertainty of the predicted potential zones established the suitability of the two GPP maps for future development of groundwater resources in the area. The overall results proved the efficacy of the data mining model and the geospatial technology in groundwater potential mapping.
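The evidential core of the DS-EBF methodology is Dempster's rule of combination, which fuses belief masses from independent conditioning factors and quantifies the residual ignorance. A minimal sketch for a two-hypothesis frame; the mass values are hypothetical, not taken from the study:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for masses over singleton hypotheses plus the full
    frame 'theta' (ignorance).  Two distinct singletons are conflicting
    evidence; anything intersected with 'theta' keeps the other focal set."""
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == b:
                focal = a
            elif a == "theta":
                focal = b
            elif b == "theta":
                focal = a
            else:                      # disjoint singletons
                conflict += pa * pb
                continue
            combined[focal] = combined.get(focal, 0.0) + pa * pb
    k = 1.0 - conflict                 # normalization by non-conflicting mass
    return {h: v / k for h, v in combined.items()}

# Hypothetical belief masses from two groundwater conditioning factors.
m_lineament = {"potential": 0.6, "not_potential": 0.1, "theta": 0.3}
m_lithology = {"potential": 0.5, "not_potential": 0.2, "theta": 0.3}
fused = dempster_combine(m_lineament, m_lithology)
```

The fused mass left on `theta` plays the role of the per-zone uncertainty values (0 to 17%) reported in the record.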

  13. PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages

    PubMed Central

    Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi

    2017-01-01

    Direct and indirect functional links between proteins as well as their interactions as part of larger protein complexes or common signaling pathways may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrency model-based and normalization methods. Here we also introduce PrePhyloPro, a web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
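Phylogenetic profiling of the kind PrePhyloPro builds on scores a candidate linkage by the similarity of presence/absence patterns across genomes. A minimal sketch using Jaccard similarity on hypothetical binary profiles (the paper's own scoring model is more elaborate):

```python
def profile_similarity(p, q):
    """Jaccard similarity between two binary phylogenetic profiles:
    |genomes where both occur| / |genomes where either occurs|."""
    both = sum(1 for a, b in zip(p, q) if a and b)
    either = sum(1 for a, b in zip(p, q) if a or b)
    return both / either if either else 0.0

# Hypothetical presence(1)/absence(0) profiles across eight genomes.
prot_a = [1, 1, 0, 1, 0, 1, 1, 0]
prot_b = [1, 1, 0, 1, 0, 1, 0, 0]   # co-occurs with A almost everywhere
prot_c = [0, 0, 1, 0, 1, 0, 0, 1]   # complementary pattern
sim_ab = profile_similarity(prot_a, prot_b)
sim_ac = profile_similarity(prot_a, prot_c)
```

A high score (A vs. B) suggests a functional linkage; a complementary pattern (A vs. C) scores near zero.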

  14. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    PubMed

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline silicon. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. Ranked by how strongly external conditions influence the generated power output, from smallest to largest, the order is: multi-, mono-, and amor-crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.

  15. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    NASA Astrophysics Data System (ADS)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: first, reducing the dose distribution to a histogram results in the loss of spatial information, and second, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We used a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.

  16. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.
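The adaptive-weight idea behind such ensembles can be sketched as a weighted average of member predictions, with larger weights for members that validate well. In the paper the weights derive from GA-optimized cluster centers; the inverse-MSE weighting below is a simplified stand-in, and all numbers are hypothetical:

```python
def error_based_weights(member_outputs, target):
    """Weight each ensemble member by the inverse of its mean squared
    validation error (a simple stand-in for GA-optimized weights)."""
    weights = []
    for m in member_outputs:
        mse = sum((p - t) ** 2 for p, t in zip(m, target)) / len(target)
        weights.append(1.0 / (mse + 1e-12))
    return weights

def ensemble_predict(member_outputs, weights):
    """Combine member predictions with normalized weights."""
    total = sum(weights)
    return [sum(w * m[i] for w, m in zip(weights, member_outputs)) / total
            for i in range(len(member_outputs[0]))]

# Hypothetical carbon-flux predictions from three ensemble members.
target = [2.0, 3.0, 4.0]
members = [
    [2.1, 3.1, 3.9],   # close to target
    [1.9, 3.05, 4.1],  # also close
    [3.0, 4.0, 5.0],   # systematically biased
]
weights = error_based_weights(members, target)
combined = ensemble_predict(members, weights)
```

The biased member receives a small weight, so the combined prediction stays close to the target, which is the basic payoff of adaptive weighting.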

  17. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges

    PubMed Central

    Lee, Jaebeom; Lee, Young-Joo

    2018-01-01

Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing factors that influence vertical deflection, such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance. PMID:29747421
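The Gaussian process machinery described above (a predictive mean plus a 95% interval) can be sketched for one input dimension with a single RBF kernel. The measurements below are hypothetical and the hyperparameters are fixed rather than learned, unlike the multi-kernel model in the paper:

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel."""
    return var * math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-2):
    """GP predictive mean and 95% interval at x_star (zero-mean prior)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in xs]
    alpha = solve(K, ys)                            # K^{-1} y
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)                            # K^{-1} k_*
    var = rbf(x_star, x_star) - sum(k * w for k, w in zip(k_star, v))
    sd = math.sqrt(max(var, 0.0))
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical deflection measurements (mm) at monthly intervals.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.35, 0.6, 0.78, 0.9]
mean, (lo, hi) = gp_predict(xs, ys, 2.5)
```

The 95% interval `(lo, hi)` is the quantity the record says can support maintenance decision-making; in practice the hyperparameters would be fit by maximizing the marginal likelihood.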

  18. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.

    PubMed

    Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo

    2018-05-09

Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing factors that influence vertical deflection, such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.

  19. Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model

    NASA Astrophysics Data System (ADS)

    Li, Ziyu; Bi, Jun; Li, Zhiyin

    2017-12-01

Based on practical operating data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The model considers not only the non-stationarity and autocorrelation of the sequence, but also its daily periodicity. The prediction results can accurately describe the change trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and the optimization of the departure process. The results show that the model is applicable to short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and true values of passenger traffic flow is quite small, which indicates that the model has fairly good passenger traffic flow prediction ability.
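Two ingredients of the record above are easy to illustrate: the seasonal differencing that gives SARIMA its seasonal term, and the mean absolute percentage error behind the quoted 1%-3% figure. A sketch with hypothetical passenger counts (a real fit would estimate the full SARIMA orders, e.g. with a statistics package):

```python
def seasonal_diff(series, period):
    """The seasonal differencing step of SARIMA: y_t - y_{t-s} removes a
    repeating cycle of length `period` (here, a daily pattern)."""
    return [y - series[i] for i, y in enumerate(series[period:])]

def mape(actual, predicted):
    """Mean absolute percentage error, the usual '1% to 3%' metric."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual) * 100

# Hypothetical passenger counts with a strong period-4 cycle.
flow = [100, 180, 150, 90, 104, 183, 155, 93, 107, 188, 158, 96]
diffed = seasonal_diff(flow, 4)   # small, slowly drifting residuals remain
err = mape([104, 183, 155, 93], [101, 185, 150, 95])
```

After seasonal differencing the series is close to stationary, which is what lets the ARMA part of SARIMA model what remains.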

  20. The predictive value of the sacral base pressure test in detecting specific types of sacroiliac dysfunction

    PubMed Central

    Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris

    2007-01-01

Abstract Objective This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet occurring during the sacral base pressure test was measured using a digital inclinometer. Results There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of validity, the sacral base pressure test was useful in identifying positive cases of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only “slight” agreement with the diagnosis under the κ interpretation. Conclusions In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694

  1. The construction of life prediction models for the design of Stirling engine heater components

    NASA Technical Reports Server (NTRS)

    Petrovich, A.; Bright, A.; Cronin, M.; Arnold, S.

    1983-01-01

    The service life of Stirling-engine heater structures of Fe-based high-temperature alloys is predicted using a numerical model based on a linear-damage approach and published test data (engine test data for a Co-based alloy and tensile-test results for both the Co-based and the Fe-based alloys). The operating principle of the automotive Stirling engine is reviewed; the economic and technical factors affecting the choice of heater material are surveyed; the test results are summarized in tables and graphs; the engine environment and automotive duty cycle are characterized; and the modeling procedure is explained. It is found that the statistical scatter of the fatigue properties of the heater components needs to be reduced (by decreasing the porosity of the cast material or employing wrought material in fatigue-prone locations) before the accuracy of life predictions can be improved.

  2. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    PubMed

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

Conceptually and methodologically distinct models exist for assessing the quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from the integration of different theoretical and methodological paradigms.

  3. A hybrid predictive model for acoustic noise in urban areas based on time series analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine

    2017-06-01

The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and Artificial Neural Networks (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists in evaluating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, in training an ANN on the residuals. This hybrid model exhibits interesting features and results, with a significant variation related to the number of steps forward in the prediction. It will be shown that the best results, in terms of prediction, are achieved when predicting one step ahead in the future. A 7-day prediction can also be performed, with a slightly greater error but a larger prediction range than the single-day-ahead predictive model.
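The hybrid scheme can be sketched as: fit a trend/seasonal (TSA) component, compute residuals, and let a second model correct the TSA forecast using those residuals. In the sketch below the ANN stage is replaced by a trivial last-residual correction purely for illustration, and the noise levels are hypothetical:

```python
def seasonal_estimate(series, period):
    """TSA stage (sketch): estimate each observation by the mean of its
    seasonal slot, standing in for a full trend+seasonality decomposition."""
    slots = [[] for _ in range(period)]
    for i, y in enumerate(series):
        slots[i % period].append(y)
    means = [sum(s) / len(s) for s in slots]
    return [means[i % period] for i in range(len(series))]

def hybrid_forecast(series, period):
    """One-step-ahead hybrid forecast: seasonal-mean (TSA) component plus a
    residual correction.  Here the correction is simply the most recent
    residual; in the paper, an ANN trained on the residuals provides it."""
    n = len(series)
    tsa_fit = seasonal_estimate(series, period)
    residuals = [y - t for y, t in zip(series, tsa_fit)]
    slot_values = series[n % period::period]       # past values of the next slot
    tsa_next = sum(slot_values) / len(slot_values)
    return tsa_next + residuals[-1]

# Hypothetical noise levels (dBA) with a period-4 pattern.
noise = [62, 70, 75, 66, 63, 71, 76, 67]
pred = hybrid_forecast(noise, 4)
```

The residual model only has to learn what the TSA decomposition misses, which is why the mixture can beat either component alone at short horizons.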

  4. Aerodynamic heating environment definition/thermal protection system selection for the HL-20

    NASA Astrophysics Data System (ADS)

    Wurster, K. E.; Stone, H. W.

    1993-09-01

    Definition of the aerothermal environment is critical to any vehicle such as the HL-20 Personnel Launch System that operates within the hypersonic flight regime. Selection of an appropriate thermal protection system design is highly dependent on the accuracy of the heating-environment prediction. It is demonstrated that the entry environment determines the thermal protection system design for this vehicle. The methods used to predict the thermal environment for the HL-20 Personnel Launch System vehicle are described. Comparisons of the engineering solutions with computational fluid dynamic predictions, as well as wind-tunnel test results, show good agreement. The aeroheating predictions over several critical regions of the vehicle, including the stagnation areas of the nose and leading edges, windward centerline and wing surfaces, and leeward surfaces, are discussed. Results of predictions based on the engineering methods found within the MINIVER aerodynamic heating code are used in conjunction with the results of the extensive wind-tunnel tests on this configuration to define a flight thermal environment. Finally, the selection of the thermal protection system based on these predictions and current technology is described.

  5. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information

    PubMed Central

    Wang, Xiaohong; Wang, Lizhi

    2017-01-01

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930

  6. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    PubMed

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.

  7. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    NASA Astrophysics Data System (ADS)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
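The rule-creation step can be sketched with brute-force frequent-itemset mining plus confidence filtering; Apriori's candidate pruning is omitted for brevity, and the failure logs are hypothetical:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Enumerate all candidate itemsets and keep those whose support
    (fraction of transactions containing the itemset) meets the threshold."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    freq = {}
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            s = sum(1 for t in transactions if set(cand) <= t) / n
            if s >= min_support:
                freq[cand] = s
    return freq

def rules(freq, min_conf):
    """IF-THEN rules: confidence(A -> B) = support(A ∪ B) / support(A)."""
    out = []
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                if lhs in freq and sup / freq[lhs] >= min_conf:
                    rhs = tuple(i for i in itemset if i not in lhs)
                    out.append((lhs, rhs, sup / freq[lhs]))
    return out

# Hypothetical machine logs: each transaction is a set of observed
# conditions and the failure type that followed.
logs = [
    {"overheat", "vibration", "spindle_fail"},
    {"overheat", "spindle_fail"},
    {"overheat", "vibration", "spindle_fail"},
    {"vibration"},
    {"overheat", "spindle_fail"},
]
freq = frequent_itemsets(logs, min_support=0.4)
found = rules(freq, min_conf=0.9)
```

Each `(lhs, rhs, confidence)` triple corresponds to an IF-THEN rule of the kind the paper visualizes, e.g. IF overheat THEN spindle failure.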

  8. Remote Sensing of Supercooled Cloud Layers in Cold Climate Using Ground Based Integrated Sensors System and Comparison with Pilot Reports and model forecasts

    NASA Astrophysics Data System (ADS)

Boudala, Faisal; Wu, Di; Gultepe, Ismail; Anderson, Martha; Turcotte, Marie-France

    2017-04-01

In-flight aircraft icing is one of the major weather hazards to aviation. It occurs when an aircraft passes through a cloud layer containing supercooled drops (SD); SD freeze on contact with the airframe, which degrades the performance of the aircraft. Prediction of in-flight icing requires accurate prediction of SD sizes, liquid water content (LWC), and temperature. Current numerical weather prediction (NWP) models are not capable of accurately predicting SD sizes and the associated LWC. The aircraft icing environment is normally studied by flying research aircraft, which is quite expensive. Thus, developing a ground-based remote sensing system for detecting supercooled liquid clouds and characterizing their impact on the severity of aircraft icing is an important task for improving NWP-based predictions and their validation. In this respect, Environment and Climate Change Canada (ECCC), in cooperation with the Department of National Defence (DND), installed a number of specialized ground-based remote sensing platforms and present-weather sensors at Cold Lake, Alberta, including a multi-channel microwave radiometer (MWR), a K-band Micro Rain Radar (MRR), a ceilometer, a Parsivel disdrometer, and a Vaisala PWD22 present-weather sensor. In this study, a number of pilot reports confirming icing events and freezing precipitation that occurred at Cold Lake during the 2014-2016 winter periods, together with the associated observation data for the same period, are examined. The icing events are also examined using aircraft icing intensity estimated with an ice accumulation model, which is based on a cylindrical approximation of the airfoil and driven by the liquid water content, median volume diameter, and temperature predicted by the Canadian High Resolution Regional Deterministic Prediction System (HRDPS). The results related to vertical atmospheric profiling conditions, surface observations, and the HRDPS model predictions are given. Preliminary results suggest that observations of cloud SD regions from remote sensing and present-weather sensors can be used to describe the micro- and macrophysical characteristics of icing conditions. The model-based icing intensity prediction agreed reasonably well with the PIREPs and MWR observations.

  9. Next Day Building Load Predictions based on Limited Input Features Using an On-Line Laterally Primed Adaptive Resonance Theory Artificial Neural Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Christian Birk; Robinson, Matt; Yasaei, Yasser

    Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs, including component-based models and data-driven algorithms. This work implemented a previously untested algorithm for this application called the Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period where minimal historical data and a small number of input types were available. These results are significant, because common practice has often overlooked the implementation of an ANN: ANNs have often been perceived as too complex and as requiring large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner, meaning the training data were continuously updated over time. For this experiment, training began with a single day of data and grew to two months. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and the cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.

  10. Frequency, probability, and prediction: easy solutions to cognitive illusions?

    PubMed

    Griffin, D; Buehler, R

    1999-02-01

    Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base rate neglect in categorical prediction underline a general conclusion: frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability. Copyright 1999 Academic Press.

  11. Improving hot region prediction by parameter optimization of density clustering in PPI.

    PubMed

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm that combines density clustering with parameter selection and feature-based classification for hot region prediction. First, all residues are classified by an SVM to remove non-hot-spot residues; then density clustering with parameter selection is used to find hot regions. For the density clustering, this paper studies how to select the input parameters: density-based incremental clustering has two parameters, radius and density. We first fix the density and enumerate the radius to find the pair of parameters that yields the maximum number of clusters, and then fix the radius and enumerate the density to find another such pair. Experimental results show that the proposed method, using both pairs of parameters, provides better prediction performance than the alternative; comparing the two predictive results, fixing the radius and enumerating the density gives slightly higher prediction accuracy than fixing the density and enumerating the radius. Copyright © 2016. Published by Elsevier Inc.
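The two-pass parameter search described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the minimal DBSCAN-style `density_cluster` routine, the 2-D point data, and the candidate radius list are hypothetical stand-ins for the paper's density-based incremental clustering of PPI residues.

```python
def density_cluster(points, radius, density):
    """Minimal DBSCAN-style clustering: a point with at least `density`
    neighbours within `radius` seeds or extends a cluster."""
    def neighbours(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < density:
            labels[i] = -1          # noise (may be claimed by a cluster later)
            continue
        cluster += 1
        labels[i] = cluster
        frontier = list(nb)
        while frontier:
            j = frontier.pop()
            if labels[j] in (None, -1):
                labels[j] = cluster
                nbj = neighbours(j)
                if len(nbj) >= density:
                    frontier.extend(nbj)
    return labels, cluster

def best_radius(points, fixed_density, radii):
    """First pass of the paper's search: fix the density, enumerate the
    radius, and keep the radius that yields the most clusters."""
    return max(radii, key=lambda r: density_cluster(points, r, fixed_density)[1])
```

The second pass (fix radius, enumerate density) is symmetric, replacing the enumerated argument.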

  12. Probability-based collaborative filtering model for predicting gene-disease associations.

    PubMed

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Given the extensive gene-disease data verified by biological experiments, computational methods can be applied to perform accurate predictions with reduced time and expense. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data on humans and on other, nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop a modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned model. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches. The PCFM model can be leveraged for the prediction of disease genes, especially for new human genes or diseases with no known relationships.

  13. Gamma Prime Precipitate Evolution During Aging of a Model Nickel-Based Superalloy

    NASA Astrophysics Data System (ADS)

    Goodfellow, A. J.; Galindo-Nava, E. I.; Christofidou, K. A.; Jones, N. G.; Martin, T.; Bagot, P. A. J.; Boyer, C. D.; Hardy, M. C.; Stone, H. J.

    2018-03-01

    The microstructural stability of nickel-based superalloys is critical for maintaining alloy performance during service in gas turbine engines. In this study, the precipitate evolution in a model polycrystalline Ni-based superalloy during aging to 1000 hours has been studied via transmission electron microscopy, atom probe tomography, and neutron diffraction. Variations in phase composition and precipitate morphology, size, and volume fraction were observed during aging, while the constrained lattice misfit remained constant at approximately zero. The experimental composition of the γ matrix phase was consistent with thermodynamic equilibrium predictions, while significant differences were identified between the experimental and predicted results from the γ' phase. These results have implications for the evolution of mechanical properties in service and their prediction using modeling methods.

  14. Exploring Mouse Protein Function via Multiple Approaches.

    PubMed

    Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong

    2016-01-01

    Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologues are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality.

  15. Exploring Mouse Protein Function via Multiple Approaches

    PubMed Central

    Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning

    2016-01-01

    Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologues are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315

  16. Nondestructive testing methods to predict effect of degradation on wood : a critical assessment

    Treesearch

    J. Kaiserlik

    1978-01-01

    Results are reported for an assessment of methods for predicting the strength of wood, wood-based, or related materials. Research directly applicable to nondestructive strength prediction was very limited. In wood, strength prediction research is limited to vibration decay, wave attenuation, and multiparameter "degradation models." Nonwood methods with potential...

  17. A Prediction Model for Functional Outcomes in Spinal Cord Disorder Patients Using Gaussian Process Regression.

    PubMed

    Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid

    2016-01-01

    Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate post-treatment prediction is especially beneficial for clinicians, patients, caregivers, and therapists. This paper introduces a prediction method for postoperative functional outcomes based on a novel use of Gaussian process regression. The proposed method specifically accounts for the restricted value range of the target variables by modeling the Gaussian process with a truncated normal distribution, which significantly improves the prediction results. The prediction is made with the assistance of target tracking examinations using a highly portable and inexpensive handgrip device, which greatly contributes to the prediction performance. The proposed method has been validated on a dataset collected in a clinical pilot cohort of 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes (Oswestry Disability Index and target tracking scores) from the patient's preoperative information, with mean absolute errors of 0.079 and 0.014 (out of 1.0), respectively.
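The abstract's central modeling choice, constraining a Gaussian process posterior through a truncated normal, can be illustrated with a minimal sketch. This is not the authors' model: the RBF kernel, noise level, and [0, 1] score range below are illustrative assumptions, and the truncation is applied to the posterior marginals rather than built into the full likelihood.

```python
import math
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two 1-D input sets."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Standard GP regression posterior mean and std with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, np.asarray(y_train, float))
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def truncnorm_mean(mu, sigma, lo=0.0, hi=1.0):
    """Mean of N(mu, sigma^2) truncated to [lo, hi] -- keeps the point
    prediction inside the admissible score range."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    Z = Phi(b) - Phi(a)
    return mu + sigma * (phi(a) - phi(b)) / Z
```

For example, a raw posterior mean of 1.2 with std 0.1 is pulled back to roughly 0.96, inside the valid score range.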

  18. Gross domestic product estimation based on electricity utilization by artificial neural network

    NASA Astrophysics Data System (ADS)

    Stevanović, Mirjana; Vujičić, Slađana; Gajić, Aleksandar M.

    2018-01-01

    The main goal of the paper was to estimate gross domestic product (GDP) from electricity utilization using an artificial neural network (ANN). Electricity utilization was analyzed for different sources, including renewable, coal, and nuclear. The ANN was trained with two training algorithms, the extreme learning method and the back-propagation algorithm, in order to produce the best prediction of GDP. According to the results, it can be concluded that the ANN model with the extreme learning method produced an acceptable prediction of GDP based on electricity utilization.

  19. Prediction in complex systems: The case of the international trade network

    NASA Astrophysics Data System (ADS)

    Vidmer, Alexandre; Zeng, An; Medo, Matúš; Zhang, Yi-Cheng

    2015-10-01

    Predicting the future evolution of complex systems is one of the main challenges in complexity science. Based on a current snapshot of a network, link prediction algorithms aim to predict its future evolution. We apply here link prediction algorithms to data on the international trade between countries. This data can be represented as a complex network where links connect countries with the products that they export. Link prediction techniques based on heat and mass diffusion processes are employed to obtain predictions for products exported in the future. These baseline predictions are improved using a recent metric of country fitness and product similarity. The overall best results are achieved with a newly developed metric of product similarity which takes advantage of causality in the network evolution.
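The heat- and mass-diffusion baseline mentioned above can be sketched on a toy bipartite country-product network. The mass-diffusion (probabilistic spreading) score lets a unit of resource flow from the target's products to the countries exporting them and back to products, dividing by node degree at each step. The data and function names are illustrative assumptions; the study itself uses real international trade data.

```python
from collections import defaultdict

def mass_diffusion_scores(exports, target):
    """Two-step mass diffusion (ProbS) on a bipartite country-product
    network: resource flows product -> country -> product, divided at
    each step by the sending node's degree."""
    # Degree of each product = number of countries exporting it
    prod_degree = defaultdict(int)
    for prods in exports.values():
        for p in prods:
            prod_degree[p] += 1
    # Step 1: each product the target exports spreads 1/k_p to each exporter
    country_res = defaultdict(float)
    for p in exports[target]:
        for c, prods in exports.items():
            if p in prods:
                country_res[c] += 1.0 / prod_degree[p]
    # Step 2: each country redistributes its resource evenly over its products
    scores = defaultdict(float)
    for c, res in country_res.items():
        for p in exports[c]:
            scores[p] += res / len(exports[c])
    return dict(scores)
```

Products the target does not yet export but that score highly (here, 'cars' for country 'A') are the link predictions.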

  20. Ensemble-based prediction of RNA secondary structures.

    PubMed

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach.
In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.

  1. Critical Evaluation of Prediction Models for Phosphorus Partition between CaO-based Slags and Iron-based Melts during Dephosphorization Processes

    NASA Astrophysics Data System (ADS)

    Yang, Xue-Min; Li, Jin-Yan; Chai, Guo-Ming; Duan, Dong-Ping; Zhang, Jian

    2016-08-01

    According to the experimental results of hot metal dephosphorization by CaO-based slags at a commercial-scale hot metal pretreatment station, 16 models of the equilibrium quotient k_P or phosphorus partition L_P between CaO-based slags and iron-based melts, collected from the literature, have been evaluated. The 16 collected models for predicting the equilibrium quotient k_P can be converted to predict the phosphorus partition L_P. The results predicted by the 16 models cannot themselves serve as criteria for evaluating k_P or L_P because of the various forms or definitions of k_P and L_P. Thus, the measured phosphorus content [pct P] in the hot metal bath at the end point of the dephosphorization pretreatment process is used as the fixed criterion for evaluating the 16 collected models. The collected models can be described as linear functions of the form y = c_0 + c_1*x, in which the independent variable x represents the chemical composition of the slag, the intercept c_0 (including the constant term) captures the temperature effect and other unmentioned or implicit thermodynamic factors, and the slope c_1 is regressed from the experimental results for k_P or L_P. On this basis, a general approach to developing a thermodynamic model for predicting the equilibrium quotient k_P, the phosphorus partition L_P, or [pct P] in iron-based melts during the dephosphorization process is proposed by revising the constant term in the intercept c_0 for the 15 summarized models other than Suito's model (M3). The better models, i.e., those among the 16 collected models with an ideal revising possibility or flexibility, have been selected and recommended.
Compared with the predictions of the 15 revised models and Suito's model (M3), the developed IMCT-L_P model, coupled with the dephosphorization mechanism proposed by the present authors, can accurately predict the phosphorus partition L_P with the lowest mean deviation of log L_P, 2.33, as well as [pct P] in the hot metal bath with the smallest mean deviation of [pct P], 12.31.

  2. Improving Simulations of Precipitation Phase and Snowpack at a Site Subject to Cold Air Intrusions: Snoqualmie Pass, WA

    NASA Astrophysics Data System (ADS)

    Wayand, N. E.; Stimberis, J.; Zagrodnik, J.; Mass, C.; Lundquist, J. D.

    2016-12-01

    Low-level cold air from eastern Washington state often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet, these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. The skill of surface-based methods was greatly improved by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill over both parent models. These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.

  3. Utilizing high throughput screening data for predictive toxicology models: protocols and application to MLSCN assays

    NASA Astrophysics Data System (ADS)

    Guha, Rajarshi; Schürer, Stephan C.

    2008-06-01

    Computational toxicology is emerging as an encouraging alternative to experimental testing. The Molecular Libraries Screening Center Network (MLSCN) as part of the NIH Molecular Libraries Roadmap has recently started generating large and diverse screening datasets, which are publicly available in PubChem. In this report, we investigate various aspects of developing computational models to predict cell toxicity based on cell proliferation screening data generated in the MLSCN. By capturing feature-based information in those datasets, such predictive models would be useful in evaluating cell-based screening results in general (for example from reporter assays) and could be used as an aid to identify and eliminate potentially undesired compounds. Specifically we present the results of random forest ensemble models developed using different cell proliferation datasets and highlight protocols to take into account their extremely imbalanced nature. Depending on the nature of the datasets and the descriptors employed we were able to achieve percentage correct classification rates between 70% and 85% on the prediction set, though the accuracy rate dropped significantly when the models were applied to in vivo data. In this context we also compare the MLSCN cell proliferation results with animal acute toxicity data to investigate to what extent animal toxicity can be correlated and potentially predicted by proliferation results. Finally, we present a visualization technique that allows one to compare a new dataset to the training set of the models to decide whether the new dataset may be reliably predicted.

  4. Research on reverse logistics location under uncertainty environment based on grey prediction

    NASA Astrophysics Data System (ADS)

    Zhenqiang, Bao; Congwei, Zhu; Yuqin, Zhao; Quanke, Pan

    This article constructs a reverse logistics network under an uncertain environment, integrates the reverse logistics network with the distribution network, and forms a closed-loop network. An optimization model based on cost is established to help the intermediate center, manufacturing center, and remanufacturing center make location decisions. A grey model GM(1,1) is used to predict the product holdings at the collection points; the prediction results are then fed into the cost optimization model and a solution is obtained. Finally, an example is given to verify the effectiveness and feasibility of the model.
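The grey prediction step can be sketched with a textbook GM(1,1) implementation: accumulate the series, fit the whitening equation dx1/dt + a*x1 = b by least squares on the background values, and difference the fitted response to forecast. This is a generic reconstruction of the standard model, not the article's code; the input series stands in for the product holdings of the collection points.

```python
import math

def gm11_predict(x0, steps=1):
    """Textbook GM(1,1) grey model: fit dx1/dt + a*x1 = b on the
    accumulated series x1, then forecast `steps` values past the sample."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                 # accumulated generating series
    z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]   # background (mean) values
    # Least squares for [a, b] in x0[k] = -a*z1[k-1] + b, k = 1..n-1
    m = n - 1
    sz = sum(z1)
    szz = sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):
        # Time response: x1(k) = (x0[0] - b/a) * exp(-a*k) + b/a
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # Recover forecasts of the original series by differencing x1_hat
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

On a near-exponential holdings series such as [1, 1.1, 1.21, 1.331], the one-step forecast is close to the true continuation 1.4641.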

  5. Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches

    NASA Astrophysics Data System (ADS)

    H, Vathsala; Koolagudi, Shashidhar G.

    2017-10-01

    This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers techniques suitable for processing dense datasets. The proposed three-step algorithm comprises closed-itemset-generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. An application classifying rainfall as flood, excess, normal, deficit, or drought based on 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the 37-year study period considered (1969-2005).

  6. Improving lung cancer prognosis assessment by incorporating synthetic minority oversampling technique and score fusion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Shiju; Qian, Wei; Guan, Yubao

    2016-06-15

    Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques, and to develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which includes CT images, nine clinical and biological (CB) markers, and the outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained disease free and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built on the QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers and also tested fusion methods to combine the QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation method, the computed areas under the receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061 when using the QI and CB based classifiers, respectively. By fusing the scores generated by the two classifiers, the AUC significantly increased to 0.859 ± 0.052 (p < 0.05), with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifier to yield improved prediction accuracy.
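The SMOTE step referenced above can be sketched generically: each synthetic minority sample is placed at a random point on the segment between a real minority sample and one of its k nearest minority neighbours, which rebalances a 74-vs-20 class split without duplicating cases. The data, neighbour count, and function name below are illustrative assumptions, not the study's implementation.

```python
import random

def smote(minority, n_synthetic, k=3, rng=None):
    """SMOTE sketch: interpolate each synthetic sample between a minority
    point and one of its k nearest minority neighbours."""
    rng = rng or random.Random(0)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic
```

Because every synthetic point lies on a segment between two minority samples, the oversampled set stays inside the minority class's convex hull.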

  7. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors.
Conclusions We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391

  8. A data base approach for prediction of deforestation-induced mass wasting events

    NASA Technical Reports Server (NTRS)

    Logan, T. L.

    1981-01-01

    A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlaid and modeled to produce new maps depicting high-probability slide areas. The objective of the present investigation is to examine the raster-based information system as a tool for predicting the locations of clear-cut mountain slopes that are most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.

  9. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for possible diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images, while the adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm compresses medical images efficiently and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
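The Hadamard transform mentioned above can be illustrated with a 4x4 integer roundtrip. Because the transform matrix H satisfies H·H = 4I and uses only integer arithmetic, applying it on both sides and dividing by 16 recovers the block exactly, which is what makes it usable in a lossless pipeline. The block size and naive matrix multiply are assumptions for illustration, not the paper's implementation.

```python
# Natural-order 4x4 Hadamard matrix (H2 Kronecker H2); symmetric, H @ H = 4*I
H4 = [[1,  1,  1,  1],
      [1, -1,  1, -1],
      [1,  1, -1, -1],
      [1, -1, -1,  1]]

def matmul(A, B):
    """Plain integer matrix multiply."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def hadamard_2d(block):
    """2-D 4x4 Hadamard transform: Y = H X H, integer arithmetic only."""
    return matmul(matmul(H4, block), H4)

def inverse_hadamard_2d(coeffs):
    """Inverse: H Y H = 16 X, so dividing by 16 is exact for integer input."""
    Y = matmul(matmul(H4, coeffs), H4)
    return [[v // 16 for v in row] for row in Y]
```

The forward/inverse pair is a perfect roundtrip on integer blocks, so any loss in the overall codec comes only from the quantization step that follows the transform.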

  10. Inferential modeling and predictive feedback control in real-time motion compensation using the treatment couch during radiotherapy

    NASA Astrophysics Data System (ADS)

    Qiu, Peng; D'Souza, Warren D.; McAvoy, Thomas J.; Liu, K. J. Ray

    2007-09-01

    Tumor motion induced by respiration presents a challenge to the reliable delivery of conformal radiation treatments. Real-time motion compensation is the most technologically challenging clinical solution but has the potential to overcome the limitations of existing methods. The performance of a real-time couch-based motion compensation system depends mainly on two aspects: the ability to infer the internal anatomical position and the performance of the feedback control system. In this paper, we propose a novel method for each aspect and then combine the two into one system. To accurately estimate the internal tumor position, we use partial least squares (PLS) regression to predict the position of the diaphragm from skin-based motion surrogates. Four radio-opaque markers were placed on the abdomen of patients who underwent fluoroscopic imaging of the diaphragm. The coordinates of the markers served as input variables and the position of the diaphragm served as the output variable. PLS resulted in lower prediction errors than standard multiple linear regression (MLR). The performance of the feedback control system depends on the system dynamics and the dead time (the delay between the initiation and execution of a control action). While the dynamics of the system can be inverted in a feedback control system, the dead time cannot. To overcome the dead time, we propose a predictive feedback control system that incorporates forward prediction, using least-mean-squares (LMS) and recursive least-squares (RLS) filtering, into the couch-based control system. Motion data were obtained using a skin-based marker. The proposed predictive feedback control system was benchmarked against pure feedback control (no forward prediction) and yielded a significant performance gain. Finally, we combined the PLS inference model and the predictive feedback control to evaluate the overall performance of the system. Our results show that, with the tumor motion unknown but inferred from skin-based markers through the PLS model, the predictive feedback control system was able to effectively compensate for intra-fraction motion.
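
    As a rough illustration of the forward-prediction component, the sketch below runs a normalized-LMS one-step-ahead predictor on a synthetic sinusoidal breathing-like trace; the function name, filter order and step size are hypothetical choices, not the authors' implementation:

```python
import math

def lms_predict(signal, order=6, mu=0.5, horizon=1):
    """Normalized-LMS forward predictor: at each step, predict the sample
    `horizon` steps ahead from the `order` most recent samples, then adapt
    the filter weights on the observed prediction error."""
    w = [0.0] * order
    preds = []
    for t in range(order, len(signal) - horizon + 1):
        x = signal[t - order:t]                # most recent samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        err = signal[t + horizon - 1] - y_hat  # error once the sample arrives
        norm = sum(xi * xi for xi in x) + 1e-8
        w = [wi + mu * err * xi / norm for wi, xi in zip(w, x)]
    return preds

# Synthetic breathing-like trace: ~0.25 Hz sinusoid sampled at 10 Hz.
trace = [math.sin(2 * math.pi * 0.25 * t / 10.0) for t in range(400)]
preds = lms_predict(trace)
```

    On a clean periodic trace the filter converges quickly; the clinical problem is harder because real respiratory motion is non-stationary, which is why the paper evaluates RLS alongside LMS.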

  11. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in the length of day (LOD change) are mainly based on linear models, such as least-squares and autoregressive models. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often unsatisfactory. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predictions obtained from the BP (back propagation) neural network model and other models. The comparison shows that applying the GRNN to LOD-change prediction is highly effective and feasible.
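
    A GRNN is essentially a Gaussian-kernel-weighted average of the training targets (a Nadaraya-Watson estimator). The minimal sketch below shows the idea on a toy 1-D series standing in for the LOD-change data (all names and values are hypothetical):

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.3):
    """GRNN prediction: a Gaussian-kernel-weighted average of the
    training targets, with bandwidth (smoothing factor) sigma."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    s = sum(weights)
    if s == 0.0:
        return sum(train_y) / len(train_y)  # fall back to the global mean
    return sum(w * y for w, y in zip(weights, train_y)) / s

# Toy non-linear series: learn y = sin(x) from sampled points.
xs = [i * 0.2 for i in range(50)]
ys = [math.sin(x) for x in xs]
pred = grnn_predict(xs, ys, 3.1)
```

    Unlike a BP network, a GRNN has no iterative training phase; the only free parameter is the kernel bandwidth, which largely explains its appeal for small geodetic data sets.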

  12. Can-Evo-Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences.

    PubMed

    Ali, Safdar; Majid, Abdul

    2015-04-01

    The diagnosis of human breast cancer is an intricate process, and specific indicators may produce negative results. To avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier-stacking-based evolutionary ensemble system, "Can-Evo-Ens", for predicting amino acid sequences associated with breast cancer. First, we selected four diverse ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. To exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments demonstrated the robustness of the Can-Evo-Ens system on an independent validation dataset. The proposed system achieved the highest area under the ROC curve (AUC) of 99.95% for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and conventional ensemble approaches such as AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed system will have a major impact on the fields of biomedicine, genomics, proteomics, bioinformatics, and drug development.

  13. Towards collaborative filtering recommender systems for tailored health communications.

    PubMed

    Marlin, Benjamin M; Adams, Roy J; Sadasivam, Rajani; Houston, Thomas K

    2013-01-01

    The goal of computer tailored health communications (CTHC) is to promote healthy behaviors by sending messages tailored to individual patients. Current CTHC systems collect baseline patient "profiles" and then use expert-written, rule-based systems to target messages to subsets of patients. Our main interest in this work is the study of collaborative filtering-based CTHC systems that can learn to tailor future message selections to individual patients based on explicit feedback about past message selections. This paper reports the results of a study designed to collect explicit feedback (ratings) regarding four aspects of messages from 100 subjects in the smoking cessation support domain. Our results show that most users have positive opinions of most messages and that the ratings for all four aspects of the messages are highly correlated with each other. Finally, we conduct a range of rating prediction experiments comparing several different model variations. Our results show that predicting future ratings based on each user's past ratings contributes the most to predictive accuracy.
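
    A minimal sketch of the winning idea, predicting a user's future rating from the item's mean adjusted by that user's own past-rating bias, might look like this (toy data; not the study's actual model):

```python
def predict_rating(ratings, user, item):
    """Predict a rating as the item's mean across all raters, shifted by
    the target user's bias (their average offset from item means)."""
    item_ratings = [u_r[item] for u_r in ratings.values() if item in u_r]
    item_mean = sum(item_ratings) / len(item_ratings)
    offsets = []
    for it, r in ratings[user].items():
        if it == item:
            continue
        others = [u_r[it] for u_r in ratings.values() if it in u_r]
        offsets.append(r - sum(others) / len(others))
    bias = sum(offsets) / len(offsets) if offsets else 0.0
    return item_mean + bias

# Toy message-rating matrix (users x messages), values on a 1-5 scale.
ratings = {
    "u1": {"m1": 5, "m2": 4, "m3": 5},
    "u2": {"m1": 3, "m2": 2, "m3": 3},
    "u3": {"m1": 4, "m2": 3},
}
pred = predict_rating(ratings, "u3", "m3")
```

    This bias-corrected baseline is the simplest model family that "uses each user's past ratings"; full collaborative filtering additionally weights other users by similarity.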

  15. Relative Packing Groups in Template-Based Structure Prediction: Cooperative Effects of True Positive Constraints

    PubMed Central

    Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert

    2011-01-01

    Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of the starting templates. Upon deeper investigation, we find that true positive spatial constraints derived from the RPGs, especially those non-local in sequence, were important for building models nearer to the native structure. Surprisingly, the fraction of incorrect, or false positive, constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729

  16. Predicting Student Performance in Statewide High-Stakes Tests for Middle School Mathematics Using the Results from Third Party Testing Instruments

    ERIC Educational Resources Information Center

    Meylani, Rusen; Bitter, Gary G.; Castaneda, Rene

    2014-01-01

    In this study, regression- and neural network-based methods are used to predict statewide high-stakes test results for middle school mathematics using the scores obtained from third-party tests throughout the school year. Such prediction is of utmost significance for school districts to live up to the state's educational standards mandated by the…

  17. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, the JBC, in calm water using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the Finite Volume Method (FVM). The paper compares the numerical calm-water test results for the JBC model with available experimental results. The calm-water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface are also included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimal computational resources.

  18. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    NASA Astrophysics Data System (ADS)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques, and clustering information plays an important role in solving it. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, achieving especially remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
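
    The role of clustering information in common-neighbor link prediction can be sketched with a score that sums the clustering coefficients of the common neighbors of a candidate pair; this illustrates the node-based baseline the paper improves on (toy graph, hypothetical names):

```python
def clustering_coefficient(adj, v):
    """Fraction of possible links that exist among v's neighbors."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def cn_clustering_score(adj, x, y):
    """Node-clustering-weighted common-neighbor score for candidate
    link (x, y): higher means a link is considered more likely."""
    return sum(clustering_coefficient(adj, z) for z in adj[x] & adj[y])

# Tiny toy graph as an adjacency-set dict.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
score_ad = cn_clustering_score(adj, "a", "d")
score_ae = cn_clustering_score(adj, "a", "e")
```

    The paper's ALC coefficient replaces the single per-node coefficient above with a pair-dependent, link-level quantity, so a common neighbor can contribute differently to different candidate pairs.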

  19. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    PubMed

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and clustering method is important in the construction of core collections based on predicted genotypic values, in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values of 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.

  20. Predicting drug-target interactions by dual-network integrated logistic matrix factorization

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Bryant, Stephen H.; Wang, Yanli

    2017-01-01

    In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing the profile kernel matrix; (2) diffusing the drug profile kernel matrix with the drug structure kernel matrix; (3) diffusing the target profile kernel matrix with the target sequence kernel matrix; and (4) building the DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under the precision-recall curve) and AUC (area under the receiver operating characteristic curve) over 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function but also on the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
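
    The logistic-matrix-factorization core of step (4) can be sketched as follows: each interaction probability is modeled as sigmoid(u_i · v_j) and the factors are fit by SGD on a toy binary interaction matrix (the kernel-diffusion steps are omitted, and all parameters are hypothetical):

```python
import math, random

def logistic_mf(R, k=2, epochs=800, lr=0.05, reg=0.01, seed=0):
    """Logistic matrix factorization sketch: model P(interaction i-j) as
    sigmoid(u_i . v_j) and fit the factors by stochastic gradient ascent
    on the Bernoulli log-likelihood with L2 regularization."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(m)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for i in range(n):
            for j in range(m):
                p = sig(sum(a * b for a, b in zip(U[i], V[j])))
                g = R[i][j] - p  # gradient of the log-likelihood w.r.t. logit
                for f in range(k):
                    ui, vj = U[i][f], V[j][f]
                    U[i][f] += lr * (g * vj - reg * ui)
                    V[j][f] += lr * (g * ui - reg * vj)
    return U, V, sig

# Toy 4x4 drug-target interaction matrix with two interaction blocks.
R = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
U, V, sig = logistic_mf(R)
score = lambda i, j: sig(sum(a * b for a, b in zip(U[i], V[j])))
```

    DNILMF additionally blends the drug and target kernel matrices into this objective, which is what the "dual-network integrated" prefix refers to.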

  1. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  2. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
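
    The particle-filter prediction step used to schedule node wake-ups can be sketched in one dimension (hypothetical motion model and parameters):

```python
import random

def pf_predict(particles, weights, motion=1.0, noise=0.1, seed=7):
    """One prediction step of a particle-filter sketch: propagate each
    particle through a simple motion model with process noise, then
    report the weighted mean as the predicted target state."""
    rng = random.Random(seed)
    moved = [p + motion + rng.gauss(0.0, noise) for p in particles]
    total = sum(weights)
    est = sum(p * w for p, w in zip(moved, weights)) / total
    return moved, est

# Hypothetical 1-D target-position particles centered near 5.0.
init_rng = random.Random(3)
particles = [5.0 + init_rng.gauss(0.0, 0.2) for _ in range(500)]
weights = [1.0] * len(particles)
moved, est = pf_predict(particles, weights, motion=1.0)
```

    In the paper's setting, only sensor nodes near the predicted state need to be awakened; the full filter would also reweight and resample particles against incoming measurements.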

  3. Variability in the Propagation Phase of CFD-Based Noise Prediction: Summary of Results From Category 8 of the BANC-III Workshop

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard; Redonnet, Stephane; Imamura, Taro; Ikeda, Tomoaki; Zawodny, Nikolas; Cunha, Guilherme

    2015-01-01

    The usage of Computational Fluid Dynamics (CFD) in noise prediction has typically been a two-part process: accurately predicting the flow conditions in the near-field and then propagating the noise from the near-field to the observer. Due to the increase in computing power and the cost benefit when weighed against wind tunnel testing, the usage of CFD to estimate the local flow field of complex geometrical structures has become more routine. Recently, the Benchmark problems in Airframe Noise Computation (BANC) workshops have provided a community focus on accurately simulating the local flow field near the body with various CFD approaches. However, to date, little effort has been devoted to assessing the impact of the propagation phase of noise prediction. This paper includes results from the BANC-III workshop which explore variability in the propagation phase of CFD-based noise prediction, covering two test cases: an analytical solution of a quadrupole source near a sphere and a computational solution around a nose landing gear. Agreement among three codes was very good for the analytic test case, but CFD-based noise predictions indicate that the propagation phase can introduce 3 dB or more of variability in noise predictions.

  4. A model for prediction of color change after tooth bleaching based on CIELAB color space

    NASA Astrophysics Data System (ADS)

    Herrera, Luis J.; Santana, Janiley; Yebra, Ana; Rivas, María. José; Pulgar, Rosa; Pérez, María. M.

    2017-08-01

    An experimental study aiming to develop a model based on the CIELAB color space for the prediction of color change after a tooth bleaching procedure is presented. Multivariate linear regression models were obtained to predict the post-bleaching L*, a*, b* and W* values from the pre-bleaching L*, a* and b* values. Moreover, univariate linear regression models were obtained to predict the variation in chroma (C*), hue angle (h°) and W*. The results demonstrated that it is possible to estimate color change when using a carbamide peroxide tooth-bleaching system. The models obtained can be applied in the clinic to predict the color change after bleaching.
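
    A univariate version of such a regression model can be sketched with an ordinary least-squares line fit; the data below are hypothetical, purely to illustrate the form of the model:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical pre-bleaching b* values vs. observed change in whiteness W*.
pre_b = [18.0, 20.0, 22.0, 25.0, 27.0]
d_w = [4.1, 4.9, 6.2, 7.8, 9.0]
a, b = fit_line(pre_b, d_w)
pred = a * 23.0 + b  # predicted W* change for a new pre-bleaching b* of 23
```

    The paper's multivariate models are the same idea with several pre-bleaching coordinates as predictors instead of one.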

  5. Predictive Performance of Physiologically Based Pharmacokinetic Models for the Effect of Food on Oral Drug Absorption: Current Status

    PubMed Central

    Zhao, Ping; Pan, Yuzhuo; Wagner, Christian

    2017-01-01

    A comprehensive search of the literature and published US Food and Drug Administration reviews was conducted to assess whether physiologically based pharmacokinetic (PBPK) modeling could be prospectively used to predict the clinical food effect on oral drug absorption. Among the 48 resulting food-effect predictions, ∼50% were predicted within 1.25-fold of the observed values, and 75% within 2-fold. Dissolution rate and precipitation time were commonly optimized parameters when PBPK modeling was not able to capture the food effect. The current work presents a knowledge base for documenting PBPK experience in predicting food effects. PMID:29168611
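
    The fold-based accuracy metric used above is simple to compute; the sketch below uses hypothetical predicted/observed fold-change pairs, chosen to reproduce the 50%/75% split purely for illustration:

```python
def within_fold(predicted, observed, fold):
    """True if the predicted/observed ratio lies within [1/fold, fold]."""
    ratio = predicted / observed
    return 1.0 / fold <= ratio <= fold

# Hypothetical food-effect predictions vs. observations (AUC fold changes).
pairs = [(1.2, 1.0), (2.5, 1.3), (0.8, 0.9), (3.0, 1.2)]
frac_125 = sum(within_fold(p, o, 1.25) for p, o in pairs) / len(pairs)
frac_200 = sum(within_fold(p, o, 2.0) for p, o in pairs) / len(pairs)
```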

  6. Mated vertical ground vibration test

    NASA Technical Reports Server (NTRS)

    Ivey, E. W.

    1980-01-01

    The Mated Vertical Ground Vibration Test (MVGVT) was intended to provide an experimental base, in the form of structural dynamic characteristics, for the shuttle vehicle. This data base was used in developing high-confidence analytical models for the prediction and design of loads, pogo controls, and flutter criteria under various payloads and operational missions. The MVGVT boost and launch program evolution, test configurations, and their suspensions are described. Test results are compared with predicted analytical results.

  7. The effects of geometric uncertainties on computational modelling of knee biomechanics

    NASA Astrophysics Data System (ADS)

    Meng, Qingen; Fisher, John; Wilcox, Ruth

    2017-08-01

    The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of the cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in the cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation had considerable effects on the predicted knee mechanics. Moreover, even when the mathematical geometric descriptors were very close to the image-based articular surfaces, the detailed contact pressure distribution they produced was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models.

  8. Subsonic Longitudinal Performance Coefficient Extraction from Shuttle Flight Data: an Accuracy Assessment for Determination of Data Base Updates

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Mcconnell, J. G.; Compton, H. R.

    1983-01-01

    Longitudinal performance comparisons between flight-derived and predicted values are presented for the first five NASA Space Shuttle Columbia flights. Though subsonic comparisons are emphasized, comparisons during the transonic and low supersonic regions of flight are included. Computed air data information based on remotely sensed atmospheric measurements as well as in situ Orbiter Air Data System (ADS) measurements was incorporated. Each air data source provides for comparisons against the predicted values from the LaRC data base. Principally, L/D, C_L, and C_D comparisons are presented, though some pitching moment results are included. Similarities in flight conditions and spacecraft configuration during the first five flights are discussed. Contributions from the various elements of the data base are presented, and the overall differences observed between the flight and predicted values are discussed in terms of expected variations. A discussion of potential data base updates is presented based on the results from the five flights to date.

  9. The Relationship between Perceived Discrimination and Generalized Anxiety Disorder among African Americans, Afro Caribbeans and non-Hispanic Whites

    PubMed Central

    Soto, José A.; Dawson-Andoh, Nana A.; BeLue, Rhonda

    2010-01-01

    The present study examined the relationship between the frequency of race-based and non-race-based discrimination experiences and Generalized Anxiety Disorder (GAD) in a sample of 3,570 African Americans, 1,438 Afro Caribbeans, and 891 non-Hispanic Whites from the National Survey of American Life (NSAL). Because GAD and the experience of racial discrimination are both associated with symptoms of worry and tension, we expected race-based discrimination to predict GAD prevalence for African Americans, but not for other groups. We did not expect non-race-based discrimination to predict GAD. Results showed that while more frequent experiences of non-race-based discrimination predicted GAD for all groups, experiencing race-based discrimination was associated with significantly higher odds of endorsing lifetime GAD for African Americans only. Results are interpreted in light of the different contexts that these three ethnic groups represent relative to their history within the United States as well as their present-day circumstances. PMID:21041059

  10. LOCSVMPSI: a web server for subcellular localization of eukaryotic proteins using SVM and profile of PSI-BLAST

    PubMed Central

    Xie, Dan; Li, Ao; Wang, Minghui; Fan, Zhewen; Feng, Huanqing

    2005-01-01

    Subcellular location is one of the key functional characteristics of a protein, as proteins must be localized correctly at the subcellular level to have normal biological function. In this paper, a novel method named LOCSVMPSI is introduced, which is based on the support vector machine (SVM) and the position-specific scoring matrix generated from profiles of PSI-BLAST. With a jackknife test on the RH2427 data set, LOCSVMPSI achieved a high overall prediction accuracy of 90.2%, which is higher than the prediction results of SubLoc and ESLpred on this data set. In addition, the prediction performance of LOCSVMPSI was evaluated with a 5-fold cross-validation test on the PK7579 data set, and the prediction results were consistently better than those of the previous method based on several SVMs using the composition of both amino acids and amino acid pairs. A further test on the SWISS-PROT new-unique data set showed that LOCSVMPSI also performed better than some widely used prediction methods, such as PSORT II, TargetP and LOCnet. All these results indicate that LOCSVMPSI is a powerful tool for the prediction of eukaryotic protein subcellular localization. An online web server (current version 1.3) based on this method has been developed and is freely available to both academic and commercial users. PMID:15980436

  11. Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.

    PubMed

    Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H

    2017-07-01

    Extracranial robotic radiotherapy employs external markers and a correlation model to trace tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of directly modeling the raw respiratory motion. The developed method is evaluated using a dataset acquired from 31 patients for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
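
    The core ELM idea, a fixed random hidden layer with only the linear output weights trained, can be sketched as below (pure-Python toy; SGD stands in for the usual one-shot least-squares solve, and the local-receptive-field mapping is omitted):

```python
import math, random

def elm_train(X, y, hidden=15, epochs=300, lr=0.05, seed=1):
    """ELM sketch: a random, fixed tanh hidden layer maps inputs to a
    feature space; only the linear output weights are trained."""
    rng = random.Random(seed)
    dim = len(X[0])
    A = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    feat = lambda x: [math.tanh(sum(aj * xj for aj, xj in zip(a, x)) + bi)
                      for a, bi in zip(A, b)]
    H = [feat(x) for x in X]
    w = [0.0] * hidden
    for _ in range(epochs):
        for h, t in zip(H, y):
            err = sum(wi * hi for wi, hi in zip(w, h)) - t
            w = [wi - lr * err * hi for wi, hi in zip(w, h)]
    return lambda x: sum(wi * hi for wi, hi in zip(w, feat(x)))

# Toy breathing-like trace; predict x[t+1] from (x[t-1], x[t]).
s = [math.sin(0.3 * t) for t in range(120)]
X = [[s[t - 1], s[t]] for t in range(1, 119)]
y = [s[t + 1] for t in range(1, 119)]
model = elm_train(X, y)
```

    Because the hidden layer is random and fixed, training reduces to a linear problem in the output weights, which is what makes ELM-based predictors fast enough for real-time latency compensation.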

  12. Prostate Cancer Probability Prediction By Machine Learning Technique.

    PubMed

    Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

    2017-11-26

    The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models of prostate cancer. Given a relevant prediction of prostate cancer, it is easy to create a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques can be used for the relevant prediction of prostate cancer.

  13. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that, with the optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
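
    An exhaustive search over sensor subsets, one of the two approaches mentioned, can be sketched with a toy sensitivity matrix (hypothetical values; the selection criterion here, worst-case parameter observability, is an illustrative choice, not the paper's exact criterion):

```python
from itertools import combinations

def worst_case(S, subset):
    """Minimum over health parameters of the best sensitivity any chosen
    sensor has to that parameter (S[i][j]: sensor i vs. parameter j)."""
    return min(max(abs(S[i][j]) for i in subset) for j in range(len(S[0])))

def select_sensors(S, k):
    """Brute-force search: pick the k sensors maximizing the worst-case
    observability, so every parameter is well seen by some sensor."""
    return max(combinations(range(len(S)), k), key=lambda c: worst_case(S, c))

# Hypothetical sensitivities of 4 sensors to 3 fuel-cell health parameters.
S = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.1, 0.7],
     [0.4, 0.4, 0.4]]
best = select_sensors(S, 2)
```

    Brute force is only viable for small sensor counts, which is why the paper also considers the cheaper largest-gap heuristic.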

  14. Semi-supervised protein subcellular localization.

    PubMed

    Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang

    2009-01-30

    Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate predictions of the subcellular localizations of proteins can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. A major drawback of these machine learning-based approaches, however, is that a large amount of data must be labeled for the prediction system to learn a classifier with good generalization ability, while in real-world cases it is laborious, expensive and time-consuming to experimentally determine the subcellular localization of a protein and prepare labeled instances. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use far fewer labeled instances to construct a high-quality prediction model. We first construct an initial classifier using a small set of labeled examples, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our method can effectively reduce the labeling workload by exploiting the unlabeled data, and it is shown to enhance the state-of-the-art prediction results of SVM classifiers by more than 10%.
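
    A minimal sketch of the semi-supervised idea, assuming scikit-learn's self-training wrapper as a stand-in for the paper's refinement procedure (data, base classifier, and labeled-set size are illustrative):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    y_partial = np.copy(y)
    y_partial[20:] = -1              # only the first 20 instances keep their labels

    # initial SVM fit on the labeled few, then iteratively refined with unlabeled data
    model = SelfTrainingClassifier(SVC(probability=True))
    model.fit(X, y_partial)
    acc = model.score(X, y)          # evaluated against the true labels
    print(f"accuracy: {acc:.3f}")
    ```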

  15. Learning Temporal Statistics for Sensory Predictions in Aging.

    PubMed

    Luft, Caroline Di Bernardi; Baker, Rosalind; Goldstone, Aimee; Zhang, Yang; Kourtzi, Zoe

    2016-03-01

    Predicting future events based on previous knowledge about the environment is critical for successful everyday interactions. Here, we ask which brain regions support our ability to predict the future based on implicit knowledge about the past in young and older age. Combining behavioral and fMRI measurements, we test whether training on structured temporal sequences improves the ability to predict upcoming sensory events; we then compare brain regions involved in learning predictive structures between young and older adults. Our behavioral results demonstrate that exposure to temporal sequences without feedback facilitates the ability of young and older adults to predict the orientation of an upcoming stimulus. Our fMRI results provide evidence for the involvement of corticostriatal regions in learning predictive structures in both young and older learners. In particular, we showed learning-dependent fMRI responses for structured sequences in frontoparietal regions and the striatum (putamen) for young adults. However, for older adults, learning-dependent activations were observed mainly in subcortical (putamen, thalamus) regions but were weaker in frontoparietal regions. Significant correlations of learning-dependent behavioral and fMRI changes in these regions suggest a strong link between brain activations and behavioral improvement rather than general overactivation. Thus, our findings suggest that predicting future events based on knowledge of temporal statistics engages brain regions involved in implicit learning in both young and older adults.

  16. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-states-mapping-based time series neural network with back-propagation (BP). Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction, but a residual error remains between the real value and the prediction result. We therefore designed a two-state neural network model that compensates for this residual error, which could be applied to the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model; in particular, on small-sample time series it was more accurate than the standard MLP model.
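
    The residual-compensation idea can be sketched as two networks in sequence: the first predicts the series, the second is trained on the first's residual errors, and the final output sums both. The toy signal and network settings below are illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0, 6 * np.pi, 300)
    y = np.sin(t) + 0.1 * rng.normal(size=t.size)   # toy stand-in for a bio-signal
    X = t.reshape(-1, 1)

    # stage 1: predict the series itself
    stage1 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, y)
    residual = y - stage1.predict(X)
    # stage 2: predict stage 1's residual error
    stage2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, residual)

    y_hat = stage1.predict(X) + stage2.predict(X)   # compensated prediction
    mse = float(np.mean((y - y_hat) ** 2))
    print("mse:", mse)
    ```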

  17. Predicting DNA hybridization kinetics from sequence

    NASA Astrophysics Data System (ADS)

    Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu

    2018-01-01

    Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted based on its similarity to reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
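
    A toy sketch of the weighted-neighbour-voting idea: the unknown rate constant is predicted as a similarity-weighted average over reactions with known rate constants. The similarity function and feature vectors below are placeholders, not the paper's trained six-feature model.

    ```python
    import numpy as np

    def similarity(a, b):
        # placeholder similarity: decays with feature distance
        return np.exp(-np.linalg.norm(a - b))

    known_feats = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])   # reaction features
    known_log_k = np.array([5.0, 6.0, 5.5])                        # known log10 rate constants

    def predict_log_k(query):
        w = np.array([similarity(query, f) for f in known_feats])
        return float(np.dot(w, known_log_k) / w.sum())   # similarity-weighted average

    print(round(predict_log_k(np.array([0.4, 0.6])), 3))
    ```

    Because the weights are positive, the prediction always lies between the smallest and largest known rate constants in the neighbourhood.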

  18. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    PubMed

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules, individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm improves the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level, the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
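
    The validation metrics above follow directly from a 2x2 comparison of register status against laboratory-based status; the counts below are illustrative values chosen to roughly reproduce the reported percentages, not the study's actual cell counts.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        # standard 2x2 diagnostic-accuracy measures
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # illustrative counts only, chosen to roughly match the reported 89% / 96% / 76% / 98%
    m = diagnostic_metrics(tp=82500, fp=26000, fn=10200, tn=620000)
    for name, value in m.items():
        print(f"{name}: {value:.3f}")
    ```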

  19. An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning

    PubMed Central

    Jo, ByungWan

    2018-01-01

    The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. The system comprises sensor modules, communication protocols, and a base station running Azure Machine Learning (AML) Studio. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH4, CO, SO2, and H2S as the gases most significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML Studio, which enabled the prediction of MEI. An optimum number of neurons was determined for both actual-input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R² and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino- and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality. PMID:29561777

  20. An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning.

    PubMed

    Jo, ByungWan; Khan, Rana Muhammad Asad

    2018-03-21

    The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. The system comprises sensor modules, communication protocols, and a base station running Azure Machine Learning (AML) Studio. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH₄, CO, SO₂, and H₂S as the gases most significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML Studio, which enabled the prediction of MEI. An optimum number of neurons was determined for both actual-input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R² and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino- and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality.
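
    A hedged sketch of the PCA-then-ANN pipeline described in this record, on synthetic data standing in for the eight sensed parameters (component count, layer size, and the toy index are assumptions, not the paper's tuned values):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                  # stand-in for 8 air-quality parameters
    mei = X[:, :4].sum(axis=1) + 0.1 * rng.normal(size=500)   # toy environment index

    # reduce the sensed parameters with PCA, then regress the index with a small ANN
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=6),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0))
    model.fit(X, mei)
    r2 = model.score(X, mei)
    print("training R^2:", round(r2, 3))
    ```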

  1. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved, and should thus improve the reliability of predicting the in-space durability of materials based on ground laboratory testing. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Using an assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, atomic oxygen interaction with protected polymers on LDEF was predicted. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between the observed LDEF results and the Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  2. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    MR simulators have recently gained popularity because they avoid the unnecessary radiation exposure of the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
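
    PSNR, one of the two indexes used to assess the pseudo CT images, can be computed as below; the random arrays merely stand in for the original and predicted CT volumes.

    ```python
    import numpy as np

    def psnr(reference, estimate, data_range=None):
        """Peak signal-to-noise ratio in dB between a reference and an estimate."""
        reference = np.asarray(reference, dtype=float)
        estimate = np.asarray(estimate, dtype=float)
        if data_range is None:
            data_range = reference.max() - reference.min()
        mse = np.mean((reference - estimate) ** 2)
        return float(10 * np.log10(data_range ** 2 / mse))

    rng = np.random.default_rng(0)
    ct = rng.uniform(0, 1, size=(64, 64))            # stand-in for the original CT
    pseudo = ct + 0.01 * rng.normal(size=ct.shape)   # stand-in for the predicted pseudo CT
    print(f"PSNR: {psnr(ct, pseudo):.1f} dB")
    ```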

  3. Improving stability of prediction models based on correlated omics data by using network approaches.

    PubMed

    Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar

    2018-01-01

    Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM), and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
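
    A compressed sketch of the three-step strategy (correlation network, clustering into modules, group-aware prediction model): features are grouped by hierarchical clustering of a correlation-based distance, and one representative per module enters a regularized regression. The threshold and the representative rule are illustrative simplifications of the paper's group-based selection.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(0)
    base = rng.normal(size=(100, 5))
    # 20 features in 5 correlated blocks of 4 near-copies each
    X = np.repeat(base, 4, axis=1) + 0.1 * rng.normal(size=(100, 20))
    y = base @ rng.normal(size=5)

    # steps 1-2: correlation-based distance, then hierarchical clustering into modules
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    condensed = dist[np.triu_indices(20, k=1)]
    groups = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")

    # step 3: one representative feature per module feeds a regularized regression
    reps = [int(np.where(groups == g)[0][0]) for g in np.unique(groups)]
    model = RidgeCV().fit(X[:, reps], y)
    print("modules:", len(reps), "training R^2:", round(model.score(X[:, reps], y), 3))
    ```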

  4. HBC-Evo: predicting human breast cancer by exploiting amino acid sequence-based feature spaces and evolutionary ensemble system.

    PubMed

    Majid, Abdul; Ali, Safdar

    2015-01-01

    We developed a genetic programming (GP)-based evolutionary ensemble system for the early diagnosis, prognosis and prediction of human breast cancer. This system effectively exploits diversity in the feature and decision spaces. First, individual learners are trained in different feature spaces using physicochemical properties of protein amino acids. Their predictions are then stacked to develop the best solution during the GP evolution process. Finally, results for the HBC-Evo system are obtained with an optimal threshold, which is computed using particle swarm optimization. Our novel approach has demonstrated promising results compared to state-of-the-art approaches.

  5. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    PubMed

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

    Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from the massive set of terms defined by GO is a difficult challenge. To address this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy-preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also take HPHash as a plugin for BLAST-based gene function prediction; the experimental results show that HPHash again significantly improves the prediction performance. The codes of HPHash are available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Multi-center prediction of hemorrhagic transformation in acute ischemic stroke using permeability imaging features.

    PubMed

    Scalzo, Fabien; Alger, Jeffry R; Hu, Xiao; Saver, Jeffrey L; Dani, Krishna A; Muir, Keith W; Demchuk, Andrew M; Coutts, Shelagh B; Luby, Marie; Warach, Steven; Liebeskind, David S

    2013-07-01

    Permeability images derived from magnetic resonance (MR) perfusion images are sensitive to blood-brain barrier derangement of the brain tissue and have been shown to correlate with subsequent development of hemorrhagic transformation (HT) in acute ischemic stroke. This paper presents a multi-center retrospective study that evaluates the predictive power, in terms of HT, of six permeability MRI measures: contrast slope (CS), final contrast (FC), maximum peak bolus concentration (MPB), peak bolus area (PB), relative recirculation (rR), and percentage recovery (%R). Dynamic T2*-weighted perfusion MR images were collected from 263 acute ischemic stroke patients from four medical centers. An essential aspect of this study is the use of a classifier-based framework to automatically identify predictive patterns in the overall intensity distribution of the permeability maps. The model is based on normalized intensity histograms that are used as input features to the predictive model. Linear and nonlinear predictive models are evaluated using cross-validation to measure generalization power on new patients, and a comparative analysis is provided for the different types of parameters. Results demonstrate that perfusion imaging in acute ischemic stroke can predict HT with an average accuracy of more than 85% using a predictive model based on nonlinear regression. Results also indicate that the permeability feature based on the percentage of recovery performs significantly better than the other features. This novel model may be used to refine treatment decisions in acute stroke. Copyright © 2013 Elsevier Inc. All rights reserved.
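
    The normalized-intensity-histogram features that feed the classifier can be sketched as follows (bin count, value range, and the random stand-in map are illustrative):

    ```python
    import numpy as np

    def histogram_features(image, bins=20, value_range=(0.0, 1.0)):
        # summarize an image by its normalized intensity histogram
        counts, _ = np.histogram(image, bins=bins, range=value_range)
        return counts / counts.sum()        # normalize so the features sum to 1

    rng = np.random.default_rng(0)
    permeability_map = rng.uniform(0, 1, size=(128, 128))  # stand-in for a real map
    feats = histogram_features(permeability_map)
    print(feats.shape, round(float(feats.sum()), 6))
    ```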

  7. Researches of fruit quality prediction model based on near infrared spectrum

    NASA Astrophysics Data System (ADS)

    Shen, Yulin; Li, Lian

    2018-04-01

    With rising standards for food quality and safety, people pay more attention to the internal quality of fruit, so measuring fruit internal quality is increasingly imperative. Nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh-produce markets, so in this paper we aim to establish a novel near-infrared-spectrum prediction model of fruit internal quality based on SSC and TAC. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented respectively. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction models under a multi-classifier competition mechanism; specifically, nonparametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the nonparametric estimate evaluating each model's prediction result and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this approach can better achieve the optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.

  8. The effect of using genealogy-based haplotypes for genomic prediction

    PubMed Central

    2013-01-01

    Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to the parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971

  9. Predictive models of alcohol use based on attitudes and individual values.

    PubMed

    García del Castillo Rodríguez, José A; López-Sánchez, Carmen; Quiles Soler, M Carmen; García del Castillo-López, Alvaro; Gázquez Pertusa, Mónica; Marzo Campos, Juan Carlos; Inglés, Candido J

    2013-01-01

    Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict the use of alcohol in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The questionnaire used obtained information on participants' alcohol use, attitudes and personal values. The results show that the attitudes model correctly classifies 76.3% of cases. Likewise, the model for level of alcohol use correctly classifies 82% of cases. According to our results, we can conclude that there are a series of individual values that influence drinking and attitudes to alcohol use, which therefore provides us with a potentially powerful instrument for developing preventive intervention programs.

  10. Neuropsychological tests for predicting cognitive decline in older adults

    PubMed Central

    Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W

    2015-01-01

    Summary Aim To determine neuropsychological tests likely to predict cognitive decline. Methods A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI) while baseline Buschke Delay predicted conversion to Alzheimer’s disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. Analyses indicated RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. PMID:26107318

  11. Stringent homology-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions.

    PubMed

    Zhou, Hufeng; Gao, Shangzhi; Nguyen, Nam Ninh; Fan, Mengyuan; Jin, Jingjing; Liu, Bing; Zhao, Liang; Xiong, Geng; Tan, Min; Li, Shijun; Wong, Limsoon

    2014-04-08

    H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are essential for understanding the infection mechanism of the formidable pathogen M. tuberculosis H37Rv. Computational prediction is an important strategy to fill the gap in experimental H. sapiens-M. tuberculosis H37Rv PPI data. Homology-based prediction is frequently used in predicting both intra-species and inter-species PPIs. However, some limitations are not properly resolved in several published works that predict eukaryote-prokaryote inter-species PPIs using intra-species template PPIs. We develop a stringent homology-based prediction approach by taking into account (i) differences between eukaryotic and prokaryotic proteins and (ii) differences between inter-species and intra-species PPI interfaces. We compare our stringent homology-based approach to a conventional homology-based approach for predicting host-pathogen PPIs, based on cellular compartment distribution analysis, disease gene list enrichment analysis, pathway enrichment analysis and functional category enrichment analysis. These analyses support the validity of our prediction result, and clearly show that our approach has better performance in predicting H. sapiens-M. tuberculosis H37Rv PPIs. Using our stringent homology-based approach, we have predicted a set of highly plausible H. sapiens-M. tuberculosis H37Rv PPIs which might be useful for many related studies. Based on our analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent homology-based approach, we have discovered several interesting properties which are reported here for the first time. We find that both host proteins and pathogen proteins involved in the host-pathogen PPIs tend to be hubs in their own intra-species PPI network. Also, both host and pathogen proteins involved in host-pathogen PPIs tend to have longer primary sequences, more domains, and greater hydrophilicity, and the protein domains from both host and pathogen proteins involved in host-pathogen PPIs tend to have lower charge and to be more hydrophilic. Our stringent homology-based prediction approach provides a better strategy for predicting PPIs between eukaryotic hosts and prokaryotic pathogens than a conventional homology-based approach. The properties we have observed from the predicted H. sapiens-M. tuberculosis H37Rv PPI network are useful for understanding inter-species host-pathogen PPI networks and provide novel insights for host-pathogen interaction studies.

  12. Predicting human olfactory perception from chemical features of odor molecules.

    PubMed

    Keller, Andreas; Gerkin, Richard C; Guan, Yuanfang; Dhurandhar, Amit; Turu, Gabor; Szalai, Bence; Mainland, Joel D; Ihara, Yusuke; Yu, Chung Wen; Wolfinger, Russ; Vens, Celine; Schietgat, Leander; De Grave, Kurt; Norel, Raquel; Stolovitzky, Gustavo; Cecchi, Guillermo A; Vosshall, Leslie B; Meyer, Pablo

    2017-02-24

    It is still not possible to predict whether a given molecule will have a perceived odor or what olfactory percept it will produce. We therefore organized the crowd-sourced DREAM Olfaction Prediction Challenge. Using a large olfactory psychophysical data set, teams developed machine-learning algorithms to predict sensory attributes of molecules based on their chemoinformatic features. The resulting models accurately predicted odor intensity and pleasantness and also successfully predicted 8 among 19 rated semantic descriptors ("garlic," "fish," "sweet," "fruit," "burnt," "spices," "flower," and "sour"). Regularized linear models performed nearly as well as random forest-based ones, with a predictive accuracy that closely approaches a key theoretical limit. These models help to predict the perceptual qualities of virtually any molecule with high accuracy and also reverse-engineer the smell of a molecule. Copyright © 2017, American Association for the Advancement of Science.

  13. Bankruptcy prediction based on financial ratios using Jordan Recurrent Neural Networks: a case study in Polish companies

    NASA Astrophysics Data System (ADS)

    Hardinata, Lingga; Warsito, Budi; Suparti

    2018-05-01

    The complexity of bankruptcy makes accurate bankruptcy prediction models difficult to achieve, and various prediction models have been developed to improve prediction accuracy. Machine learning has been widely used for prediction because of its adaptive capabilities; Artificial Neural Networks (ANN) are one machine learning approach proven able to complete inference tasks such as prediction and classification, especially in data mining. In this paper, we propose the implementation of Jordan Recurrent Neural Networks (JRNN) to classify and predict corporate bankruptcy based on financial ratios. The feedback interconnections in a JRNN enable the network to retain important information, allowing it to work more effectively. The result analysis showed that the JRNN works very well in bankruptcy prediction, with an average success rate of 81.3785%.

  14. Prediction of Drug-Drug Interactions with Crizotinib as the CYP3A Substrate Using a Physiologically Based Pharmacokinetic Model.

    PubMed

    Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J

    2015-10-01

    An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. 
Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.

  15. Moist air state above counterflow wet-cooling tower fill based on Merkel, generalised Merkel and Klimanek & Białecky models

    NASA Astrophysics Data System (ADS)

    Hyhlík, Tomáš

    2017-09-01

    The article deals with an evaluation of the moist air state above a counterflow wet-cooling tower fill. The results based on the Klimanek & Białecky model are compared with the results of the Merkel model and the generalised Merkel model. The numerical simulation shows that temperature is predicted correctly by the generalised Merkel model in the case of saturated or super-saturated air above the fill, but is underpredicted in the case of unsaturated moist air above the fill. The classical Merkel model always underpredicts the temperature above the fill. The density of moist air above the fill calculated using the generalised Merkel model is strongly overpredicted in the case of unsaturated moist air above the fill.

  16. Accurate prediction of energy expenditure using a shoe-based activity monitor.

    PubMed

    Sazonova, Nadezhda; Browning, Raymond C; Sazonov, Edward

    2011-07-01

    The aim of this study was to develop and validate a method for predicting energy expenditure (EE) using a footwear-based system with integrated accelerometer and pressure sensors. We developed a footwear-based device with an embedded accelerometer and insole pressure sensors for the prediction of EE. The data from the device can be used to perform accurate recognition of major postures and activities and to estimate EE using the acceleration, pressure, and posture/activity classification information in a branched algorithm without the need for individual calibration. We measured EE via indirect calorimetry as 16 adults (body mass index=19-39 kg·m⁻²) performed various low- to moderate-intensity activities and compared measured versus predicted EE using several models based on the acceleration and pressure signals. Inclusion of pressure data resulted in better accuracy of EE prediction during static postures such as sitting and standing. The activity-based branched model that included predictors from accelerometer and pressure sensors (BACC-PS) achieved the lowest error (e.g., root mean squared error (RMSE)=0.69 METs) compared with the accelerometer-only-based branched model BACC (RMSE=0.77 METs) and nonbranched model (RMSE=0.94-0.99 METs). Comparison of EE prediction models using data from both legs versus models using data from a single leg indicates that only one shoe needs to be equipped with sensors. These results suggest that foot acceleration combined with insole pressure measurement, when used in an activity-specific branched model, can accurately estimate the EE associated with common daily postures and activities. The accuracy and unobtrusiveness of a footwear-based device may make it an effective physical activity monitoring tool.
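    The branched structure described (classify posture/activity first, then apply a branch-specific regression) can be sketched as follows. The thresholds, the heel-pressure feature, and the per-branch MET coefficients are invented for illustration and do not come from the paper.

```python
# Hypothetical thresholds and per-branch coefficients, for illustration only.
def classify_posture(acc_std, heel_pressure_frac):
    """Crude posture/activity classifier from acceleration and insole pressure."""
    if acc_std > 0.5:
        return "walk"                      # high movement -> ambulation
    return "stand" if heel_pressure_frac > 0.3 else "sit"

BRANCH_COEF = {                            # METs = a + b * acc_std (made-up values)
    "sit":   (1.0, 0.2),
    "stand": (1.3, 0.3),
    "walk":  (2.0, 2.5),
}

def predict_mets(acc_std, heel_pressure_frac):
    """Branched EE estimate: pick the branch, then apply its linear model."""
    a, b = BRANCH_COEF[classify_posture(acc_std, heel_pressure_frac)]
    return a + b * acc_std
```

    The design point is that pressure information mainly disambiguates static postures (sitting versus standing), which is exactly where the abstract reports the accuracy gain.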

  17. Vesicular stomatitis forecasting based on Google Trends

    PubMed Central

    Lu, Yi; Zhou, GuangYa; Chen, Qin

    2018-01-01

    Background Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters that occur on the lips, tongue, oral mucosa, hoof crown and nipple. Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. Methods American VS outbreaks data were collected from OIE. The data for VS keywords were obtained by inputting 24 disease-related keywords into Google Trends. After calculating the Pearson and Spearman correlation coefficients, it was found that there was a relationship between outbreaks and keywords derived from Google Trends. Finally, the predicted model was constructed based on qualitative classification and quantitative regression. Results For the regression model, the Pearson correlation coefficients between the predicted outbreaks and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best classification predictive model as the result. The results showed, SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%,72.5% and 77.14%, respectively. Conclusion This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that these two models obtain more accurate forecast. PMID:29385198
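    The correlation-then-regression workflow above can be sketched with a toy series; the weekly trend volumes and outbreak counts below are hypothetical, not the study's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fit_line(xs, ys):
    """Least-squares slope and intercept for outbreaks ~ search volume."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical weekly data: Google Trends volume vs. reported VS outbreaks.
trends = [10, 25, 43, 60, 82]
outbreaks = [1, 3, 5, 6, 9]
r = pearson(trends, outbreaks)
slope, intercept = fit_line(trends, outbreaks)
```

    A keyword would be kept for modeling only if its correlation with the outbreak series is strong; the fitted line then turns a new week's search volume into a quantitative outbreak forecast.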

  18. Comparative analysis of grapevine whole-genome gene predictions, functional annotation, categorization and integration of the predicted gene sequences

    PubMed Central

    2012-01-01

    Background The first draft assembly and gene prediction of the grapevine genome (8X base coverage) was made available to the scientific community in 2007, and functional annotation was developed on this gene prediction. Since then, additional Sanger sequences have been added to the 8X sequence pool and a new version of the genomic sequence with superior base coverage (12X) was produced. Results In order to more efficiently annotate the function of the genes predicted in the new assembly, it is important to build on as much of the previous work as possible by transferring the 8X annotation of the genome to the 12X version. The 8X and 12X assemblies and gene predictions of the grapevine genome were compared to answer the question, “Can we uniquely map 8X predicted genes to 12X predicted genes?” The results show that while the assemblies and gene structure predictions are too different to make a complete mapping between them, most genes (18,725) showed a one-to-one relationship between 8X predicted genes and the last version of 12X predicted genes. In addition, reshuffled genomic sequence structures appeared. These highlight regions of the genome where the gene predictions need to be taken with caution. Based on the new grapevine gene functional annotation and in-depth functional categorization, twenty-eight new molecular networks have been created for VitisNet, while the existing networks were updated. Conclusions The outcomes of this study provide a functional annotation of the 12X genes, an update of VitisNet, the system of the grapevine molecular networks, and a new functional categorization of genes. Data are available at the VitisNet website (http://www.sdstate.edu/ps/research/vitis/pathways.cfm). PMID:22554261

  19. Automated Cognitive Health Assessment From Smart Home-Based Behavior Data.

    PubMed

    Dawadi, Prafulla Nath; Cook, Diane Joyce; Schmitter-Edgecombe, Maureen

    2016-07-01

    Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation (r = 0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r = 0.45) between CAAB-predicted and clinician-provided mobility scores. These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.

  20. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    PubMed Central

    2012-01-01

    Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. 
Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case – without sacrificing much of the accuracy of the results. Conclusions Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms. PMID:22776037
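    The distinction the authors draw between absolute and relative disturbances of the sampling probabilities can be illustrated numerically. The probability vector and error magnitude below are arbitrary stand-ins; the point, consistent with the reported observations, is that after renormalization, absolute errors distort a skewed distribution far more than relative errors of the same nominal size.

```python
import numpy as np

def perturb(p, eps, mode, rng):
    """Disturb a probability vector and renormalize.

    mode='relative': p_i -> p_i * (1 + eps*u_i); mode='absolute': p_i -> p_i + eps*u_i,
    with u_i uniform on [-1, 1]; negative entries are clipped to zero.
    """
    u = rng.uniform(-1.0, 1.0, size=len(p))
    q = p * (1.0 + eps * u) if mode == "relative" else p + eps * u
    q = np.clip(q, 0.0, None)
    return q / q.sum()

def total_variation(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * float(np.abs(p - q).sum())

rng = np.random.default_rng(4)
p = np.array([0.6, 0.3, 0.05, 0.03, 0.02])   # skewed, like SCFG rule probabilities
tv_rel = np.mean([total_variation(p, perturb(p, 0.2, "relative", rng))
                  for _ in range(200)])
tv_abs = np.mean([total_variation(p, perturb(p, 0.2, "absolute", rng))
                  for _ in range(200)])
```

    Relative errors preserve the order of magnitude of the small probabilities, while absolute errors of the same size can swamp them entirely, which is one intuition for why the two disturbance types affect sample quality so differently.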

  1. A systematic molecular dynamics study of nearest-neighbor effects on base pair and base pair step conformations and fluctuations in B-DNA

    PubMed Central

    Lavery, Richard; Zakrzewska, Krystyna; Beveridge, David; Bishop, Thomas C.; Case, David A.; Cheatham, Thomas; Dixit, Surjit; Jayaram, B.; Lankas, Filip; Laughton, Charles; Maddocks, John H.; Michon, Alexis; Osman, Roman; Orozco, Modesto; Perez, Alberto; Singh, Tanya; Spackova, Nada; Sponer, Jiri

    2010-01-01

    It is well recognized that base sequence exerts a significant influence on the properties of DNA and plays a significant role in protein–DNA interactions vital for cellular processes. Understanding and predicting base sequence effects requires an extensive structural and dynamic dataset which is currently unavailable from experiment. A consortium of laboratories was consequently formed to obtain this information using molecular simulations. This article describes results providing information not only on all 10 unique base pair steps, but also on all possible nearest-neighbor effects on these steps. These results are derived from simulations of 50–100 ns on 39 different DNA oligomers in explicit solvent and using a physiological salt concentration. We demonstrate that the simulations are converged in terms of helical and backbone parameters. The results show that nearest-neighbor effects on base pair steps are very significant, implying that dinucleotide models are insufficient for predicting sequence-dependent behavior. Flanking base sequences can notably lead to base pair step parameters in dynamic equilibrium between two conformational sub-states. Although this study only provides limited data on next-nearest-neighbor effects, we suggest that such effects should be analyzed before attempting to predict the sequence-dependent behavior of DNA. PMID:19850719

  2. Pretreatment data is highly predictive of liver chemistry signals in clinical trials

    PubMed Central

    Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T

    2012-01-01

    Purpose The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Patients and methods Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Results Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy’s law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. Conclusion It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones. PMID:23226004

  3. Model-based influences on humans’ choices and striatal prediction errors

    PubMed Central

    Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.

    2011-01-01

    Summary The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563

  4. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that with all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  5. Characterizing informative sequence descriptors and predicting binding affinities of heterodimeric protein complexes

    PubMed Central

    2015-01-01

    Background Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanism of the interactions plays a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structure and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. Results This work proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded a training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, better than existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (pKd) of 200 heterodimeric protein complexes. In a jackknife test, prediction performance was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyzed three informative physicochemical properties according to their contribution to prediction performance. Results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. 
Conclusions The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict binding affinity of heterodimeric protein complexes. The characterization analysis revealed that the average numbers of beta turns and hydrogen bonds at protein-protein interfaces in high binding affinity complexes are more than those in low binding affinity complexes. PMID:26681483
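    The feature-selection step (choosing a small informative subset of the 580 sequence descriptors) can be sketched with a simple correlation filter. This is a stand-in for the paper's optimal feature-selection method, not a reproduction of it; the synthetic descriptor matrix and the planted informative column are fabricated for the example.

```python
import numpy as np

def top_k_descriptors(X, y, k):
    """Rank descriptor columns by |correlation| with the binary affinity label."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    corr = (Xc.T @ yc) / denom          # point-biserial correlation per column
    return np.argsort(-np.abs(corr))[:k]

# Synthetic data: 60 complexes, 20 descriptors, column 3 made informative.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 60).astype(float)   # low/high binding affinity label
X = rng.standard_normal((60, 20))
X[:, 3] += 2.0 * y                         # plant a signal in descriptor 3
selected = top_k_descriptors(X, y, 5)
```

    The selected columns would then feed a classifier (an SVM in the paper) or a regressor for quantitative affinity estimation.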

  6. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial based on correlation analysis outcomes is slightly higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
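    Comparing polynomial degrees by cross-validation, as the study does, can be sketched in one dimension. The synthetic near-linear data stands in for a selected gene's expression versus a fate score; nothing here reproduces the pancreatic-cell dataset.

```python
import numpy as np

def cv_rmse(x, y, degree, folds=5):
    """k-fold cross-validated RMSE of a univariate polynomial fit."""
    idx = np.arange(len(x))
    errs = []
    for f in range(folds):
        test = idx[f::folds]                   # every folds-th point held out
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[test])
        errs.append(np.mean((pred - y[test]) ** 2))
    return float(np.sqrt(np.mean(errs)))

# Hypothetical 1-D stand-in: nearly linear expression-to-fate relationship.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.3 + rng.normal(0.0, 0.05, 50)
rmse_by_degree = {d: cv_rmse(x, y, d) for d in (1, 2, 3)}
```

    When the underlying relationship is close to linear, higher-degree polynomials buy little accuracy while adding parameters, which mirrors the paper's preference for the more stable degree-1 model.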

  7. A novel method for structure-based prediction of ion channel conductance properties.

    PubMed Central

    Smart, O S; Breed, J; Smith, G R; Sansom, M S

    1997-01-01

    A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel, as measured in the HOLE program, with an Ohmic model of conductance. An empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate to within an average factor of 1.62 of the true values. The predictive r2 was equal to 0.90, which is indicative of a good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions for the conductance of channels with known structure but without reported conductances are given. A modification of the procedure that calculates the expected effect of the addition of nonelectrolyte polymers on conductance is set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
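    The Ohmic part of such a method (treating the pore as a stack of thin resistive slabs along the HOLE radius profile) can be sketched as below. The empirical correction factor the paper applies is omitted, and the electrolyte resistivity value is an assumption for the example, roughly that of 1 M KCl.

```python
import math

def ohmic_conductance(radii_nm, dz_nm, rho_ohm_m=0.0913):
    """Ohmic conductance (siemens) of a pore from its radius profile r(z).

    Thin slabs in series: R = sum over slabs of rho * dz / (pi * r^2).
    rho defaults to an assumed bulk 1 M KCl resistivity (~0.09 ohm*m).
    """
    resistance = 0.0
    for r in radii_nm:
        area_m2 = math.pi * (r * 1e-9) ** 2
        resistance += rho_ohm_m * (dz_nm * 1e-9) / area_m2
    return 1.0 / resistance

# Sanity check against a uniform cylinder: r = 1 nm, L = 50 * 0.1 nm = 5 nm.
g = ohmic_conductance([1.0] * 50, 0.1)
g_analytic = math.pi * (1e-9) ** 2 / (0.0913 * 5e-9)   # G = pi*r^2 / (rho*L)
```

    For a uniform profile the slab sum collapses to the textbook cylinder formula, which makes a convenient correctness check before feeding in a real HOLE profile.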

  8. Gray correlation analysis and prediction models of living refuse generation in Shanghai city.

    PubMed

    Liu, Gousheng; Yu, Jianguo

    2007-01-01

    A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most design efforts have been based on a rough prediction of MLF without any actual support. In this paper, based on published data on socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the method of gray correlation coefficients. Several gray models, such as GM(1,1), GIM(1), GPPM(1) and GLPM(1), have been studied, and the predicted results are verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water and electricity are the three largest factors affecting MLF generation, and GLPM(1) is the optimal model for predicting MLF generation. Using this model, the predicted MLF generation in 2010 in Shanghai will be 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
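    The standard GM(1,1) grey model, the simplest of the family compared in the paper, can be sketched as follows; the refuse-generation series below is hypothetical, not the Shanghai data.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters

    def x1_hat(k):                             # closed-form response, k = 0, 1, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    # De-accumulate: forecast of x0 is the difference of consecutive x1_hat.
    return [float(x1_hat(n + s) - x1_hat(n + s - 1)) for s in range(steps)]

# Hypothetical annual refuse series (million tons), roughly 6% yearly growth.
series = [5.0, 5.3, 5.62, 5.96, 6.31]
pred = gm11_forecast(series)[0]
```

    GM(1,1) essentially fits an exponential trend through the accumulated series, so it tracks near-geometric growth like this toy series closely with only a handful of data points.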

  9. RRCRank: a fusion method using rank strategy for residue-residue contact prediction.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian

    2017-09-02

    In the structural biology area, protein residue-residue contacts play a crucial role in protein structure prediction. Some researchers have found that predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. In the last few decades, researchers have developed various methods to predict residue-residue contacts; in particular, significant performance gains have been achieved by fusion methods in recent years. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike the traditional regression or classification strategies, the contact prediction task is regarded as a ranking task. Two kinds of features are extracted from correlated mutations methods and ensemble machine-learning classifiers, and then the proposed method uses the learning-to-rank algorithm to predict the contact probability of each residue pair. First, we perform two benchmark tests for the proposed fusion method (RRCRank) on the CASP11 dataset and CASP12 dataset respectively. The test results show that the RRCRank method outperforms other well-developed methods, especially for medium and short range contacts. Second, in order to verify the superiority of the ranking strategy, we predict contacts by using the traditional regression and classification strategies based on the same features as the ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for all three contact types, in particular for long range contacts. Third, the proposed RRCRank has been compared with several state-of-the-art methods in CASP11 and CASP12. The results show that RRCRank achieves comparable prediction precision and is better than three methods in most assessment metrics. 
The learning-to-rank algorithm is introduced to develop a novel rank-based method for the residue-residue contact prediction of proteins, which achieves state-of-the-art performance based on the extensive assessment.
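    The ranking strategy can be illustrated with a minimal pairwise (RankNet-style) linear ranker trained so that contact pairs outscore non-contacts. The features, training scheme, and synthetic data are simplified stand-ins for the paper's learning-to-rank algorithm, not a reproduction of it.

```python
import numpy as np

def train_pairwise_ranker(X, y, lr=0.05, epochs=5):
    """Linear pairwise ranker: minimize logistic loss over (contact, non-contact)
    pairs so that positives score higher than negatives."""
    w = np.zeros(X.shape[1])
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    for _ in range(epochs):
        for i in pos:
            for j in neg:
                d = X[i] - X[j]
                s = np.clip(w @ d, -60.0, 60.0)   # clip to avoid exp overflow
                p = 1.0 / (1.0 + np.exp(-s))      # P(i ranked above j)
                w += lr * (1.0 - p) * d           # gradient of pairwise log-loss
    return w

# Synthetic residue-pair features: true contacts get a larger first feature.
rng = np.random.default_rng(3)
y = np.array([1] * 20 + [0] * 20)
X = rng.standard_normal((40, 4))
X[:20, 0] += 2.0
w = train_pairwise_ranker(X, y)
scores = X @ w
pairwise_acc = float(np.mean([scores[i] > scores[j]
                              for i in range(20) for j in range(20, 40)]))
```

    Unlike a classifier, the ranker is optimized directly for the ordering of residue pairs, which is what matters when only the top-L predicted contacts are used to constrain structure prediction.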

  10. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in northeastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The methodology for developing the regression equations used to estimate the statistics was originally developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage areas. 
Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
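Regression equations of this kind typically relate the logarithm of a flow statistic to the logarithms of basin characteristics. A minimal sketch of fitting and applying such an equation follows; the basin values and the two-predictor log-log form are hypothetical illustrations, not the report's actual equations:

```python
import numpy as np

# Hypothetical gaged basins: drainage area (mi^2), mean basin elevation (ft),
# and a target flow statistic (e.g. mean annual flow, cfs).
area = np.array([12.0, 35.0, 58.0, 90.0, 140.0])
elev = np.array([900.0, 1100.0, 1000.0, 1300.0, 1200.0])
flow = np.array([20.0, 55.0, 88.0, 150.0, 230.0])

# Fit log10(Q) = b0 + b1*log10(A) + b2*log10(E) by ordinary least squares.
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(elev)])
coef, *_ = np.linalg.lstsq(X, np.log10(flow), rcond=None)

def predict_flow(a, e):
    """Apply the fitted regression to an ungaged basin."""
    return 10 ** (coef[0] + coef[1] * np.log10(a) + coef[2] * np.log10(e))
```

The back-transform from log space is where the standard error of prediction quoted in such reports becomes multiplicative rather than additive.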

  11. Prediction of microbe-disease association from the integration of neighbor and graph with collaborative recommendation model.

    PubMed

    Huang, Yu-An; You, Zhu-Hong; Chen, Xing; Huang, Zhi-An; Zhang, Shanwen; Yan, Gui-Ying

    2017-10-16

Accumulating clinical research has shown that specific microbes with abnormal levels are closely associated with the development of various human diseases. Knowledge of microbe-disease associations can provide valuable insights for understanding complex disease mechanisms as well as for the prevention, diagnosis and treatment of various diseases. However, little effort has been made to predict microbial candidates for human complex diseases on a large scale. In this work, we developed a new computational model for predicting microbe-disease associations by combining two single recommendation methods. Based on the assumption that functionally similar microbes tend to get involved in the mechanisms of similar diseases, we adopted neighbor-based collaborative filtering and a graph-based scoring method to compute the association possibility of microbe-disease pairs. The promising prediction performance could be attributed to the use of a hybrid approach based on two single recommendation methods as well as the introduction of Gaussian kernel-based similarity and symptom-based disease similarity. To evaluate the performance of the proposed model, we implemented leave-one-out and fivefold cross validations on the HMDAD database, which was recently built as the first database collecting experimentally confirmed microbe-disease associations. As a result, NGRHMDA achieved reliable results with AUCs of 0.9023 ± 0.0031 and 0.9111 in the validation frameworks of fivefold CV and LOOCV. In addition, 78.2% of microbe samples and 66.7% of disease samples were found to be consistent with the basic assumption of our work that microbes tend to get involved in similar disease clusters, and vice versa. Compared with other methods, the prediction results yielded by NGRHMDA demonstrate its effective prediction performance for microbe-disease associations. 
It is anticipated that NGRHMDA can be used as a useful tool to search for the most promising microbial candidates for various diseases, and thereby advance medical knowledge and drug development. The code and dataset of our work can be downloaded from https://github.com/yahuang1991/NGRHMDA .
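One ingredient named above, the Gaussian kernel-based similarity, is commonly computed directly from the binary association profiles. A minimal sketch under that assumption (toy association matrix; this is not the NGRHMDA code):

```python
import numpy as np

def gip_kernel(adj, gamma0=1.0):
    """Gaussian interaction-profile kernel similarity between the rows of a
    binary association matrix (rows = microbes, columns = diseases)."""
    norms = np.sum(adj ** 2, axis=1)
    gamma = gamma0 / norms.mean()          # bandwidth normalized by mean profile norm
    # squared Euclidean distances between all pairs of row profiles
    d2 = norms[:, None] + norms[None, :] - 2.0 * adj @ adj.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

# toy microbe-disease association matrix
A = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
K = gip_kernel(A)
```

Rows with identical association profiles get similarity 1, and similarity decays with the squared distance between profiles.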

  12. An Ensemble Approach for Drug Side Effect Prediction

    PubMed Central

    Jahid, Md Jamiul; Ruan, Jianhua

    2014-01-01

In silico prediction of drug side-effects in the early stages of drug development is becoming increasingly popular, as it reduces both the time and the cost of drug development. In this article we propose an ensemble approach to predict the side-effects of drug molecules based on their chemical structure. Our idea originates from the observation that similar drugs have similar side-effects. Based on this observation, we design an ensemble approach that combines the results from different classification models, where each model is generated from a different set of similar drugs. We applied our approach to 1385 side-effects in the SIDER database for 888 drugs. Results show that our approach outperformed previously published approaches and standard classifiers. Furthermore, we applied our method to a number of uncharacterized drug molecules in the DrugBank database and predicted their side-effect profiles for future use. Results from various sources confirm that our method is able to predict the side-effects of uncharacterized drugs and, more importantly, to predict rare side-effects, which are often ignored by other approaches. The method described in this article can be used to predict side-effects at an early stage of drug design to reduce experimental cost and time. PMID:25327524
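The underlying observation, that similar drugs have similar side-effects, can be caricatured as a similarity-weighted vote over the most similar known drugs. The toy reduction below (hypothetical fingerprints and labels, Tanimoto similarity) illustrates only the guiding idea, not the paper's ensemble of classifiers:

```python
import numpy as np

# Toy binary fingerprints (rows = known drugs) and their side-effect labels.
fps = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 1, 1, 1]], dtype=float)
side_effects = np.array([[1, 0],   # drug 0 causes side-effect A only
                         [1, 0],
                         [0, 1],
                         [0, 1]], dtype=float)

def tanimoto(a, b):
    inter = np.minimum(a, b).sum()
    union = np.maximum(a, b).sum()
    return inter / union if union else 0.0

def predict_profile(query, k=2):
    """Score each side-effect by similarity-weighted voting over the k
    most structurally similar known drugs."""
    sims = np.array([tanimoto(query, f) for f in fps])
    top = np.argsort(sims)[-k:]
    w = sims[top]
    return (w @ side_effects[top]) / w.sum()

profile = predict_profile(np.array([1.0, 1.0, 0.0, 0.0]))
```

The paper goes further by training a separate classification model per similar-drug subset and combining the models' outputs.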

  13. Link prediction based on local community properties

    NASA Astrophysics Data System (ADS)

    Yang, Xu-Hua; Zhang, Hai-Feng; Ling, Fei; Cheng, Zhi; Weng, Guo-Qing; Huang, Yu-Jiao

    2016-09-01

The link prediction algorithm is one of the key technologies for revealing the inherent rules of network evolution. This paper proposes a novel link prediction algorithm based on the properties of the local community, which is composed of the common neighbor nodes of any two nodes in the network and the links between these nodes. By referring to the node degree and the assortativity or disassortativity of the network, we comprehensively consider the effects of the shortest path and the edge clustering coefficient within the local community on node similarity. We numerically show that the proposed method provides good link prediction results.
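The local community of a candidate pair, i.e. the common neighbors of the two nodes plus the links among those neighbors, is easy to enumerate on an adjacency structure. A toy sketch with an assumed additive score (the paper's index additionally weighs shortest paths and edge clustering coefficients):

```python
# Adjacency as a dict of neighbor sets for a small undirected toy graph.
graph = {
    1: {2, 3, 4},
    2: {1, 3, 4},
    3: {1, 2, 4, 5},
    4: {1, 2, 3, 5},
    5: {3, 4},
}

def local_community_score(g, x, y):
    """Score a candidate link (x, y) by the size of the local community
    (common neighbors of x and y) plus the number of links inside it --
    an illustrative stand-in for the paper's similarity index."""
    common = g[x] & g[y]
    inner_links = sum(1 for u in common for v in common if v in g[u] and u < v)
    return len(common) + inner_links

score_15 = local_community_score(graph, 1, 5)  # candidate link (1, 5)
```

For the pair (1, 5) the local community is {3, 4} with one internal link, giving a score of 3.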

  14. A method of predicting the energy-absorption capability of composite subfloor beams

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1987-01-01

A simple method of predicting the energy-absorption capability of composite subfloor beam structures was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical data base of energy-absorption results from circular and square cross-section tube specimens was used as the basis for prediction. The procedure is applicable to a wide range of subfloor beam structures and was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
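The weighted-sum idea can be stated in a few lines; all element values and weight fractions below are hypothetical illustrations, not data from the report:

```python
# Weighted-sum estimate of beam energy absorption from constituent elements
# (hypothetical specific energy-absorption values, J/g of crushed material).
elements = [
    {"ea": 45.0, "weight_fraction": 0.5},   # e.g. circular-tube segments
    {"ea": 30.0, "weight_fraction": 0.3},   # e.g. square-tube corner elements
    {"ea": 20.0, "weight_fraction": 0.2},   # e.g. flat-web elements
]
beam_ea = sum(e["ea"] * e["weight_fraction"] for e in elements)
```

Each element's contribution is taken from the empirical tube-specimen data base and weighted by its share of the beam's crushed mass.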

  15. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
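Of the three stages, the PLS regression step admits a compact sketch. The one-target NIPALS implementation below (synthetic sensor data) illustrates how a sensory descriptor could be regressed on e-nose responses; it is not the authors' pipeline, which also includes ICA reduction and an ANN classifier:

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal PLS1 regression via NIPALS deflation. Returns the regression
    vector B plus the centering offsets."""
    X, y = X.astype(float).copy(), y.astype(float).copy()
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                      # weight direction from covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                         # scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        qk = yc @ t / tt                   # y loading
        Xc -= np.outer(t, p)               # deflate
        yc -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)    # fold components into one vector
    return B, X.mean(0), y.mean()

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 6))                          # e-nose sensor responses
y = X @ np.array([1.0, -0.5, 0, 0, 0, 0]) + 2.0      # sensory descriptor score
B, xm, ym = pls1_fit(X, y)
pred = (X - xm) @ B + ym
```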

  16. Strainrange partitioning behavior of the nickel-base superalloys Rene' 80 and IN 100

    NASA Technical Reports Server (NTRS)

    Halford, G. R.; Nachtigall, A. J.

    1978-01-01

    A study was made to assess the ability of the method of Strainrange Partitioning (SRP) to both correlate and predict high-temperature, low cycle fatigue lives of nickel-base superalloys for gas turbine applications. The partitioned strainrange versus life relationships for uncoated Rene' 80 and cast IN 100 were also determined from the ductility-normalized Strainrange Partitioning equations. These were used to predict the cyclic lives of the baseline tests. The life predictability of the method was verified for cast IN 100 by applying the baseline results to the cyclic life prediction of a series of complex strain cycling tests with multiple hold periods at constant strain. It was concluded that the method of SRP can correlate and predict the cyclic lives of laboratory specimens of the nickel-base superalloys evaluated in this program.
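In SRP, each partitioned strainrange component has its own strainrange-life relation, and the components combine through the interaction damage rule 1/N = Σ F_ij/N_ij. A sketch with hypothetical coefficients (not the Rene' 80 or IN 100 constants):

```python
def srp_life(strainrange, fractions, A, alpha):
    """Interaction damage rule: each component ij has a power-law life
    relation N_ij = A_ij * strainrange**alpha_ij, and the component
    fractions F_ij combine as 1/N = sum(F_ij / N_ij)."""
    inv_n = 0.0
    for key, F in fractions.items():
        n_ij = A[key] * strainrange ** alpha[key]
        inv_n += F / n_ij
    return 1.0 / inv_n

# hypothetical life-relation constants for the four partitioned strainranges
A = {"pp": 0.5, "pc": 0.2, "cp": 0.1, "cc": 0.3}
alpha = {"pp": -1.6, "pc": -1.6, "cp": -1.6, "cc": -1.6}

# a cycle whose inelastic strainrange is 70% pp and 30% cc
life = srp_life(0.01, {"pp": 0.7, "cc": 0.3}, A, alpha)
```

The creep-dominated components (pc, cp, cc) typically have shorter lives than pp at the same strainrange, which is how hold-time damage enters the prediction.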

  17. A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.

    PubMed

    Cai, Binghuang; Jiang, Xia

    2014-04-01

    Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
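The core of the MPI idea, solving the hidden-to-output weights in one step rather than by iterative training, can be sketched with a random-feature network on synthetic data. This illustrates the principle only; it is not the authors' MPI-ANN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem (e.g. simulated SNP features -> disease label).
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Three-layer network: input -> random hidden layer -> linear output.
W_in = rng.normal(size=(5, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W_in + b)            # hidden-layer activations

# Output weights determined directly via the Moore-Penrose pseudo-inverse,
# with no iterative learning loop.
W_out = np.linalg.pinv(H) @ y

pred = (H @ W_out > 0.5).astype(float)
accuracy = float((pred == y).mean())
```

Because the pseudo-inverse gives the least-squares solution in closed form, training cost is a single linear-algebra solve rather than many gradient epochs.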

  18. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, the spatial distribution of rain gauges is not sufficient to capture the highly varied spatial nature of rainfall, so satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32,000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was calibrated on 2003 and validated on 2004 gridded gauge-observed rainfall inputs (kriging interpolation), simulating stream flow with a Nash-Sutcliffe efficiency above 0.7 in both periods. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting CPC-RFE2.0 by seasonal, monthly and 7-day moving average ratios further improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate, obtained by ingesting local rain gauge data, resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
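A moving-average-ratio bias adjustment of the kind described can be sketched in a few lines. The synthetic series, trailing-window handling and variable names below are illustrative assumptions, not the study's procedure:

```python
import numpy as np

def bias_adjust(satellite, gauge, window=7):
    """Scale each satellite rainfall value by the ratio of gauge to
    satellite totals over a trailing moving window (a sketch of a
    7-day moving-average ratio adjustment)."""
    adjusted = satellite.astype(float).copy()
    for t in range(len(satellite)):
        lo = max(0, t - window + 1)
        sat_sum = satellite[lo:t + 1].sum()
        if sat_sum > 0:
            adjusted[t] = satellite[t] * gauge[lo:t + 1].sum() / sat_sum
    return adjusted

sat = np.array([2.0, 4.0, 0.0, 6.0, 8.0])
obs = np.array([3.0, 6.0, 0.0, 9.0, 12.0])   # gauge consistently 1.5x satellite
adj = bias_adjust(sat, obs)
```

When the satellite product consistently underestimates by a constant factor, the adjustment recovers the gauge magnitudes while preserving the satellite's spatial and temporal pattern.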

  19. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Artan, Guleid A.; Tokar, S.A.; Gautam, D.K.; Bajracharya, S.R.; Shrestha, M.S.

    2011-01-01

    In Nepal, the spatial distribution of rain gauges is not sufficient to capture the highly varied spatial nature of rainfall, so satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was calibrated on 2003 and validated on 2004 gridded gauge-observed rainfall inputs (kriging interpolation), simulating stream flow with a Nash-Sutcliffe efficiency above 0.7 in both periods. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC_RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated but improved after recalibration with CPC_RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting CPC_RFE2.0 by seasonal, monthly and 7-day moving average ratios further improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate, obtained by ingesting local rain gauge data, resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction.

  20. Research of Water Level Prediction for a Continuous Flood due to Typhoons Based on a Machine Learning Method

    NASA Astrophysics Data System (ADS)

    Nakatsugawa, M.; Kobayashi, Y.; Okazaki, R.; Taniguchi, Y.

    2017-12-01

    This research aims to improve the accuracy of water level prediction calculations for more effective river management. In August 2016, Hokkaido was visited by four typhoons whose heavy rainfall caused severe flooding. In the Tokoro River basin of eastern Hokkaido, the water level (WL) at the Kamikawazoe gauging station, in the lower reaches, exceeded the design high-water level and rose to the highest level on record. To predict such flood conditions and mitigate disaster damage, it is necessary to improve prediction accuracy as well as to prolong the lead time (LT) available for disaster mitigation measures such as flood-fighting activities and evacuation by residents; the river water level around the peak stage must be predicted earlier and more accurately. Previous research on WL prediction proposed a method in which the WL in the lower reaches is estimated from its correlation with the WL in the upper reaches (hereinafter, "the water level correlation method"). In addition, a runoff model-based method has generally been used, in which discharge is estimated by feeding rainfall prediction data to a runoff model, such as a storage function model, and the WL is then estimated from that discharge using a water level-discharge rating curve (H-Q curve). In this research, an attempt was made to predict WL by applying the Random Forest (RF) method, a machine learning method that can estimate the contribution of explanatory variables. Furthermore, from a practical point of view, we investigated WL prediction based on a multiple correlation (MC) method using the explanatory variables with high contributions in the RF method, and we examined the proper selection of explanatory variables and the extension of LT. 
The following results were found: 1) based on the RF method trained on previous floods, the WL for the abnormal flood of August 2016 was properly predicted with a lead time of 6 h; 2) based on the contributions of the explanatory variables, factors were selected for the MC method, and plausible prediction results were obtained.
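The MC method amounts to a multiple linear regression of the future water level on selected explanatory variables. A sketch on synthetic data (the variables, coefficients and the implied 6 h lead are illustrative assumptions, not the study's data):

```python
import numpy as np

# Illustrative hourly series: upstream water level (m), basin-mean rainfall
# (mm/h), and the downstream water level 6 hours later (prediction target).
rng = np.random.default_rng(1)
upstream = rng.uniform(1.0, 4.0, size=200)
rainfall = rng.uniform(0.0, 20.0, size=200)
target = 0.8 * upstream + 0.05 * rainfall + 0.3 + rng.normal(0, 0.05, 200)

# Multiple-correlation model: WL(t+6h) ~ b0 + b1*upstream + b2*rainfall.
X = np.column_stack([np.ones_like(upstream), upstream, rainfall])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

def predict_wl(u, r):
    """Predict the downstream water level 6 h ahead."""
    return coef[0] + coef[1] * u + coef[2] * r
```

In the study, the explanatory variables entering such a regression were chosen by their contribution (variable importance) in the Random Forest model.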

  1. Predictability of the Lagrangian Motion in the Upper Ocean

    NASA Astrophysics Data System (ADS)

    Piterbarg, L. I.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.

    2001-12-01

    The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well-known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters, enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and the Adriatic Sea showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. 
Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
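The "random flight" model generalized here treats a particle's velocity as an Ornstein-Uhlenbeck process that decorrelates over the Lagrangian time scale. A one-dimensional sketch with illustrative parameter values:

```python
import numpy as np

def random_flight(n_steps=1000, dt=0.1, T_L=1.0, sigma=0.2, seed=0):
    """Single-particle 'random flight' model: position advected by a
    velocity following an Ornstein-Uhlenbeck process with decorrelation
    time T_L and stationary standard deviation ~sigma."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    u = np.zeros(n_steps)
    for k in range(1, n_steps):
        noise = rng.normal()
        u[k] = u[k-1] - (u[k-1] / T_L) * dt + sigma * np.sqrt(2.0 * dt / T_L) * noise
        x[k] = x[k-1] + u[k] * dt
    return x, u

x, u = random_flight()
```

Trajectories from such a model diffuse ballistically for times shorter than T_L and diffusively beyond it, which is what limits deterministic position prediction.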

  2. Use seismic colored inversion and power law committee machine based on imperial competitive algorithm for improving porosity prediction in a heterogeneous reservoir

    NASA Astrophysics Data System (ADS)

    Ansari, Hamid Reza

    2014-09-01

    In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because of the strong heterogeneity of carbonate formations, estimating their rock properties is more challenging than for sandstone. For this purpose, seismic colored inversion (SCI) and a new committee machine approach are used to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume; acoustic impedance is an important attribute, obtained here by the SCI method. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed-forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions into a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved on the results of the neural networks, support vector machine and neuro-fuzzy system.
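A power law committee machine combines the expert outputs through weighted power-law terms. The sketch below assumes the form Σ w_i · y_i^a_i with fixed weights and exponents; in the paper these coefficients are what the imperialist competitive algorithm optimizes:

```python
import numpy as np

def plcm(predictions, weights, exponents):
    """Power-law committee machine: combine expert outputs y_i as
    sum_i w_i * y_i**a_i (assumed functional form, shown with fixed
    coefficients for illustration)."""
    combined = np.zeros_like(predictions[0])
    for y, w, a in zip(predictions, weights, exponents):
        combined += w * np.power(y, a)
    return combined

# porosity estimates (fractions) from three hypothetical experts
p1 = np.array([0.10, 0.20])
p2 = np.array([0.12, 0.18])
p3 = np.array([0.11, 0.22])
out = plcm([p1, p2, p3], weights=[0.4, 0.3, 0.3], exponents=[1.0, 1.0, 1.0])
```

With all exponents equal to 1 the combiner reduces to a weighted average; non-unit exponents let the committee bend each expert's contribution nonlinearly.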

  3. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator based on the ECHAM5 atmospheric GCM show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
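The two-step cycle described in (i) and (ii), sample parameters from a proposal and feed forecast skill back into that proposal, can be caricatured with importance weighting on a toy likelihood. This is only a sketch of the idea, not the EPPES algorithm itself; the covariance floor is an ad hoc assumption to keep the proposal from collapsing:

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta = np.array([1.5, -0.5])        # "unknown" closure parameters

def skill(theta):
    # stand-in likelihood: forecast skill decays with distance from the truth
    return np.exp(-np.sum((theta - true_theta) ** 2))

mean, cov = np.zeros(2), np.eye(2)        # initial proposal distribution
for _ in range(30):                       # one "ensemble cycle" per iteration
    ens = rng.multivariate_normal(mean, cov, size=50)   # perturbed-parameter members
    w = np.array([skill(t) for t in ens])
    w /= w.sum()
    mean = w @ ens                        # feed skill back into the proposal
    diff = ens - mean
    cov = (w[:, None] * diff).T @ diff + 0.05 * np.eye(2)  # ad hoc floor
```

After a few cycles the proposal mean migrates toward parameter values with better verification skill, which is the sense in which no extra forecasts beyond the ensemble are needed.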

  4. Application of LogitBoost Classifier for Traceability Using SNP Chip Data

    PubMed Central

    Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability. PMID:26436917

  5. Application of LogitBoost Classifier for Traceability Using SNP Chip Data.

    PubMed

    Kim, Kwondo; Seo, Minseok; Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability.

  6. Proposals for enhanced health risk assessment and stratification in an integrated care scenario

    PubMed Central

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-01-01

    Objectives Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. Settings The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Participants Responsible teams for regional data management in the five ACT regions. Primary and secondary outcome measures We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. Results There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. 
Conclusions The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. PMID:27084274

  7. Predicting overload-affected fatigue crack growth in steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skorupa, M.; Skorupa, A.; Ladecki, B.

    1996-12-01

    The ability of semi-empirical crack closure models to predict the effect of overloads on fatigue crack growth in low-alloy steels has been investigated. For this purpose, the CORPUS model, developed for aircraft metals and spectra, was first checked through comparisons between simulated and observed results for a low-alloy steel. The CORPUS predictions of crack growth under several types of simple load histories containing overloads appeared generally unconservative, which prompted the authors to formulate a new model more suitable for steels. In the latter approach, the assumed evolution of the crack opening stress during the delayed retardation stage has been based on experimental results reported for various steels. For all the load sequences considered, the predictions from the proposed model appeared to be far more accurate than those from CORPUS. Based on the analysis results, the capability of semi-empirical prediction concepts to cover experimentally observed trends reported for sequences with overloads is discussed. Finally, possibilities for improving the model performance are considered.
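The flavour of semi-empirical retardation modelling can be shown with the simpler Wheeler approach, Paris-law growth scaled by a retardation factor while the crack-tip plastic zone remains inside the overload plastic zone. All constants below are illustrative, and this is neither CORPUS nor the authors' crack-closure model:

```python
import numpy as np

def crack_growth(a0, cycles, S, overload_at=None, C=1e-11, m=3.0, p=1.5):
    """Paris-law growth with Wheeler-style retardation after a single 100%
    overload. Constants (C, m, p, and the 400 MPa yield-like divisor in the
    plastic-zone estimate) are illustrative; stress in MPa, lengths in m."""
    a, a_ol, plast_ol = a0, None, 0.0
    for n in range(cycles):
        K = S * np.sqrt(np.pi * a)                  # stress intensity factor
        if overload_at is not None and n == overload_at:
            K_ol = 2.0 * K                          # the overload cycle
            a_ol, plast_ol = a, (K_ol / 400.0) ** 2 # overload plastic-zone size
        phi = 1.0                                   # retardation factor
        if a_ol is not None:
            plast = (K / 400.0) ** 2                # current plastic-zone size
            if a + plast < a_ol + plast_ol:         # still inside overload zone
                phi = (plast / (a_ol + plast_ol - a)) ** p
        a += phi * C * K ** m                       # (retarded) Paris increment
    return a

a_base = crack_growth(a0=0.005, cycles=20000, S=100.0)
a_ret = crack_growth(a0=0.005, cycles=20000, S=100.0, overload_at=5000)
```

Crack-closure models such as CORPUS instead track the crack opening stress explicitly, which is what the proposed model modifies for steels.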

  8. Predicting the Creativity of Design Majors Based on the Interaction of Diverse Personality Traits

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Peng, Li-Pei; Lin, Ju-Sen; Liang, Chaoyun

    2015-01-01

    In this study, design majors were analysed to examine how diverse personality traits interact and influence student creativity. The study participants comprised 476 design majors. The results indicated that openness predicted the originality of creativity, whereas openness, conscientiousness and agreeableness predicted the usefulness of…

  9. Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services

    USDA-ARS?s Scientific Manuscript database

    The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results additional software has been de...

  10. Is there more valuable information in PWI datasets for a voxel-wise acute ischemic stroke tissue outcome prediction than what is represented by typical perfusion maps?

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Siemonsen, Susanne; Dalski, Michael; Verleger, Tobias; Kemmling, Andre; Fiehler, Jens

    2014-03-01

    Acute ischemic stroke is a leading cause of death and disability in industrialized nations. In the case of an acute ischemic stroke, prediction of the future tissue outcome is of high interest to clinicians, as it can be used to support therapy decision making. Within this context, it has already been shown that voxel-wise multi-parametric tissue outcome prediction leads to more promising results than single-channel perfusion map thresholding. Most previously published multi-parametric predictions employ information from perfusion maps derived from perfusion-weighted MRI (PWI) together with other image sequences such as diffusion-weighted MRI. However, it remains unclear whether the typically calculated perfusion maps really include all the valuable information in the PWI dataset for an optimal tissue outcome prediction. To investigate this problem in more detail, two different methods to predict tissue outcome using a k-nearest-neighbor approach were developed in this work and evaluated on 18 datasets of acute stroke patients with known tissue outcome. The first method integrates apparent diffusion coefficient and perfusion parameter (Tmax, MTT, CBV, CBF) information for the voxel-wise prediction, while the second method also employs apparent diffusion coefficient information but uses the complete perfusion information, in the form of the voxel-wise residue functions, instead of the perfusion parameter maps. Overall, the comparison of the results of the two prediction methods for the 18 patients using a leave-one-out cross validation revealed no considerable differences. Quantitatively, the parameter-based prediction of tissue outcome led to a mean Dice coefficient of 0.474, while the prediction using the residue functions led to a mean Dice coefficient of 0.461. 
Thus, it may be concluded from the results of this study that the perfusion parameter maps typically derived from PWI datasets include all valuable perfusion information required for a voxel-based tissue outcome prediction, while the complete analysis of the residue functions does not add further benefits for the voxel-wise tissue outcome prediction and is also computationally more expensive.
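The Dice coefficient used for evaluation measures the overlap of the predicted and observed lesion masks, D = 2|P∩T|/(|P|+|T|). A minimal sketch on toy masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
score = dice(pred, truth)   # 2*2 / (3+3)
```

A value of 1 means perfect voxel-wise agreement, so the reported means of 0.474 and 0.461 indicate moderate and very similar overlap for the two methods.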

  11. Prediction of shallow landslide occurrence: Validation of a physically-based approach through a real case study.

    PubMed

    Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele

    2016-11-01

    In recent years, physically-based numerical models have frequently been used in early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. In this work we therefore describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. To test the reliability of this model, a back analysis of recent landslide events that occurred in the study area (located SW of Messina, northeastern Sicily, Italy) on October 1st, 2009 was performed. The simulation results were compared with those obtained for the same event using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation over a 2-year period was performed for the same area, with the aim of evaluating the performance of SLIP as an early warning tool. The results confirm the good predictive capability of the model, in terms of both spatial and temporal prediction of the instability phenomena. For this reason, we recommend an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, based on the use of SLIP calibrated through a specific multi-methodological approach. Copyright © 2016 Elsevier B.V. All rights reserved.
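Simplified physically-based models of this class build on infinite-slope limit equilibrium, where instability is flagged when the factor of safety drops below 1 as rainfall saturates the soil column. A generic sketch (illustrative parameter values; this is the classical formulation, not SLIP itself):

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """Infinite-slope factor of safety for a soil column saturated over a
    fraction m of its depth. c: cohesion (kPa), phi: friction angle (deg),
    gamma: soil unit weight (kN/m^3), z: depth (m), beta: slope angle (deg)."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    tau = gamma * z * np.sin(beta) * np.cos(beta)          # driving shear stress
    sigma_eff = (gamma * z * np.cos(beta) ** 2
                 - m * gamma_w * z * np.cos(beta) ** 2)    # effective normal stress
    return (c + sigma_eff * np.tan(phi)) / tau

fs_dry = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=1.5, beta_deg=35.0, m=0.0)
fs_wet = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=1.5, beta_deg=35.0, m=1.0)
```

Driving the saturation ratio m with rainfall in time is what turns such a static check into a temporal triggering prediction.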

  12. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber physical systems (CPS) have recently emerged as a technology that can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. The manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is used to measure the demand side energy consumption of the manufacturing center, and the load-prediction-based energy optimization scheme is implemented on the collected data. The problem of DSM in the manufacturing center is solved with both the PSO and the CPSO method; the experiment shows that the self-adaptive CPSO energy optimization method improves optimization results by 5% compared with the traditional PSO method.
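
    The particle swarm step underlying the PSO/CPSO comparison can be sketched generically as below: a minimal PSO minimizer applied to a toy quadratic "energy cost" surface. The inertia and acceleration constants are standard illustrative choices; the paper's actual cost model and its CPSO variant are not reproduced here.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimize `cost` with a basic global-best particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:       # update personal and global bests
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy "energy cost" surface with its optimum at (1, 2)
best, best_cost = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```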

  13. Fatigue life prediction of rotor blade composites: Validation of constant amplitude formulations with variable amplitude experiments

    NASA Astrophysics Data System (ADS)

    Westphal, T.; Nijssen, R. P. L.

    2014-12-01

    The effect of Constant Life Diagram (CLD) formulation on fatigue life prediction under variable amplitude (VA) loading was investigated using variable amplitude tests with three different load spectra representative of wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results, the prediction accuracy of four CLD formulations was investigated. A piecewise-linear CLD based on the S-N curves for nine load ratios compared favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study, Boerstra's Multislope model provides a good alternative at reduced test effort.
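
    A piecewise-linear CLD amounts to linear interpolation between (mean stress, stress amplitude) points taken from the S-N curves at the tested load ratios, evaluated at a fixed life. The sketch below uses hypothetical numbers; the paper's nine measured R ratios and material data are not reproduced.

```python
def allowable_amplitude(cld_points, mean_stress):
    """Piecewise-linear interpolation of allowable stress amplitude.

    cld_points: (mean_stress, amplitude) pairs at a fixed life N,
    one point per tested load ratio R.
    """
    pts = sorted(cld_points)
    for (m0, a0), (m1, a1) in zip(pts, pts[1:]):
        if m0 <= mean_stress <= m1:
            t = (mean_stress - m0) / (m1 - m0)
            return a0 + t * (a1 - a0)
    raise ValueError("mean stress outside the diagram")

# Hypothetical CLD points at one life level (units: MPa)
cld = [(-100.0, 120.0), (0.0, 180.0), (100.0, 140.0), (200.0, 60.0)]
amp = allowable_amplitude(cld, 50.0)   # halfway between the R points at 0 and 100
```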

  14. Ab-initio conformational epitope structure prediction using genetic algorithm and SVM for vaccine design.

    PubMed

    Moghram, Basem Ameen; Nabil, Emad; Badr, Amr

    2018-01-01

    T-cell epitope structure identification is a significant and challenging immunoinformatics problem within epitope-based vaccine design. Epitopes, or antigenic peptides, are short stretches of amino acids that bind to Major Histocompatibility Complex (MHC) molecules; the resulting complexes are presented by antigen-presenting cells for inspection by T-cells. MHC-molecule-binding epitopes are responsible for triggering the immune response to antigens. The epitope's three-dimensional (3D) molecular structure (i.e., tertiary structure) reflects its proper function. Therefore, identification of the structure of MHC class-II epitopes is a significant step towards epitope-based vaccine design and understanding of the immune system. In this paper, we propose a new technique using a Genetic Algorithm for Predicting the Epitope Structure (GAPES) to predict the structure of MHC class-II epitopes based on their sequence. The proposed elitist-based genetic algorithm for predicting the epitope's tertiary structure is based on the ab-initio Empirical Conformational Energy Program for Peptides (ECEPP) force-field model. The developed secondary structure prediction technique relies on the Ramachandran plot. We used two alignment algorithms, ROSS alignment and TM-Score alignment, and applied four different alignment approaches to calculate the similarity scores of the dataset under test. We utilized a support vector machine (SVM) classifier to evaluate prediction performance. Prediction accuracy and the area under the receiver operating characteristic (ROC) curve (AUC) were calculated as measures of performance. The calculations were performed on twelve similarity-reduced datasets of the Immune Epitope Database (IEDB) and a large dataset of peptide-binding affinities to HLA-DRB1*0101. The results showed that GAPES is reliable and very accurate: we achieved an average prediction accuracy of 93.50% and an average AUC of 0.974 on the IEDB datasets, and an accuracy of 95.125% and an AUC of 0.987 on the HLA-DRB1*0101 allele of the Wang benchmark dataset. These results indicate that the proposed prediction technique "GAPES" is a promising technique that will help researchers predict epitope structure and assist in the intelligent design of new epitope-based vaccines. Copyright © 2017 Elsevier B.V. All rights reserved.
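
    The elitist genetic-algorithm loop can be sketched generically as below, minimizing a toy stand-in "energy" over torsion angles. The real ECEPP force field and the paper's encoding are far more involved; all constants here are illustrative only.

```python
import math
import random

def elitist_ga(energy, n_genes, pop_size=30, gens=100, elite=2, seed=1):
    """Minimize `energy` over n_genes torsion angles with an elitist GA."""
    rng = random.Random(seed)
    # Genes: torsion angles in [-pi, pi] (toy stand-in for phi/psi angles)
    pop = [[rng.uniform(-math.pi, math.pi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=energy)
        nxt = [ind[:] for ind in pop[:elite]]            # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_genes)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            for j in range(n_genes):                     # gaussian mutation
                if rng.random() < 0.2:
                    child[j] += rng.gauss(0.0, 0.3)
            nxt.append(child)
        pop = nxt
    pop.sort(key=energy)
    return pop[0], energy(pop[0])

# Toy energy surface with its minimum at all angles = 0.5 rad
best, best_e = elitist_ga(lambda g: sum((a - 0.5) ** 2 for a in g), n_genes=4)
```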

  15. Biological networks for predicting chemical hepatocarcinogenicity using gene expression data from treated mice and relevance across human and rat species.

    PubMed

    Thomas, Reuben; Thomas, Russell S; Auerbach, Scott S; Portier, Christopher J

    2013-01-01

    Several groups have employed genomic data from subchronic (90-day) chemical toxicity studies in rodents to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. The aims of this study were to develop a molecular pathway-based prediction model of long-term hepatocarcinogenicity using 90-day gene expression data, and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Under 5-fold cross-validation, the developed prediction model had reasonable predictive performance, with an area under the receiver operating characteristic curve (AUC) of 0.66. The model was then used to extrapolate the results to data associated with rat and human liver cancer, and it worked well for both species (AUC of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Pathway-based prediction models estimated from subchronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species.
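
    The AUC figures reported above are equivalent to the probability that a randomly chosen positive (tumorigenic) chemical scores higher than a randomly chosen negative one. A minimal pairwise (Mann-Whitney) computation, with illustrative scores only:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the pairwise formulation:
    fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for tumorigenic vs non-tumorigenic chemicals
score = auc([0.9, 0.7, 0.6], [0.8, 0.3, 0.2])   # 7 of 9 pairs ranked correctly
```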

  16. Comparisons of Methods for Predicting Community Annoyance Due to Sonic Booms

    NASA Technical Reports Server (NTRS)

    Hubbard, Harvey H.; Shepherd, Kevin P.

    1996-01-01

    Two approaches to the prediction of community response to sonic boom exposure are examined and compared. The first approach is based on the wealth of data concerning community response to common transportation noises coupled with results of a sonic boom/aircraft noise comparison study. The second approach is based on limited field studies of community response to sonic booms. Substantial differences between indoor and outdoor listening conditions are observed. Reasonable agreement is observed between predicted community responses and available measured responses.

  17. Robust model predictive control for satellite formation keeping with eccentricity/inclination vector separation

    NASA Astrophysics Data System (ADS)

    Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong

    2018-05-01

    This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Collision avoidance can be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.

  18. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models on signatures (e.g. Flow Duration Curves, FDCs) is often advocated as an alternative to calibration on the full time series of system responses (e.g. hydrographs). Signature-based calibration is motivated by various arguments. From a conceptual perspective, calibrating on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may, for example, occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right) or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (timing or amount) and, if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature-based calibration is seen as a possible solution for making predictions in ungauged basins: where streamflow data are not available, it may still be possible to reliably estimate streamflow signatures. Previous research has, for example, shown how FDCs can be reliably estimated at ungauged locations from climatic and physiographic influence factors. Typically, the goal of signature-based calibration is not the prediction of the signatures themselves but the prediction of the system responses, and ideally the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches to signature-based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad hoc" likelihood for the signatures, as done in previous approaches). This likelihood is not easily tractable analytically, so "simple" MCMC methods cannot be applied; the numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results indicate which signatures best represent the information content of the hydrograph.
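
    The ABC step can be sketched as rejection sampling: draw parameters from the prior, simulate the signature, and keep draws whose simulated signature falls within a tolerance of the observed one. A toy one-parameter example, not the paper's hydrological model:

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample,
                  n_draws=20000, tol=0.1, seed=0):
    """ABC by rejection: return prior draws whose simulated signature
    lands within `tol` of the observed signature."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < tol:
            kept.append(theta)
    return kept

# Toy model: the "signature" is the mean of 50 noisy flows with mean theta
def simulate(theta, rng):
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(50))

posterior = abc_rejection(3.0, simulate, lambda rng: rng.uniform(0.0, 10.0))
```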

  19. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM.

    PubMed

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2015-01-01

    Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is crucial to prediction of protein structural class, and it mainly uses the protein primary sequence, the predicted secondary structure sequence, and the position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM plays a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP, fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on the PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted. A 700-dimensional (700D) feature vector is constructed, and the dimension is reduced to 224D by principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.
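
    The PCA reduction (700D to 224D) keeps the leading principal components of the feature matrix. The first component can be sketched from scratch with power iteration on the centered data, as below; the paper presumably uses a standard PCA routine, and the three-dimensional toy data here are purely illustrative.

```python
import random

def top_component(data, iters=200, seed=0):
    """First principal component via power iteration on the covariance
    (a sketch of the PCA step; full PCA repeats this with deflation)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]  # centered
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    for _ in range(iters):
        # w = C v with C = X^T X / n, computed as X^T (X v) without forming C
        xv = [sum(xi[j] * v[j] for j in range(d)) for xi in x]
        w = [sum(x[i][j] * xv[i] for i in range(n)) / n for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Toy data varying only along the direction (1, 0.1, 0)
data = [(t, 0.1 * t, 0.0) for t in (-2.0, -1.0, 0.0, 1.0, 2.0)]
pc1 = top_component(data)
```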

  20. Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP

    NASA Astrophysics Data System (ADS)

    Wen, Lei; Yu, Jiake; Zhao, Xin

    2017-10-01

    In this paper, a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. The PCA algorithm is used to reduce the correlation between indicators of the original data and to decrease the difficulty of the BP neural network's high-dimensional calculations. The BP neural network is established to estimate the cost of power transmission projects. The results show that the PCA-GA-BP algorithm improves the accuracy of electric power engineering cost prediction.

  1. Applicability of a gene expression based prediction method to SD and Wistar rats: an example of CARCINOscreen®.

    PubMed

    Matsumoto, Hiroshi; Saito, Fumiyo; Takeyoshi, Masahiro

    2015-12-01

    Recently, the development of several gene expression-based prediction methods has been attempted in the field of toxicology. CARCINOscreen® is a gene expression-based screening method to predict, with high accuracy, the carcinogenicity of chemicals that target the liver. In this study, we investigated the applicability of this screening method, originally developed with F344 rats, to SD and Wistar rats, using two carcinogens, 2,4-diaminotoluene and thioacetamide, and two non-carcinogens, 2,6-diaminotoluene and sodium benzoate. After a 28-day repeated-dose test was conducted with each chemical in SD and Wistar rats, microarray analysis was performed using total RNA extracted from each liver, and the resulting gene expression data were applied to CARCINOscreen®. Predictive scores obtained for the known carcinogens were > 2 in all strains of rats, while the non-carcinogens gave prediction scores below 0.5. These results suggest that the gene expression-based screening method CARCINOscreen® can be applied to SD and Wistar rats, strains widely used in toxicological studies, by setting an appropriate prediction-score threshold to classify chemicals as carcinogens or non-carcinogens.

  2. Resting Energy Expenditure Prediction in Recreational Athletes of 18–35 Years: Confirmation of Cunningham Equation and an Improved Weight-Based Alternative

    PubMed Central

    ten Haaf, Twan; Weijs, Peter J. M.

    2014-01-01

    Introduction Resting energy expenditure (REE) is expected to be higher in athletes because of their relatively high fat-free mass (FFM). Therefore, a REE predictive equation specific to recreational athletes may be required. The aim of this study was to validate existing REE predictive equations and to develop a new recreational-athlete-specific equation. Methods 90 (53M, 37F) adult athletes, exercising on average 9.1±5.0 hours a week and 5.0±1.8 times a week, were included. REE was measured using indirect calorimetry (Vmax Encore n29); FFM and FM were measured using air displacement plethysmography. Multiple linear regression analysis was used to develop new FFM-based and weight-based REE predictive equations. The percentage of accurate predictions (within 10% of measured REE), percentage bias, root mean square error, and limits of agreement were calculated. Results The Cunningham equation, the new weight-based equation, and the new FFM-based equation performed equally well. De Lorenzo's equation predicted REE less accurately, but better than the other generally used REE predictive equations. Harris-Benedict, WHO, Schofield, Mifflin and Owen all showed less than 50% accuracy. Conclusion For a population of (Dutch) recreational athletes, REE can be accurately predicted with the existing Cunningham equation. Since body composition measurement is not always possible, and the other generally used equations fail, the new weight-based equation is advised for use in sports nutrition. PMID:25275434
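
    The Cunningham equation is commonly cited as REE = 500 + 22 × FFM (kcal/day, FFM in kg), and a weight-based equation like the paper's is a linear fit of measured REE on body weight. A sketch of both follows; the fitted coefficients below come from made-up data, not from the study.

```python
def cunningham(ffm_kg):
    """Commonly cited Cunningham form: REE in kcal/day from fat-free mass."""
    return 500 + 22 * ffm_kg

def fit_weight_equation(weights, rees):
    """Ordinary least squares for a weight-based equation REE = a + b*weight.
    (The study's actual coefficients are not given here; this only
    illustrates how such an equation is derived from measured data.)"""
    n = len(weights)
    mw = sum(weights) / n
    mr = sum(rees) / n
    b = sum((w - mw) * (r - mr) for w, r in zip(weights, rees)) / \
        sum((w - mw) ** 2 for w in weights)
    a = mr - b * mw
    return a, b

# Made-up (weight kg, measured REE kcal/day) data lying exactly on a line
a, b = fit_weight_equation([60.0, 70.0, 80.0], [1500.0, 1700.0, 1900.0])
```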

  3. Research of Coal Resources Reserves Prediction Based on GM (1, 1) Model

    NASA Astrophysics Data System (ADS)

    Xiao, Jiancheng

    2018-01-01

    To forecast China's coal reserves, this paper uses GM (1, 1) grey forecasting theory to establish a grey forecasting model based on data on China's coal reserves from 2002 to 2009, obtaining the trend of coal resource reserves under the current economic and social development situation; a residual test model is also established, making the prediction model more accurate. The results show that China's coal reserves can sustain production for at least 300 years. These results are similar to mainstream forecasts and are in line with objective reality.
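
    A GM (1, 1) grey forecast accumulates the series, fits the grey differential equation dx¹/dt + a·x¹ = b by least squares on mean-generated background values, and differences the fitted exponential back to the original scale. A compact sketch with an illustrative, roughly geometric series, not the paper's reserve figures:

```python
import math

def gm11_forecast(x0, steps=1):
    """GM(1,1): fitted values for the input series plus `steps` predictions,
    all returned on the original (non-accumulated) scale."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]               # accumulation (AGO)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]                                             # fit x0(k) = -a*z(k) + b
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det                          # development coefficient
    b = (szz * sy - sz * szy) / det                        # grey input
    f = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a   # fitted x1(k)
    x1_hat = [f(k) for k in range(n + steps)]
    # inverse accumulation: difference back to the original series
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + steps)]

# Illustrative series growing about 10% per step; forecast one step ahead
pred = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=1)
```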

  4. A Geographic-Information-Systems-Based Approach to Analysis of Characteristics Predicting Student Persistence and Graduation

    ERIC Educational Resources Information Center

    Ousley, Chris

    2010-01-01

    This study sought to provide empirical evidence regarding the use of spatial analysis in enrollment management to predict persistence and graduation. The research utilized data from the 2000 U.S. Census and applicant records from The University of Arizona to study the spatial distributions of enrollments. Based on the initial results, stepwise…

  5. Predictive control and estimation algorithms for the NASA/JPL 70-meter antennas

    NASA Technical Reports Server (NTRS)

    Gawronski, W.

    1991-01-01

    A modified output prediction procedure and a new controller design are presented based on the predictive control law. A new predictive estimator is also developed to complement the controller and enhance system performance. The predictive controller is designed and applied to tracking control of the Deep Space Network 70 m antennas. Simulation results show significant improvement in tracking performance over the linear quadratic controller and estimator presently in use.

  6. Predictive Feedback and Conscious Visual Experience

    PubMed Central

    Panichello, Matthew F.; Cheung, Olivia S.; Bar, Moshe

    2012-01-01

    The human brain continuously generates predictions about the environment based on learned regularities in the world. These predictions actively and efficiently facilitate the interpretation of incoming sensory information. We review evidence that, as a result of this facilitation, predictions directly influence conscious experience. Specifically, we propose that predictions enable rapid generation of conscious percepts and bias the contents of awareness in situations of uncertainty. The possible neural mechanisms underlying this facilitation are discussed. PMID:23346068

  7. Predicting the mortality from asbestos-related diseases based on the amount of asbestos used and the effects of slate buildings in Korea.

    PubMed

    Kim, Su-Young; Kim, Young-Chan; Kim, Yongku; Hong, Won-Hwa

    2016-01-15

    Asbestos has been used since ancient times owing to its heat-resistant, rot-proof, and insulating qualities, and its usage increased rapidly after the industrial revolution. In Korea, all slates were previously manufactured from a mixture of about 90% cement and 10% chrysotile (white asbestos). This study used a generalized Poisson regression (GPR) model, after creating databases of mortality from asbestos-related diseases and of the amount of asbestos used in Korea, to predict the future mortality from asbestos-related diseases and mesothelioma in Korea. Moreover, to predict the future mortality attributable to slate buildings, a comparative analysis based on the GPR model results was conducted after creating databases of the amount of asbestos used in Korea and the amount used in making slates. We predicted the mortality from asbestos-related diseases by year, from 2014 to 2036, according to the amount of asbestos used. As a result, it was predicted that a total of 1942 people (maximum 3476) will die by 2036. Moreover, based on the comparative analysis according to the influence index, it was predicted that a maximum of 555 people will die from asbestos-related diseases by 2031 as a result of asbestos-containing slate buildings, with mortality peaking in 2021 at 53 cases. Although mesothelioma and pulmonary asbestosis were considered as the asbestos-related diseases, they are not the only diseases caused by asbestos. Nevertheless, the results of this study are highly important and relevant, as the future mortality from asbestos-related diseases was predicted for the first time in Korea. These findings are expected to contribute greatly to the Korean government's policies on compensation for asbestos victims. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Space shuttle booster multi-engine base flow analysis

    NASA Technical Reports Server (NTRS)

    Tang, H. H.; Gardiner, C. R.; Anderson, W. A.; Navickas, J.

    1972-01-01

    A comprehensive review of currently available techniques pertinent to several prominent aspects of the base thermal problem of the space shuttle booster is given along with a brief review of experimental results. A tractable engineering analysis, capable of predicting the power-on base pressure, base heating, and other base thermal environmental conditions, such as base gas temperature, is presented and used for an analysis of various space shuttle booster configurations. The analysis consists of a rational combination of theoretical treatments of the prominent flow interaction phenomena in the base region. These theories consider jet mixing, plume flow, axisymmetric flow effects, base injection, recirculating flow dynamics, and various modes of heat transfer. Such effects as initial boundary layer expansion at the nozzle lip, reattachment, recompression, choked vent flow, and nonisoenergetic mixing processes are included in the analysis. A unified method was developed and programmed to numerically obtain compatible solutions for the various flow field components in both flight and ground test conditions. Preliminary predictions for a 12-engine space shuttle booster base thermal environment were obtained for a typical trajectory history. Theoretical predictions were also obtained for some clustered-engine experimental conditions. Results indicate good agreement between the data and theoretical predictions.

  9. Hierarchical Interactions Model for Predicting Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD) Conversion

    PubMed Central

    Li, Han; Liu, Yashu; Gong, Pinghua; Zhang, Changshui; Ye, Jieping

    2014-01-01

    Identifying patients with Mild Cognitive Impairment (MCI) who are likely to convert to dementia has recently attracted increasing attention in Alzheimer's disease (AD) research. An accurate prediction of conversion from MCI to AD can help clinicians initiate treatments at an early stage and monitor their effectiveness. However, existing prediction systems based on the original biosignatures are not satisfactory. In this paper, we propose to fit the prediction models using pairwise biosignature interactions, thus capturing higher-order relationships among biosignatures. Specifically, we employ hierarchical constraints and sparsity regularization to prune the high-dimensional input features. Based on the significant biosignatures and underlying interactions identified, we build classifiers on the selected features to predict the conversion probability. We further analyze the underlying interaction effects of different biosignatures based on so-called stable expectation scores. We used 293 MCI subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database with MRI measurements at baseline to evaluate the effectiveness of the proposed method. Our proposed method achieves better classification performance than state-of-the-art methods. Moreover, we discover several significant interactions predictive of MCI-to-AD conversion. These results shed light on improving prediction performance using interaction features. PMID:24416143
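
    The pairwise interaction features that feed the hierarchical model can be built with a simple product expansion over all biosignature pairs. A sketch with hypothetical biosignature names (the study's actual MRI-derived features are not reproduced):

```python
from itertools import combinations

def interaction_features(x, names):
    """Augment a feature vector with all pairwise products, i.e. the
    second-order interaction terms used before hierarchical selection."""
    feats = dict(zip(names, x))
    for (na, va), (nb, vb) in combinations(zip(names, x), 2):
        feats[f"{na}*{nb}"] = va * vb
    return feats

# Hypothetical biosignature values for one subject
f = interaction_features([2.0, 3.0, 5.0], ["hippocampus", "ventricles", "cortex"])
```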

  10. Design and experiment of vehicular charger AC/DC system based on predictive control algorithm

    NASA Astrophysics Data System (ADS)

    He, Guangbi; Quan, Shuhai; Lu, Yuzhang

    2018-06-01

    For the uncontrolled rectifier stage of a vehicle charging system, this paper proposes a predictive control algorithm for the DC/DC converter. A prediction model is established by the state-space averaging method, the optimal control action is obtained mathematically from this model, and the prediction algorithm is analyzed through Simulink simulation. The structure of the vehicle charger is designed to meet the requirements of rated output power and adjustable output voltage: the first stage is a three-phase uncontrolled rectifier whose DC voltage Ud passes through a filter capacitor, after which a two-phase interleaved buck-boost circuit provides the required wide-range output voltage; its working principle is analyzed and the component parameters are designed and selected accordingly. An analysis of the current ripple shows that the two-phase interleaved parallel connection reduces both the output current ripple and the losses. A software simulation of the complete charging circuit meets the design requirements of the system. Finally, software and hardware were combined to implement charging as required, and an experimental platform demonstrated the feasibility and effectiveness of the proposed predictive control algorithm for vehicle charging, consistent with the simulation results.

  11. Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths

    NASA Astrophysics Data System (ADS)

    Drotleff, K.; Liewald, M.

    2017-09-01

    Rising customer expectations regarding design complexity and weight reduction of sheet metal components, alongside further reduced time to market, imply an increased demand for process validation using numerical forming simulation. Formability prediction, though, is often still based on the forming limit diagram first presented in the 1960s. Despite many drawbacks in the case of nonlinear strain paths, and despite major advances in research in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur that cannot be assessed using the conventional FLC concept. In this paper, a novel approach for calculating FLCs for nonlinear strain paths is presented. By combining an approach for predicting the FLC from tensile test data with the IFU-FLC criterion, a model for predicting localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep-drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC criterion based on data from pre-stretched Nakajima specimens.

  12. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters that are difficult to determine, and results are very sensitive to their values. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and the fact that it takes into account the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.

  13. Predicting Treatment Response in Social Anxiety Disorder From Functional Magnetic Resonance Imaging

    PubMed Central

    Doehrmann, Oliver; Ghosh, Satrajit S.; Polli, Frida E.; Reynolds, Gretchen O.; Horn, Franziska; Keshavan, Anisha; Triantafyllou, Christina; Saygin, Zeynep M.; Whitfield-Gabrieli, Susan; Hofmann, Stefan G.; Pollack, Mark; Gabrieli, John D.

    2013-01-01

    Context Current behavioral measures poorly predict treatment outcome in social anxiety disorder (SAD). To our knowledge, this is the first study to examine neuroimaging-based treatment prediction in SAD. Objective To measure brain activation in patients with SAD as a biomarker to predict subsequent response to cognitive behavioral therapy (CBT). Design Functional magnetic resonance imaging (fMRI) data were collected prior to CBT intervention. Changes in clinical status were regressed on brain responses and tested for selectivity for social stimuli. Setting Patients were treated with protocol-based CBT at anxiety disorder programs at Boston University or Massachusetts General Hospital and underwent neuroimaging data collection at Massachusetts Institute of Technology. Patients Thirty-nine medication-free patients meeting DSM-IV criteria for the generalized subtype of SAD. Interventions Brain responses to angry vs neutral faces or emotional vs neutral scenes were examined with fMRI prior to initiation of CBT. Main Outcome Measures Whole-brain regression analyses with differential fMRI responses for angry vs neutral faces and changes in Liebowitz Social Anxiety Scale score as the treatment outcome measure. Results Pretreatment responses significantly predicted subsequent treatment outcome, selectively for social stimuli and particularly in regions of higher-order visual cortex. Combining the brain measures with information on clinical severity accounted for more than 40% of the variance in treatment response and substantially exceeded predictions based on clinical measures at baseline. Prediction success was unaffected by testing for potential confounding factors such as depression severity at baseline.
Conclusions The results suggest that brain imaging can provide biomarkers that substantially improve predictions for the success of cognitive behavioral interventions and more generally suggest that such biomarkers may offer evidence-based, personalized medicine approaches for optimally selecting among treatment options for a patient. PMID:22945462

  14. Artificial intelligence models for predicting iron deficiency anemia and iron serum level based on accessible laboratory data.

    PubMed

    Azarkhish, Iman; Raoufy, Mohammad Reza; Gharibzadeh, Shahriar

    2012-06-01

    Iron deficiency anemia (IDA) is the most common nutritional deficiency worldwide. Measuring serum iron is time-consuming, expensive, and not available in most hospitals. In this study, based on four accessible laboratory parameters (MCV, MCH, MCHC, Hb/RBC), we developed an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) to diagnose IDA and to predict serum iron level. Our results indicate that the neural network analysis is superior to the ANFIS and logistic regression models in diagnosing IDA. Moreover, the results show that the ANN is likely to provide an accurate test for predicting serum iron levels with acceptable precision.
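
    A single logistic unit trained by gradient descent is the simplest building block of the ANN described above; the sketch below uses hypothetical, roughly scaled CBC values, not data from the study:

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=2000, seed=0):
    """Train one logistic unit (a minimal stand-in for a full ANN)
    on feature vectors X with binary labels y, using stochastic
    gradient descent on the log-loss."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in X[0]]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical, roughly scaled features [MCV/100, MCH/40, MCHC/45, Hb/RBC/4];
# label 1 = IDA (low red-cell indices), 0 = normal.
X = [[0.70, 0.55, 0.75, 0.55], [0.72, 0.60, 0.78, 0.60],
     [0.68, 0.50, 0.72, 0.50], [0.90, 0.75, 0.85, 0.75],
     [0.92, 0.80, 0.88, 0.80], [0.95, 0.82, 0.90, 0.78]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
```

    The study's ANN adds hidden layers (and a regression output for serum iron level), but the training loop follows the same gradient-descent idea.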

  15. Prediction of penicillin resistance in Staphylococcus aureus isolates from dairy cows with mastitis, based on prior test results.

    PubMed

    Grinberg, A; Lopez-Villalobos, N; Lawrence, K; Nulsen, M

    2005-10-01

    To gauge how well prior laboratory test results predict in vitro penicillin resistance of Staphylococcus aureus isolates from dairy cows with mastitis. Population-based data on the farm of origin (n=79), genotype based on pulsed-field gel electrophoresis (PFGE) results, and the penicillin-resistance status of Staph. aureus isolates (n=115) from milk samples collected from dairy cows with mastitis submitted to two diagnostic laboratories over a 6-month period were used. Data were mined stochastically using the all-possible-pairs method, binomial modelling and bootstrap simulation, to test whether prior test results enhance the accuracy of prediction of penicillin resistance on farms. Of all Staph. aureus isolates tested, 38% were penicillin resistant. A significant aggregation of penicillin-resistance status was evident within farms. The probability of random pairs of isolates from the same farm having the same penicillin-resistance status was 76%, compared with 53% for random pairings of samples across all farms. Thus, the resistance status of randomly selected isolates was 1.43 times more likely to correctly predict the status of other isolates from the same farm than the random population pairwise concordance probability (p=0.011). This effect was likely due to the clonal relationship of isolates within farms, as the predictive fraction attributable to prior test results was close to nil when the effect of within-farm clonal infections was withdrawn from the model. Knowledge of the penicillin-resistance status of a prior Staph. aureus isolate significantly enhanced the predictive capability of other isolates from the same farm. In the time and space frame of this study, clinicians using previous information from a farm would have more accurately predicted the penicillin-resistance status of an isolate than they would by chance alone on farms infected with clonal Staph. aureus isolates, but not on farms infected with highly genetically heterogeneous bacterial strains.
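
    The all-possible-pairs idea at the core of this analysis can be sketched directly: compare the probability that two isolates from the same farm share a penicillin-resistance status with the same probability for pairs drawn across all farms. The farm records below are hypothetical, not the study's isolates:

```python
from itertools import combinations

def concordance(statuses):
    """Probability that a randomly drawn pair of isolates has the
    same resistance status (the all-possible-pairs method)."""
    pairs = list(combinations(statuses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def within_farm_concordance(farms):
    """Pool same-status pairs over all farms with at least two isolates."""
    same = total = 0
    for statuses in farms.values():
        for a, b in combinations(statuses, 2):
            same += a == b
            total += 1
    return same / total

# Hypothetical isolates: 1 = penicillin resistant, 0 = susceptible.
farms = {"A": [1, 1, 1], "B": [0, 0], "C": [0, 0, 1], "D": [1, 1]}
pooled = [s for v in farms.values() for s in v]
p_within = within_farm_concordance(farms)  # pairs from the same farm
p_random = concordance(pooled)             # pairs drawn across all farms
```

    In the study these two probabilities were 76% and 53%; their ratio is the predictive gain attributable to prior test results from the same farm.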

  16. Protein docking prediction using predicted protein-protein interface.

    PubMed

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within the top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies from case to case, the challenge is to develop a method that does not deteriorate, but improves, docking results when using a binding site prediction that may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy as compared with docking that does not use binding site prediction or that uses the binding site prediction only as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual binding interface site predictions as well as unbound docking cases.

  17. Robust predictive cruise control for commercial vehicles

    NASA Astrophysics Data System (ADS)

    Junell, Jaime; Tumer, Kagan

    2013-10-01

    In this paper we explore learning-based predictive cruise control and the impact of this technology on increasing fuel efficiency for commercial trucks. Traditional cruise control is wasteful when maintaining a constant velocity over rolling hills. Predictive cruise control (PCC) is able to look ahead at future road conditions and solve for a cost-effective course of action. Model-based controllers have been implemented in this field but cannot accommodate many complexities of a dynamic environment, which includes changing road and vehicle conditions. In this work, we focus on incorporating a learner into an already successful model-based predictive cruise controller in order to improve its performance. We explore backpropagation neural networks that predict future errors, and then take action to prevent those errors from occurring. The results show that this approach improves the model-based PCC by up to 60% under certain conditions. In addition, we explore the benefits of classifier ensembles to further improve the gains due to intelligent cruise control.

  18. CD-Based Indices for Link Prediction in Complex Network.

    PubMed

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Many similarity-based algorithms have been designed to deal with the problem of link prediction in the past decade. In order to improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. Firstly, a node coordinate matrix is obtained from node distances, which is different from the distance matrix, and the row vectors of this matrix are regarded as the coordinates of nodes. Then, the cosine value between node coordinates is used as their similarity index. A local community density index LD is also proposed. Then, a series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, are presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the assortative coefficient of the network is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks.
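
    As a rough illustration of a cosine index for link prediction, the sketch below scores each missing link by the cosine between its endpoints' row vectors taken from the shortest-path distance matrix. Note the paper derives a separate node coordinate matrix from node distances; the plain distance-matrix rows used here are only a stand-in:

```python
import collections
import math
from itertools import combinations

def shortest_paths(adj, n):
    """All-pairs shortest-path lengths by BFS on an unweighted graph."""
    D = [[math.inf] * n for _ in range(n)]
    for s in range(n):
        D[s][s] = 0
        q = collections.deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if D[s][v] == math.inf:
                    D[s][v] = D[s][u] + 1
                    q.append(v)
    return D

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
adj = [[] for _ in range(n)]
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

D = shortest_paths(adj, n)
present = {frozenset(e) for e in edges}
# Score every absent link by the cosine between its endpoints' vectors.
scores = {(i, j): cosine(D[i], D[j])
          for i, j in combinations(range(n), 2)
          if frozenset((i, j)) not in present}
best = max(scores, key=scores.get)
```

    On this toy graph the top-scoring candidates are the absent links adjacent to the bridge, matching the intuition that nodes with similar distance profiles occupy similar positions in the network.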

  19. CD-Based Indices for Link Prediction in Complex Network

    PubMed Central

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Many similarity-based algorithms have been designed to deal with the problem of link prediction in the past decade. In order to improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. Firstly, a node coordinate matrix is obtained from node distances, which is different from the distance matrix, and the row vectors of this matrix are regarded as the coordinates of nodes. Then, the cosine value between node coordinates is used as their similarity index. A local community density index LD is also proposed. Then, a series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, are presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the assortative coefficient of the network is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks. PMID:26752405

  20. Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.

    2010-01-01

    Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
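
    The core of a data-based design, identifying a predictive model directly from input-output data rather than from a known plant model, can be sketched with a first-order ARX fit; the plant constants below are arbitrary illustration values, not a STOL aircraft model:

```python
def identify_arx(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k] directly from
    input-output data: the identification step at the heart of a
    data-based predictive design (here one input, one output)."""
    yk, uin, y1 = y[:-1], u[:-1], y[1:]
    Syy = sum(v * v for v in yk)
    Suu = sum(v * v for v in uin)
    Syu = sum(p * q for p, q in zip(yk, uin))
    Sy1y = sum(p * q for p, q in zip(y1, yk))
    Sy1u = sum(p * q for p, q in zip(y1, uin))
    det = Syy * Suu - Syu * Syu  # normal equations for 2 parameters
    a = (Sy1y * Suu - Sy1u * Syu) / det
    b = (Sy1u * Syy - Sy1y * Syu) / det
    return a, b

# Simulate a "true" first-order plant to generate noise-free data.
a_true, b_true = 0.8, 0.5
u = [1.0, 0.0, -1.0, 2.0, 0.5, -0.5, 1.5, 0.0]
y = [0.0]
for uk in u[:-1]:
    y.append(a_true * y[-1] + b_true * uk)

a_hat, b_hat = identify_arx(u, y)  # recovers the plant from data alone
```

    A real data-based predictive controller fits a longer, multi-output predictive model and derives receding-horizon gains from it, but the least-squares identification step is the same idea.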

  1. Assessment of fatigue life of remanufactured impeller based on FEA

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Cao, Huajun; Liu, Hailong; Zhang, Yubo

    2016-09-01

    Predicting the fatigue life of remanufactured centrifugal compressor impellers is a critical problem. In this paper, the S-N curve data were obtained by combining experimentation and theoretical deduction. The load spectrum was compiled by the rain-flow counting method based on comprehensive consideration of the centrifugal force, residual stress, and aerodynamic loads in the repair region. A fatigue life simulation model was built, and fatigue life was analyzed based on the fatigue cumulative damage rule. Although incapable of providing a high-precision prediction, the simulation results were useful for the analysis of fatigue life impact factors and fatigue fracture areas. Results showed that the load amplitude greatly affected fatigue life, that the impeller should be protected from running at over-speed, and that the predicted fatigue life safely covers the next service cycle at the rated speed.
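
    The fatigue cumulative damage rule referred to here is the Palmgren-Miner rule, which can be sketched as follows; the S-N constants and rain-flow spectrum are hypothetical, not values fitted for the impeller:

```python
def cycles_to_failure(S, C=1e12, m=3.0):
    """Basquin-type S-N curve N = C * S**(-m). C and m are hypothetical
    material constants, not the impeller's values."""
    return C * S ** (-m)

def miner_damage(spectrum):
    """Palmgren-Miner cumulative damage for one service cycle.
    spectrum: (stress amplitude in MPa, cycle count) pairs, e.g. the
    output of rain-flow counting on the load history."""
    return sum(n / cycles_to_failure(S) for S, n in spectrum)

# Hypothetical rain-flow spectrum for one service cycle.
spectrum = [(200.0, 1.0e4), (150.0, 5.0e4), (100.0, 2.0e5)]
D = miner_damage(spectrum)  # damage accumulated per service cycle
life = 1.0 / D              # predicted service cycles (failure at D = 1)
```

    The strong dependence on load amplitude noted in the abstract is visible here: with m = 3, raising the stress amplitude by a third roughly halves the cycles to failure.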

  2. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy

    PubMed Central

    Zhang, Lina; Zhang, Chengjin; Gao, Rui; Yang, Runtao; Song, Qing

    2016-01-01

    Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential therapeutic uses for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48 with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthew’s Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc. PMID:27662651
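
    The averaging step of such an ensemble, and the MCC metric reported above, can be sketched as follows; the classifier probabilities and confusion-matrix counts are hypothetical illustrations:

```python
import math

def ensemble_predict(probabilities):
    """Average the base classifiers' positive-class probabilities and
    threshold at 0.5, as in an averaging ensemble."""
    avg = sum(probabilities) / len(probabilities)
    return (1 if avg >= 0.5 else 0), avg

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical outputs of four base classifiers (e.g. RF, SMO, NNA, J48)
# for one candidate protein.
label, avg = ensemble_predict([0.9, 0.6, 0.4, 0.7])
quality = mcc(tp=90, fp=5, tn=85, fn=10)  # hypothetical test-set counts
```

    Averaging probabilities rather than hard votes lets a confident classifier outweigh two uncertain ones, which is one motivation for this style of ensemble.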

  3. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  4. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized as either empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward the development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.

  5. [Dental manpower prediction in Israel for 2017].

    PubMed

    Vered, Y; Zini, A; Mann, J

    2010-07-01

    A recent study published by the authors indicated that, according to the Israeli Central Bureau of Statistics, in 2008 Israel had 5800 active dentists, a figure well below that published by the Ministry of Health. Based on this figure, and using the manpower-to-population ratio method, the following results were obtained: the predicted number of dentists in 2017 would be 6090, based on the estimated number of Israeli graduates, the estimated number of dentists who would arrive in Israel as immigrants or as Israelis who studied abroad, an attrition rate of 3%, and the assumption that the number of dentists leaving the country is negligible. Table 2, based on the manpower-to-population ratio, indicates that by 2017 Israel would have 1 dentist per 1400 population, a ratio which is still far above what many countries present, but high for Israel. This might reflect a dramatic change from employment in public clinics back to private practice. The results clearly indicate that a shortage of dentists is predicted in the near future, and a major brainstorming effort is urgently required to evaluate these results.
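
    The manpower-to-population ratio method amounts to simple stock-and-flow arithmetic. In the sketch below, only the 5800-dentist starting stock and the 3% attrition rate come from the abstract; the annual inflow and the 2017 population are hypothetical placeholders, not the paper's inputs:

```python
def project_workforce(current, inflow_per_year, attrition_rate, years):
    """Stock-and-flow projection: each year the stock loses a fixed
    attrition fraction and gains new graduates plus immigrants."""
    n = current
    for _ in range(years):
        n = n * (1 - attrition_rate) + inflow_per_year
    return n

# 5800 active dentists in 2008 and 3% attrition are from the abstract;
# the inflow (360/yr) and population (8.5 million) are hypothetical.
n2017 = project_workforce(5800, 360, 0.03, 9)
people_per_dentist = 8.5e6 / n2017
```

    With the paper's own inflow estimates, the same arithmetic yields the reported 6090 dentists and roughly 1 dentist per 1400 population.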

  6. Estimating consumer familiarity with health terminology: a context-based approach.

    PubMed

    Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz

    2008-01-01

    Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
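
    The validation step reduces to computing a Pearson correlation between each familiarity predictor and the survey scores; the numbers below are hypothetical, not the survey data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical validation data for five terms: fraction of survey
# respondents familiar with each term, vs. two candidate predictors.
survey        = [0.90, 0.80, 0.60, 0.40, 0.20]
context_score = [0.85, 0.75, 0.55, 0.45, 0.25]  # context-based prediction
syllables     = [1, 2, 3, 3, 5]                 # more syllables ~ harder

r_context = pearson(survey, context_score)
# Negate syllable counts so both predictors point in the same direction.
r_syllable = pearson(survey, [-s for s in syllables])
```

    In the paper, the analogous comparison gave r = 0.773 for the context-based prediction, higher than any of the baseline predictors.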

  7. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is in turn superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
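
    A minimal BMA sketch: weight each ensemble member by the Gaussian likelihood of its training-period predictions, then combine members with those weights. The stage values below are hypothetical, and real BMA implementations typically estimate the error variance (e.g. by EM) rather than fixing it:

```python
import math

def bma_weights(predictions, observed, sigma):
    """Posterior model weights from Gaussian likelihoods of each model's
    training-period predictions, with an equal prior over models."""
    logliks = []
    for pred in predictions:
        sse = sum((p - o) ** 2 for p, o in zip(pred, observed))
        logliks.append(-0.5 * sse / sigma ** 2)
    mx = max(logliks)  # subtract the max for numerical stability
    ws = [math.exp(l - mx) for l in logliks]
    total = sum(ws)
    return [w / total for w in ws]

def bma_predict(predictions, weights):
    """Weighted-average deterministic prediction at each time step."""
    return [sum(w * p[k] for w, p in zip(weights, predictions))
            for k in range(len(predictions[0]))]

# Hypothetical water-stage predictions (m) from three configurations.
obs    = [2.0, 2.5, 3.0, 2.8]
models = [[2.1, 2.4, 3.1, 2.7],   # good configuration
          [2.0, 2.6, 2.9, 2.9],   # good configuration
          [3.0, 3.5, 4.0, 3.8]]   # badly biased configuration

w = bma_weights(models, obs, sigma=0.5)
stage = bma_predict(models, w)    # deterministic BMA stage prediction
```

    The biased member receives almost no weight, which is why BMA is more robust than an equal-weight ensemble mean when some configurations are poor.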

  8. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    NASA Astrophysics Data System (ADS)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  9. A novel model to predict gas-phase hydroxyl radical oxidation kinetics of polychlorinated compounds.

    PubMed

    Luo, Shuang; Wei, Zongsu; Spinney, Richard; Yang, Zhihui; Chai, Liyuan; Xiao, Ruiyang

    2017-04-01

    In this study, a novel model based on aromatic meta-substituent grouping is presented to predict the second-order rate constants (k) for OH oxidation of PCBs in the gas phase. Since the oxidation kinetics depend on the chlorination degree and position, we hypothesized that k value prediction may be more accurate if we group PCB congeners based on substitution positions (i.e., ortho (o), meta (m), and para (p)). To test this hypothesis, we examined the correlation of polarizability (α), a quantum chemical descriptor for k values, with the empirical Hammett constant (σ+) at each substitution position. Our results show that α is highly linearly correlated with ∑σ+(o,m,p) based on aromatic meta-substituents, leading to the grouping-based predictive model. With the new model, the calculated k values exhibited excellent agreement with experimental measurements, and greater predictive power than the quantum chemical based quantitative structure-activity relationship (QSAR) model. Further, the relationship of α and ∑σ+(o,m,p) for PCDD congeners, together with the highest occupied molecular orbital (HOMO) distribution, was used to validate the aromatic meta-substituent grouping method. This newly developed model combines the good predictability of a quantum chemical based QSAR model with the simplicity of a Hammett relationship, showing great potential for fast and computationally tractable prediction of k values for gas-phase OH oxidation of polychlorinated compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
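
    A Hammett-type grouping model of this kind reduces to an ordinary least-squares line per substituent group; the σ+ sums and rate constants below are hypothetical, not the paper's values:

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical training data for one meta-substituent group:
# x = sum of Hammett sigma+ constants, y = log10 of the rate constant k.
sigma_sum = [0.0, 0.4, 0.8, 1.2]
log_k     = [-11.0, -11.3, -11.6, -11.9]  # more chlorination, slower oxidation

slope, intercept = linfit(sigma_sum, log_k)
log_k_new = slope * 0.6 + intercept  # predict a congener with sum sigma+ = 0.6
```

    Fitting one such line per substitution-position group, instead of one global QSAR over all congeners, is the essence of the grouping approach described above.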

  10. Docking Based 3D-QSAR Study of Tricyclic Guanidine Analogues of Batzelladine K as anti-malarial agents

    NASA Astrophysics Data System (ADS)

    Ahmed, Nafees; Anwar, Sirajudheen; Thet Htar, Thet

    2017-06-01

    The Plasmodium falciparum Lactate Dehydrogenase enzyme (PfLDH) catalyzes inter-conversion of pyruvate to lactate during glycolysis producing the energy required for parasitic growth. The PfLDH has been studied as a potential molecular target for development of anti-malarial agents. In an attempt to find a potent inhibitor of PfLDH, we have used Discovery Studio to perform molecular docking in the active binding pocket of PfLDH by CDOCKER, followed by three-dimensional quantitative structure-activity relationship (3D-QSAR) studies of tricyclic guanidine batzelladine compounds, which were previously synthesized in our laboratory. Docking studies showed that there is a very strong correlation between in silico and in vitro results. Based on docking results, a highly predictive 3D-QSAR model was developed with q2 of 0.516. The model has predicted r2 of 0.91 showing that predicted IC50 values are in good agreement with experimental IC50 values. The results obtained from this study revealed that the developed model can be used to design new anti-malarial compounds based on tricyclic guanidine derivatives and to predict activities of new inhibitors.

  11. Docking Based 3D-QSAR Study of Tricyclic Guanidine Analogues of Batzelladine K As Anti-Malarial Agents.

    PubMed

    Ahmed, Nafees; Anwar, Sirajudheen; Thet Htar, Thet

    2017-01-01

    The Plasmodium falciparum Lactate Dehydrogenase enzyme (PfLDH) catalyzes inter-conversion of pyruvate to lactate during glycolysis producing the energy required for parasitic growth. The PfLDH has been studied as a potential molecular target for development of anti-malarial agents. In an attempt to find a potent inhibitor of PfLDH, we have used Discovery Studio to perform molecular docking in the active binding pocket of PfLDH by CDOCKER, followed by three-dimensional quantitative structure-activity relationship (3D-QSAR) studies of tricyclic guanidine batzelladine compounds, which were previously synthesized in our laboratory. Docking studies showed that there is a very strong correlation between in silico and in vitro results. Based on docking results, a highly predictive 3D-QSAR model was developed with q2 of 0.516. The model has predicted r2 of 0.91 showing that predicted IC50 values are in good agreement with experimental IC50 values. The results obtained from this study revealed that the developed model can be used to design new anti-malarial compounds based on tricyclic guanidine derivatives and to predict activities of new inhibitors.

  12. Docking Based 3D-QSAR Study of Tricyclic Guanidine Analogues of Batzelladine K As Anti-Malarial Agents

    PubMed Central

    Ahmed, Nafees; Anwar, Sirajudheen; Thet Htar, Thet

    2017-01-01

    The Plasmodium falciparum Lactate Dehydrogenase enzyme (PfLDH) catalyzes inter-conversion of pyruvate to lactate during glycolysis producing the energy required for parasitic growth. The PfLDH has been studied as a potential molecular target for development of anti-malarial agents. In an attempt to find a potent inhibitor of PfLDH, we have used Discovery Studio to perform molecular docking in the active binding pocket of PfLDH by CDOCKER, followed by three-dimensional quantitative structure-activity relationship (3D-QSAR) studies of tricyclic guanidine batzelladine compounds, which were previously synthesized in our laboratory. Docking studies showed that there is a very strong correlation between in silico and in vitro results. Based on docking results, a highly predictive 3D-QSAR model was developed with q2 of 0.516. The model has predicted r2 of 0.91 showing that predicted IC50 values are in good agreement with experimental IC50 values. The results obtained from this study revealed that the developed model can be used to design new anti-malarial compounds based on tricyclic guanidine derivatives and to predict activities of new inhibitors. PMID:28664157

  13. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    PubMed

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words.
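
    The combination step can be sketched with a linear combiner in place of the BPNN: learn weights that map a time-series forecast plus a title-popularity index to sales. All numbers below are hypothetical:

```python
def fit_two_features(x1, x2, y):
    """Least-squares weights for y ~ w1*x1 + w2*x2 (no intercept); a
    linear combiner standing in for the paper's BPNN."""
    S11 = sum(a * a for a in x1)
    S22 = sum(b * b for b in x2)
    S12 = sum(a * b for a, b in zip(x1, x2))
    S1y = sum(a * c for a, c in zip(x1, y))
    S2y = sum(b * c for b, c in zip(x2, y))
    det = S11 * S22 - S12 * S12  # normal equations for 2 parameters
    return ((S1y * S22 - S2y * S12) / det,
            (S2y * S11 - S1y * S12) / det)

# Hypothetical monthly data: a time-series forecast (the ARIMA stand-in)
# and a normalized title-popularity index from search volumes.
ts_forecast = [100.0, 120.0, 90.0, 110.0, 130.0]
popularity  = [0.2, 0.8, 0.1, 0.5, 0.9]
sales       = [0.7 * t + 30.0 * p for t, p in zip(ts_forecast, popularity)]

w_ts, w_pop = fit_two_features(ts_forecast, popularity, sales)
next_sales = w_ts * 115.0 + w_pop * 0.6  # combine both signals for a new month
```

    A BPNN replaces the linear map with a trained nonlinear one, but the pipeline is the same: the time-series forecast and the popularity signal enter as joint inputs rather than being used separately.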

  14. Novel Method of Production Decline Analysis

    NASA Astrophysics Data System (ADS)

    Xie, Shan; Lan, Yifei; He, Lei; Jiao, Yang; Wu, Yong

    2018-02-01

    ARPS decline curves are the most commonly used tool in oil and gas fields because of their minimal data requirements and ease of application. However, production decline prediction based on ARPS analysis relies on knowing the decline type, and when the decline-coefficient indices are very similar across decline types it is difficult to recognize the decline trend of the matched curves directly. To overcome these difficulties, a new dynamic decline prediction model is introduced, built from the simulation results of multi-factor response experiments using multiple linear regression on the influence factors. First, interaction experimental schemes were designed according to a study of the factors affecting production decline. Based on the simulated results, the annual decline rate is predicted by the decline model. The new method is then applied to gas field A of the Ordos Basin as an example to illustrate its reliability. The results show that the new model can predict the decline tendency directly, without needing to recognize the decline style, and that it achieves high accuracy. Finally, the new method improves the evaluation of gas well production decline in low-permeability gas reservoirs and provides technical support for further understanding of tight gas field development behavior.
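
    The regression step described above can be sketched as ordinary multiple linear regression of annual decline rate on the influence factors. The factor names in the comment are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fit_decline_model(factors, decline_rates):
    """Fit decline_rate ~ b0 + b1*f1 + ... + bk*fk by least squares.
    factors: (n_wells, n_factors) array of influence factors, e.g.
    permeability, reservoir pressure, well spacing (illustrative)."""
    X = np.column_stack([np.ones(len(decline_rates)), factors])
    coef, *_ = np.linalg.lstsq(X, decline_rates, rcond=None)
    return coef

def predict_decline(coef, factors):
    """Predict annual decline rates for new factor combinations."""
    X = np.column_stack([np.ones(len(factors)), factors])
    return X @ coef
```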

  15. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles

    PubMed Central

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is in demand in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity of article titles is then analyzed using search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses historical sales data, the popularity of article titles, and the prediction result of a time-series Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparison with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words. PMID:27313605

  16. The effects of geometric uncertainties on computational modelling of knee biomechanics

    PubMed Central

    Fisher, John; Wilcox, Ruth

    2017-01-01

    The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of the cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and by varying the size of the meniscus. The results suggest that the geometric uncertainties in cartilage and meniscus arising from the resolution of MRI and the accuracy of segmentation have considerable effects on the predicted knee mechanics. Moreover, even when the mathematical geometric descriptors closely match the image-based articular surfaces, the detailed contact pressure distribution they produce is not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models. PMID:28879008

  17. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

    In this study, a Random Forest (RF) based land-cover classification method is presented to predict the types of land cover in the Miyun area. The full waveforms returned by a LiteMapper 5600 airborne LiDAR system were processed, including waveform filtering, waveform decomposition and feature extraction. Five commonly used features were extracted: distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis. These waveform features were used as attributes of the training data for generating the RF prediction model, which was then applied to predict the land-cover types in the Miyun area: trees, buildings, farmland and ground. The classification results for these four land-cover types were evaluated against ground-truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance, reaching a classification accuracy of 89.73% and a Kappa of 0.8631.
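
    The Kappa figure reported above is Cohen's kappa computed from the confusion matrix of the four land-cover classes. A sketch of that calculation:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                    # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement
    return float((po - pe) / (1 - pe))
```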

  18. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model-PROT and Model-PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH prediction accuracy was analyzed by comparing predicted versus achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if the predicted Model-PHOT mean dose minus the predicted Model-PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and benchmarked using achieved KBP doses. The R² between achieved and predicted Model-PROT/Model-PHOT mean dose was 0.95/0.98. Generally, achieved mean doses for Model-PHOT/Model-PROT KBPs were respectively lower/higher than predicted. Comparing Model-PROT/Model-PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Next Place Prediction Based on Spatiotemporal Pattern Mining of Mobile Device Logs.

    PubMed

    Lee, Sungjun; Lim, Junseok; Park, Jonghun; Kim, Kwanho

    2016-01-23

    Due to the recent explosive growth of location-aware services based on mobile devices, predicting the next places of a user is of increasing importance for enabling proactive information services. In this paper, we introduce a data-driven framework that aims to predict a user's next places using his/her past visiting patterns analyzed from mobile device logs. Specifically, the notion of the spatiotemporal-periodic (STP) pattern is proposed to capture visits with spatiotemporal periodicity by focusing on a detailed level of location for each individual. Subsequently, we present algorithms that extract STP patterns from a user's past visiting behaviors and predict the next places based on those patterns. Experimental results on a real-world dataset show that, in most cases, the proposed methods are more effective in predicting a user's next places than previous approaches.
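
    A toy version of the pattern idea: count visits keyed by a spatiotemporal slot and keep the ones that recur. The (weekday, hour) slot and the min_support threshold are illustrative simplifications of the paper's STP patterns, not its actual definitions:

```python
from collections import Counter, defaultdict

def extract_stp_patterns(visits, min_support=2):
    """visits: list of (weekday, hour, place) tuples. Keep
    (weekday, hour) -> place associations seen at least min_support
    times — a crude stand-in for spatiotemporal-periodic patterns."""
    counts = Counter(visits)
    patterns = defaultdict(list)
    for (day, hour, place), n in counts.items():
        if n >= min_support:
            patterns[(day, hour)].append((place, n))
    return patterns

def predict_next_place(patterns, day, hour):
    """Predict the most frequently visited place for a time slot."""
    candidates = patterns.get((day, hour))
    if not candidates:
        return None
    return max(candidates, key=lambda pn: pn[1])[0]
```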

  20. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severities of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is applied to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model. Experimental results illustrate the effectiveness of the methodology.
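
    The remaining-life idea can be sketched with a plain Markov chain: estimate a transition matrix from an observed sequence of degradation states, then solve for the mean number of steps until the failure state. The FCM state division and the dynamic weighting of the paper are omitted in this simplification:

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a row-stochastic transition matrix from a state sequence."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

def expected_steps_to_failure(P, failure_state):
    """Mean hitting time of the failure (absorbing) state from each
    state, solving (I - Q) t = 1 on the transient block Q."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != failure_state]
    Q = P[np.ix_(idx, idx)]
    t = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    out = np.zeros(n)
    out[idx] = t
    return out
```

For a two-state chain that stays healthy or fails with equal probability each step, the expected remaining life from the healthy state is 2 steps.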

  1. Trading Network Predicts Stock Price

    PubMed Central

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi

    2014-01-01

    Stock price prediction is an important and challenging problem in the study of financial markets. Existing studies are mainly based on the time series of stock prices or the operating performance of listed companies. In this paper, we propose to predict stock price based on investors' trading behavior. For each stock, we characterize the daily trading relationships among its investors using a trading network. We then classify the nodes of the trading network into three roles according to their connectivity pattern. Strong Granger causality is found between stock price and trading relationship indices, i.e., the fraction of trading relationships among nodes with different roles. We further predict stock price by incorporating these trading relationship indices into a neural network based on the time series of stock prices. Experimental results on 51 stocks in two Chinese stock exchanges demonstrate that the accuracy of stock price prediction is significantly improved by the inclusion of trading relationship indices. PMID:24429767
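
    The Granger-causality claim above rests on a standard nested-regression F-test: lags of a candidate driver improve an autoregression of the target if the unrestricted residual sum of squares drops significantly. A minimal one-lag sketch (not the paper's exact test setup):

```python
import numpy as np

def granger_f(y, x, p=1):
    """F-statistic testing whether p lags of x improve an AR(p) model
    of y (a minimal Granger-causality check)."""
    n = len(y) - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    lags_x = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    ones = np.ones((n, 1))
    Xr = np.hstack([ones, lags_y])           # restricted: y lags only
    Xu = np.hstack([ones, lags_y, lags_x])   # unrestricted: plus x lags
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    df1, df2 = p, n - Xu.shape[1]
    return (rss_r - rss_u) / df1 / (rss_u / df2)
```

A series strongly driven by the lag of another should yield a far larger F than an unrelated series.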

  2. Geary autocorrelation and DCCA coefficient: Application to predict apoptosis protein subcellular localization via PSSM

    NASA Astrophysics Data System (ADS)

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2017-02-01

    Apoptosis is a fundamental process controlling normal tissue homeostasis by regulating a balance between cell proliferation and death. Predicting the subcellular location of apoptosis proteins is very helpful for understanding the mechanism of programmed cell death. Prediction of apoptosis protein subcellular location is still a challenging and complicated task, and existing methods are mainly based on protein primary sequences. In this paper, we propose a new position-specific scoring matrix (PSSM)-based model using the Geary autocorrelation function and the detrended cross-correlation coefficient (DCCA coefficient). A 270-dimensional (270D) feature vector is constructed on three widely used datasets: ZD98, ZW225 and CL317, and a support vector machine is adopted as the classifier. The overall prediction accuracies, assessed by rigorous jackknife tests, are significantly improved. The results show that our model offers a reliable and effective PSSM-based tool for prediction of apoptosis protein subcellular localization.
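
    The sequence form of Geary autocorrelation used in protein descriptor work compares property values d positions apart against the overall variance. A sketch of that formula (the paper's exact normalization may differ):

```python
import numpy as np

def geary_autocorrelation(x, d):
    """Geary coefficient at lag d for a 1-D property sequence x
    (e.g. one PSSM column): mean squared lag-d difference over
    the sample variance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    num = np.sum((x[:n - d] - x[d:]) ** 2) / (2 * (n - d))
    den = np.sum((x - x.mean()) ** 2) / (n - 1)
    return float(num / den)
```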

  3. Trading network predicts stock price.

    PubMed

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi

    2014-01-16

    Stock price prediction is an important and challenging problem in the study of financial markets. Existing studies are mainly based on the time series of stock prices or the operating performance of listed companies. In this paper, we propose to predict stock price based on investors' trading behavior. For each stock, we characterize the daily trading relationships among its investors using a trading network. We then classify the nodes of the trading network into three roles according to their connectivity pattern. Strong Granger causality is found between stock price and trading relationship indices, i.e., the fraction of trading relationships among nodes with different roles. We further predict stock price by incorporating these trading relationship indices into a neural network based on the time series of stock prices. Experimental results on 51 stocks in two Chinese stock exchanges demonstrate that the accuracy of stock price prediction is significantly improved by the inclusion of trading relationship indices.

  4. Using the Malthus programme to predict the recruitment of patients to MR-linac research trials in prostate and lung cancer.

    PubMed

    Sanderson, Benjamin; McWilliam, Alan; Faivre-Finn, Corinne; Kirkby, Norman Francis; Jena, Rajesh; Mee, Thomas; Choudhury, Ananya

    2017-01-01

    In this study, we used evidence-based mathematical modelling to predict the patient cohort for MR-linac to assess its feasibility in a time of austerity. We discuss our results and the implications of evidence-based radiotherapy demand modelling tools such as Malthus on the implementation of new technology and value-based healthcare. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Unraveling past impacts of climate change and land management on historic peatland development using proxy-based reconstruction, monitoring data and process modeling.

    PubMed

    Heinemeyer, Andreas; Swindles, Graeme T

    2018-05-08

    Peatlands represent globally significant soil carbon stores that have been accumulating for millennia under water-logged conditions. However, deepening water-table depths (WTD) from climate change or human-induced drainage could stimulate decomposition resulting in peatlands turning from carbon sinks to carbon sources. Contemporary WTD ranges of testate amoebae (TA) are commonly used to predict past WTD in peatlands using quantitative transfer function models. Here we present, for the first time, a study comparing TA-based WTD reconstructions to instrumentally monitored WTD and hydrological model predictions using the MILLENNIA peatland model to examine past peatland responses to climate change and land management. Although there was very good agreement between monitored and modeled WTD, TA-reconstructed water table was consistently deeper. Predictions from a larger European TA transfer function data set were wetter, but the overall directional fit to observed WTD was better for a TA transfer function based on data from northern England. We applied a regression-based offset correction to the reconstructed WTD for the validation period (1931-2010). We then predicted WTD using available climate records as MILLENNIA model input and compared the offset-corrected TA reconstruction to MILLENNIA WTD predictions over an extended period (1750-1931) with available climate reconstructions. Although the comparison revealed striking similarities in predicted overall WTD patterns, particularly for a recent drier period (1965-1995), there were clear periods when TA-based WTD predictions underestimated (i.e. drier during 1830-1930) and overestimated (i.e. wetter during 1760-1830) past WTD compared to MILLENNIA model predictions. Importantly, simulated grouse moor management scenarios may explain the drier TA WTD predictions, resulting in considerable model predicted carbon losses and reduced methane emissions, mainly due to drainage. 
This study demonstrates the value of a site-specific and combined data-model validation step toward using TA-derived moisture conditions to understand past climate-driven peatland development and carbon budgets alongside modeling likely management impacts. © 2018 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  6. Accurate experimental and theoretical comparisons between superconductor-insulator-superconductor mixers showing weak and strong quantum effects

    NASA Technical Reports Server (NTRS)

    Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.

    1988-01-01

    A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta-based, Nb-based, and Pb-alloy-based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made of the mixing results with the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow for the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, which is in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.

  7. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.

    1992-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high atomic oxygen fluence-directed ram LDEF results has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with comparison of observed ground laboratory and LDEF results.
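
    The Monte Carlo approach described above can be illustrated with a deliberately simplified toy: atoms arriving at a coating defect either react with the underlying polymer, escape back out, or scatter and try again. The probabilities and the single-cell geometry are illustrative assumptions only, not the LDEF model's mechanics:

```python
import random

def undercut_fraction(n_atoms, p_react, p_escape, seed=42):
    """Toy Monte Carlo for atomic-oxygen attack at a coating defect:
    on each encounter an atom reacts with the polymer (p_react),
    escapes out of the defect (p_escape), or scatters and tries again.
    Returns the fraction of atoms that end up eroding the polymer;
    analytically this is p_react / (p_react + p_escape)."""
    rng = random.Random(seed)
    reacted = 0
    for _ in range(n_atoms):
        while True:
            r = rng.random()
            if r < p_react:
                reacted += 1
                break
            if r < p_react + p_escape:
                break  # atom re-emitted out of the defect
    return reacted / n_atoms
```

Tuning assumed per-encounter probabilities until simulated erosion profiles match microscopy is, in spirit, how such models are calibrated against LDEF data.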

  8. Does teacher evaluation based on student performance predict motivation, well-being, and ill-being?

    PubMed

    Cuevas, Ricardo; Ntoumanis, Nikos; Fernandez-Bustos, Juan G; Bartholomew, Kimberley

    2018-06-01

    This study tests an explanatory model based on self-determination theory, which posits that pressure experienced by teachers when they are evaluated based on their students' academic performance will differentially predict teacher adaptive and maladaptive motivation, well-being, and ill-being. A total of 360 Spanish physical education teachers completed a multi-scale inventory. We found support for a structural equation model that showed that perceived pressure predicted teacher autonomous motivation negatively, predicted amotivation positively, and was unrelated to controlled motivation. In addition, autonomous motivation predicted vitality positively and exhaustion negatively, whereas controlled motivation and amotivation predicted vitality negatively and exhaustion positively. Amotivation significantly mediated the relation between pressure and vitality and between pressure and exhaustion. The results underline the potential negative impact of pressure felt by teachers due to this type of evaluation on teacher motivation and psychological health. Copyright © 2018 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  9. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes

    PubMed Central

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses, addressing the case of roads served by multiple bus routes, is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm then adjusts bus travel times using the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from roads with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction accuracy among the five models considered in the study. PMID:26294903
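
    In the scalar case, the Kalman adjustment step reduces to blending the SVM baseline with the latest observed travel time in proportion to their variances. A minimal sketch of that single update (not the paper's full filter):

```python
def kalman_update(est, var, meas, meas_var):
    """One scalar Kalman step: blend the baseline travel-time estimate
    (est, with variance var) with a new measurement (meas, meas_var)."""
    k = var / (var + meas_var)        # Kalman gain
    new_est = est + k * (meas - est)  # pull estimate toward measurement
    new_var = (1 - k) * var           # uncertainty shrinks after update
    return new_est, new_var
```

With equal variances the update lands halfway between baseline and observation, e.g. a 600 s baseline and a 660 s observation yield 630 s.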

  10. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.

    PubMed

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses, addressing the case of roads served by multiple bus routes, is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm then adjusts bus travel times using the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from roads with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction accuracy among the five models considered in the study.

  11. Design of experiments-based monitoring of critical quality attributes for the spray-drying process of insulin by NIR spectroscopy.

    PubMed

    Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger

    2012-09-01

    Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) were used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48%, with thermal gravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results of this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
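
    The spectra-to-property calibration can be sketched with a one-component NIPALS PLS plus the RMSEP used to report prediction error. This is a simplification: the paper's models use multiple latent variables and spectral preprocessing, and the function names here are illustrative:

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS1 (NIPALS); X (spectra) and y (property)
    assumed mean-centered or rank-1 so one component suffices."""
    w = X.T @ y                 # weight vector from X-y covariance
    w /= np.linalg.norm(w)
    t = X @ w                   # scores
    q = (y @ t) / (t @ t)       # y-loading
    return w, q

def pls1_predict(X, w, q):
    return (X @ w) * q

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))
```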

  12. A Predictive Model for Medical Events Based on Contextual Embedding of Temporal Sequences

    PubMed Central

    Wang, Zhimu; Huang, Yingxiang; Wang, Shuang; Wang, Fei; Jiang, Xiaoqian

    2016-01-01

    Background: Medical concepts are inherently ambiguous and error-prone due to human fallibility, which makes it hard for them to be fully used by classical machine learning methods (eg, for tasks like early stage disease prediction). Objective: Our work was to create a new machine-friendly representation that resembles the semantics of medical concepts. We then developed a sequential predictive model for medical events based on this new representation. Methods: We developed novel contextual embedding techniques to combine different medical events (eg, diagnoses, prescriptions, and labs tests). Each medical event is converted into a numerical vector that resembles its “semantics,” via which the similarity between medical events can be easily measured. We developed simple and effective predictive models based on these vectors to predict novel diagnoses. Results: We evaluated our sequential prediction model (and standard learning methods) in estimating the risk of potential diseases based on our contextual embedding representation. Our model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.79 on chronic systolic heart failure and an average AUC of 0.67 (over the 80 most common diagnoses) using the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. Conclusions: We propose a general early prognosis predictor for 80 different diagnoses. Our method computes numeric representation for each medical event to uncover the potential meaning of those events. Our results demonstrate the efficiency of the proposed method, which will benefit patients and physicians by offering more accurate diagnosis. PMID:27888170
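
    The claim that "similarity between medical events can be easily measured" once events are vectors reduces to cosine similarity. A sketch with made-up event names and vectors (all illustrative, not from the paper):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(query, embeddings):
    """Return the stored event closest to `query` by cosine similarity."""
    return max((e for e in embeddings if e != query),
               key=lambda e: cosine(embeddings[query], embeddings[e]))
```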

  13. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    PubMed

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. We aimed to develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value <0.25 in the univariate analysis were further evaluated in multivariate models using backward elimination. Discrimination was assessed using the receiver operating characteristic (ROC) curve. Calibration was evaluated using McFadden's R². The net reclassification index (NRI) associated with incorporation of laboratory results was calculated. Results were internally validated. A model similar to existing CRC prediction models, which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy, had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R² of 0.21 with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R² of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
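
    The AUC figures quoted above are equivalent to the Mann–Whitney rank statistic on the model's risk scores. A sketch of that computation (assumes no tied scores):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann–Whitney rank formula:
    rank the scores, then compare the positive-class rank sum to its
    minimum possible value."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))
```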

  14. Prediction of Combustion Gas Deposit Compositions

    NASA Technical Reports Server (NTRS)

    Kohl, F. J.; Mcbride, B. J.; Zeleznik, F. J.; Gordon, S.

    1985-01-01

    A demonstrated procedure is used to accurately predict the chemical compositions of complicated deposit mixtures. NASA Lewis Research Center's Computer Program for Calculation of Complex Chemical Equilibrium Compositions (CEC) is used in conjunction with the Computer Program for Calculation of Ideal Gas Thermodynamic Data (PAC) and the resulting Thermodynamic Data Base (THDATA) to predict deposit compositions from metal- or mineral-seeded combustion processes.

  15. Driving and Low Vision: Validity of Assessments for Predicting Performance of Drivers

    ERIC Educational Resources Information Center

    Strong, J. Graham; Jutai, Jeffrey W.; Russell-Minda, Elizabeth; Evans, Mal

    2008-01-01

    The authors conducted a systematic review to examine whether vision-related assessments can predict the driving performance of individuals who have low vision. The results indicate that measures of visual field, contrast sensitivity, cognitive and attention-based tests, and driver screening tools have variable utility for predicting real-world…

  16. Experimental and computational prediction of glass transition temperature of drugs.

    PubMed

    Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S

    2014-12-22

    Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse drug-like compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, support vector regression gave the best result, with an RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict the Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.
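
    The simple Tm-based model is just a fitted line Tg = a + b·Tm. A sketch of that fit (for context, the classical Boyer–Beaman rule of thumb puts Tg near (2/3)·Tm for many organic compounds; the coefficients below are whatever the data give, not the paper's values):

```python
import numpy as np

def fit_tg_from_tm(tm, tg):
    """Least-squares line Tg = a + b*Tm from paired temperature data (K)."""
    A = np.column_stack([np.ones(len(tm)), tm])
    (a, b), *_ = np.linalg.lstsq(A, tg, rcond=None)
    return a, b

def rmse(y, yhat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))
```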

  17. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks.

    PubMed

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software built around prediction-based results is a particular challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather has been the subject of extensive research for the last three decades. Initially, problems in financial analysis and prediction were solved with statistical models and methods. Over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve the problems of financial data and obtain accurate results in predicting future trends and prices. This paper addresses architectural design issues for performance improvement by combining the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used.

  18. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks

    PubMed Central

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has long been characterized by disconnects between activities such as planning, analysis, design, and programming. Software built around prediction-based results is a particular challenge for designers. Time series forecasting of currency exchange rates, stock prices, and weather has been an area of extensive research for the last three decades. Initially, problems in financial analysis and prediction were solved with statistical models and methods. Over the last two decades, a large number of learning models based on Artificial Neural Networks have been proposed to model financial data and to predict future trends and prices accurately. This paper addresses architectural design issues for performance improvement by combining the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used. PMID:26881271

  19. Experimental test of quantum nonlocality in three-photon Greenberger-Horne-Zeilinger entanglement

    PubMed

    Pan; Bouwmeester; Daniell; Weinfurter; Zeilinger

    2000-02-03

    Bell's theorem states that certain statistical correlations predicted by quantum physics for measurements on two-particle systems cannot be understood within a realistic picture based on local properties of each individual particle, even if the two particles are separated by large distances. Einstein, Podolsky and Rosen first recognized the fundamental significance of these quantum correlations (termed 'entanglement' by Schrödinger), and the two-particle quantum predictions have found ever-increasing experimental support. A more striking conflict between quantum mechanical and local realistic predictions (for perfect correlations) has been discovered; but experimental verification has been difficult, as it requires entanglement between at least three particles. Here we report experimental confirmation of this conflict, using our recently developed method to observe three-photon entanglement, or 'Greenberger-Horne-Zeilinger' (GHZ) states. The results of three specific experiments, involving measurements of polarization correlations between three photons, lead to predictions for a fourth experiment; the quantum physical predictions are mutually contradictory with expectations based on local realism. We find the results of the fourth experiment to be in agreement with the quantum prediction and in striking conflict with local realism.
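The GHZ conflict can be checked numerically. The sketch below (not from the paper) builds the three-photon GHZ state and the Pauli X and Y operators and verifies that quantum mechanics predicts XYY = YXY = YYX = -1 while XXX = +1, whereas local realism, assigning each photon fixed values x_i, y_i = ±1, forces the product x1·x2·x3 = (-1)³ = -1:

```python
def kron(a, b):
    """Kronecker product of two square complex matrices (lists of lists)."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def expval(op, v):
    """Expectation value <v|op|v> for a normalized state vector v."""
    w = matvec(op, v)
    return sum(v[i].conjugate() * w[i] for i in range(len(v))).real

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]

def triple(a, b, c):
    return kron(kron(a, b), c)

# Three-photon GHZ state (|000> + |111>) / sqrt(2) as an 8-component vector.
ghz = [0j] * 8
ghz[0] = ghz[7] = 1 / 2 ** 0.5

xyy = expval(triple(X, Y, Y), ghz)
yxy = expval(triple(Y, X, Y), ghz)
yyx = expval(triple(Y, Y, X), ghz)
xxx = expval(triple(X, X, X), ghz)
# Quantum mechanics: XYY = YXY = YYX = -1 and XXX = +1.
# Local realism: (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3 because y_i**2 = 1,
# which would force XXX = (-1)**3 = -1, contradicting the quantum value +1.
print(xyy, yxy, yyx, xxx)
```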

  20. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, the grey model, and the ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
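The two-stage scheme (deterministic trend plus a stochastic residual model) can be sketched as follows. This is an illustrative simplification, not the authors' code: it fits a quadratic-plus-one-harmonic trend by least squares on a synthetic clock-bias-like series and then fits a plain AR(1) to the residuals in place of a full ARIMA:

```python
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def features(t, period):
    return [1.0, t, t * t,
            math.sin(2 * math.pi * t / period), math.cos(2 * math.pi * t / period)]

def fit_trend(ts, ys, period):
    """Least-squares fit of a quadratic trend plus one periodic term."""
    X = [features(t, period) for t in ts]
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(n)]
    coef = solve(A, b)
    return coef, [sum(c * f for c, f in zip(coef, row)) for row in X]

# Synthetic clock-bias-like series: quadratic drift + periodic term + AR(1) noise.
random.seed(0)
ts = [float(t) for t in range(200)]
ys, e = [], 0.0
for t in ts:
    e = 0.6 * e + random.gauss(0.0, 0.05)
    ys.append(5.0 + 0.02 * t + 1e-4 * t * t + 0.5 * math.sin(2 * math.pi * t / 24.0) + e)

coef, fit = fit_trend(ts, ys, period=24.0)
res = [y - f for y, f in zip(ys, fit)]
# AR(1) stand-in for the ARIMA residual model: r_t ~ phi * r_{t-1}.
phi = sum(res[i] * res[i - 1] for i in range(1, len(res))) / sum(r * r for r in res[:-1])
# Combined one-step-ahead prediction: trend extrapolation + residual forecast.
forecast = sum(c * f for c, f in zip(coef, features(200.0, 24.0))) + phi * res[-1]
```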

  1. Comparison of Several Methods of Predicting the Pressure Loss at Altitude Across a Baffled Aircraft-Engine Cylinder

    NASA Technical Reports Server (NTRS)

    Neustein, Joseph; Schafer, Louis J., Jr.

    1946-01-01

    Several methods of predicting the compressible-flow pressure loss across a baffled aircraft-engine cylinder were analytically related and were experimentally investigated on a typical air-cooled aircraft-engine cylinder. Tests with and without heat transfer covered a wide range of cooling-air flows and simulated altitudes from sea level to 40,000 feet. Both the analysis and the test results showed that the method based on the density determined by the static pressure and the stagnation temperature at the baffle exit gave results comparable with those obtained from methods derived by one-dimensional-flow theory. The method based on a characteristic Mach number, although related analytically to one-dimensional-flow theory, was found impractical in the present tests because of the difficulty encountered in defining the proper characteristic state of the cooling air. Accurate predictions of altitude pressure loss can apparently be made by these methods, provided that they are based on the results of sea-level tests with heat transfer.
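The density-based method described above amounts to evaluating the cooling-air density from the exit static pressure and stagnation temperature, and scaling the sea-level pressure loss accordingly. A toy illustration assuming ideal-gas air and the common Δp ∝ ṁ²/ρ scaling at fixed mass flow (the test values are invented, not from the report):

```python
R_AIR = 287.05  # specific gas constant of air, J/(kg*K)

def density(static_pressure_pa, stagnation_temp_k):
    """Ideal-gas density from exit static pressure and stagnation temperature."""
    return static_pressure_pa / (R_AIR * stagnation_temp_k)

def altitude_pressure_loss(dp_sea_level, rho_sea_level, rho_altitude):
    """Scale a measured sea-level loss, assuming dp ~ mdot**2 / rho at fixed mdot."""
    return dp_sea_level * rho_sea_level / rho_altitude

rho_sl = density(101325.0, 320.0)    # hypothetical sea-level test conditions
rho_alt = density(26500.0, 300.0)    # roughly 10 km altitude conditions
dp_alt = altitude_pressure_loss(2000.0, rho_sl, rho_alt)
print(dp_alt)  # pressure loss grows markedly as density falls with altitude
```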

  2. The Global Integrated Drought Monitoring and Prediction System (GIDMaPS): Overview and Capabilities

    NASA Astrophysics Data System (ADS)

    AghaKouchak, A.; Hao, Z.; Farahmand, A.; Nakhjiri, N.

    2013-12-01

    Development of reliable monitoring and prediction indices and tools is fundamental to drought preparedness and management. Motivated by the Global Drought Information System (GDIS) activities, this paper presents the Global Integrated Drought Monitoring and Prediction System (GIDMaPS), which provides near real-time drought information using both remote sensing observations and model simulations. Monthly data from the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA-Land), the North American Land Data Assimilation System (NLDAS), and remotely sensed precipitation data are used as input to GIDMaPS. Numerous indices have been developed for drought monitoring based on various indicator variables (e.g., precipitation, soil moisture, water storage), but defining droughts based on a single variable may not be sufficient for reliable risk assessment and decision making. GIDMaPS therefore provides drought information based on multiple indices, including the Standardized Precipitation Index (SPI), the Standardized Soil Moisture Index (SSI), and the Multivariate Standardized Drought Index (MSDI), which combines SPI and SSI probabilistically. In other words, MSDI incorporates both meteorological and agricultural drought conditions for an overall characterization of droughts. The seasonal prediction component of GIDMaPS is based on a persistence model which requires historical data and near-past observations, and is built on two input data sets (MERRA and NLDAS) and three drought indicators (SPI, SSI, and MSDI). The drought prediction model provides the empirical probability of drought for different severity levels. In this presentation, both the monitoring and prediction components of GIDMaPS are discussed, and results from several major droughts, including the 2013 Namibia, 2012-2013 United States, 2011-2012 Horn of Africa, and 2010 Amazon droughts, are presented. The results indicate that GIDMaPS advances our drought monitoring and prediction capabilities through the integration of multiple data sets and indicators.
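A standardized index of the SPI/SSI family can be sketched by mapping empirical cumulative probabilities through the inverse normal CDF, and the MSDI idea does the same with a joint probability. A minimal illustration using Gringorten plotting positions on synthetic data (this is the general construction, not GIDMaPS code):

```python
from statistics import NormalDist

def empirical_prob(sample):
    """Gringorten plotting position for each value in the sample."""
    order = sorted(sample)
    n = len(sample)
    rank = {v: i + 1 for i, v in enumerate(order)}  # 1-based rank (values unique here)
    return [(rank[v] - 0.44) / (n + 0.12) for v in sample]

def standardized_index(sample):
    """SPI/SSI-style index: inverse normal of the empirical probability."""
    inv = NormalDist().inv_cdf
    return [inv(p) for p in empirical_prob(sample)]

def msdi(precip, soil):
    """Multivariate index from the joint empirical probability P(P<=p, S<=s)."""
    n = len(precip)
    inv = NormalDist().inv_cdf
    joint = [sum(1 for j in range(n)
                 if precip[j] <= precip[i] and soil[j] <= soil[i])
             for i in range(n)]
    return [inv((c - 0.44) / (n + 0.12)) for c in joint]

# Hypothetical monthly precipitation (mm) and soil moisture (fraction).
precip = [31.0, 55.0, 12.0, 78.0, 44.0, 90.0, 25.0, 60.0, 18.0, 70.0]
soil = [0.21, 0.30, 0.15, 0.35, 0.26, 0.38, 0.18, 0.31, 0.14, 0.33]
spi = standardized_index(precip)
print(spi)  # the driest month maps to the most negative index value
```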

  3. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    PubMed

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction in some life-threatening ailments using the Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine is capable of reducing runtime and speeding up the execution process through the use of the critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method, used for selecting the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through an in-house developed PHP program running on an XAMPP/Apache HTTP server, with data acquisition and case-base development implemented using MySQL. Performance comparison between our system and the NBY, ED-KNN, ANN, SVM, Random Forest, and traditional CBR techniques shows that the quality of predictions produced by our system outperformed the baseline methods studied. The results of our experiments show that the precision rate and predictive quality of our system in most cases are equal to or greater than 70%, and that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, fast, accurate, and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
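Case-based reasoning with a chi-square distance can be sketched as nearest-neighbour retrieval followed by a majority vote. The distance form below (a common χ² histogram distance) and the toy case base are assumptions for illustration, not the paper's exact formulation:

```python
def chi_square_distance(x, y):
    """Chi-square distance between two non-negative feature vectors."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(x, y) if a + b > 0)

def retrieve(case_base, query, k=3):
    """Return the k stored cases most similar to the query."""
    return sorted(case_base,
                  key=lambda case: chi_square_distance(case[0], query))[:k]

def predict_risk(case_base, query, k=3):
    """Majority vote over the labels of the k retrieved cases."""
    labels = [label for _, label in retrieve(case_base, query, k)]
    return max(set(labels), key=labels.count)

# Toy case base: (normalized feature vector, risk label) pairs, purely illustrative.
cases = [
    ([0.9, 0.1, 0.4], "high"),
    ([0.8, 0.2, 0.5], "high"),
    ([0.2, 0.9, 0.1], "low"),
    ([0.1, 0.8, 0.2], "low"),
    ([0.7, 0.3, 0.6], "high"),
]
print(predict_risk(cases, [0.85, 0.15, 0.45]))  # -> high
```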

  4. Leuconostoc mesenteroides growth in food products: prediction and sensitivity analysis by adaptive-network-based fuzzy inference systems.

    PubMed

    Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien

    2013-01-01

    An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R2). Graphical plots were also used for model comparison. The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.

  5. Leuconostoc Mesenteroides Growth in Food Products: Prediction and Sensitivity Analysis by Adaptive-Network-Based Fuzzy Inference Systems

    PubMed Central

    Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien

    2013-01-01

    Background: An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. Methods: The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R2). Graphical plots were also used for model comparison. Conclusions: The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. PMID:23705023
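The Gaussian membership function at the core of an ANFIS model, combined with a firing-strength-weighted rule average (zero-order Sugeno inference), can be sketched as follows. The rule centres and consequent growth rates are invented for illustration, not fitted to the study's data:

```python
import math

def gaussmf(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set centred at c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno_predict(x, rules):
    """Zero-order Sugeno inference: weighted average of rule outputs."""
    weights = [gaussmf(x, c, s) for c, s, _ in rules]
    outputs = [out for _, _, out in rules]
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

# Hypothetical rules mapping temperature (deg C) to growth rate (1/h):
# (centre, sigma, consequent growth rate).
rules = [(10.5, 3.0, 0.05), (17.5, 3.0, 0.20), (24.5, 3.0, 0.45)]

print(sugeno_predict(17.5, rules))  # dominated by the middle rule
```

In a real ANFIS, the membership parameters and consequents are tuned from training data rather than set by hand.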

  6. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    NASA Astrophysics Data System (ADS)

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-04-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) in umbilical cord blood in an electronic waste (e-waste) contaminated area by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model. Specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the scheme's validity was further verified by numerical simulation experiments. The experimental results show that: 1) seven PCB congeners correlate negatively with five indices of birth status: newborn weight, height, gestational age, Apgar score, and anogenital distance; 2) the prenatally PCB-exposed group is at greater risk than the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future is greater. The numerical simulation results demonstrate the feasibility of applying mathematical models in environmental toxicology.

  7. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    PubMed Central

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-01-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) in umbilical cord blood in an electronic waste (e-waste) contaminated area by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model. Specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the scheme's validity was further verified by numerical simulation experiments. The experimental results show that: 1) seven PCB congeners correlate negatively with five indices of birth status: newborn weight, height, gestational age, Apgar score, and anogenital distance; 2) the prenatally PCB-exposed group is at greater risk than the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future is greater. The numerical simulation results demonstrate the feasibility of applying mathematical models in environmental toxicology. PMID:27118260
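A self-exciting threshold autoregressive model switches AR dynamics depending on a threshold of the lagged value itself. A minimal two-regime sketch with least-squares estimation per regime, on synthetic data rather than the paper's series:

```python
import random

def fit_ar1(pairs):
    """Least-squares slope for x_t = a * x_{t-1} (no intercept)."""
    num = sum(prev * cur for prev, cur in pairs)
    den = sum(prev * prev for prev, _ in pairs)
    return num / den

def fit_setar(series, threshold):
    """Split (x_{t-1}, x_t) pairs by regime and fit one AR(1) slope each."""
    pairs = list(zip(series, series[1:]))
    low = [p for p in pairs if p[0] <= threshold]
    high = [p for p in pairs if p[0] > threshold]
    return fit_ar1(low), fit_ar1(high)

# Simulate a two-regime process: slope 0.9 below threshold 0, slope 0.3 above.
random.seed(1)
x, series = 0.5, []
for _ in range(5000):
    a = 0.9 if x <= 0.0 else 0.3
    x = a * x + random.gauss(0.0, 0.1)
    series.append(x)

a_low, a_high = fit_setar(series, threshold=0.0)
print(a_low, a_high)  # roughly 0.9 and 0.3
```

One-step prediction then simply applies the slope of whichever regime the last observation falls in.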

  8. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing to be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.
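The ground-to-space prediction step folds a measured cross-section curve with an environment model, schematically rate = Σ σ(E)·φ(E)·ΔE. A toy numerical version, where the Weibull-shaped cross section and power-law proton flux are illustrative assumptions rather than MPTB data:

```python
import math

def weibull_cross_section(E, onset=2.0, width=10.0, s=1.5, sigma_sat=1e-7):
    """Illustrative Weibull fit to a measured SEU cross section (cm^2/bit)."""
    if E <= onset:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-((E - onset) / width) ** s))

def differential_flux(E):
    """Illustrative power-law proton flux (protons / (cm^2 * s * MeV))."""
    return 1e2 * E ** -2.0

def upset_rate(e_min=2.0, e_max=500.0, steps=10000):
    """Trapezoidal fold of cross section with flux: upsets per bit-second."""
    dE = (e_max - e_min) / steps
    total = 0.0
    for i in range(steps + 1):
        E = e_min + i * dE
        w = 0.5 if i in (0, steps) else 1.0
        total += w * weibull_cross_section(E) * differential_flux(E) * dE
    return total

rate = upset_rate()
print(rate)
```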

  9. Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.

    PubMed

    Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing

    2009-03-06

    Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection frame, this paper presents a computational system to predict PPIs (protein-protein interactions) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poorly performing and/or redundant features, leaving 103 features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall accurate prediction rate of 76.18%, evaluated by a 10-fold cross-validation test, which is 1.46% higher than with the initial 114 features and 6.51% higher than with the 20 features of conventional amino acid composition. The PPI predictor developed for this research is available for public use at http://chemdata.shu.edu.cn/ppi.
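The mRMR stage ranks features by relevance to the label minus average redundancy with already-selected features. A small greedy sketch using absolute Pearson correlation as the scoring term (a common continuous approximation; the authors' exact criterion may use mutual information):

```python
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def mrmr_rank(features, labels, k):
    """Greedy max-relevance, min-redundancy selection of k feature indices."""
    selected = []
    remaining = list(range(len(features)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = abs(pearson(features[i], labels))
            redundancy = (sum(abs(pearson(features[i], features[j]))
                              for j in selected) / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 1 is a near-duplicate of feature 0, so mRMR skips it.
labels = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
features = [
    [0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.2, 0.8],      # 0: highly relevant
    [0.15, 0.85, 0.25, 0.75, 0.2, 0.9, 0.1, 0.8],  # 1: redundant with 0
    [0.5, 0.4, 0.9, 0.1, 0.6, 0.5, 0.3, 0.7],      # 2: weakly relevant, novel
]
print(mrmr_rank(features, labels, k=2))  # -> [0, 2]
```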

  10. Voidage correction algorithm for unresolved Euler-Lagrange simulations

    NASA Astrophysics Data System (ADS)

    Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan

    2018-04-01

    The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using the Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value than results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm significantly improves the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p. It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error of the total exchange rate of up to 30%.
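The correction idea can be sketched as a logistic blend between the cell voidage and a neighbouring value, driven by the resolved voidage gradient. The functional form and constants below are illustrative assumptions, not the authors' calibrated model:

```python
import math

def corrected_voidage(cell_voidage, neighbor_voidage, cell_size, steepness=8.0):
    """
    Blend a particle's cell voidage toward the neighbouring value, with a
    sigmoid weight that grows with the magnitude of the voidage gradient.
    """
    gradient = (neighbor_voidage - cell_voidage) / cell_size
    weight = 1.0 / (1.0 + math.exp(-steepness * abs(gradient)))  # in (0.5, 1)
    blend = 2.0 * (weight - 0.5)                                 # rescaled to (0, 1)
    return cell_voidage + blend * (neighbor_voidage - cell_voidage) / 2.0

# Steep voidage front: the particle should "feel" part of the neighbouring gas.
print(corrected_voidage(0.45, 0.95, cell_size=0.01))
# Uniform field: the correction vanishes and the cell value is returned.
print(corrected_voidage(0.45, 0.45, cell_size=0.01))
```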

  11. R2C: improving ab initio residue contact map prediction using dynamic fusion strategy and Gaussian noise filter.

    PubMed

    Yang, Jing; Jin, Qi-Yu; Zhang, Biao; Shen, Hong-Bin

    2016-08-15

    Inter-residue contacts in proteins dictate the topology of protein structures. They are crucial for protein folding and structural stability. Accurate prediction of residue contacts, especially long-range contacts, is important to the quality of ab initio structure modeling, since they enforce strong restraints on structure assembly. In this paper, we present a new Residue-Residue Contact predictor called R2C that combines machine learning-based and correlated mutation analysis-based methods, together with a two-dimensional Gaussian noise filter to enhance long-range residue contact prediction. Our results show that the outputs from the machine learning-based method are concentrated, with better performance on short-range contacts, while for the correlated mutation analysis-based approach the predictions are widespread, with higher accuracy on long-range contacts. An effective query-driven dynamic fusion strategy proposed here takes full advantage of the two different methods, resulting in an impressive overall accuracy improvement. We also show that the contact map directly from the prediction model contains an interesting Gaussian noise, which has not been discovered before. Different from recent studies that tried to further enhance the quality of the contact map by removing its transitive noise, we designed a new two-dimensional Gaussian noise filter, which was especially helpful for reinforcing long-range residue contact prediction. Tested on recent CASP10/11 datasets, the overall top L/5 accuracy of our final R2C predictor is 17.6%/15.5% higher than the pure machine learning-based method and 7.8%/8.3% higher than the correlated mutation analysis-based approach for long-range residue contact prediction. Availability: http://www.csbio.sjtu.edu.cn/bioinf/R2C/ Contact: hbshen@sjtu.edu.cn Supplementary data are available at Bioinformatics online.
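A two-dimensional Gaussian filter of the kind used to denoise a predicted contact map can be sketched as a separable convolution. Kernel radius and sigma are arbitrary choices here, not the values used by R2C:

```python
import math

def gaussian_kernel(radius, sigma):
    """1D Gaussian kernel, normalized to sum to 1."""
    raw = [math.exp(-(i * i) / (2 * sigma * sigma))
           for i in range(-radius, radius + 1)]
    total = sum(raw)
    return [v / total for v in raw]

def filter_2d(grid, radius=1, sigma=1.0):
    """Separable Gaussian smoothing of a 2D map, clamping at the edges."""
    k = gaussian_kernel(radius, sigma)
    n, m = len(grid), len(grid[0])
    def clamp(v, hi):
        return max(0, min(hi - 1, v))
    rows = [[sum(k[d + radius] * grid[i][clamp(j + d, m)]
                 for d in range(-radius, radius + 1)) for j in range(m)]
            for i in range(n)]
    return [[sum(k[d + radius] * rows[clamp(i + d, n)][j]
                 for d in range(-radius, radius + 1)) for j in range(m)]
            for i in range(n)]

# A noisy "contact map": one isolated spike that smoothing spreads out.
cmap = [[0.0] * 5 for _ in range(5)]
cmap[2][2] = 1.0
smooth = filter_2d(cmap)
print(smooth[2][2])  # attenuated but still the largest entry
```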

  12. Theoretical investigation on the microstructure of triethylene glycol based deep eutectic solvents: COSMO-RS and TURBOMOLE prediction

    NASA Astrophysics Data System (ADS)

    Aissaoui, Tayeb; Benguerba, Yacine; AlNashef, Inas M.

    2017-08-01

    The in silico combination mechanism of triethylene glycol based deep eutectic solvents (DESs) has been studied. COSMO-RS and the TmoleX graphical user interface were used to predict the interaction mechanism of hydrogen bond donors (HBDs) with hydrogen bond acceptors (HBAs) in forming DESs. The predicted IR results were compared with the previously reported experimental FT-IR analyses of the same DESs. The sigma profiles of the HBD, the HBAs, and the formed DESs were interpreted to qualitatively identify molecular properties such as polarity and hydrogen-bond donor and acceptor abilities. The predicted physicochemical properties reported in this study were in good agreement with experimental ones.

  13. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    NASA Astrophysics Data System (ADS)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
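The residual-based bootstrap can be sketched for an AR(1) fit: estimate the model, resample the centred residuals, regenerate one-step-ahead forecasts, and take empirical percentiles. The series below is synthetic, not the drought data:

```python
import random

def fit_ar1(series):
    """Least-squares AR(1) slope and centred residuals."""
    pairs = list(zip(series, series[1:]))
    num = sum(p * c for p, c in pairs)
    den = sum(p * p for p, _ in pairs)
    phi = num / den
    residuals = [c - phi * p for p, c in pairs]
    mean = sum(residuals) / len(residuals)
    return phi, [r - mean for r in residuals]

def bootstrap_interval(series, n_boot=2000, alpha=0.05):
    """Residual-bootstrap (1 - alpha) prediction interval for the next value."""
    phi, res = fit_ar1(series)
    sims = sorted(phi * series[-1] + random.choice(res) for _ in range(n_boot))
    lo = sims[int(n_boot * alpha / 2)]
    hi = sims[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Synthetic AR(1) series standing in for a PDSI record.
random.seed(7)
x, series = 0.0, []
for _ in range(500):
    x = 0.7 * x + random.gauss(0.0, 1.0)
    series.append(x)

lo, hi = bootstrap_interval(series)
print(lo, hi)  # interval width reflects the residual spread
```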

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Like; Kang, Jian, E-mail: j.kang@sheffield.ac.uk; Schroth, Olaf

    Large scale transportation projects can adversely affect the visual perception of environmental quality and require adequate visual impact assessment. In this study, we investigated the effects of the characteristics of the road project and the character of the existing landscape on the perceived visual impact of motorways, and developed a GIS-based prediction model based on the findings. An online survey using computer-visualised scenes of different motorway and landscape scenarios was carried out to obtain perception-based judgements on the visual impact. Motorway scenarios simulated included the baseline scenario without a road, the original motorway, and motorways with timber noise barriers, transparent noise barriers, and a tree screen; different landscape scenarios were created by changing the land cover of buildings and trees in three distance zones. The landscape content of each scene was measured in GIS. The results show that the presence of a motorway, especially with the timber barrier, significantly decreases the visual quality of the view. The resulting visual impact tends to be lower where the view is less visually pleasant, with more buildings, and can be slightly reduced by the visual absorption effect of scattered trees between the motorway and the viewpoint. Based on the survey results, eleven predictors were identified for the visual impact prediction model, which was applied in GIS to generate maps of the visual impact of motorways in different scenarios. The proposed prediction model can be used to achieve efficient and reliable assessment of the visual impact of motorways. - Highlights: • Motorways induce significant visual impact, especially with timber noise barriers. • Visual impact is negatively correlated with the amount of buildings in the view. • Visual impact is positively correlated with the percentage of trees in the view. • Perception-based motorway visual impact prediction model using mapped predictors. • Predicted visual impacts in different scenarios are mapped in GIS.

  15. Prediction of Carbohydrate Binding Sites on Protein Surfaces with 3-Dimensional Probability Density Distributions of Interacting Atoms

    PubMed Central

    Tsai, Keng-Chang; Jian, Jhih-Wei; Yang, Ei-Wen; Hsu, Po-Chiang; Peng, Hung-Pin; Chen, Ching-Tai; Chen, Jun-Bo; Chang, Jeng-Yih; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Non-covalent protein-carbohydrate interactions mediate molecular targeting in many biological processes. Prediction of non-covalent carbohydrate binding sites on protein surfaces not only provides insights into the functions of the query proteins; information on key carbohydrate-binding residues can also suggest site-directed mutagenesis experiments, inform the design of therapeutics targeting carbohydrate-binding proteins, and provide guidance in engineering protein-carbohydrate interactions. In this work, we show that non-covalent carbohydrate binding sites on protein surfaces can be predicted with relatively high accuracy when the query protein structures are known. The prediction capabilities were based on a novel encoding scheme of three-dimensional probability density maps describing the distributions of 36 non-covalent interacting atom types around protein surfaces. One machine learning model was trained for each of the 30 protein atom types. The machine learning algorithms predicted tentative carbohydrate binding sites on query proteins by recognizing the characteristic interacting-atom distribution patterns specific to carbohydrate binding sites in known protein structures. The prediction results for all protein atom types were integrated into surface patches as tentative carbohydrate binding sites based on normalized prediction confidence level. The prediction capabilities of the predictors were benchmarked by a 10-fold cross validation on 497 non-redundant proteins with known carbohydrate binding sites. The predictors were further tested on an independent test set with 108 proteins. The residue-based Matthews correlation coefficient (MCC) for the independent test was 0.45, with prediction precision and sensitivity (or recall) of 0.45 and 0.49, respectively. In addition, 111 unbound carbohydrate-binding protein structures, for which the structures were determined in the absence of the carbohydrate ligands, were predicted with the trained predictors.
The overall prediction MCC was 0.49. Independent tests on anti-carbohydrate antibodies showed that the carbohydrate antigen binding sites were predicted with comparable accuracy. These results demonstrate that the predictors are among the best in carbohydrate binding site predictions to date. PMID:22848404
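The residue-based MCC, precision, and recall reported above are computed from the confusion-matrix counts; a small sketch of the formulas, with counts invented purely to reproduce values of a similar magnitude:

```python
def matthews_cc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Hypothetical counts for a residue-level binding-site prediction.
tp, fp, tn, fn = 49, 60, 880, 51
print(round(matthews_cc(tp, fp, tn, fn), 2),
      round(precision(tp, fp), 2),
      round(recall(tp, fn), 2))  # -> 0.41 0.45 0.49
```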

  16. A vertical handoff decision algorithm based on ARMA prediction model

    NASA Astrophysics Data System (ADS)

    Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan

    2012-01-01

    With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks, and during the vertical handoff procedure the handoff decision is a crucial issue for efficient mobility. Based on an auto-regressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and previous RSS values, the proposed approach adopts an ARMA model to predict the next RSS, and then uses the predicted RSS to determine whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.
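The decision loop can be sketched with a least-squares AR(2) predictor as a simplified stand-in for the full ARMA model: predict the next RSS from the last two samples and trigger the handoff when the prediction drops below a threshold (the trace and the -84 dBm threshold are invented for illustration):

```python
def predict_next_rss(history):
    """One-step RSS prediction from a least-squares AR(2) fit (no intercept)."""
    y = history[2:]
    x1 = history[1:-1]   # lag 1
    x2 = history[:-2]    # lag 2
    # Closed-form normal equations for y ~ a*x1 + b*x2.
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    r1 = sum(u * v for u, v in zip(x1, y))
    r2 = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a * history[-1] + b * history[-2]

def should_handoff(history, threshold_dbm=-84.0):
    """Trigger the handoff decision when the predicted RSS drops below threshold."""
    return predict_next_rss(history) < threshold_dbm

# Steadily degrading RSS trace (dBm): the prediction anticipates the crossing
# one step before the measured signal actually reaches the threshold.
rss = [-70.0, -71.5, -73.0, -74.5, -76.0, -77.5, -79.0, -80.5, -82.0, -83.5]
print(predict_next_rss(rss), should_handoff(rss))
```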

  17. Estimation of the Driving Style Based on the Users' Activity and Environment Influence.

    PubMed

    Sysoev, Mikhail; Kos, Andrej; Guna, Jože; Pogačnik, Matevž

    2017-10-21

    New models and methods have been designed to predict the influence of the user's environment and activity information on driving style in standard automotive environments. For these purposes, an experiment was conducted providing two types of analysis: (i) the evaluation of a self-assessment of the driving style; and (ii) the prediction of aggressive driving style based on drivers' activity and environment parameters. Sixty-seven hours of driving data from 10 drivers were collected for analysis in this study. The new parameters used in the experiment are the car door opening and closing manner, which were applied to improve the prediction accuracy. An Android application called Sensoric was developed to collect low-level smartphone data about the users' activity. The driving style was predicted from the user's environment and activity data collected before driving, and the prediction was tested against the actual driving style, calculated from objective driving data. The prediction has shown encouraging results, with precision values ranging from 0.727 up to 0.909 for the aggressive driving recognition rate. The obtained results support the hypothesis that a user's environment and activity data can be used to predict an aggressive driving style in advance, before the driving starts.

  18. Biomarkers for predicting type 2 diabetes development-Can metabolomics improve on existing biomarkers?

    PubMed

    Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B; Bergström, Göran

    2017-01-01

    The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline, and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models, which were compared with each other based on their sensitivity and selectivity for predicting T2D. Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC) (0.794 (95% confidence interval [0.738-0.850]) and 0.808 [0.749-0.867], respectively), with the standalone metabolomics model based on nine fasting plasma markers having lower predictive power (0.657 [0.577-0.736]). Prediction based on non-blood-based measures was 0.638 [0.565-0.711]. Established measures of T2D risk remain the best predictor of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures, as they did not enhance the overall prediction in a combined model.

  19. Prediction of Drug-Target Interactions and Drug Repositioning via Network-Based Inference

    PubMed Central

    Jiang, Jing; Lu, Weiqiang; Li, Weihua; Liu, Guixia; Zhou, Weixing; Huang, Jin; Tang, Yun

    2012-01-01

    Drug-target interaction (DTI) is the basis of drug discovery and design. It is time-consuming and costly to determine DTIs experimentally, so it is necessary to develop computational methods for predicting potential DTIs. Based on complex network theory, three supervised inference methods were developed here to predict DTIs and used for drug repositioning, namely drug-based similarity inference (DBSI), target-based similarity inference (TBSI) and network-based inference (NBI). Among them, NBI performed best on four benchmark data sets. A drug-target network was then created with NBI based on 12,483 FDA-approved and experimental drug-target binary links, and some new DTIs were further predicted. In vitro assays confirmed that five old drugs, namely montelukast, diclofenac, simvastatin, ketoconazole, and itraconazole, showed polypharmacological features on estrogen receptors or dipeptidyl peptidase-IV, with half maximal inhibitory or effective concentrations ranging from 0.2 to 10 µM. Moreover, simvastatin and ketoconazole showed potent antiproliferative activities against the human MDA-MB-231 breast cancer cell line in MTT assays. The results indicated that these methods could be powerful tools in the prediction of DTIs and in drug repositioning. PMID:22589709
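
    As a rough illustration of the network-based inference idea (a simplified sketch, not the authors' exact implementation), resource spreads from a drug's known targets to other drugs sharing those targets, and back to those drugs' targets; targets not yet linked to the query drug that accumulate resource become repositioning candidates. The two-drug network below is invented for the example.

```python
def nbi_scores(adj, drug):
    """Two-pass resource spreading on a drug-target bipartite graph.
    adj: dict mapping drug -> set of known targets."""
    # Degree of each target (number of drugs hitting it).
    tdeg = {}
    for targets in adj.values():
        for t in targets:
            tdeg[t] = tdeg.get(t, 0) + 1
    # Pass 1: each known target of `drug` splits one unit of resource
    # evenly among the drugs that hit it.
    drug_res = {}
    for t in adj[drug]:
        for d in adj:
            if t in adj[d]:
                drug_res[d] = drug_res.get(d, 0.0) + 1.0 / tdeg[t]
    # Pass 2: each drug redistributes its resource evenly to its targets.
    scores = {}
    for d, res in drug_res.items():
        for t in adj[d]:
            scores[t] = scores.get(t, 0.0) + res / len(adj[d])
    return scores
```

    For `adj = {'d1': {'t1', 't2'}, 'd2': {'t2', 't3'}}`, scoring drug `d1` assigns nonzero resource to `t3` even though `d1` has no known link to it, which is exactly the repositioning signal.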

  20. Improved Prediction of Blood-Brain Barrier Permeability Through Machine Learning with Combined Use of Molecular Property-Based Descriptors and Fingerprints.

    PubMed

    Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo

    2018-03-21

    Blood-brain barrier (BBB) permeability of a compound determines whether the compound can effectively enter the brain. It is an essential property which must be accounted for in drug discovery with a target in the brain. Several computational methods have been used to predict BBB permeability. In particular, support vector machine (SVM), a kernel-based machine learning method, has been widely used in this field. For SVM training and prediction, the compounds are characterized by molecular descriptors. Some SVM models were based on the use of molecular property-based descriptors (including 1D, 2D, and 3D descriptors) or fragment-based descriptors (known as the fingerprints of a molecule). The selection of descriptors is critical for the performance of an SVM model. In this study, we aimed to develop a generally applicable new SVM model by combining all of the features of the molecular property-based descriptors and fingerprints to improve the accuracy of BBB permeability prediction. The results indicate that our SVM model has improved accuracy compared to the currently available models of BBB permeability prediction.
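
    The combination strategy amounts to building a single feature vector per compound from both descriptor families before training the SVM. A minimal sketch of that step; the descriptor names, value ranges, and fingerprint length here are hypothetical:

```python
def combine_features(props, fingerprint_bits, prop_ranges):
    """Min-max scale continuous property descriptors (e.g., logP, TPSA)
    and append binary fingerprint bits into one SVM input vector."""
    scaled = [(v - lo) / (hi - lo) for v, (lo, hi) in zip(props, prop_ranges)]
    return scaled + [float(b) for b in fingerprint_bits]
```

    Scaling the continuous descriptors to the same [0, 1] range as the fingerprint bits keeps either family from dominating a distance-based kernel.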

  1. The prediction of the residual life of electromechanical equipment based on the artificial neural network

    NASA Astrophysics Data System (ADS)

    Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.

    2017-10-01

    This article is devoted to the prediction of residual life based on an estimate of the technical state of an induction motor. The proposed system increases the accuracy and completeness of diagnostics by using an artificial neural network (ANN), and also identifies and predicts faulty states of electrical equipment in dynamics. The outputs of the proposed system for estimating the technical condition are probability diagrams of the technical state and a quantitative evaluation of the residual life, taking into account electrical, vibrational, and indirect parameters as well as detected defects. Based on the evaluation of the technical condition and the prediction of the residual life, a decision is made to change the control of the operating and maintenance modes of the electric motors.

  2. The Impact of Trajectory Prediction Uncertainty on Air Traffic Controller Performance and Acceptability

    NASA Technical Reports Server (NTRS)

    Mercer, Joey S.; Bienert, Nancy; Gomez, Ashley; Hunt, Sarah; Kraut, Joshua; Martin, Lynne; Morey, Susan; Green, Steven M.; Prevot, Thomas; Wu, Minghong G.

    2013-01-01

    A Human-In-The-Loop air traffic control simulation investigated the impact of uncertainties in trajectory predictions on NextGen Trajectory-Based Operations concepts, seeking to understand when the automation would become unacceptable to controllers or when performance targets could no longer be met. Retired air traffic controllers staffed two en route transition sectors, delivering arrival traffic to the northwest corner-post of Atlanta approach control under time-based metering operations. Using trajectory-based decision-support tools, the participants worked the traffic under varying levels of wind forecast error and aircraft performance model error, which impacted the ground automation's ability to make accurate predictions. Results suggest that the controllers were able to maintain high levels of performance, even at the highest levels of trajectory prediction error.

  3. Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.

    2000-01-01

    The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using a Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is also shown that, in spite of the model-based nature of GPC, it has good robustness properties, being a special case of receding-horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems, along with a methodology to account for parametric uncertainty using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
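
    The receding-horizon idea at the heart of GPC is the minimization, at each time step, of tracking error over a prediction window plus a penalty on control increments. A toy version of that cost (horizon length, weighting, and signals are illustrative; a real GPC solves for the increment sequence minimizing it):

```python
def gpc_cost(y_pred, setpoint, du, lam=0.1):
    """Receding-horizon GPC-style cost: squared tracking error of the
    predicted outputs plus lambda-weighted control increments."""
    tracking = sum((y - setpoint) ** 2 for y in y_pred)
    effort = lam * sum(u ** 2 for u in du)
    return tracking + effort
```

    The weighting `lam` is one of the tuning parameters whose choice affects closed-loop stability, as discussed above.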

  4. An improved predictive functional control method with application to PMSM systems

    NASA Astrophysics Data System (ADS)

    Li, Shihua; Liu, Huixian; Fu, Wenshu

    2017-01-01

    In the common design of prediction model-based control methods, disturbances are usually considered neither in the prediction model nor in the control design. For control systems with large-amplitude or strong disturbances, it is difficult to precisely predict the future outputs according to the conventional prediction model, and thus the desired optimal closed-loop performance will be degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances, so the influence of disturbances on the system is taken into account in the optimisation procedure. Finally, considering the speed control problem for the permanent magnet synchronous motor (PMSM) servo system, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform confirm the effectiveness of the proposed algorithm.

  5. Compressed sensing based missing nodes prediction in temporal communication network

    NASA Astrophysics Data System (ADS)

    Cheng, Guangquan; Ma, Yang; Liu, Zhong; Xie, Fuli

    2018-02-01

    The reconstruction of complex network topology is of great theoretical and practical significance. Most research so far focuses on the prediction of missing links. There are many mature algorithms for link prediction which have achieved good results, but research on the prediction of missing nodes has just begun. In this paper, we propose an algorithm for missing node prediction in complex networks. We detect the position of missing nodes based on their neighbor nodes under the theory of compressed sensing, and extend the algorithm to the case of multiple missing nodes using spectral clustering. Experiments on real public network datasets and simulated datasets show that our algorithm can detect the locations of hidden nodes effectively with high precision.

  6. In silico platform for predicting and initiating β-turns in a protein at desired locations.

    PubMed

    Singh, Harinder; Singh, Sandeep; Raghava, Gajendra P S

    2015-05-01

    Numerous studies have been performed on the analysis and prediction of β-turns in proteins. This study focuses on analyzing, predicting, and designing β-turns to understand the preference of amino acids in β-turn formation. We analyzed around 20,000 PDB chains to understand the preference of residues, or pairs of residues, at different positions in β-turns. Based on the results, a propensity-based method has been developed for predicting β-turns with an accuracy of 82%. We introduced a new approach, entitled "turn-level prediction," which predicts the complete β-turn rather than focusing on individual residues in a β-turn. Finally, we developed BetaTPred3, a Random Forest-based method for predicting β-turns by utilizing various features of the four residues present in β-turns. BetaTPred3 achieved an accuracy of 79% with an MCC of 0.51, which is comparable to or better than existing methods on the BT426 dataset. Additionally, models were developed to predict β-turn types with better performance than other methods available in the literature. In order to improve the quality of turn prediction, we developed prediction models on a large and recent dataset of 6,376 nonredundant protein chains. Based on this study, a web server has been developed for the prediction of β-turns and their types in proteins. This web server also predicts the minimum number of mutations required to initiate or break a β-turn at a specified location in a protein. © 2015 Wiley Periodicals, Inc.
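
    A propensity-based predictor of the kind described scores each 4-residue window by position-specific residue propensities derived from the PDB analysis. This sketch uses made-up propensity values and a neutral default of 1.0; the paper's actual table and cutoff are not reproduced here:

```python
def turn_score(window, propensity):
    """Multiply position-specific propensities over a 4-residue window;
    windows scoring above a chosen cutoff would be predicted as beta-turns."""
    score = 1.0
    for pos, aa in enumerate(window):
        score *= propensity.get((pos, aa), 1.0)  # 1.0 = neutral default
    return score
```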

  7. Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model: an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model that combines RNN-LSTM with a priori information about public events. In prediction tasks, the model is qualified for determining trends, and its accuracy is validated. The model generates better performance and prediction results than the previous one. Using a priori information increases the accuracy of prediction; LSTM adapts better to changes in the time sequence; and LSTM can be widely applied to the same type of prediction tasks, as well as other prediction tasks related to time sequences.
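
    For reference, a single step of the LSTM cell underlying such models fits in a few lines. This scalar toy version uses placeholder weights, purely to show the gate structure; it is not the paper's trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One scalar LSTM cell step. W holds illustrative weights keyed by
    gate and input ('i_x', 'i_h', 'i_b', ... for input/forget/output/candidate)."""
    i = sigmoid(W['i_x'] * x + W['i_h'] * h + W['i_b'])    # input gate
    f = sigmoid(W['f_x'] * x + W['f_h'] * h + W['f_b'])    # forget gate
    o = sigmoid(W['o_x'] * x + W['o_h'] * h + W['o_b'])    # output gate
    g = math.tanh(W['g_x'] * x + W['g_h'] * h + W['g_b'])  # candidate state
    c = f * c + i * g          # new cell state: keep part of old, add new
    h = o * math.tanh(c)       # new hidden state exposed to the next layer
    return h, c
```

    The forget gate `f` is what lets the cell retain or discard long-range context, which is why LSTM adapts well to time sequences.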

  8. Underestimation of Variance of Predicted Health Utilities Derived from Multiattribute Utility Instruments.

    PubMed

    Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M

    2017-04-01

    Parameter uncertainty in the value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty in MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error based on multiple imputation, and with the standard error from the conventional approach of using the MAUI (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring the uncertainty of predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
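
    The multiple-imputation correction works by pooling the per-imputation estimates with Rubin's rules, so that the between-imputation spread (here, the value-set uncertainty) inflates the standard error. A minimal sketch with made-up numbers, not the study's data:

```python
import statistics

def rubin_pool(estimates, variances):
    """Pool per-imputation point estimates and within-imputation variances
    via Rubin's rules; returns (pooled estimate, pooled SE)."""
    m = len(estimates)
    qbar = sum(estimates) / m               # pooled point estimate
    w = sum(variances) / m                  # mean within-imputation variance
    b = statistics.variance(estimates)      # between-imputation variance
    total = w + (1 + 1 / m) * b             # total variance (Rubin's rules)
    return qbar, total ** 0.5
```

    The pooled SE always exceeds the naive within-imputation SE whenever the imputed estimates disagree, which is precisely the underestimation the abstract describes.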

  9. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures.

    PubMed

    Scheid, Anika; Nebel, Markus E

    2012-07-09

    Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. 
Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case - without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.

  10. Precipitation forecasts for rainfall runoff predictions. A case study in poorly gauged Ribb and Gumara catchments, upper Blue Nile, Ethiopia

    NASA Astrophysics Data System (ADS)

    Seyoum, Mesgana; van Andel, Schalk Jan; Xuan, Yunqing; Amare, Kibreab

    Flow forecasting in the poorly gauged, flood-prone Ribb and Gumara sub-catchments of the Blue Nile was studied with the aim of testing the performance of Quantitative Precipitation Forecasts (QPFs). Four types of QPFs were used: MM5 forecasts with a spatial resolution of 2 km, and the Maximum, Mean and Minimum members (MaxEPS, MeanEPS and MinEPS, where EPS stands for Ensemble Prediction System) of the fixed, low-resolution (2.5 by 2.5 degrees) National Oceanic and Atmospheric Administration Global Forecast System (NOAA GFS) ensemble forecasts. Neither the MM5 nor the EPS forecasts were calibrated (bias correction, downscaling (for EPS), etc.). In addition, zero forecasts, assuming no rainfall in the coming days, and monthly average forecasts, assuming average monthly rainfall in the coming days, were used. These rainfall forecasts were then used to drive the Hydrologic Engineering Center's Hydrologic Modeling System (HEC-HMS) hydrologic model for flow predictions. The results show that flow predictions using MaxEPS and MM5 precipitation forecasts over-predicted the peak flow for most of the seven events analyzed, whereas peak flow was under-predicted using zero and monthly average rainfall. The comparison of observed and predicted flow hydrographs shows that MM5, MaxEPS and MeanEPS precipitation forecasts were able to capture the rainfall signal that caused peak flows. Flow predictions based on MaxEPS and MeanEPS gave results that were quantitatively close to the observed flow for most events, whereas flow predictions based on MM5 resulted in large overestimations for some events. In follow-up research for this particular case study, calibration of the MM5 model will be performed. The overall analysis shows that freely available atmospheric forecasting products can provide additional information on upcoming rainfall and peak flow events in areas where only baseline forecasts such as no-rainfall or climatology are available.

  11. Deep learning based tissue analysis predicts outcome in colorectal cancer.

    PubMed

    Bychkov, Dmitrii; Linder, Nina; Turkki, Riku; Nordling, Stig; Kovanen, Panu E; Verrill, Clare; Walliander, Margarita; Lundin, Mikael; Haglund, Caj; Lundin, Johan

    2018-02-21

    Image-based machine learning and deep learning in particular has recently shown expert-level accuracy in medical image classification. In this study, we combine convolutional and recurrent architectures to train a deep network to predict colorectal cancer outcome based on images of tumour tissue samples. The novelty of our approach is that we directly predict patient outcome, without any intermediate tissue classification. We evaluate a set of digitized haematoxylin-eosin-stained tumour tissue microarray (TMA) samples from 420 colorectal cancer patients with clinicopathological and outcome data available. The results show that deep learning-based outcome prediction with only small tissue areas as input outperforms (hazard ratio 2.3; CI 95% 1.79-3.03; AUC 0.69) visual histological assessment performed by human experts on both TMA spot (HR 1.67; CI 95% 1.28-2.19; AUC 0.58) and whole-slide level (HR 1.65; CI 95% 1.30-2.15; AUC 0.57) in the stratification into low- and high-risk patients. Our results suggest that state-of-the-art deep learning techniques can extract more prognostic information from the tissue morphology of colorectal cancer than an experienced human observer.

  12. Enhanced thermoelectric performance of graphene nanoribbon-based devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, Md Sharafat, E-mail: hossain@student.unimelb.edu.au; Huynh, Duc Hau; Nguyen, Phuong Duc

    There have been numerous theoretical studies on the exciting thermoelectric properties of graphene nanoribbons (GNRs); however, most of these studies are based mainly on simulations. In this work, we measure and characterize the thermoelectric properties of GNRs and compare the results with theoretical predictions. Our experimental results verify that nano-structuring and patterning graphene into nanoribbons significantly enhances its thermoelectric power, confirming previous predictions. Although patterning results in lower conductance (G), the overall power factor (S²G) increases for nanoribbons. We demonstrate that edge roughness plays an important role in achieving such enhanced performance and support this through first-principles simulations. We show that uncontrolled edge roughness, which is considered detrimental in GNR-based electronic devices, leads to enhanced thermoelectric performance of GNR-based thermoelectric devices. The result validates previously reported theoretical studies of GNRs and demonstrates the potential of GNRs for the realization of highly efficient thermoelectric devices.

  13. Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  14. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes, out of thousands or tens of thousands, to construct accurate prediction models. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on cancer molecular prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  15. Prediction and Factor Extraction of Drug Function by Analyzing Medical Records in Developing Countries.

    PubMed

    Hu, Min; Nohara, Yasunobu; Nakamura, Masafumi; Nakashima, Naoki

    2017-01-01

    The World Health Organization has declared Bangladesh one of 58 countries facing an acute Human Resources for Health (HRH) crisis. Artificial intelligence in healthcare has been shown to be successful for diagnostics, and using machine learning to predict pharmaceutical prescriptions may help address HRH crises. In this study, we investigate a predictive model by analyzing the prescription data of 4,543 subjects in Bangladesh. We predict the function of prescribed drugs, comparing three machine-learning approaches, each predicting whether a subject will be prescribed medicine from one of the 21 most frequently prescribed drug functions. Receiver Operating Characteristic (ROC) analysis was selected to evaluate and assess the prediction models. The results show that the drug function with the best prediction performance was oral hypoglycemic drugs, with an average AUC of 0.962. To understand how the variables affect prediction, we conducted factor analysis based on tree-based algorithms and natural language processing techniques.
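
    The ROC evaluation above reduces to computing an AUC per drug function. One standard way is the rank-sum (Mann-Whitney) formulation, sketched here without tie handling; labels and scores in the test are illustrative:

```python
def auc(labels, scores):
    """AUC via the rank-sum formulation: sort by score, sum the ranks of
    the positives, and normalize by the number of positive-negative pairs."""
    pairs = sorted(zip(scores, labels))
    rank_sum = sum(rank for rank, (_, y) in enumerate(pairs, 1) if y == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

    An AUC of 0.962, as reported for oral hypoglycemic drugs, means a randomly chosen prescribed subject outranks a randomly chosen non-prescribed subject about 96% of the time.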

  16. Predicting enteric methane emission of dairy cows with milk Fourier-transform infrared spectra and gas chromatography-based milk fatty acid profiles.

    PubMed

    van Gastelen, S; Mollenhorst, H; Antunes-Fernandes, E C; Hettinga, K A; van Burgsteden, G G; Dijkstra, J; Rademaker, J L W

    2018-06-01

    The objective of the present study was to compare the prediction potential of milk Fourier-transform infrared spectroscopy (FTIR) for CH4 emissions of dairy cows with that of gas chromatography (GC)-based milk fatty acids (MFA). Data from 9 experiments with lactating Holstein-Friesian cows, with a total of 30 dietary treatments and 218 observations, were used. Methane emissions were measured for 3 consecutive days in climate respiration chambers and expressed as production (g/d), yield (g/kg of dry matter intake; DMI), and intensity (g/kg of fat- and protein-corrected milk; FPCM). Dry matter intake was 16.3 ± 2.18 kg/d (mean ± standard deviation), FPCM yield was 25.9 ± 5.06 kg/d, CH4 production was 366 ± 53.9 g/d, CH4 yield was 22.5 ± 2.10 g/kg of DMI, and CH4 intensity was 14.4 ± 2.58 g/kg of FPCM. Milk was sampled during the same days and analyzed by GC and by FTIR. Multivariate GC-determined MFA-based and FTIR-based CH4 prediction models were developed, and subsequently, the final CH4 prediction models were evaluated with root mean squared error of prediction and concordance correlation coefficient analysis. Further, we performed a random 10-fold cross validation to calculate the performance parameters of the models (e.g., the coefficient of determination of cross validation). The final GC-determined MFA-based CH4 prediction models estimate CH4 production, yield, and intensity with a root mean squared error of prediction of 35.7 g/d, 1.6 g/kg of DMI, and 1.6 g/kg of FPCM and with a concordance correlation coefficient of 0.72, 0.59, and 0.77, respectively. The final FTIR-based CH4 prediction models estimate CH4 production, yield, and intensity with a root mean squared error of prediction of 43.2 g/d, 1.9 g/kg of DMI, and 1.7 g/kg of FPCM and with a concordance correlation coefficient of 0.52, 0.40, and 0.72, respectively. The GC-determined MFA-based prediction models described a greater part of the observed variation in CH4 emission than did the FTIR-based models. The cross validation results indicate that all CH4 prediction models (both GC-determined MFA-based and FTIR-based models) are robust; the difference between the coefficient of determination and the coefficient of determination of cross validation ranged from 0.01 to 0.07. The results indicate that GC-determined MFA have a greater potential than FTIR spectra to estimate CH4 production, yield, and intensity. Both techniques hold potential but may not yet be ready to predict CH4 emission of dairy cows in practice. Additional CH4 measurements are needed to improve the accuracy and robustness of GC-determined MFA and FTIR spectra for CH4 prediction. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
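
    The two evaluation statistics used above are straightforward to compute; here is a sketch of root mean squared error of prediction (RMSEP) and Lin's concordance correlation coefficient (CCC) on illustrative data, not the study's measurements:

```python
def rmsep(obs, pred):
    """Root mean squared error of prediction."""
    n = len(obs)
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5

def ccc(obs, pred):
    """Lin's concordance correlation coefficient: penalizes both scatter
    around the fit and systematic shift away from the 45-degree line."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    vo = sum((o - mo) ** 2 for o in obs) / n
    vp = sum((p - mp) ** 2 for p in pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred)) / n
    return 2 * cov / (vo + vp + (mo - mp) ** 2)
```

    Unlike a plain correlation, CCC drops below 1 for a constant bias, which is why it complements RMSEP when comparing the MFA-based and FTIR-based models.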

  17. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    PubMed

    Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong

    2015-01-01

    Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequences are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm, SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder), that merges a feature predictor, SNBRFinderF, with a template predictor, SNBRFinderT. SNBRFinderF was established using a support vector machine whose inputs include the sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with a sequence alignment algorithm based on profile hidden Markov models to capture weakly homologous templates of the query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and that SNBRFinderT can achieve performance comparable to structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder considerably improved the performance of both DNA- and RNA-binding residue prediction. More importantly, the sequence-based hybrid prediction reached performance competitive with our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has clear advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  18. Predicted phototoxicities of carbon nano-material by quantum mechanical calculations.

    PubMed

    Betowski, Don

    2017-08-01

    The purpose of this research was to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). The model is based on quantum mechanical (ab initio) calculations on these carbon-based materials and on comparison of their triplet excited states with published work relating the phototoxicity of polynuclear aromatic hydrocarbons (PAH) to their predicted triplet excited state energies. A successful outcome will add another tool to the arsenal of predictive methods for the U.S. EPA program offices as they assess the toxicity of compounds in use or coming into commerce. Obtaining the best quantum mechanical structure of each carbon nanomaterial was fundamental to this research, since the structure determines the triplet excited state energy, which in turn is associated with the phototoxicity of the material. This project relies heavily on the interaction between the predictive results (physical chemistry) and the experimental results obtained by biologists and toxicologists. The results of the experiments (toxicity testing) will help refine the predictive model, while the predictions will alert the scientists to red-flag compounds. It is hoped that a guidance document for the U.S. EPA will be forthcoming to help determine the toxicity of compounds. This can serve as a screening tool, with further testing reserved for those compounds predicted to pose a phototoxic danger to health and the environment. Copyright © 2017. Published by Elsevier Inc.

  19. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M ≥ 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw ≥ 5.5, 1977-2004, and the magnitude range of target events 8.0 ≤ M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
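The prediction-capability measure defined above, H = 1 - (n + τ), combines the miss fraction and the (event-rate-weighted) alarm rate into a single score; a minimal sketch with illustrative values:

```python
def prediction_capability(miss_fraction, alarm_rate):
    """H = 1 - (n + tau): H = 1 is perfect prediction (no misses, no alarms);
    H = 0 is no better than a random strategy on the diagonal n + tau = 1."""
    return 1.0 - (miss_fraction + alarm_rate)

# Example: 20% of target events missed while alarms cover 30% of the
# event-rate-weighted space-time volume (made-up numbers for illustration).
H = prediction_capability(0.2, 0.3)
print(H)  # 0.5
```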

  20. Accuracy of predicting genomic breeding values for residual feed intake in Angus and Charolais beef cattle.

    PubMed

    Chen, L; Schenkel, F; Vinsky, M; Crews, D H; Li, C

    2013-10-01

    In beef cattle, phenotypic data that are difficult and/or costly to measure, such as feed efficiency, and DNA marker genotypes are usually available on a small number of animals of different breeds or populations. To achieve maximal accuracy of genomic prediction using the phenotype and genotype data, strategies for forming a training population to predict genomic breeding values (GEBV) of the selection candidates need to be evaluated. In this study, we examined the accuracy of predicting GEBV for residual feed intake (RFI) based on 522 Angus and 395 Charolais steers genotyped with the Illumina BovineSNP50 BeadChip for 3 training population forming strategies: within breed, across breed, and by pooling data from the 2 breeds (i.e., combined). Two other scenarios with the training and validation data split by birth year and by sire family within a breed were also investigated to assess the impact of genetic relationships on the accuracy of genomic prediction. Three statistical methods were used to predict the GEBV: best linear unbiased prediction with the relationship matrix defined based on the pedigree (PBLUP) or based on the SNP genotypes (GBLUP), and a Bayesian method (BayesB). The results showed that the accuracy of the GEBV prediction was highest when the prediction was within breed and the validation population had greater genetic relationships with the training population, with a maximum of 0.58 for Angus and 0.64 for Charolais. The within-breed prediction accuracies dropped to 0.29 and 0.38, respectively, when the validation populations had minimal pedigree links with the training population. When the training population of a different breed was used to predict the GEBV of the validation population, that is, across-breed genomic prediction, the accuracies were further reduced to 0.10 to 0.22, depending on the prediction method used. Pooling data from the 2 breeds to form the training population increased the accuracies to 0.31 and 0.43, respectively, for the Angus and Charolais validation populations. The results suggest that the genetic relationship of selection candidates with the training population has a major impact on the accuracy of GEBV obtained with the Illumina BovineSNP50 BeadChip, and that pooling data from different breeds to form the training population will improve the accuracy of across-breed genomic prediction for RFI in beef cattle.
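The GBLUP method above replaces the pedigree-based relationship matrix with one computed from SNP genotypes. As a sketch, here is one common construction (VanRaden's method 1) on simulated 0/1/2 allele counts; the paper does not specify its exact construction, so this formula is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_snps = 10, 200

# Simulated genotype matrix M: allele counts 0, 1 or 2 per animal per SNP.
M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)

p = M.mean(axis=0) / 2.0               # estimated allele frequencies
Z = M - 2.0 * p                        # centre each SNP column
# Genomic relationship matrix, VanRaden (2008) method 1:
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))
print(G.shape)  # (10, 10): pairwise genomic relationships among animals
```

In GBLUP this G takes the place of the numerator relationship matrix A used by PBLUP in the mixed-model equations.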

  1. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimate of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimate of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are taken as the predictive features representing the OAR-target spatial relationship. The feasibility of the method has been demonstrated with rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer, and the average relative point-wise difference is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, the method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
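The core idea above, estimating a joint density of dose and a geometric feature, then slicing and normalising it to obtain a conditional dose distribution for a new patient, can be sketched with a hand-rolled 2D Gaussian KDE. The dose model, feature choice and bandwidths below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training data: distance from an OAR voxel to the target boundary (cm)
# and the dose it receives (Gy). Purely illustrative, not clinical data.
distance = rng.uniform(0.0, 5.0, 500)
dose = 60.0 * np.exp(-0.5 * distance) + rng.normal(0.0, 2.0, 500)

def kde2d(xq, yq, x, y, bwx=0.3, bwy=2.0):
    """Unnormalised product-Gaussian 2D KDE evaluated at paired query points."""
    dx = (xq[:, None] - x[None, :]) / bwx
    dy = (yq[:, None] - y[None, :]) / bwy
    return np.exp(-0.5 * (dx ** 2 + dy ** 2)).sum(axis=1)

# Conditional p(dose | distance = 1 cm): evaluate the joint density along a
# fixed-distance slice, then normalise over the dose grid.
dose_grid = np.linspace(0.0, 70.0, 141)
joint_slice = kde2d(np.full(141, 1.0), dose_grid, distance, dose)
cond = joint_slice / joint_slice.sum()
expected_dose = float((dose_grid * cond).sum())
print(round(expected_dose, 1))  # roughly 60 * exp(-0.5) ≈ 36 Gy for this toy model
```

Integrating such conditional distributions over all OAR voxels of the new patient is what yields the predicted DVH in the abstract's framing.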

  2. Interlinking backscatter, grain size and benthic community structure

    NASA Astrophysics Data System (ADS)

    McGonigle, Chris; Collier, Jenny S.

    2014-06-01

    The relationship between acoustic backscatter, sediment grain size and benthic community structure is examined using three different quantitative methods, covering image- and angular response-based approaches. Multibeam time-series backscatter (300 kHz) data acquired in 2008 off the coast of East Anglia (UK) are compared with grain size properties, macrofaunal abundance and biomass from 130 Hamon and 16 Clamshell grab samples. Three predictive methods are used: 1) image-based (mean backscatter intensity); 2) angular response-based (predicted mean grain size), and 3) image-based (1st principal component and classification) from Quester Tangent Corporation Multiview software. Relationships between grain size and backscatter are explored using linear regression. Differences in grain size and benthic community structure between acoustically defined groups are examined using ANOVA and PERMANOVA+. Results for the Hamon grab stations indicate significant correlations between measured mean grain size and mean backscatter intensity, angular response predicted mean grain size, and 1st principal component of QTC analysis (all p < 0.001). Results for the Clamshell grab for two of the methods have stronger positive correlations; mean backscatter intensity (r2 = 0.619; p < 0.001) and angular response predicted mean grain size (r2 = 0.692; p < 0.001). ANOVA reveals significant differences in mean grain size (Hamon) within acoustic groups for all methods: mean backscatter (p < 0.001), angular response predicted grain size (p < 0.001), and QTC class (p = 0.009). Mean grain size (Clamshell) shows a significant difference between groups for mean backscatter (p = 0.001); other methods were not significant. PERMANOVA for the Hamon abundance shows benthic community structure was significantly different between acoustic groups for all methods (p ≤ 0.001). 
Overall, these results show considerable promise: more than 60% of the variance in the mean grain size of the Clamshell grab samples can be explained by mean backscatter or acoustically predicted grain size. This indicates significant predictive capacity for sediment characteristics from multibeam backscatter, and that these acoustic classifications can have ecological validity.
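The regression step reported above (slope, r² and significance of grain size against backscatter) can be sketched as follows; the paired values are synthetic stand-ins, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical paired observations: mean backscatter intensity (dB) and
# measured mean grain size (phi units); illustrative values only.
backscatter = rng.uniform(-40.0, -20.0, 16)
grain_size = 0.2 * backscatter + rng.normal(0.0, 0.4, 16)

# Ordinary least squares fit (np.polyfit returns [slope, intercept]).
slope, intercept = np.polyfit(backscatter, grain_size, 1)
predicted = slope * backscatter + intercept

# Coefficient of determination r^2 = 1 - SS_res / SS_tot.
ss_res = float(np.sum((grain_size - predicted) ** 2))
ss_tot = float(np.sum((grain_size - grain_size.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot
print(round(float(slope), 3), round(r_squared, 3))
```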

  3. Computational prediction of ionic liquid 1-octanol/water partition coefficients.

    PubMed

    Kamath, Ganesh; Bhatnagar, Navendu; Baker, Gary A; Baker, Sheila N; Potoff, Jeffrey J

    2012-04-07

    Wet 1-octanol/water partition coefficients (log K(ow)) predicted for imidazolium-based ionic liquids using adaptive biasing force molecular dynamics (ABF-MD) simulations lie in excellent agreement with experimental values. These encouraging results suggest prospects for this computational tool in the a priori prediction of log K(ow) values of ionic liquids broadly, with possible screening implications as well (e.g., prediction of CO(2)-philic ionic liquids).

  4. A blood-based predictor for neocortical Aβ burden in Alzheimer's disease: results from the AIBL study.

    PubMed

    Burnham, S C; Faux, N G; Wilson, W; Laws, S M; Ames, D; Bedo, J; Bush, A I; Doecke, J D; Ellis, K A; Head, R; Jones, G; Kiiveri, H; Martins, R N; Rembach, A; Rowe, C C; Salvado, O; Macaulay, S L; Masters, C L; Villemagne, V L

    2014-04-01

    Dementia is a global epidemic with Alzheimer's disease (AD) being the leading cause. Early identification of patients at risk of developing AD is now becoming an international priority. Neocortical Aβ (extracellular β-amyloid) burden (NAB), as assessed by positron emission tomography (PET), represents one such marker for early identification. These scans are expensive and are not widely available, thus, there is a need for cheaper and more widely accessible alternatives. Addressing this need, a blood biomarker-based signature having efficacy for the prediction of NAB and which can be easily adapted for population screening is described. Blood data (176 analytes measured in plasma) and Pittsburgh Compound B (PiB)-PET measurements from 273 participants from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study were utilised. Univariate analysis was conducted to assess the difference of plasma measures between high and low NAB groups, and cross-validated machine-learning models were generated for predicting NAB. These models were applied to 817 non-imaged AIBL subjects and 82 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) for validation. Five analytes showed significant difference between subjects with high compared to low NAB. A machine-learning model (based on nine markers) achieved sensitivity and specificity of 80 and 82%, respectively, for predicting NAB. Validation using the ADNI cohort yielded similar results (sensitivity 79% and specificity 76%). These results show that a panel of blood-based biomarkers is able to accurately predict NAB, supporting the hypothesis for a relationship between a blood-based signature and Aβ accumulation, therefore, providing a platform for developing a population-based screen.
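The sensitivity and specificity figures quoted above come from a standard confusion-matrix calculation on binary predictions; a minimal sketch on toy labels (not the AIBL or ADNI data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels: 1 = high NAB, 0 = low NAB."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels for illustration only.
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, preds)
print(sens, spec)  # 0.8 0.8
```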

  5. Derivation of a target concentration of Pb in soil based on elevation of adult blood pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, A.H.

    1996-04-01

    The increase in systolic blood pressure in males appears to be the most sensitive adult endpoint appropriate for deriving a health risk-based target level of lead (Pb) in soil. Because the response of blood pressure to blood Pb concentration (PbB) has no apparent threshold, traditional approaches based on the application of a Reference Dose (RfD) are not applicable. An alternative approach is presented based on a model which predicts the population shift in systolic blood pressure from ingestion of Pb-contaminated soil as a simultaneous function of exposure to Pb in soil, the baseline distribution of blood Pb concentration in the population and the baseline distribution of systolic pressure in the population. This model is analyzed using Monte Carlo analysis to predict the population distribution of systolic pressure resulting from Pb exposure. Based on this analysis, it is predicted that for adult males 18-65 years old, exposure to 1000 ppm Pb in soil will result in an increase of approximately 1 mm Hg systolic pressure, an increase in the incidence of systolic hypertension (i.e., systolic pressure >140 mm Hg) of approximately 1% and an increase in PbB of 1-3 µg/dl. Based on the proposition that these adverse effects can be considered de minimis, 1000 ppm Pb in soil is proposed as a target soil concentration for adult exposure. Available data do not appear to be adequate to predict the newborn PbB level which would result from exposure to this soil level during pregnancy. 36 refs., 6 figs.
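The Monte Carlo approach described above draws from baseline population distributions and propagates an exposure increment through a dose-response relation. A deliberately simplified sketch with made-up distributions and an illustrative (not the paper's) PbB-to-blood-pressure coefficient:

```python
import math
import random

random.seed(42)
N = 100_000

def simulate(delta_pbb):
    """Mean systolic shift (mm Hg) and resulting hypertension incidence for an
    assumed uniform blood-lead increment delta_pbb (ug/dl). All distributions
    and the dose-response slope are illustrative assumptions."""
    shift_sum = 0.0
    hypertensive = 0
    for _ in range(N):
        baseline_sbp = random.gauss(125.0, 12.0)         # mm Hg, assumed
        baseline_pbb = math.exp(random.gauss(1.0, 0.4))  # ug/dl, assumed lognormal
        # Toy dose-response: ~0.5 mm Hg per doubling of PbB.
        shift = 0.5 * math.log2((baseline_pbb + delta_pbb) / baseline_pbb)
        shift_sum += shift
        if baseline_sbp + shift > 140.0:                 # systolic hypertension
            hypertensive += 1
    return shift_sum / N, hypertensive / N

mean_shift, hyper_rate = simulate(2.0)
print(round(mean_shift, 2), round(hyper_rate, 3))
```

The paper's conclusions rest on its own fitted distributions and dose-response model; this sketch only shows the simulation structure.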

  6. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-8OC2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely simulate the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that was used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines that are limited to single stage fans with design tip relative Mach numbers greater than one.

  7. A study of microstructural characteristics and differential thermal analysis of Ni-based superalloys

    NASA Technical Reports Server (NTRS)

    Aggarwal, M. D.; Lal, R. B.; Oyekenu, Samuel A.; Parr, Richard; Gentz, Stephen

    1989-01-01

    The objective of this work is to correlate the mechanical properties of the Ni-based superalloy MAR M246(Hf) used in the Space Shuttle Main Engine with its structural characteristics by systematic study of optical photomicrographs and differential thermal analysis. The authors developed a method of predicting the liquidus and solidus temperature of various nickel based superalloys (MAR-M247, Waspaloy, Udimet-41, polycrystalline and single crystals of CMSX-2 and CMSX-3) and comparing the predictions with the experimental differential thermal analysis (DTA) curves using Perkin-Elmer DTA 1700. The method of predicting these temperatures is based on the additive effect of the components dissolved in nickel. The results were compared with the experimental values.

  8. Variation Among Internet Based Calculators in Predicting Spontaneous Resolution of Vesicoureteral Reflux

    PubMed Central

    Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.

    2010-01-01

    Purpose An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550

  9. Analysis of Artificial Neural Network Backpropagation Using Conjugate Gradient Fletcher Reeves In The Predicting Process

    NASA Astrophysics Data System (ADS)

    Wanto, Anjar; Zarlis, Muhammad; Sawaluddin; Hartama, Dedy

    2017-12-01

    Backpropagation is an artificial neural network algorithm well suited to prediction, here applied to predicting the rate of the Consumer Price Index (CPI) for the foodstuff sector. The conjugate gradient Fletcher-Reeves method is a suitable optimization to pair with backpropagation, because it can shorten the iteration count without reducing the quality of the training and testing results. The CPI data to be predicted come from the Central Statistics Agency (BPS) of Pematangsiantar. The results of this study are expected to contribute to the government in making policies to improve economic growth. In this study, the data were processed by training and testing an artificial neural network with backpropagation, using a learning rate of 0.01 and a minimum error target of 0.001-0.09. The training network was built with binary and bipolar sigmoid activation functions. After the backpropagation results were obtained, they were optimized using the conjugate gradient Fletcher-Reeves method by conducting the same training and testing on 5 predefined network architectures. As a result, the combined method increases both the speed and the accuracy of the predictions.
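The Fletcher-Reeves update referred to above sets each new search direction from the ratio of successive squared gradient norms. A minimal sketch of that update on a 2-variable quadratic rather than a neural network; the same β formula is what the abstract applies to backpropagation training:

```python
import numpy as np

# Minimise f(x) = 0.5 x^T A x - b^T x by conjugate gradients with the
# Fletcher-Reeves coefficient beta = (g_new . g_new) / (g_old . g_old).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)
g = A @ x - b            # gradient of f at x
d = -g                   # initial search direction: steepest descent
for _ in range(50):
    alpha = (g @ g) / (d @ A @ d)      # exact line search for a quadratic
    x = x + alpha * d
    g_new = A @ x - b
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d              # new conjugate direction
    g = g_new
    if np.linalg.norm(g) < 1e-10:      # converged
        break

print(x)  # solution of A x = b, i.e. [0.2, 0.4]
```

On an n-dimensional quadratic this converges in at most n steps; for neural network training the same direction update is applied to the nonlinear loss surface.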

  10. Molecular modeling of the microstructure evolution during carbon fiber processing

    NASA Astrophysics Data System (ADS)

    Desai, Saaketh; Li, Chunyu; Shen, Tongtong; Strachan, Alejandro

    2017-12-01

    The rational design of carbon fibers with desired properties requires quantitative relationships between the processing conditions, microstructure, and resulting properties. We developed a molecular model that combines kinetic Monte Carlo and molecular dynamics techniques to predict the microstructure evolution during the processes of carbonization and graphitization of polyacrylonitrile (PAN)-based carbon fibers. The model accurately predicts the cross-sectional microstructure of the fibers with the molecular structure of the stabilized PAN fibers and physics-based chemical reaction rates as the only inputs. The resulting structures exhibit key features observed in electron microscopy studies such as curved graphitic sheets and hairpin structures. In addition, computed X-ray diffraction patterns are in good agreement with experiments. We predict the transverse moduli of the resulting fibers between 1 GPa and 5 GPa, in good agreement with experimental results for high modulus fibers and slightly lower than those of high-strength fibers. The transverse modulus is governed by sliding between graphitic sheets, and the relatively low value for the predicted microstructures can be attributed to their perfect longitudinal texture. Finally, the simulations provide insight into the relationships between chemical kinetics and the final microstructure; we observe that high reaction rates result in porous structures with lower moduli.

  11. Predicting the host of influenza viruses based on the word vector.

    PubMed

    Xu, Beibei; Tan, Zhiying; Li, Kenli; Jiang, Taijiao; Peng, Yousong

    2017-01-01

    Newly emerging influenza viruses continue to threaten public health. A rapid determination of the host range of newly discovered influenza viruses would assist in early assessment of their risk. Here, we attempted to predict the host of influenza viruses using the Support Vector Machine (SVM) classifier based on the word vector, a new representation and feature extraction method for biological sequences. The results show that the length of the word within the word vector, the sequence type (DNA or protein) and the species from which the sequences were derived for generating the word vector all influence the performance of models in predicting the host of influenza viruses. In nearly all cases, the models built on the surface proteins hemagglutinin (HA) and neuraminidase (NA) (or their genes) produced better results than internal influenza proteins (or their genes). The best performance was achieved when the model was built on the HA gene based on word vectors (words of three-letters long) generated from DNA sequences of the influenza virus. This results in accuracies of 99.7% for avian, 96.9% for human and 90.6% for swine influenza viruses. Compared to the method of sequence homology best-hit searches using the Basic Local Alignment Search Tool (BLAST), the word vector-based models still need further improvements in predicting the host of influenza A viruses.
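The "words of three letters" behind the best model above are overlapping k-mers extracted from the DNA sequence, the tokens on which a word-vector embedding is trained; a minimal sketch of that tokenization step (the sequence is a made-up fragment, not an HA gene):

```python
def words(sequence, k=3):
    """Overlapping k-letter words of a sequence: the tokens from which a
    word2vec-style embedding would be trained."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

seq = "ATGAAGGCA"  # toy DNA fragment, illustrative only
print(words(seq))  # ['ATG', 'TGA', 'GAA', 'AAG', 'AGG', 'GGC', 'GCA']
```

Each sequence becomes a "sentence" of such words; averaging or concatenating their learned vectors yields the fixed-length feature vector fed to the SVM classifier.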

  12. Model-based influences on humans' choices and striatal prediction errors.

    PubMed

    Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J

    2011-03-24

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. A prediction model for colon cancer surveillance data.

    PubMed

    Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G

    2015-08-15

    Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs was incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL and FOBT test information becomes available. Model selection was conducted systematically through comparison of the Akaike information criterion. Goodness-of-fit was assessed with the use of calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
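The complementary log-log link above maps a linear predictor, built from baseline covariates plus time-dependent test-result indicators, to an event probability. A minimal sketch of the link and its inverse, with illustrative (not the paper's fitted) coefficients:

```python
import math

def cloglog(p):
    """Complementary log-log link: eta = log(-log(1 - p))."""
    return math.log(-math.log(1.0 - p))

def cloglog_inverse(eta):
    """Inverse link: probability p = 1 - exp(-exp(eta))."""
    return 1.0 - math.exp(-math.exp(eta))

# Hypothetical linear predictor: an assumed baseline log-risk plus an
# illustrative increment that applies while a recent abnormal COL is "active".
eta_baseline = -3.0
eta_with_abnormal_col = eta_baseline + 1.2

p_baseline = cloglog_inverse(eta_baseline)
p_elevated = cloglog_inverse(eta_with_abnormal_col)
print(round(p_baseline, 4), round(p_elevated, 4))
```

When the covariate's active window expires (1 year for COL, 3 months for FOBT in the study), the increment drops out of the linear predictor and the risk reverts toward baseline.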

  14. Template‐based field map prediction for rapid whole brain B0 shimming

    PubMed Central

    Shi, Yuhang; Vannesjo, S. Johanna; Miller, Karla L.

    2017-01-01

    Purpose In typical MRI protocols, time is spent acquiring a field map to calculate the shim settings for best image quality. We propose a fast template‐based field map prediction method that yields near‐optimal shims without measuring the field. Methods The template‐based prediction method uses prior knowledge of the B0 distribution in the human brain, based on a large database of field maps acquired from different subjects, together with subject‐specific structural information from a quick localizer scan. The shimming performance of using the template‐based prediction is evaluated in comparison to a range of potential fast shimming methods. Results Static B0 shimming based on predicted field maps performed almost as well as shimming based on individually measured field maps. In experimental evaluations at 7 T, the proposed approach yielded an average residual field standard deviation in the brain of 59 Hz, compared with 50 Hz using measured field maps and 176 Hz using no subject‐specific shim. Conclusions This work demonstrates that shimming based on predicted field maps is feasible. The field map prediction accuracy could potentially be further improved by generating the template from a subset of subjects, based on parameters such as head rotation and body mass index. Magn Reson Med 80:171–180, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:29193340

  15. Rigorous assessment and integration of the sequence and structure based features to predict hot spots

    PubMed Central

    2011-01-01

    Background Systematic mutagenesis studies have shown that only a few interface residues, termed hot spots, contribute significantly to the binding free energy of protein-protein interactions. Therefore, hot spot prediction is becoming increasingly important for understanding the essence of protein interactions and for helping narrow down the search space for drug design. Many computational methods based on different proposed features have been developed, but a comparative assessment of these features, and more effective and accurate methods, are still urgently needed. Results In this study, we first comprehensively collect the features used to discriminate hot spots from non-hot spots and analyze their distributions. We find that hot spots have lower relASA and larger relative change in ASA, suggesting hot spots tend to be protected from bulk solvent. In addition, hot spots have more contacts, including hydrogen bonds, salt bridges, and atomic contacts, which favor complex formation. Interestingly, we find that conservation score and sequence entropy are not significantly different between hot spots and non-hot spots in the Ab+ dataset (all complexes), while in the Ab- dataset (antigen-antibody complexes excluded) there are significant differences in these two features between hot spots and non-hot spots. Secondly, we explore the predictive ability of each feature and of combinations of features using support vector machines (SVMs). The results indicate that the sequence-based feature outperforms other combinations of features with reasonable accuracy: a precision of 0.69, a recall of 0.68, an F1 score of 0.68, and an AUC of 0.68 on the independent test set. Compared with other machine learning methods and two energy-based approaches, our approach achieves the best performance. Moreover, we demonstrate the applicability of our method by predicting hot spots of two protein complexes.
Conclusion Experimental results show that support vector machine classifiers are quite effective in predicting hot spots based on sequence features. Hot spots cannot be fully predicted through simple analysis based on physicochemical characteristics, but there is reason to believe that integration of features and machine learning methods can remarkably improve the predictive performance for hot spots. PMID:21798070
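
    The precision, recall, and F1 figures reported above follow directly from confusion-matrix counts. A minimal sketch (the counts below are hypothetical, chosen only to land near the reported ~0.68-0.69 values):

```python
# hypothetical confusion-matrix counts for hot-spot classification
tp = 22   # hot spots correctly predicted
fp = 10   # non-hot spots wrongly predicted as hot spots
fn = 10   # true hot spots that were missed

precision = tp / (tp + fp)   # fraction of predicted hot spots that are real
recall = tp / (tp + fn)      # fraction of real hot spots recovered
f1 = 2 * precision * recall / (precision + recall)
```

With these counts, precision, recall, and F1 all equal 0.6875, close to the values the abstract reports.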

  16. Structural features based genome-wide characterization and prediction of nucleosome organization

    PubMed Central

    2012-01-01

    Background Nucleosome distribution along chromatin dictates genomic DNA accessibility and thus profoundly influences gene expression. However, the underlying mechanism of nucleosome formation remains elusive. Here, taking a structural perspective, we systematically explored the nucleosome formation potential of genomic sequences and its effect on chromatin organization and gene expression in S. cerevisiae. Results We analyzed twelve structural features related to the flexibility, curvature and energy of DNA sequences. The results showed that several structural features, such as DNA denaturation, DNA-bending stiffness, Stacking energy, Z-DNA, Propeller twist and free energy, were highly correlated with in vitro and in vivo nucleosome occupancy. Specifically, they can be classified into two classes, one positively and the other negatively correlated with nucleosome occupancy. These two kinds of structural features facilitated nucleosome binding in centromere regions and repressed nucleosome formation in the promoter regions of protein-coding genes to mediate transcriptional regulation. Based on these analyses, we integrated all twelve structural features in a model that predicts in vivo nucleosome occupancy more accurately than existing methods, which mainly depend on sequence compositional features. Furthermore, we developed a novel approach, named DLaNe, that locates nucleosomes by detecting peaks of structural profiles, and built a meta predictor to integrate information from different structural features. For comparison, we also constructed a hidden Markov model (HMM) to locate nucleosomes based on the profiles of these structural features. The results showed that both the meta DLaNe and the HMM-based method performed better than existing methods, demonstrating the power of these structural features in predicting nucleosome positions. 
Conclusions Our analysis revealed that DNA structures significantly contribute to nucleosome organization and influence chromatin structure and gene expression regulation. The results indicated that our proposed methods are effective in predicting nucleosome occupancy and positions and that these structural features are highly predictive of nucleosome organization. The implementation of our DLaNe method based on structural features is available online. PMID:22449207
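
    DLaNe's core step, locating nucleosomes at peaks of a structural profile, can be sketched as a simple local-maximum detector. The profile values and threshold below are hypothetical; the real method works on smoothed profiles of the twelve structural features:

```python
def find_peaks(profile, min_height=0.0):
    """Indices where the profile is a strict local maximum above min_height."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]
            and profile[i] > min_height]

# hypothetical smoothed structural-feature profile along a sequence
profile = [0.1, 0.4, 0.9, 0.5, 0.2, 0.6, 1.1, 0.7, 0.3]
peaks = find_peaks(profile, min_height=0.5)   # candidate nucleosome centers
```

Here the detector flags positions 2 and 6 as candidate nucleosome centers; a meta predictor would then combine such peak calls across features.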

  17. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are demonstrated in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown, and future steps for applying these methods to system control are addressed.

  18. On the predictability of outliers in ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Siegert, S.; Bröcker, J.; Kantz, H.

    2012-03-01

    In numerical weather prediction, ensembles are used to retrieve probabilistic forecasts of future weather conditions. We consider events where the verification is smaller than the smallest, or larger than the largest ensemble member of a scalar ensemble forecast. These events are called outliers. In a statistically consistent K-member ensemble, outliers should occur with a base rate of 2/(K+1). In operational ensembles this base rate tends to be higher. We study the predictability of outlier events in terms of the Brier Skill Score and find that forecast probabilities can be calculated which are more skillful than the unconditional base rate. This is shown analytically for statistically consistent ensembles. Using logistic regression, forecast probabilities for outlier events in an operational ensemble are calculated. These probabilities exhibit positive skill which is quantitatively similar to the analytical results. Possible causes of these results as well as their consequences for ensemble interpretation are discussed.
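
    The 2/(K+1) base rate for a statistically consistent ensemble can be checked by simulation: if the verification is exchangeable with the K members, its rank is uniform over the K+1 possible positions, and it falls outside the ensemble range in exactly 2 of them. A minimal sketch with a Gaussian toy forecast distribution (any common distribution would do):

```python
import random

random.seed(42)
K = 9            # ensemble size
N = 100_000      # number of forecast cases

outliers = 0
for _ in range(N):
    # consistent ensemble: members and verification drawn from the same distribution
    ensemble = [random.gauss(0.0, 1.0) for _ in range(K)]
    verification = random.gauss(0.0, 1.0)
    if verification < min(ensemble) or verification > max(ensemble):
        outliers += 1

rate = outliers / N
expected = 2 / (K + 1)   # theoretical base rate, here 0.2
```

Operational ensembles are under-dispersive, which is why their observed outlier rate exceeds this theoretical value.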

  19. Variational prediction of the mechanical behavior of shape memory alloys based on thermal experiments

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Jaeger, Stefanie; Kastner, Oliver; Eggeler, Gunther; Hackl, Klaus

    2015-07-01

    In this work, we present simulations of shape memory alloys which serve as first examples demonstrating the predictive character of energy-based material models. We begin with a theoretical approach for the derivation of the caloric parts of the Helmholtz free energy. Afterwards, experimental results from DSC measurements are presented. Then, we recall a micromechanical model based on the principle of the minimum of the dissipation potential for the simulation of polycrystalline shape memory alloys. The previously determined caloric parts of the Helmholtz free energy close the set of model parameters without the need for parameter fitting; all quantities are derived directly from experiments. Finally, we compare finite element results for tension tests to experimental data and show that the model, identified by thermal measurements alone, can predict mechanically induced phase transformations and thus rationalize global material behavior without any further assumptions.

  20. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor limiting machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. The SVM classifier is then combined with the dynamic cutting force simulation model to estimate the stability lobes diagram (SLD). Finally, the predicted results are compared with existing methods, such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as with actual cutting experimental results, to confirm the validity of the new method.
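
    The wavelet energy entropy feature used here is, in essence, the Shannon entropy of the signal's energy distribution across wavelet sub-bands: a chatter-free signal concentrates energy in few bands (low entropy), while chatter spreads it out. A minimal sketch with hypothetical band energies (the wavelet decomposition itself is omitted):

```python
import math

# hypothetical energies of wavelet sub-bands of a cutting-force signal
band_energy = [4.0, 2.0, 1.0, 1.0]

total = sum(band_energy)
p = [e / total for e in band_energy]            # relative energy per band
entropy = -sum(pi * math.log2(pi) for pi in p)  # wavelet energy entropy (bits)
```

For these energies the relative distribution is [0.5, 0.25, 0.125, 0.125] and the entropy is 1.75 bits; such entropy values form the feature vector fed to the SVM.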

  1. Prediction-based sampled-data H∞ controller design for attitude stabilisation of a rigid spacecraft with disturbances

    NASA Astrophysics Data System (ADS)

    Zhu, Baolong; Zhang, Zhiping; Zhou, Ding; Ma, Jie; Li, Shunli

    2017-08-01

    This paper investigates the H∞ control problem of the attitude stabilisation of a rigid spacecraft with external disturbances using a prediction-based sampled-data control strategy. Aiming to achieve a 'virtual' closed-loop system, a type of parameterised sampled-data controller is designed by introducing a prediction mechanism. The resultant closed-loop system is equivalent to a hybrid system comprising a continuous-time system and an impulsive differential system. Using a time-varying Lyapunov functional, a generalised bounded real lemma (GBRL) is first established for a class of impulsive differential systems. Based on this GBRL and the Lyapunov functional approach, a sufficient condition is derived that guarantees the closed-loop system is asymptotically stable and achieves a prescribed H∞ performance. In addition, the controller parameter tuning is cast as a convex optimisation problem. Simulation and comparative results are provided to illustrate the effectiveness of the developed control scheme.

  2. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    NASA Astrophysics Data System (ADS)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) to per-tree aboveground biomass. Because ALS technology cannot directly measure DBH, DBH must be predicted from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from ALS-measured tree height and crown diameter, using a dataset measured in the Lassen National Forest in California. Instead of seeking an explicit mathematical equation for the underlying relationship between DBH and the other structural parameters, the copula-based prediction method utilizes the dependency between the cumulative distributions of these variables, and solves for DBH under the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with the benchmark least-squares linear regression and k-MSN imputation, the copula-based method obtains better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and that it contributes to reducing prediction uncertainty.
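
    The key assumption, that a single tree sits at the same cumulative probability in each marginal distribution, reduces in its simplest form to quantile matching between empirical CDFs. A minimal sketch with hypothetical height-DBH training data (the full method fits a bivariate copula and uses crown diameter as well, rather than matching empirical quantiles of one predictor):

```python
import bisect
import math

# paired training data (hypothetical): tree heights (m) and DBH values (cm)
heights = [8, 10, 12, 15, 18, 20, 22, 25, 28, 30]
dbhs    = [10, 13, 16, 20, 25, 28, 32, 38, 44, 48]

def ecdf(sample, x):
    """Empirical cumulative probability of x within sample."""
    s = sorted(sample)
    return bisect.bisect_right(s, x) / len(s)

def inv_ecdf(sample, p):
    """Empirical quantile: smallest sample value whose CDF reaches p."""
    s = sorted(sample)
    idx = max(math.ceil(p * len(s)) - 1, 0)
    return s[idx]

def predict_dbh(height):
    # core assumption of the copula-style approach: the tree sits at the
    # same cumulative probability in both marginal distributions
    p = ecdf(heights, height)
    return inv_ecdf(dbhs, p)
```

For example, a 15 m tree sits at the 0.4 quantile of the height sample, so its DBH is predicted as the 0.4 quantile of the DBH sample, 20 cm.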

  3. Improving simulations of precipitation phase and snowpack at a site subject to cold air intrusions: Snoqualmie Pass, WA

    NASA Astrophysics Data System (ADS)

    Wayand, Nicholas E.; Stimberis, John; Zagrodnik, Joseph P.; Mass, Clifford F.; Lundquist, Jessica D.

    2016-09-01

    Low-level cold air from eastern Washington often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. Correlations of phase between surface-based methods and observations were greatly improved (r2 from 0.45 to 0.66) and frozen precipitation biases reduced (+36% to -6% of accumulated snow water equivalent) by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill (r2 = 0.61) over both parent models (r2 = 0.42 and 0.55). These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.
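
    The surface-based phase prediction evaluated here is, at its simplest, a single air-temperature threshold. The hypothetical observations below illustrate how such a classifier is scored against disdrometer phase observations (the threshold value and data are illustrative, not the study's):

```python
# hypothetical observations: (surface air temperature in deg C, observed phase)
obs = [(-3.0, "snow"), (-1.2, "snow"), (0.4, "snow"), (0.9, "rain"),
       (1.5, "rain"), (2.8, "rain"), (-0.5, "snow"), (1.1, "rain")]

def predict_phase(temp_c, threshold=1.0):
    """Classic rain/snow partition by a single air-temperature threshold."""
    return "snow" if temp_c < threshold else "rain"

correct = sum(predict_phase(t) == phase for t, phase in obs)
accuracy = correct / len(obs)
```

In this toy sample only the 0.9 degC rain event is misclassified (accuracy 7/8). The paper's finding is that choosing *where* the temperature comes from (a higher-elevation station above the inversion) matters more than the threshold itself.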

  4. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    PubMed

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is growing interest in applying machine learning (ML) techniques to clinical problems, the use of deep learning in healthcare has gained attention only recently. Deep learning, such as the deep neural network (DNN), has achieved impressive results in speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexity of its framework, and it has not yet been demonstrated to outperform conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare a DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that the DNN and the gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies, better than the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, the DNN achieves optimal results using less patient data than the GBDT method.

  5. Prediction and warning system of SEP events and solar flares for risk estimation in space launch operations

    NASA Astrophysics Data System (ADS)

    García-Rigo, Alberto; Núñez, Marlon; Qahwaji, Rami; Ashamari, Omar; Jiggens, Piers; Pérez, Gustau; Hernández-Pajares, Manuel; Hilgers, Alain

    2016-07-01

    A web-based prototype system for predicting solar energetic particle (SEP) events and solar flares for use by space launch operators is presented. The system has been developed as a result of the European Space Agency (ESA) project SEPsFLAREs (Solar Events Prediction system For space LAunch Risk Estimation). The system consists of several modules covering the prediction of solar flares and early SEP Warnings (labeled Warning tool), the prediction of SEP event occurrence and onset, and the prediction of SEP event peak and duration. In addition, the system acquires data for solar flare nowcasting from Global Navigation Satellite Systems (GNSS)-based techniques (GNSS Solar Flare Detector, GSFLAD and the Sunlit Ionosphere Sudden Total Electron Content Enhancement Detector, SISTED) as additional independent products that may also prove useful for space launch operators.

  6. Customer Churn Prediction for Broadband Internet Services

    NASA Astrophysics Data System (ADS)

    Huang, B. Q.; Kechadi, M.-T.; Buckley, B.

    Although churn prediction has been an active area of research for voice telecommunications services, focused studies on the fast-growing area of broadband Internet services are limited. This paper therefore presents a new set of features for broadband Internet customer churn prediction, based on Henley segments, broadband usage, dial types, dial-up spend, line information, bill and payment information, and account information. Four prediction techniques (Logistic Regression, Decision Trees, Multilayer Perceptron Neural Networks and Support Vector Machines) are then applied to customer churn prediction using the new features. Finally, the new features are evaluated and a comparative analysis of the predictors is made for broadband customer churn prediction. The experimental results show that the new features with these four modelling techniques are effective for customer churn prediction in the broadband service field.

  7. Development and Validation of a Clinic-Based Prediction Tool to Identify Female Athletes at High Risk for Anterior Cruciate Ligament Injury

    PubMed Central

    Myer, Gregory D.; Ford, Kevin R.; Khoury, Jane; Succop, Paul; Hewett, Timothy E.

    2012-01-01

    Background Prospective measures of high knee abduction moment (KAM) during landing identify female athletes at high risk for anterior cruciate ligament injury. Laboratory-based measurements demonstrate 90% accuracy in prediction of high KAM. Clinic-based prediction algorithms that employ correlates derived from laboratory-based measurements also demonstrate high accuracy for prediction of high KAM mechanics during landing. Hypotheses Clinic-based measures derived from highly predictive laboratory-based models are valid for the accurate prediction of high KAM status, and simultaneous measurements using laboratory-based and clinic-based techniques highly correlate. Study Design Cohort study (diagnosis); Level of evidence, 2. Methods One hundred female athletes (basketball, soccer, volleyball players) were tested using laboratory-based measures to confirm the validity of identified laboratory-based correlate variables to clinic-based measures included in a prediction algorithm to determine high KAM status. To analyze selected clinic-based surrogate predictors, another cohort of 20 female athletes was simultaneously tested with both clinic-based and laboratory-based measures. Results The prediction model (odds ratio: 95% confidence interval), derived from laboratory-based surrogates including (1) knee valgus motion (1.59: 1.17-2.16 cm), (2) knee flexion range of motion (0.94: 0.89°-1.00°), (3) body mass (0.98: 0.94-1.03 kg), (4) tibia length (1.55: 1.20-2.07 cm), and (5) quadriceps-to-hamstrings ratio (1.70: 0.48%-6.0%), predicted high KAM status with 84% sensitivity and 67% specificity (P < .001). Clinic-based techniques that used a calibrated physician’s scale, a standard measuring tape, standard camcorder, ImageJ software, and an isokinetic dynamometer showed high correlation (knee valgus motion, r = .87; knee flexion range of motion, r = .95; and tibia length, r = .98) to simultaneous laboratory-based measurements. 
Body mass and quadriceps-to-hamstrings ratio were included in both methodologies and therefore had r values of 1.0. Conclusion Clinically obtainable measures of increased knee valgus, knee flexion range of motion, body mass, tibia length, and quadriceps-to-hamstrings ratio predict high KAM status in female athletes with high sensitivity and specificity. Female athletes who demonstrate high KAM landing mechanics are at increased risk for anterior cruciate ligament injury and are more likely to benefit from neuromuscular training targeted to this risk factor. Use of the developed clinic-based assessment tool may facilitate high-risk athletes’ entry into appropriate interventions that will have greater potential to reduce their injury risk. PMID:20595554
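
    A prediction algorithm built on per-unit odds ratios like those reported maps naturally to a logistic model whose coefficients are the natural logs of the odds ratios. In the sketch below only the odds-ratio point estimates are taken from the abstract; the intercept, the decision threshold, and the athlete's measurements are hypothetical:

```python
import math

# ln(odds ratio) per unit of each predictor (point estimates from the abstract)
coef = {
    "valgus_cm": math.log(1.59),
    "flexion_deg": math.log(0.94),
    "mass_kg": math.log(0.98),
    "tibia_cm": math.log(1.55),
    "quad_ham_ratio": math.log(1.70),
}
intercept = -3.0  # hypothetical; would be fitted alongside the coefficients

def p_high_kam(athlete):
    """Logistic model: probability of high knee abduction moment status."""
    z = intercept + sum(coef[k] * athlete[k] for k in coef)
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical clinic measurements for one athlete
athlete = {"valgus_cm": 3.0, "flexion_deg": 70.0, "mass_kg": 60.0,
           "tibia_cm": 38.0, "quad_ham_ratio": 1.4}
risk = p_high_kam(athlete)
```

Because the valgus odds ratio exceeds 1, increasing knee valgus motion raises the predicted probability, matching the direction reported in the study.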

  8. A genomic biomarker signature can predict skin sensitizers using a cell-based in vitro alternative to animal tests

    PubMed Central

    2011-01-01

    Background Allergic contact dermatitis is an inflammatory skin disease that affects a significant proportion of the population. The disease is caused by an adverse immune response towards chemical haptens and leads to a substantial economic burden for society. Current tests of sensitizing chemicals rely on animal experimentation. New legislation on the registration and use of chemicals within the pharmaceutical and cosmetic industries has stimulated significant research efforts to develop alternative, human cell-based assays for the prediction of sensitization. The aim is to replace animal experiments with in vitro tests displaying a higher predictive power. Results We have developed a novel cell-based assay for the prediction of sensitizing chemicals. By analyzing the transcriptome of the human cell line MUTZ-3 after 24 h of stimulation with 20 different sensitizing chemicals, 20 non-sensitizing chemicals and vehicle controls, we have identified a biomarker signature of 200 genes with potent discriminatory ability. Using a Support Vector Machine for supervised classification, the prediction performance of the assay revealed an area under the ROC curve of 0.98. In addition, when the chemicals were categorized according to the LLNA assay, the gene signature could also predict sensitizing potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitization. Conclusions A gene signature predicting sensitization, using a human cell line in vitro, has been identified. This simple and robust cell-based assay has the potential to completely replace or drastically reduce the utilization of test systems based on experimental animals. Being based on human biology, the assay is proposed to be more accurate for predicting sensitization in humans than the traditional animal-based tests. PMID:21824406
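
    The reported area under the ROC curve has a direct probabilistic reading: the chance that a randomly chosen sensitizer is scored above a randomly chosen non-sensitizer. A minimal sketch of this Mann-Whitney formulation (the scores and labels below are hypothetical):

```python
def auc(scores, labels):
    """ROC AUC via the Mann-Whitney identity: the probability that a
    random positive outranks a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical classifier scores: label 1 = sensitizer, 0 = non-sensitizer
scores = [0.9, 0.4, 0.3, 0.2]
labels = [1, 0, 1, 0]
```

On this toy data the AUC is 0.75; a perfectly separating classifier, like the near-perfect 0.98 reported, scores every sensitizer above every non-sensitizer.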

  9. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been informed by empirical or semi-empirical approaches that rely on extensive data gathered over several decades of operations and experiments. However, the predictive capabilities of these approaches for new coals and processes are limited. This work presents a Discrete Element Method-based computational framework to predict the particle size distribution resulting from the breakage of coal particles, characterized by the coal's physical properties. The effect of certain operating parameters on the breakage behavior of coal particles is also examined.

  10. SeqRate: sequence-based protein folding type classification and rates prediction

    PubMed Central

    2010-01-01

    Background Protein folding rate is an important property of a protein. Predicting protein folding rate is useful for understanding the protein folding process and guiding protein design. Most previous methods for predicting protein folding rate require the tertiary structure of a protein as an input, and most do not distinguish between the different kinetic natures of proteins (two-state versus multi-state folding). Here we developed a method, SeqRate, to predict both protein folding kinetic type (two-state versus multi-state) and real-value folding rate using sequence length, amino acid composition, contact order, contact number, and secondary structure information predicted from protein sequence alone, with support vector machines. Results We systematically studied the contributions of individual features to folding rate prediction. On a standard benchmark dataset, the accuracy of folding kinetic type classification is 80%. The Pearson correlation coefficient and the mean absolute difference between predicted and experimental folding rates (s^-1), in the base-10 logarithmic scale, are 0.81 and 0.79 for two-state protein folders, and 0.80 and 0.68 for three-state protein folders. SeqRate is the first sequence-based method for protein folding type classification, and its folding rate prediction accuracy is improved over previous sequence-based methods. Its performance can be further enhanced with additional information, such as structure-based geometric contacts, as inputs. Conclusions Both the web server and software for predicting folding rate are publicly available at http://casp.rnet.missouri.edu/fold_rate/index.html. PMID:20438647
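
    The two evaluation metrics quoted above, Pearson correlation and mean absolute difference of log10 folding rates, can be computed directly. A minimal sketch with hypothetical predicted and experimental rates (the real benchmark values are not reproduced here):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical folding rates (s^-1), compared in log10 space as in the abstract
experimental = [10 ** 2.1, 10 ** 0.5, 10 ** -1.2, 10 ** 3.0]
predicted    = [10 ** 1.8, 10 ** 0.9, 10 ** -0.8, 10 ** 2.6]

log_exp = [math.log10(r) for r in experimental]
log_pred = [math.log10(r) for r in predicted]

r = pearson(log_exp, log_pred)
mad = sum(abs(a - b) for a, b in zip(log_exp, log_pred)) / len(log_exp)
```

Working in log10 space keeps rates spanning several orders of magnitude comparable, which is why the abstract reports both metrics on that scale.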

  11. Using string invariants for prediction searching for optimal parameters

    NASA Astrophysics Data System (ADS)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method requires no learning, but a small set of parameters must be set to achieve optimal performance; we have implemented an evolutionary algorithm for this parametric optimization. We have tested the performance of the method on artificial and real-world data and compared it to statistical methods and to a number of artificial intelligence methods, using the data and results of a prediction competition as a benchmark. The results show that the method performs well in single-step prediction, but its performance for multiple-step prediction needs to be improved. The method works well for a wide range of parameters.

  12. Improving Multi-Sensor Drought Monitoring, Prediction and Recovery Assessment Using Gravimetry Information

    NASA Astrophysics Data System (ADS)

    Aghakouchak, Amir; Tourian, Mohammad J.

    2015-04-01

    Development of reliable drought monitoring, prediction and recovery assessment tools is fundamental to water resources management. This presentation focuses on how gravimetry information can improve drought assessment. First, we provide an overview of the Global Integrated Drought Monitoring and Prediction System (GIDMaPS), which offers near real-time drought information using remote sensing observations and model simulations. Then, we present a framework for integrating satellite gravimetry information to improve drought prediction and recovery assessment. The input data include satellite-based and model-based precipitation, soil moisture estimates, and equivalent water height. Previous studies show that drought assessment based on a single indicator may not be sufficient. For this reason, GIDMaPS provides drought information based on multiple drought indicators, including the Standardized Precipitation Index (SPI), the Standardized Soil Moisture Index (SSI) and the Multivariate Standardized Drought Index (MSDI), which combines SPI and SSI probabilistically. MSDI incorporates both meteorological and agricultural drought conditions and provides composite multi-index drought information for overall characterization of droughts. GIDMaPS includes a seasonal prediction component based on a statistical persistence-based approach, which provides the empirical probability of drought for different severity levels. In this presentation we introduce a new component in which the drought prediction information based on SPI, SSI and MSDI is conditioned on equivalent water height obtained from the Gravity Recovery and Climate Experiment (GRACE). Using a Bayesian approach, GRACE information is used to evaluate the persistence of drought. Finally, the deficit equivalent water height based on GRACE is used for assessing drought recovery. 
In this presentation, both monitoring and prediction components of GIDMaPS will be discussed, and the results from 2014 California Drought will be presented. Further Reading: Hao Z., AghaKouchak A., Nakhjiri N., Farahmand A., 2014, Global Integrated Drought Monitoring and Prediction System, Scientific Data, 1:140001, 1-10, doi: 10.1038/sdata.2014.1.
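
    The SPI underlying these indicators standardizes precipitation so that negative values flag drier-than-normal conditions. A minimal sketch using a plain z-score over hypothetical monthly totals (the operational SPI first fits a gamma distribution to the precipitation record before transforming to a standard normal; the z-score keeps the sketch self-contained):

```python
import statistics

# hypothetical monthly precipitation totals (mm) for one station
precip = [80, 95, 60, 110, 40, 75, 100, 55, 90, 70, 30, 85]

mu = statistics.mean(precip)
sigma = statistics.stdev(precip)

# simplified SPI: standardized precipitation anomaly
spi = [(p - mu) / sigma for p in precip]

# months at or below -1.0 are conventionally "moderate drought" or worse
drought_months = [i for i, z in enumerate(spi) if z <= -1.0]
```

For this record the two driest months (indices 4 and 10, with 40 mm and 30 mm) fall below the moderate-drought threshold; MSDI extends the same idea jointly to precipitation and soil moisture.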

  13. A new predictive indicator for development of pressure ulcers in bedridden patients based on common laboratory tests results.

    PubMed

    Hatanaka, N; Yamamoto, Y; Ichihara, K; Mastuo, S; Nakamura, Y; Watanabe, M; Iwatani, Y

    2008-04-01

    Various scales have been devised to predict the development of pressure ulcers on the basis of clinical and laboratory data, such as the Braden Scale (Braden score), which is used to monitor the activity and skin condition of bedridden patients. However, none of these scales facilitates clinically reliable prediction. To develop a clinical laboratory data-based predictive equation for the development of pressure ulcers, we studied 149 hospitalised patients with respiratory disorders who were monitored for the development of pressure ulcers over a 3-month period. The proportional hazards model (Cox regression) was used to analyse the results of 12 basic laboratory tests performed on the day of hospitalisation, in comparison with the Braden score. Pressure ulcers developed in 38 patients within the study period. A Cox regression model consisting solely of Braden scale items showed that none of these items contributed significantly to predicting pressure ulcers. Rather, a combination of haemoglobin (Hb), C-reactive protein (CRP), albumin (Alb), age, and gender produced the best model for prediction. Using this set of explanatory variables, we created a new indicator based on a multiple logistic regression equation. The new indicator showed high sensitivity (0.73) and specificity (0.70), and its diagnostic power was higher than that of Alb, Hb, CRP, or the Braden score alone. The new indicator may become a more useful clinical tool for predicting pressure ulcers than the Braden score. It warrants verification studies to facilitate its clinical implementation in the future.
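
    The reported sensitivity and specificity come straight from the confusion counts of flagged versus actual ulcer cases. A minimal sketch with hypothetical patient outcomes (not the study's data):

```python
# hypothetical (indicator_flagged_positive, patient_developed_ulcer) pairs
results = [(True, True), (True, True), (False, True),
           (True, False), (False, False), (False, False),
           (True, True), (False, False)]

tp = sum(pred and actual for pred, actual in results)
fn = sum((not pred) and actual for pred, actual in results)
tn = sum((not pred) and (not actual) for pred, actual in results)
fp = sum(pred and (not actual) for pred, actual in results)

sensitivity = tp / (tp + fn)   # fraction of true ulcer cases flagged
specificity = tn / (tn + fp)   # fraction of non-cases correctly cleared
```

Both work out to 0.75 here; in practice the operating threshold of the logistic indicator trades one against the other.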

  14. Urban Ecological Security Simulation and Prediction Using an Improved Cellular Automata (CA) Approach-A Case Study for the City of Wuhan in China.

    PubMed

    Gao, Yuan; Zhang, Chuanrong; He, Qingsong; Liu, Yaolin

    2017-06-15

    Ecological security is an important research topic, especially urban ecological security. As densely populated ecosystems, cities tend to have fragile ecological environments. However, most of the research on urban ecological security in the literature has focused on evaluating the current or past status of the ecological environment; very little has carried out simulation or prediction of future ecological security, and even less has explored the urban ecological environment at a fine scale. To fill this gap, in this study we simulated and predicted urban ecological security at a fine scale (district level) using an improved Cellular Automata (CA) approach. First, we used the pressure-state-response (PSR) method based on grid-scale data to evaluate urban ecological security. Then, based on the evaluation results, we imported the geographically weighted regression (GWR) concept into the CA model to simulate and predict urban ecological security. We applied the improved CA approach in a case study, simulating and predicting urban ecological security for the city of Wuhan in Central China. Comparing the simulated ecological security values for 2010 from the improved CA model with the actual ecological security values of 2010 yielded a relatively high kappa coefficient, which indicates that the CA model can simulate well the future development of ecological security in Wuhan. Based on the prediction results for 2020, we made policy recommendations for each district in Wuhan.
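
    The kappa coefficient used to validate the simulation measures agreement between the simulated and actual security maps beyond what chance alone would produce. A minimal sketch of Cohen's kappa over a hypothetical grid of security classes:

```python
def cohens_kappa(pred, actual):
    """Agreement between two class maps, corrected for chance agreement."""
    n = len(pred)
    labels = set(pred) | set(actual)
    po = sum(p == a for p, a in zip(pred, actual)) / n          # observed
    pe = sum((pred.count(c) / n) * (actual.count(c) / n)        # expected
             for c in labels)
    return (po - pe) / (1 - pe)

# hypothetical per-cell ecological security classes for a small grid
simulated = ["high", "high", "med", "low", "med", "high", "low", "med"]
observed  = ["high", "med", "med", "low", "med", "high", "low", "low"]
kappa = cohens_kappa(simulated, observed)
```

Here 6 of 8 cells agree (po = 0.75), chance agreement is 21/64, and kappa is about 0.63; values well above 0 indicate the model reproduces the observed pattern beyond chance.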

  15. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.

    PubMed

    Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of each feature for the result, and attributes with smaller F values are filtered out in a rough selection step. Secondly, redundancy is measured by the Pearson Correlation Coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model is freely accessible at our web server.
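
    The rough-selection stage ranks each feature by a one-way ANOVA F value, the ratio of between-class to within-class variance. A minimal sketch with two hypothetical features over two conotoxin classes (the subsequent Pearson-correlation redundancy filter is omitted):

```python
import statistics

def f_value(groups):
    """One-way ANOVA F statistic: between-class over within-class variance."""
    all_vals = [v for g in groups for v in g]
    grand = statistics.mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - statistics.mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# feature 1 separates the two classes well; feature 2 barely does
feature1_f = f_value([[1.0, 1.2, 0.9], [2.0, 2.2, 1.9]])
feature2_f = f_value([[1.0, 2.0, 1.5], [1.1, 1.9, 1.6]])
```

Feature 1 gets a far larger F value than feature 2, so in the rough-selection step feature 2 would be the one filtered out.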

  16. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model

    PubMed Central

    Xiaolei, Wang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy inherent in current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure each feature's significance for the classification result, and attributes with small F values are filtered out in a rough selection step. Second, redundancy is measured with the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% while using only 68 parameters in total. The proposed model provides highly useful information for further experimental research. The prediction model can be accessed free of charge at our web server. PMID:28497044

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Namhata, Argha; Oladyshkin, Sergey; Dilmore, Rober

    Carbon dioxide (CO2) storage in geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes in the zone overlying the caprock, also known as the Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above-zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters for the pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones, and that the degree of such uncertainty depends on the quality of site-specific information available for analysis. The results provide substantial insight that site-specific data are needed for efficient prediction of the risks associated with storage activities. The presented approach can provide a basis for optimized pressure-based monitoring network design at carbon storage sites.

  18. Predictive models of safety based on audit findings: Part 2: Measurement of model validity.

    PubMed

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-07-01

    Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and proved its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus results of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models respectively. These models were tested for the validity of prediction of the safety data, and only Neural Network method resulted in substantially significant predictive ability for each airline. Alternative predictions from counting of audit findings and from time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  19. A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shengzhi; Ming, Bo; Huang, Qiang

    Accurately predicting the NDVI (Normalized Difference Vegetation Index) is critically meaningful, as it helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR performs poorly in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths of the individual models while reducing their weaknesses; (3) all forecasting models perform better in dense vegetation areas than in sparse vegetation areas.
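The entropy weight method assigns each individual model a combination weight from the dispersion of a performance indicator. A minimal sketch, assuming the indicator is an accuracy score derived from each model's absolute relative errors (the paper's exact indicator is not specified here):

```python
import numpy as np

def entropy_weights(errors):
    """Entropy-weight coefficients for combining forecasts.

    `errors` is an (n_samples, n_models) matrix of absolute relative errors.
    """
    acc = 1.0 / (1.0 + np.asarray(errors, dtype=float))  # accuracy indicator
    p = acc / acc.sum(axis=0)                            # column-normalise
    n = acc.shape[0]
    e = -(p * np.log(p)).sum(axis=0) / np.log(n)         # entropy per model
    d = 1.0 - e                                          # divergence
    return d / d.sum()

def combine(preds, weights):
    """Weighted combination forecast from individual model predictions."""
    return np.asarray(preds) @ np.asarray(weights)
```

Models whose indicator varies more across samples carry more information (lower entropy) and receive larger weights; the weights always sum to one.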

  20. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    PubMed Central

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data are preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We also conducted a larger-scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieved over 25-fold speedup compared to sequential execution. PMID:26958172

  1. It's OK if 'my brain made me do it': people's intuitions about free will and neuroscientific prediction.

    PubMed

    Nahmias, Eddy; Shepard, Jason; Reuter, Shane

    2014-11-01

    In recent years, a number of prominent scientists have argued that free will is an illusion, appealing to evidence demonstrating that information about brain activity can be used to predict behavior before people are aware of having made a decision. These scientists claim that the possibility of perfect prediction based on neural information challenges the ordinary understanding of free will. In this paper we provide evidence suggesting that most people do not view the possibility of neuro-prediction as a threat to free will unless it also raises concerns about manipulation of the agent's behavior. In Experiment 1 two scenarios described future brain imaging technology that allows perfect prediction of decisions and actions based on earlier neural activity, and this possibility did not undermine most people's attributions of free will or responsibility, except in the scenario that also allowed manipulation. In Experiment 2 the scenarios increased the salience of the physicalist implications of neuro-prediction, while in Experiment 3 the scenarios suggested dualism, with perfect prediction by mindreaders. The patterns of results for these two experiments were similar to the results in Experiment 1, suggesting that participants do not understand free will to require specific metaphysical conditions regarding the mind-body relation. Most people seem to understand free will in a way that is not threatened by perfect prediction based on neural information, suggesting that they believe that just because "my brain made me do it," that does not mean that I didn't do it of my own free will. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Web-based thyroid imaging reporting and data system: Malignancy risk of atypia of undetermined significance or follicular lesion of undetermined significance thyroid nodules calculated by a combination of ultrasonography features and biopsy results.

    PubMed

    Choi, Young Jun; Baek, Jung Hwan; Shin, Jung Hee; Shim, Woo Hyun; Kim, Seon-Ok; Lee, Won-Hong; Song, Dong Eun; Kim, Tae Yong; Chung, Ki-Wook; Lee, Jeong Hyun

    2018-05-13

    The purpose of this study was to construct a web-based predictive model using ultrasound characteristics and subcategorized biopsy results for thyroid nodules of atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS) to stratify the risk of malignancy. Data included 672 thyroid nodules from 656 patients from a historical cohort. We analyzed ultrasound images of thyroid nodules and biopsy results according to nuclear atypia and architectural atypia. Multivariate logistic regression analysis was performed to predict whether nodules were diagnosed as malignant or benign. The ultrasound features, including spiculated margin, marked hypoechogenicity, calcifications, biopsy results, and cytologic atypia, showed significant differences between groups. A 13-point risk scoring system was developed, and the area under the curve (AUC) of the receiver operating characteristic (ROC) curve of the development and validation sets were 0.837 and 0.830, respectively (http://www.gap.kr/thyroidnodule_b3.php). We devised a web-based predictive model using the combined information of ultrasound characteristics and biopsy results for AUS/FLUS thyroid nodules to stratify the malignant risk. © 2018 Wiley Periodicals, Inc.

  3. NEP: web server for epitope prediction based on antibody neutralization of viral strains with diverse sequences

    PubMed Central

    Chuang, Gwo-Yu; Liou, David; Kwong, Peter D.; Georgiev, Ivelin S.

    2014-01-01

    Delineation of the antigenic site, or epitope, recognized by an antibody can provide clues about functional vulnerabilities and resistance mechanisms, and can therefore guide antibody optimization and epitope-based vaccine design. Previously, we developed an algorithm for antibody-epitope prediction based on antibody neutralization of viral strains with diverse sequences and validated the algorithm on a set of broadly neutralizing HIV-1 antibodies. Here we describe the implementation of this algorithm, NEP (Neutralization-based Epitope Prediction), as a web-based server. The users must supply as input: (i) an alignment of antigen sequences of diverse viral strains; (ii) neutralization data for the antibody of interest against the same set of antigen sequences; and (iii) (optional) a structure of the unbound antigen, for enhanced prediction accuracy. The prediction results can be downloaded or viewed interactively on the antigen structure (if supplied) from the web browser using a JSmol applet. Since neutralization experiments are typically performed as one of the first steps in the characterization of an antibody to determine its breadth and potency, the NEP server can be used to predict antibody-epitope information at no additional experimental costs. NEP can be accessed on the internet at http://exon.niaid.nih.gov/nep. PMID:24782517

  4. In Situ Measurement of Some Soil Properties in Paddy Soil Using Visible and Near-Infrared Spectroscopy

    PubMed Central

    Wenjun, Ji; Zhou, Shi; Jingyi, Huang; Shuo, Li

    2014-01-01

    In situ measurements with visible and near-infrared spectroscopy (vis-NIR) provide an efficient way of acquiring information on paddy soils in the short time gap between harvest and the following rotation. The aim of this study was to evaluate the feasibility of predicting a series of soil properties, including organic matter (OM), organic carbon (OC), total nitrogen (TN), available nitrogen (AN), available phosphorus (AP), available potassium (AK) and pH, for paddy soils in Zhejiang province, China. First, linear partial least squares regression (PLSR) was performed on the in situ spectra and the predictions were compared to those from laboratory-recorded spectra. Then, the non-linear least-squares support vector machine (LS-SVM) algorithm was applied, aiming to extract more useful information from the in situ spectra and improve the predictions. Results show that, for OC, OM, TN, AN and pH: (i) predictions were worse using in situ spectra than laboratory-based spectra with the PLSR algorithm; (ii) prediction accuracy using LS-SVM (R2>0.75, RPD>1.90) was clearly improved for in situ vis-NIR spectra compared to the PLSR algorithm, and was comparable to or even better than results generated using laboratory-based spectra with PLSR; and (iii) for AP and AK, poor predictions were obtained with in situ spectra (R2<0.5, RPD<1.50) using either PLSR or LS-SVM. The results highlight the value of LS-SVM for in situ vis-NIR spectroscopic estimation of paddy soil properties. PMID:25153132
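The RPD thresholds quoted above (RPD > 1.90 acceptable, RPD < 1.50 poor) refer to the ratio of the standard deviation of the observed values to the RMSE of the predictions. A minimal sketch (the sample vs. population SD convention is an assumption):

```python
import numpy as np

def rpd(observed, predicted):
    """Ratio of performance to deviation: SD(observed) / RMSE(predicted)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return np.std(observed, ddof=1) / rmse
```

A larger RPD means the prediction error is small relative to the natural spread of the property being predicted.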

  5. SCPRED: accurate prediction of protein structural class for sequences of twilight-zone similarity with predicting sequences.

    PubMed

    Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke

    2008-05-01

    Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share only twilight-zone pairwise identity can form similar folds, so determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as its structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences lies in the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for prediction. SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2,300 index-, composition- and physicochemical-property-based features, along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1,673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machines, logistic regression, and ensembles of classifiers. SCPRED can thus accurately find similar structures for sequences that share low identity with the sequences used for prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that SCPRED's predictions can be successfully used as a post-processing filter to improve the performance of modern fold classification methods.

  6. Minimalist ensemble algorithms for genome-wide protein localization prediction.

    PubMed

    Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun

    2012-07-03

    Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. 
We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi.

  7. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, which accumulates additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty on the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also to estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
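The unscented transformation mentioned above propagates a Gaussian uncertainty through a nonlinear function using a small, deterministic set of sigma points rather than Monte Carlo sampling. A generic sketch of the classic form (not the paper's exact prognostics implementation):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear f via sigma points."""
    mean = np.atleast_1d(np.asarray(mean, dtype=float))
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 sigma points: the mean plus symmetric spreads along each axis
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([np.atleast_1d(f(p)) for p in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov
```

Because the sigma points are fixed given the input statistics, the transformed mean and covariance (and hence uncertainty bounds) are obtained deterministically, which is the property the paper exploits.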

  8. Predictions of heading date in bread wheat (Triticum aestivum L.) using QTL-based parameters of an ecophysiological model

    PubMed Central

    Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent

    2014-01-01

    Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (V sat and P base, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting V sat and P base with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based V sat and P base estimates were able to predict heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation for heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833
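The marker-based regression step, predicting an ecophysiological parameter such as V sat or P base from genetic marker scores, can be sketched with ordinary least squares; the marker coding, data, and helper names below are hypothetical:

```python
import numpy as np

def fit_marker_model(markers, param):
    """Least-squares regression of a model parameter (e.g. Vsat) on marker
    scores; returns [intercept, per-marker coefficients...]."""
    X = np.column_stack([np.ones(len(markers)), markers])
    coef, *_ = np.linalg.lstsq(X, np.asarray(param, dtype=float), rcond=None)
    return coef

def predict_param(markers, coef):
    """Predict the parameter for new genotypes from their marker scores."""
    X = np.column_stack([np.ones(len(markers)), np.asarray(markers)])
    return X @ coef
```

The predicted parameter values would then be fed into the ecophysiological model to predict heading date for untested genotypes.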

  9. Predicting Turns in Proteins with a Unified Model

    PubMed Central

    Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan

    2012-01-01

    Motivation: Turns are a critical element of protein structure; they play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for predicting individual turn types, including α-turns, β-turns, and γ-turns. However, for further protein structure and function prediction it is necessary to develop a unified model that can accurately predict all types of turns simultaneously. Results: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) the use of newly exploited features of structural evolution information (secondary structure and shape string of a protein) based on structure homologies, (ii) consideration of all types of turns in a unified model, and (iii) the practical capability of accurately predicting all turns of a query simultaneously. TurnP utilizes predicted secondary structures and predicted shape strings, both obtained with high accuracy using technologies developed by our group. Sequence and structural evolution features, namely profiles of the sequence, of the secondary structures and of the shape strings, are then generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeds the most state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the outstanding results confirmed the good performance of TurnP for practical applications. PMID:23144872

  10. Predicting the effect of cytochrome P450 inhibitors on substrate drugs: analysis of physiologically based pharmacokinetic modeling submissions to the US Food and Drug Administration.

    PubMed

    Wagner, Christian; Pan, Yuzhuo; Hsu, Vicky; Grillo, Joseph A; Zhang, Lei; Reynolds, Kellie S; Sinha, Vikram; Zhao, Ping

    2015-01-01

    The US Food and Drug Administration (FDA) has seen a recent increase in the application of physiologically based pharmacokinetic (PBPK) modeling towards assessing the potential of drug-drug interactions (DDI) in clinically relevant scenarios. To continue our assessment of such approaches, we evaluated the predictive performance of PBPK modeling in predicting cytochrome P450 (CYP)-mediated DDI. This evaluation was based on 15 substrate PBPK models submitted by nine sponsors between 2009 and 2013. For these 15 models, a total of 26 DDI studies (cases) with various CYP inhibitors were available. Sponsors developed the PBPK models, reportedly without considering clinical DDI data. Inhibitor models were either developed by sponsors or provided by PBPK software developers and applied with minimal or no modification. The metric for assessing predictive performance of the sponsors' PBPK approach was the R predicted/observed value (R predicted/observed = [predicted mean exposure ratio]/[observed mean exposure ratio], with the exposure ratio defined as [C max (maximum plasma concentration) or AUC (area under the plasma concentration-time curve) in the presence of CYP inhibition]/[C max or AUC in the absence of CYP inhibition]). In 81 % (21/26) and 77 % (20/26) of cases, respectively, the R predicted/observed values for AUC and C max ratios were within a pre-defined threshold of 1.25-fold of the observed data. For all cases, the R predicted/observed values for AUC and C max were within a 2-fold range. These results suggest that, based on the submissions to the FDA to date, there is a high degree of concordance between PBPK-predicted and observed effects of CYP inhibition, especially CYP3A-based, on the exposure of drug substrates.
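The R predicted/observed metric and the 1.25-fold acceptance window described above amount to:

```python
def r_pred_obs(predicted_ratio, observed_ratio):
    """R(predicted/observed) for a DDI exposure ratio (AUC or Cmax)."""
    return predicted_ratio / observed_ratio

def within_fold(r, fold=1.25):
    """True if the prediction lies within `fold`-fold of the observation."""
    return 1.0 / fold <= r <= fold
```

For example, a predicted AUC ratio of 2.4 against an observed ratio of 2.0 gives R = 1.2, which passes the 1.25-fold criterion but also the looser 2-fold criterion that all 26 cases met.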

  11. Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng

    This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and essentially inertialess. Accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), with the RNN architecture based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved constituent series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels and the results are compared with some of the most common and recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
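The DWT decomposition step can be illustrated with the simplest wavelet; the abstract does not state the wavelet family used, so Haar is an assumption here:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: splits a series
    into a smooth approximation and a detail component."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar level (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

In the hybrid scheme, separate predictors (ARMA for the smoother components, NARX for the residual nonlinearities) would be fitted to the decomposed sub-series and their forecasts recombined.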

  12. Improved nonlinear prediction method

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been widely studied, and many techniques have been developed for areas such as weather forecasting, financial markets and hydrological phenomena, where the data are contaminated by noise. Various refinements have therefore been introduced to improve the analysis and prediction of such data. Given the importance of analysis and prediction accuracy, this study tested the effectiveness of an improved nonlinear prediction method on noisy data. The improved method involves forming a composite series from the successive differences of the time series. Phase space reconstruction is then performed on the (one-dimensional) composite data to reconstruct a number of space dimensions, and finally the local linear approximation method is employed to make a prediction based on the phase space. The improved method was tested on logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that predictions made using the improved method agree closely with the observations, with the correlation coefficient close to one for data with up to 10% noise. Thus, the method can analyze and predict noisy time series data without requiring any separate noise reduction step.
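The phase space reconstruction and local approximation steps can be sketched as below; for brevity, the local step here averages the successors of the nearest neighbours (a zeroth-order stand-in for the paper's local linear approximation), and the embedding parameters are illustrative:

```python
import numpy as np

def embed(x, dim, tau=1):
    """Delay-coordinate phase space reconstruction of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, k=4):
    """One-step prediction from the successors of the k nearest neighbours
    of the current phase point."""
    x = np.asarray(x, dtype=float)
    pts = embed(x, dim)
    query, history = pts[-1], pts[:-1]
    dist = np.linalg.norm(history - query, axis=1)
    nearest = np.argsort(dist)[:k]
    # the successor of history point i is the value just after its window
    return np.mean([x[i + dim] for i in nearest])
```

The paper applies this machinery to the composite differenced series rather than to the raw series, which is the essence of the claimed improvement.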

  13. Development of an accident duration prediction model on the Korean Freeway Systems.

    PubMed

    Chung, Younshik

    2010-01-01

    Since duration prediction is one of the most important steps in the accident management process, several approaches have been developed for modeling accident duration. This paper presents a model for accident duration prediction based on a large, accurately recorded accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset covering 2006 and 2007. Specifically, the 2006 dataset was used to develop the prediction model, and the 2007 dataset was then employed to test the temporal transferability of the 2006 model. Although the duration prediction model has limitations, such as large prediction errors arising from differences among accident treatment teams in clearing similar accidents, the 2006 model yielded reasonable predictions on the mean absolute percentage error (MAPE) scale. Additionally, the statistical test for temporal transferability indicated that the estimated parameters of the duration prediction model are stable over time. This temporal stability suggests that the model could serve as a basis for making rational diversion and dispatching decisions in the event of an accident; ultimately, such information will help mitigate traffic congestion due to accidents.
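The MAPE scale used to judge the duration model's predictions is simply:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error between actual and predicted durations."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```

Note that MAPE is undefined for zero actual durations and penalises over- and under-prediction asymmetrically, which matters when accident clearance times vary widely.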

  14. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE PAGES

    Garitte, B.; Shao, H.; Wang, X. R.; ...

    2017-01-09

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay of the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed using calibrated parameter values: for the Opalinus Clay, these were based on the modelling of another in-situ experiment (HE-D); for the MX80 granular bentonite and the sand/bentonite mixture, on the modelling of laboratory column experiments. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviour associated with geologic nuclear waste disposal in clay formations.

  16. A group evolving-based framework with perturbations for link prediction

    NASA Astrophysics Data System (ADS)

    Si, Cuiqi; Jiao, Licheng; Wu, Jianshe; Zhao, Jin

    2017-06-01

    Link prediction is a ubiquitous task in many fields: it uses partially observed information to predict the absence or presence of links between node pairs. The study of group evolution provides reasonable explanations for the behaviors of nodes, the relations between nodes, and community formation in a network. Possible events in group evolution include continuing, growing, splitting, forming and so on, and the changes observed in networks are to some extent the result of these events. In this work, we present a group-evolution-based characterization of nodes' behavioral patterns, through which we can estimate the probability that two nodes will interact. The primary aim of this paper is to offer a minimal toy model that detects missing links based on the evolution of groups, together with a simple explanation of the model's rationale. We first introduce perturbations into networks to obtain stable cluster structures, and the stable clusters determine the stability of each node. Fluctuation, another node behavior, is then measured by the participation of each node in its own group. Finally, we demonstrate that these characteristics allow us to predict link existence, and we propose a link prediction model that outperforms many classical methods while requiring less computational time at large scales. Encouraging experimental results on real networks show that our approach can effectively predict missing links, and even when nearly 40% of the edges are missing it retains stable performance.
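
    For context, one of the classical baselines such models are compared against is common-neighbour scoring, which can be sketched in a few lines (the paper's own perturbation-based model is considerably more involved):

```python
from itertools import combinations

def common_neighbour_scores(adj):
    """Score every non-connected node pair by its number of common
    neighbours; higher scores suggest a likelier missing link.
    `adj` maps each node to the set of its neighbours."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# toy network: edges 1-2, 1-3, 2-3, 2-4, 3-4; pair (1, 4) is the candidate
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(common_neighbour_scores(adj))   # {(1, 4): 2}
```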

  17. Estimation and mapping of above-ground biomass of mangrove forests and their replacement land uses in the Philippines using Sentinel imagery

    NASA Astrophysics Data System (ADS)

    Castillo, Jose Alan A.; Apan, Armando A.; Maraseni, Tek N.; Salmo, Severino G.

    2017-12-01

    The recent launch of the Sentinel-1 (SAR) and Sentinel-2 (multispectral) missions offers a new opportunity for land-based biomass mapping and monitoring, especially in the tropics where deforestation is highest. Yet, unlike in agriculture and inland land uses, Sentinel imagery has not been evaluated for biomass retrieval in mangrove forests and the non-forest land uses that replaced mangroves. In this study, we evaluated the ability of Sentinel imagery to retrieve and predictively map the above-ground biomass of mangroves and their replacement land uses. We used Sentinel SAR and multispectral imagery to develop biomass prediction models through conventional linear regression and novel machine learning algorithms, building models from SAR raw polarisation backscatter data, multispectral bands, vegetation indices, and canopy biophysical variables. The results show that the model based on the biophysical variable Leaf Area Index (LAI) derived from Sentinel-2 was the most accurate at predicting overall above-ground biomass, whereas the model using optical bands alone had the lowest accuracy. However, the SAR-based model was more accurate at predicting biomass in the typically sparse vegetation cover of the non-forest replacement land uses, such as abandoned aquaculture ponds, cleared mangroves and abandoned salt ponds. These models achieved a correlation of 0.82-0.83 between observed and predicted values, with a root mean square error of 27.8-28.5 Mg ha-1. Among the Sentinel-2 multispectral bands, the red and red-edge bands (bands 4, 5 and 7), combined with elevation data, formed the best variable combination for biomass prediction, and the red-edge-based Inverted Red-Edge Chlorophyll Index had the highest prediction accuracy among the vegetation indices. Overall, Sentinel-1 SAR and Sentinel-2 multispectral imagery can provide satisfactory results in the retrieval and predictive mapping of the above-ground biomass of mangroves and their non-forest replacement land uses, especially when elevation data are included. The study demonstrates encouraging results in biomass mapping of mangroves and other coastal land uses in the tropics using freely accessible and relatively high-resolution Sentinel imagery.
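
    The Inverted Red-Edge Chlorophyll Index mentioned above is commonly computed from Sentinel-2 bands as (B7 - B4) / (B5 / B6); a small numpy sketch (the reflectance values below are toy numbers, not study data):

```python
import numpy as np

def ireci(b4, b5, b6, b7):
    """Inverted Red-Edge Chlorophyll Index from Sentinel-2 reflectances:
    red (B4), red-edge (B5, B6) and the B7 red-edge/NIR band."""
    return (b7 - b4) / (b5 / b6)

# toy reflectance rasters (2 x 2 pixels)
b4 = np.array([[0.04, 0.05], [0.06, 0.04]])
b5 = np.array([[0.10, 0.12], [0.11, 0.10]])
b6 = np.array([[0.25, 0.26], [0.24, 0.25]])
b7 = np.array([[0.30, 0.31], [0.28, 0.30]])
print(ireci(b4, b5, b6, b7))
```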

  18. A novel time series link prediction method: Learning automata approach

    NASA Astrophysics Data System (ADS)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a key challenge in social network analysis that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, analyzing a snapshot of the network to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their similarity scores, and labels those with the highest scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the network changes over time, deterministic graphs may not be appropriate for modeling and analyzing social networks. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, there is one learning automaton for each link to be predicted, and each automaton tries to predict the existence or non-existence of its corresponding link. To predict the link occurrence at time T, a chain consisting of stages 1 through T - 1 is formed, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
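
    A single two-action learning automaton of the kind described can be sketched as follows; this uses the standard linear reward-inaction update with an assumed learning rate, not necessarily the paper's exact scheme:

```python
import random

random.seed(0)

class LinkAutomaton:
    """Two-action linear reward-inaction (L_RI) automaton: action 1 predicts
    the link is present, action 0 that it is absent. A correct prediction is
    rewarded; a wrong one leaves the probabilities unchanged."""

    def __init__(self, lr=0.3):
        self.p = [0.5, 0.5]   # action probabilities [absent, present]
        self.lr = lr

    def choose(self):
        return 1 if random.random() < self.p[1] else 0

    def update(self, action, observed):
        if action == observed:   # reward: reinforce the chosen action
            self.p[action] += self.lr * (1.0 - self.p[action])
            self.p[1 - action] = 1.0 - self.p[action]

# time series of past occurrences of one link (1 = present in that snapshot)
history = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1] * 20
la = LinkAutomaton()
for obs in history:
    la.update(la.choose(), obs)
print(la.p[1])   # learned probability that the link exists
```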

  19. Comparison of four statistical and machine learning methods for crash severity prediction.

    PubMed

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity, or the severity of crashes expected to occur in the future. This paper had three main objectives: comparing the performance of four statistical and machine learning methods, namely Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-costs-based approach for comparing crash severity prediction methods; and investigating the effects of data clustering methods, namely K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained, and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated on the training/estimation subset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash-costs-based accuracy measure were obtained on the validation subset. The correct prediction rates and the proposed measure showed that NNC had the best prediction performance overall and for more severe crashes; RF and SVM were the next best performers, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC improved MNL and RF but weakened NNC. The overall correct prediction rate gave almost exactly the opposite ranking to the proposed measure, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
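
    The abstract does not give the exact form of the crash-costs-based measure, but the idea can be sketched as a cost-weighted accuracy in which correctly classifying a costly (severe) crash counts for more than correctly classifying a minor one; the severity labels and unit costs below are illustrative only:

```python
# Hypothetical crash unit costs by severity (illustrative, not the paper's).
COSTS = {"fatal": 1_500_000, "injury": 80_000, "pdo": 5_000}  # pdo = property damage only

def cost_weighted_accuracy(actual, predicted):
    """Fraction of total crash cost attached to correctly classified crashes.
    Unlike the plain correct-prediction rate, rare but costly severe crashes
    dominate this score."""
    total = sum(COSTS[a] for a in actual)
    correct = sum(COSTS[a] for a, p in zip(actual, predicted) if a == p)
    return correct / total

actual    = ["fatal", "injury", "pdo", "pdo", "injury"]
predicted = ["fatal", "pdo",    "pdo", "pdo", "injury"]
print(cost_weighted_accuracy(actual, predicted))  # plain accuracy would be 0.8
```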

  20. A new method for enhancer prediction based on deep belief network.

    PubMed

    Bu, Hongda; Gan, Yanglan; Wang, Yang; Zhou, Shuigeng; Guan, Jihong

    2017-10-16

    Studies have shown that enhancers are significant regulatory elements that play crucial roles in gene expression regulation. Since enhancer activity is independent of the orientation of, and the distance to, the target genes, accurately predicting distal enhancers remains a challenging task for researchers. In recent years, with the development of high-throughput ChIP-seq technologies, several computational methods have emerged that predict enhancers using epigenetic or genomic features. Nevertheless, the inconsistency of computational models across different cell lines and the unsatisfactory prediction performance call for further research in this area. Here, we propose a new Deep Belief Network (DBN) based computational method for enhancer prediction, called EnhancerDBN. The method combines diverse features, comprising DNA sequence compositional features, DNA methylation and histone modifications. Our computational results indicate that 1) EnhancerDBN outperforms 13 existing methods in prediction, and 2) GC content and DNA methylation can serve as relevant features for enhancer prediction. Deep learning is effective in boosting the performance of enhancer prediction.
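
    The GC-content and sequence-composition features found informative above are straightforward to compute; a small sketch:

```python
from itertools import product

def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def kmer_freqs(seq, k=2):
    """Normalized k-mer composition vector over the DNA alphabet."""
    s = seq.upper()
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    n = len(s) - k + 1
    return {km: sum(s[i:i + k] == km for i in range(n)) / n for km in kmers}

seq = "ATGCGCGTTAACGG"
print(gc_content(seq))
print(kmer_freqs(seq)["CG"])   # CpG dinucleotide frequency
```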

  1. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Predicting data that are not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic; however, it may not be very accurate. Simulations involving simple linear regression and multiple linear regression functions were run to assess the performance of the proposed method. The results show a higher correlation among the gathered inputs than with time, the independent variable most widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
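
    The multiple-linear-regression prediction at the heart of the proposal can be sketched with numpy on synthetic sensor readings; the variables and coefficients below are toy values, not data from the paper:

```python
import numpy as np

# toy sensor history: predict humidity from temperature and light rather
# than from time alone (the multivariate-correlation idea, in miniature)
temp  = np.array([21.0, 22.5, 23.1, 24.0, 25.2, 26.0])
light = np.array([300., 320., 350., 400., 420., 450.])
hum   = 90.0 - 1.2 * temp - 0.05 * light        # synthetic ground truth

X = np.column_stack([temp, light, np.ones_like(temp)])  # design matrix
coef, *_ = np.linalg.lstsq(X, hum, rcond=None)          # least-squares fit

new = np.array([24.5, 410.0, 1.0])
print(new @ coef)     # predicted humidity for a reading not sent to the sink
```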

  2. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture

    PubMed Central

    Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang

    2016-01-01

    The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to their capacity degeneration trajectories. Due to self-charging and capacity regeneration, the trajectories are multimodal, a property that traditional prediction models such as support vector machines (SVM) or Gaussian Process regression (GPR) cannot accurately characterize. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM), which handles multimodality by fitting different segments of a trajectory with separate GPR models, so that the small differences among segments can be captured. The method's effectiveness is demonstrated by the excellent predictive results of experiments on two commercial rechargeable Type 1850 Lithium-ion batteries provided by NASA. The performance comparison among the models shows that the GPM is more accurate than the SVM and the GPR. In addition, the GPM yields a predictive confidence interval, which makes its predictions more reliable than those of traditional models. PMID:27632176

  3. The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill.

    PubMed

    Kukona, Anuenue; Braze, David; Johns, Clinton L; Mencl, W Einar; Van Dyke, Julie A; Magnuson, James S; Pugh, Kenneth R; Shankweiler, Donald P; Tabor, Whitney

    2016-11-01

    Recent studies have found considerable individual variation in language comprehenders' predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders' prediction ability (e.g., as reflected in predictive eye movements to a white cake on hearing "The boy will eat the white…"). Simultaneously, comprehension-based measures predicted participants' ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible white car). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Mcanelly, W. B.; Young, C. T. K.

    1973-01-01

    Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.

  5. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    PubMed

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification wherein negative data sampling is still an open problem to be addressed. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions for most of the existing computational methods. In this work, we propose a novel negative data sampling method based on one-class SVM (support vector machine, SVM) to predict proteome-wide protein interactions between HTLV retrovirus and Homo sapiens, wherein one-class SVM is used to choose reliable and representative negative data, and two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that one-class SVM is more suited to be used as negative data sampling method than two-class PPI predictor, and the predictive feedback constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of HTLV retrovirus.

  6. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    PubMed

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analyzing the complex relationships among the acoustic features affecting noise levels in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, east of Iran. The main acoustic and embroidery process features that influence noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. The prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than 1 dB), and the neuro-fuzzy model (RMSE = 0.53 dB, R2 = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, the fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make optimal decisions about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  7. Adaptive MPC based on MIMO ARX-Laguerre model.

    PubMed

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

    This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is given by the projection of the ARX model onto Laguerre bases; the resulting model, entitled MIMO ARX-Laguerre, has a convenient recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors identified directly from experimental input/output data. The model is tuned at each iteration by online identification algorithms for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize adaptive predictive controllers for the CSTR process benchmark and to test them in numerical simulations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from many years of PV power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Considering three types of weather (sunny, cloudy and rainy days), this research screens samples of historical data with the clustering analysis method. After screening, BP neural network prediction models are established using the screened data as training data. The six photovoltaic power generation prediction models, before and after data screening, are then compared. The results show that a prediction model combining clustering analysis with BP neural networks is an effective way to improve the precision of photovoltaic power generation forecasting.
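
    The screening step can be sketched as a plain k-means clustering of daily weather features, keeping only the days that fall in the same cluster as the target day and using them as training data; the features and values below are hypothetical:

```python
import numpy as np

def kmeans(data, centers, iters=20):
    """Plain k-means: returns final centers and cluster labels."""
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

# hypothetical daily weather features: [irradiance W/m^2, cloud-cover fraction]
rng = np.random.default_rng(0)
sunny  = rng.normal([800.0, 0.1], [30.0, 0.05], size=(20, 2))
cloudy = rng.normal([300.0, 0.8], [30.0, 0.05], size=(20, 2))
days = np.vstack([sunny, cloudy])

centers, labels = kmeans(days, centers=days[[0, 20]].copy())
train_idx = np.where(labels == labels[0])[0]   # days of the same weather type
print(len(train_idx))                          # samples kept for training
```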

  9. Simulated Annealing Based Hybrid Forecast for Improving Daily Municipal Solid Waste Generation Prediction

    PubMed Central

    Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei

    2014-01-01

    A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built, and its multistep-ahead prediction ability tested, on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model proved to produce more accurate and reliable results, and to degrade less over longer predictions, than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%, and the five-week average from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
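
    The weight-search idea can be sketched as simulated annealing over convex combination weights that minimize MAPE; the base forecasts below are synthetic, and the cooling schedule and perturbation size are assumptions, not the paper's settings:

```python
import math
import random

random.seed(1)

def mape(actual, pred):
    return sum(abs(a - p) / a for a, p in zip(actual, pred)) / len(actual)

def combine(weights, forecasts):
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(len(forecasts[0]))]

def anneal_weights(actual, forecasts, steps=2000, t0=1.0):
    """Simulated annealing over convex combination weights of the base
    forecasts, minimizing MAPE (a sketch of the hybrid-weighting idea)."""
    k = len(forecasts)
    w = [1.0 / k] * k
    best_w, best_err = w, mape(actual, combine(w, forecasts))
    err = best_err
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9             # linear cooling
        cand = [max(1e-6, wi + random.gauss(0, 0.1)) for wi in w]
        s = sum(cand)
        cand = [c / s for c in cand]                     # keep weights convex
        cand_err = mape(actual, combine(cand, forecasts))
        # always accept improvements; accept worse moves with Metropolis prob.
        if cand_err < err or random.random() < math.exp((err - cand_err) / t):
            w, err = cand, cand_err
            if err < best_err:
                best_w, best_err = w, err
    return best_w, best_err

actual  = [100.0, 120.0, 90.0, 110.0, 105.0]
model_a = [v + 8.0 for v in actual]     # base forecast biased high
model_b = [v - 4.0 for v in actual]     # base forecast biased low
w, err = anneal_weights(actual, [model_a, model_b])
print(w, err)
```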

  10. Exploring Change Processes in School-Based Mentoring for Bullied Children.

    PubMed

    Craig, James T; Gregus, Samantha J; Burton, Ally; Hernandez Rodriguez, Juventino; Blue, Mallory; Faith, Melissa A; Cavell, Timothy A

    2016-02-01

    We examined change processes associated with the school-based, lunchtime mentoring of bullied children. We used data from a one-semester open trial of Lunch Buddy (LB) mentoring (N = 24) to examine changes in bullied children's lunchtime peer relationships. We also tested whether these changes predicted key outcomes (i.e., peer victimization, social preference) post-mentoring. Results provided partial support that bullied children paired with LB mentors experienced improved lunchtime peer relationships and that gains in lunchtime relationships predicted post-mentoring levels of social preference and peer victimization. Neither child nor mentors' ratings of the mentoring relationship predicted post-mentoring outcomes; however, child-rated mentor support and conflict predicted improvements in lunchtime peer relationships. We discuss implications for future research on school-based mentoring as a form of selective intervention for bullied children.

  11. Composite Nanomechanics: A Mechanistic Properties Prediction

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Handler, Louis M.; Manderscheid, Jane M.

    2007-01-01

    A unique mechanistic theory is described to predict the properties of nanocomposites. The theory is based on composite micromechanics with progressive substructuring down to a nanoscale slice of a nanofiber where all the governing equations are formulated. These equations have been programmed in a computer code. That computer code is used to predict 25 properties of a mononanofiber laminate. The results are presented graphically and discussed with respect to their practical significance. Most of the results show smooth distributions. Results for matrix-dependent properties show bimodal through-the-thickness distribution with discontinuous changes from mode to mode.

  12. Classifier ensemble based on feature selection and diversity measures for predicting the affinity of A(2B) adenosine receptor antagonists.

    PubMed

    Bonet, Isis; Franco-Montero, Pedro; Rivero, Virginia; Teijeira, Marta; Borges, Fernanda; Uriarte, Eugenio; Morales Helguera, Aliuska

    2013-12-23

    A(2B) adenosine receptor antagonists may be beneficial in treating diseases like asthma, diabetes, diabetic retinopathy, and certain cancers. This has stimulated research into the development of potent ligands for this subtype based on quantitative structure-affinity relationships. In this work, a new ensemble machine learning algorithm is proposed for classification and prediction of the ligand-binding affinity of A(2B) adenosine receptor antagonists. The algorithm trains different classifier models on multiple training sets (composed of the same compounds but represented by diverse features), with k-nearest neighbors, decision trees, neural networks, and support vector machines used as single classifiers. Several diversity measures were employed to select the base classifiers for combination into the ensemble. The final multiclassifier predictions were computed from the outputs of the selected base classifiers, combined through different mathematical functions: majority vote, and maximum and average probability. Ten-fold cross-validation and external validation were used. The strategy led to the following results: i) the single classifiers, together with prior feature selection, achieved good overall accuracy; ii) a comparison between the single classifiers and their combination in the multiclassifier model showed that the ensemble gave better performance than any single classifier; and iii) the multiclassifier model performed better than the most widely used multiclassifier models in the literature. The results and statistical analysis demonstrate the superiority of our multiclassifier approach for predicting the affinity of A(2B) adenosine receptor antagonists, and it can be used to develop other QSAR models.
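
    The three combination functions named above (majority vote, average probability, maximum probability) are simple to state; a sketch:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier class labels for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def average_probability(prob_rows):
    """Average the per-class probability vectors, then take the argmax."""
    n = len(prob_rows)
    avg = [sum(row[j] for row in prob_rows) / n
           for j in range(len(prob_rows[0]))]
    return max(range(len(avg)), key=avg.__getitem__)

def maximum_probability(prob_rows):
    """Pick the class with the single highest probability across classifiers."""
    best = max((p, j) for row in prob_rows for j, p in enumerate(row))
    return best[1]

print(majority_vote(["active", "inactive", "active"]))
print(average_probability([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]))
print(maximum_probability([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]))
```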

  13. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Electroencephalogram (EEG) based decoding of human brain activity is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used to select significant features, and likelihood-ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method were compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
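
    Likelihood-ratio-based score fusion can be sketched with Gaussian class-conditional score models: each score source contributes a log-likelihood ratio, and the sign of the summed ratio decides the class. The score values below are toy data, not EEG features, and the Gaussian assumption is a common simplification rather than the paper's exact model:

```python
import math

def gauss_loglik(x, mean, std):
    """Gaussian log-likelihood up to an additive constant."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std)

def fit(scores):
    """Fit a (mean, std) Gaussian to a list of training scores."""
    m = sum(scores) / len(scores)
    var = sum((s - m) ** 2 for s in scores) / len(scores)
    return m, math.sqrt(var) + 1e-9

def llr_fusion(models, sample):
    """Sum per-source log-likelihood ratios; positive favours class 1."""
    total = 0.0
    for (g0, g1), x in zip(models, sample):
        total += gauss_loglik(x, *g1) - gauss_loglik(x, *g0)
    return total

# two score sources, each with class-conditional training scores
chan1 = {0: [0.10, 0.20, 0.15, 0.12], 1: [0.80, 0.90, 0.85, 0.95]}
chan2 = {0: [0.30, 0.25, 0.20, 0.28], 1: [0.70, 0.75, 0.80, 0.72]}
models = [(fit(chan1[0]), fit(chan1[1])), (fit(chan2[0]), fit(chan2[1]))]

print(llr_fusion(models, [0.88, 0.74]))   # positive: favours class 1
print(llr_fusion(models, [0.14, 0.26]))   # negative: favours class 0
```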

  14. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of the assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method and therefore deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample); as a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  15. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  16. Analysis of experimental results of the inlet for the NASA hypersonic research engine aerothermodynamic integration model. [wind tunnel tests of ramjet engine hypersonic inlets

    NASA Technical Reports Server (NTRS)

    Andrews, E. H., Jr.; Mackley, E. A.

    1976-01-01

    An aerodynamic engine inlet analysis was performed on the experimental results obtained at nominal Mach numbers of 5, 6, and 7 from the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM). Incorporation on the AIM of the mixed-compression inlet design represented the final phase of an inlet development program of the HRE Project. The purpose of this analysis was to compare the AIM inlet experimental results with theoretical results. Experimental performance was based on measured surface pressures used in a one-dimensional force-momentum theorem. Results of the analysis indicate that surface static-pressure measurements agree reasonably well with theoretical predictions except in the regions where the theory predicts large pressure discontinuities. Experimental and theoretical results both based on the one-dimensional force-momentum theorem yielded inlet performance parameters as functions of Mach number that exhibited reasonable agreement. Previous predictions of inlet unstart that resulted from pressure disturbances created by fuel injection and combustion appeared to be pessimistic.

  17. Predictive modeling of surimi cake shelf life at different storage temperatures

    NASA Astrophysics Data System (ADS)

    Wang, Yatong; Hou, Yanhua; Wang, Quanfu; Cui, Bingqing; Zhang, Xiangyu; Li, Xuepeng; Li, Yujin; Liu, Yuanping

    2017-04-01

    An Arrhenius model for shelf life prediction based on the TBARS index was established in this study. The results showed that AV, POV, COV and TBARS changed significantly as temperature increased, and the reaction rate constant k was obtained from a first-order reaction kinetics model. A secondary model was then fitted based on the Arrhenius equation. TBARS gave the best fitting accuracy in both the primary and secondary models (R2 ≥ 0.95). The verification test indicated that the relative error between the shelf life predicted by the model and the actual value was within ±10%, suggesting the model can predict the shelf life of surimi cake.
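    The two-step fitting described (first-order rate constants at each storage temperature, then an Arrhenius fit of ln k versus 1/T) can be sketched as follows; the rate constants used here are illustrative placeholders, not the paper's data.

```python
import math

R = 8.314                                   # gas constant, J/(mol K)
temps_K = [277.15, 288.15, 298.15]          # 4, 15, 25 degC storage
ks = [0.010, 0.028, 0.065]                  # hypothetical TBARS rate constants, 1/day

# Arrhenius: ln k = ln A - Ea/(R*T)  ->  linear fit of ln k against 1/T
xs = [1.0 / T for T in temps_K]
ys = [math.log(k) for k in ks]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
Ea = -slope * R                             # activation energy, J/mol

def shelf_life(T_K, limit_ratio=2.0):
    """Days until TBARS reaches `limit_ratio` times its initial value,
    assuming first-order growth C(t) = C0 * exp(k * t)."""
    k = math.exp(intercept + slope / T_K)
    return math.log(limit_ratio) / k

print(Ea / 1000.0, shelf_life(277.15))      # kJ/mol, days at 4 degC
```

    The TBARS limit ratio defining end of shelf life is an assumption here; the paper validates the fitted model against measured shelf lives instead.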

  18. Systematic prediction of gene function in Arabidopsis thaliana using a probabilistic functional gene network

    PubMed Central

    Hwang, Sohyun; Rhee, Seung Y; Marcotte, Edward M; Lee, Insuk

    2012-01-01

    AraNet is a functional gene network for the reference plant Arabidopsis and has been constructed in order to identify new genes associated with plant traits. It is highly predictive for diverse biological pathways and can be used to prioritize genes for functional screens. Moreover, AraNet provides a web-based tool with which plant biologists can efficiently discover novel functions of Arabidopsis genes (http://www.functionalnet.org/aranet/). This protocol explains how to conduct network-based prediction of gene functions using AraNet and how to interpret the prediction results. Functional discovery in plant biology is facilitated by combining candidate prioritization by AraNet with focused experimental tests. PMID:21886106

  19. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  20. Predicting flight delay based on multiple linear regression

    NASA Astrophysics Data System (ADS)

    Ding, Yi

    2017-08-01

    Delay of flight has been regarded as one of the toughest difficulties in aviation control, and establishing an effective model for delay prediction is significant work. To address the difficulty of predicting flight delay, this study proposes a method to model arriving flights and a multiple linear regression algorithm to predict delay, compared against the Naive-Bayes and C4.5 approaches. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model approximates 80%, an improvement over the Naive-Bayes and C4.5 approaches. The results also show that this method is computationally convenient and can predict flight delays effectively. It can provide a decision basis for airport authorities.
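    A minimal multiple linear regression in the spirit of the delay model described can be fitted with the normal equations; the features and data below are synthetic (generated from a known linear rule so the fit is exact), not the paper's dataset.

```python
import numpy as np

# Each row: [scheduled_hour, airport_load, weather_index] for an arrival.
X = np.array([[8.0, 0.3, 0.1],
              [12.0, 0.7, 0.4],
              [18.0, 0.9, 0.2],
              [21.0, 0.5, 0.8],
              [6.0, 0.2, 0.0],
              [15.0, 0.8, 0.6]])
# Delays (minutes) generated as 1 + 2*hour + 10*load + 5*weather:
y = 1.0 + 2.0 * X[:, 0] + 10.0 * X[:, 1] + 5.0 * X[:, 2]

A = np.column_stack([np.ones(len(X)), X])   # add intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_delay(hour, load, weather):
    return float(beta @ [1.0, hour, load, weather])

print(predict_delay(17.0, 0.85, 0.3))       # recovers 1 + 34 + 8.5 + 1.5 = 45.0
```

    On real delay data the fit would of course be inexact, and accuracy would be judged against held-out flights as the paper does.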

  1. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The method of calculating the prediction interval and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
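    The idea of bounding a check-load at a given confidence level can be sketched with the standard OLS prediction interval. The data, the check-load point, and the simple straight-line calibration model are all hypothetical; NASA's procedure accounts for more variability sources than this.

```python
import numpy as np

x = np.array([0.0, 25.0, 50.0, 75.0, 100.0, 125.0])   # applied loads
y = np.array([0.1, 24.8, 50.3, 74.6, 100.2, 125.4])   # measured responses

A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
n, p = A.shape
s2 = float(resid @ resid) / (n - p)         # residual variance
XtX_inv = np.linalg.inv(A.T @ A)

def prediction_interval(x0, t_crit=2.776):  # t for 95% with n-p = 4 df
    a0 = np.array([1.0, x0])
    se = np.sqrt(s2 * (1.0 + a0 @ XtX_inv @ a0))
    yhat = float(a0 @ beta)
    return yhat - t_crit * se, yhat + t_crit * se

# A check-load at x0 = 60 is confirmed if the measured response falls
# inside the interval:
lo, hi = prediction_interval(60.0)
print(lo, hi)
```

    The "probability of capture" mentioned above is then simply the fraction of confirmation points whose measured responses land inside their intervals.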

  2. Exploiting Information Diffusion Feature for Link Prediction in Sina Weibo

    NASA Astrophysics Data System (ADS)

    Li, Dong; Zhang, Yongchao; Xu, Zhiming; Chu, Dianhui; Li, Sheng

    2016-01-01

    The rapid development of online social networks (e.g., Twitter and Facebook) has promoted research related to social networks in which link prediction is a key problem. Although numerous attempts have been made for link prediction based on network structure, node attribute and so on, few of the current studies have considered the impact of information diffusion on link creation and prediction. This paper mainly addresses Sina Weibo, which is the largest microblog platform with Chinese characteristics, and proposes the hypothesis that information diffusion influences link creation and verifies the hypothesis based on real data analysis. We also detect an important feature from the information diffusion process, which is used to promote link prediction performance. Finally, the experimental results on Sina Weibo dataset have demonstrated the effectiveness of our methods.

  3. Exploiting Information Diffusion Feature for Link Prediction in Sina Weibo.

    PubMed

    Li, Dong; Zhang, Yongchao; Xu, Zhiming; Chu, Dianhui; Li, Sheng

    2016-01-28

    The rapid development of online social networks (e.g., Twitter and Facebook) has promoted research related to social networks in which link prediction is a key problem. Although numerous attempts have been made for link prediction based on network structure, node attribute and so on, few of the current studies have considered the impact of information diffusion on link creation and prediction. This paper mainly addresses Sina Weibo, which is the largest microblog platform with Chinese characteristics, and proposes the hypothesis that information diffusion influences link creation and verifies the hypothesis based on real data analysis. We also detect an important feature from the information diffusion process, which is used to promote link prediction performance. Finally, the experimental results on Sina Weibo dataset have demonstrated the effectiveness of our methods.

  4. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE PAGES

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    2018-03-01

    We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  5. Making smart social judgments takes time: infants' recruitment of goal information when generating action predictions.

    PubMed

    Krogh-Jespersen, Sheila; Woodward, Amanda L

    2014-01-01

    Previous research has shown that young infants perceive others' actions as structured by goals. One open question is whether the recruitment of this understanding when predicting others' actions imposes a cognitive challenge for young infants. The current study explored infants' ability to utilize their knowledge of others' goals to rapidly predict future behavior in complex social environments and distinguish goal-directed actions from other kinds of movements. Fifteen-month-olds (N = 40) viewed videos of an actor engaged in either a goal-directed (grasping) or an ambiguous (brushing the back of her hand) action on a Tobii eye-tracker. At test, critical elements of the scene were changed and infants' predictive fixations were examined to determine whether they relied on goal information to anticipate the actor's future behavior. Results revealed that infants reliably generated goal-based visual predictions for the grasping action, but not for the back-of-hand behavior. Moreover, response latencies were longer for goal-based predictions than for location-based predictions, suggesting that goal-based predictions are cognitively taxing. Analyses of areas of interest indicated that heightened attention to the overall scene, as opposed to specific patterns of attention, was the critical indicator of successful judgments regarding an actor's future goal-directed behavior. These findings shed light on the processes that support "smart" social behavior in infants, as it may be a challenge for young infants to use information about others' intentions to inform rapid predictions.

  6. A statistical learning approach to the modeling of chromatographic retention of oligonucleotides incorporating sequence and secondary structure data

    PubMed Central

    Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver

    2007-01-01

    We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619
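    The sequence-to-features-to-regression pipeline can be sketched as below. The paper uses ν support vector regression with predicted secondary-structure features; this sketch keeps only base-composition features and swaps in ordinary least squares for brevity, and the training pairs are invented.

```python
import numpy as np

# (sequence, retention time in minutes) -- hypothetical training pairs
train = [("ATCGATCG", 4.1), ("GGGGCCCC", 6.0), ("ATATATAT", 3.2),
         ("GCGCGCAT", 5.5), ("AATTCCGG", 4.4), ("CCCCGGAT", 5.4)]

def features(seq):
    # base-composition counts plus length; the paper adds
    # secondary-structure descriptors here
    return [seq.count(b) for b in "ACGT"] + [len(seq)]

X = np.array([features(s) for s, _ in train], dtype=float)
y = np.array([t for _, t in train])
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in for nu-SVR

def predict_rt(seq):
    return float(np.array(features(seq), dtype=float) @ w)

print(predict_rt("GGCCGGCC"))
```

    Even this crude feature set reproduces the expected trend that GC-rich oligonucleotides are retained longer than AT-rich ones on the training data.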

  7. Multiaxial Fatigue Life Prediction Based on Short Crack Propagation Model with Equivalent Strain Parameter

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang-Feng; Shang, De-Guang; Sun, Yu-Juan; Song, Ming-Liang; Wang, Xiao-Wei

    2018-01-01

    The maximum shear strain and the normal strain excursion on the critical plane are regarded as the primary parameters of the crack driving force to establish a new short crack model in this paper. An equivalent strain-based intensity factor is proposed to correlate the short crack growth rate under multiaxial loading. According to the short crack model, a new method is proposed for multiaxial fatigue life prediction based on crack growth analysis. It is demonstrated that the method can be used under proportional and non-proportional loadings. The predicted results showed a good agreement with experimental lives in both high-cycle and low-cycle regions.
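    Fatigue life prediction from a crack growth law, as described, amounts to integrating cycles over crack length. The equivalent strain-based intensity factor in the paper is defined on the critical plane; the K_eq form and all constants below are placeholders for illustration only.

```python
import math

C, n_exp = 1.0e-8, 3.0   # assumed Paris-law constants
E = 200e3                # assumed elastic modulus, MPa

def K_eq(strain_amp, a):
    # assumed strain-intensity form: E * strain * sqrt(pi * a), a in mm
    return E * strain_amp * math.sqrt(math.pi * a)

def cycles_to_failure(strain_amp, a0=0.01, af=1.0, steps=20000):
    # midpoint integration of dN = da / (C * K_eq^n) from a0 to af
    da = (af - a0) / steps
    N = 0.0
    for i in range(steps):
        a_mid = a0 + (i + 0.5) * da
        N += da / (C * K_eq(strain_amp, a_mid) ** n_exp)
    return N

# Higher equivalent strain amplitude -> shorter predicted life:
print(cycles_to_failure(0.002), cycles_to_failure(0.004))
```

    The sketch only shows the integration structure; the paper's model additionally combines maximum shear strain and normal strain excursion on the critical plane into the driving-force parameter.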

  8. Anisotropic constitutive modeling for nickel base single crystal superalloys using a crystallographic approach

    NASA Technical Reports Server (NTRS)

    Stouffer, D. C.; Sheh, M. Y.

    1988-01-01

    A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for single crystal superalloy Rene N4 at 982 C.

  9. Through-process modelling of texture and anisotropy in AA5182

    NASA Astrophysics Data System (ADS)

    Crumbach, M.; Neumann, L.; Goerdeler, M.; Aretz, H.; Gottstein, G.; Kopp, R.

    2006-07-01

    A through-process texture and anisotropy prediction for AA5182 sheet production from hot rolling through cold rolling and annealing is reported. Thermo-mechanical process data predicted by the finite element method (FEM) package T-Pack based on the software LARSTRAN were fed into a combination of physics based microstructure models for deformation texture (GIA), work hardening (3IVM), nucleation texture (ReNuc), and recrystallization texture (StaRT). The final simulated sheet texture was fed into a FEM simulation of cup drawing employing a new concept of interactively updated texture based yield locus predictions. The modelling results of texture development and anisotropy were compared to experimental data. The applicability to other alloys and processes is discussed.

  10. Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity

    PubMed Central

    Lee, Hui Sun; Im, Wonpil

    2013-01-01

    Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is room to improve the performance by properly incorporating local structure alignment (LSA) because BS are local structures and often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands’ positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improvement in prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286

  11. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
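    Scoring on log-transformed results, as recommended for this positively skewed data, is straightforward to sketch; the round results and scoring parameters below are invented for illustration.

```python
import math

results = [0.9, 1.1, 1.0, 1.3, 0.8, 2.5, 1.2, 0.7, 1.05, 1.5]   # % GM content
assigned = 1.0        # assigned value for the round
sigma_p = 0.1         # target standard deviation on the log10 scale

def log_z(x):
    # z-score computed on log10-transformed results
    return (math.log10(x) - math.log10(assigned)) / sigma_p

zs = [log_z(x) for x in results]
flagged = [x for x, z in zip(results, zs) if abs(z) > 2.0]   # |z| > 2 questionable
print(flagged)
```

    After the transform, the near-lognormal spread becomes near-normal, so the usual |z| > 2 and |z| > 3 interpretation thresholds apply.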

  12. Tuberculin Skin Tests versus Interferon-Gamma Release Assays in Tuberculosis Screening among Immigrant Visa Applicants.

    PubMed

    Chuke, Stella O; Yen, Nguyen Thi Ngoc; Laserson, Kayla F; Phuoc, Nguyen Huu; Trinh, Nguyen An; Nhung, Duong Thi Cam; Mai, Vo Thi Chi; Qui, An Dang; Hai, Hoang Hoa; Loan, Le Thien Huong; Jones, Warren G; Whitworth, William C; Shah, J Jina; Painter, John A; Mazurek, Gerald H; Maloney, Susan A

    2014-01-01

    Objective. Use of tuberculin skin tests (TSTs) and interferon gamma release assays (IGRAs) as part of tuberculosis (TB) screening among immigrants from high TB-burden countries has not been fully evaluated. Methods. Prevalence of Mycobacterium tuberculosis infection (MTBI) based on TST, or the QuantiFERON-TB Gold test (QFT-G), was determined among immigrant applicants in Vietnam bound for the United States (US); factors associated with test results and discordance were assessed; predictive values of TST and QFT-G for identifying chest radiographs (CXRs) consistent with TB were calculated. Results. Of 1,246 immigrant visa applicants studied, 57.9% were TST positive, 28.3% were QFT-G positive, and test agreement was 59.4%. Increasing age was associated with positive TST results, positive QFT-G results, TST-positive but QFT-G-negative discordance, and abnormal CXRs consistent with TB. Positive predictive values of TST and QFT-G for an abnormal CXR were 25.9% and 25.6%, respectively. Conclusion. The estimated prevalence of MTBI among US-bound visa applicants in Vietnam based on TST was twice that based on QFT-G, and 14 times higher than a TST-based estimate of MTBI prevalence reported for the general US population in 2000. QFT-G was not better than TST at predicting abnormal CXRs consistent with TB.
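    Interestingly, the reported marginals (57.9% TST+, 28.3% QFT-G+), the overall agreement (59.4%), and N = 1246 pin down the full 2x2 concordance table algebraically. The counts recovered below are approximate reconstructions from rounded percentages, not the paper's raw table.

```python
N = 1246
p_tst, p_qft, p_agree = 0.579, 0.283, 0.594

# With a = both+, b = TST+ only, c = QFT+ only, d = both-:
# a+b = p_tst*N, a+c = p_qft*N, a+d = p_agree*N, a+b+c+d = N
# => b+c = (1 - p_agree)*N and 2a = (p_tst + p_qft - (1 - p_agree))*N
both_pos = (p_tst + p_qft - (1.0 - p_agree)) / 2.0 * N
tst_only = p_tst * N - both_pos
qft_only = p_qft * N - both_pos
both_neg = N - both_pos - tst_only - qft_only

table = {k: round(v) for k, v in dict(
    both_pos=both_pos, tst_only=tst_only,
    qft_only=qft_only, both_neg=both_neg).items()}
print(table)
```

    The large TST-positive/QFT-G-negative cell recovered this way is consistent with the discordance pattern the authors associate with increasing age.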

  13. A Knowledge-Base for a Personalized Infectious Disease Risk Prediction System.

    PubMed

    Vinarti, Retno; Hederman, Lucy

    2018-01-01

    We present a knowledge-base to represent collated infectious disease risk (IDR) knowledge. The knowledge is about personal and contextual risk of contracting an infectious disease obtained from declarative sources (e.g. Atlas of Human Infectious Diseases). Automated prediction requires encoding this knowledge in a form that can produce risk probabilities (e.g. Bayesian Network - BN). The knowledge-base presented in this paper feeds an algorithm that can auto-generate the BN. The knowledge from 234 infectious diseases was compiled. From this compilation, we designed an ontology and five rule types for modelling IDR knowledge in general. The evaluation aims to assess whether the knowledge-base structure, and its application to three disease-country contexts, meets the needs of a personalized IDR prediction system. The evaluation results show that the knowledge-base serves the system's purpose: personalization of infectious disease risk prediction.

  14. Group-based guilt as a predictor of commitment to apology.

    PubMed

    McGarty, Craig; Pedersen, Anne; Leach, Colin Wayne; Mansell, Tamarra; Waller, Julie; Bliuc, Ana-Maria

    2005-12-01

    Whether the Australian government should officially apologize to Indigenous Australians for past wrongs is hotly debated in Australia. The predictors of support amongst non-Indigenous Australians for such an apology were examined in two studies. The first study (N=164) showed that group-based guilt was a good predictor of support for a government apology, as was the perception that non-Indigenous Australians were relatively advantaged. In the second study (N=116) it was found that group-based guilt was an excellent predictor of support for apology and was itself predicted by perceived non-Indigenous responsibility for harsh treatment of Indigenous people, and an absence of doubts about the legitimacy of group-based guilt. National identification was not a predictor of group-based guilt. The results of the two studies suggest that, just as individual emotions predict individual action tendencies, so group-based guilt predicts support for actions or decisions to be taken at the collective level.

  15. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shifts and stereo confidence measures, it predicts the amount of registration error in millimetres densely using a RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphical Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.

  16. Is questionnaire-based sitting time inaccurate and can it be improved? A cross-sectional investigation using accelerometer-based sitting time

    PubMed Central

    Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas

    2017-01-01

    Objectives To investigate the differences between a questionnaire-based and accelerometer-based sitting time, and develop a model for improving the accuracy of questionnaire-based sitting time for predicting accelerometer-based sitting time. Methods 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1–4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Results Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to −139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting time explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the developed prediction model on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. Conclusions This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. PMID:28093433
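    The mean-bias and limits-of-agreement statistics reported above are Bland-Altman-style summaries of the paired differences; the calculation can be sketched as follows. The paired sitting times (min/day) are synthetic, chosen only to mimic the direction of the reported bias.

```python
import statistics as st

questionnaire = [240, 300, 180, 360, 270, 210, 330, 150]
accelerometer = [430, 510, 400, 520, 470, 450, 500, 420]

diffs = [a - q for q, a in zip(questionnaire, accelerometer)]
bias = st.mean(diffs)                       # accelerometer minus questionnaire
sd = st.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
print(bias, loa)
```

    A large positive bias, as in the study, means the single-question measure substantially underestimates objectively measured sitting time.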

  17. Modeling and evaluating of surface roughness prediction in micro-grinding on soda-lime glass considering tool characterization

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Gong, Yadong; Wang, Jinsheng

    2013-11-01

    Current research on micro-grinding mainly focuses on optimal processing parameters for different materials, yet the material removal mechanism is the basis for achieving a high-quality machined surface. Therefore, a novel method for predicting surface roughness in micro-grinding of hard brittle materials, accounting for the grain protrusion topography of the micro-grinding tool, is proposed in this paper. The differences in material removal mechanism between conventional grinding and micro-grinding are analyzed. The topography of micro-grinding tools fabricated by electroplating is characterized, models of grain density and grain interval are built, and a new model for predicting micro-ground surface roughness is developed. To verify the precision and applicability of the proposed model, an orthogonal micro-grinding experiment on soda-lime glass was designed and conducted, producing machined surfaces with roughness ranging from 78 nm to 0.98 μm. The experimental roughness measurements agree closely with the predicted values, and the model variable describing size effects was determined to be 1.5×10^7 by an inverse method based on the experimental results. The proposed model uses a distribution set to describe grain densities at different protrusion heights, and the micro-grinding tools used in the experiment were characterized on this basis. The close agreement between predictions and measurements demonstrates the effectiveness of the model, which provides a theoretical and experimental reference for the material removal mechanism in micro-grinding of soda-lime glass.

  18. Regression model estimation of early season crop proportions: North Dakota, some preliminary results

    NASA Technical Reports Server (NTRS)

    Lin, K. K. (Principal Investigator)

    1982-01-01

    To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both county and crop reporting district levels. The model underpredicted the 1980 published planted acreage by 9.1 percent at the county level. At the crop reporting district level it predicted the 1980 published planted acreage almost exactly, overpredicting by just 0.92 percent.

  19. Modeling Progressive Damage Using Local Displacement Discontinuities Within the FEAMAC Multiscale Modeling Framework

    NASA Technical Reports Server (NTRS)

    Ranatunga, Vipul; Bednarcyk, Brett A.; Arnold, Steven M.

    2010-01-01

    A method for performing progressive damage modeling in composite materials and structures based on continuum level interfacial displacement discontinuities is presented. The proposed method enables the exponential evolution of the interfacial compliance, resulting in unloading of the tractions at the interface after delamination or failure occurs. In this paper, the proposed continuum displacement discontinuity model has been used to simulate failure within both isotropic and orthotropic materials efficiently and to explore the possibility of predicting the crack path, therein. Simulation results obtained from Mode-I and Mode-II fracture compare the proposed approach with the cohesive element approach and Virtual Crack Closure Techniques (VCCT) available within the ABAQUS (ABAQUS, Inc.) finite element software. Furthermore, an eccentrically loaded 3-point bend test has been simulated with the displacement discontinuity model, and the resulting crack path prediction has been compared with a prediction based on the extended finite element model (XFEM) approach.

  20. A Statistics-Based Cracking Criterion of Resin-Bonded Silica Sand for Casting Process Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Huimin; Lu, Yan; Ripplinger, Keith; Detwiler, Duane; Luo, Alan A.

    2017-02-01

    Cracking of sand molds/cores can result in many casting defects such as veining. A robust cracking criterion is needed in casting process simulation for predicting/controlling such defects. A cracking probability map, relating fracture stress to effective volume, was proposed for resin-bonded silica sand based on Weibull statistics. Three-point bending test results of sand samples were used to generate the cracking map and set up a safety line for the cracking criterion. Tensile test results confirmed the accuracy of the safety line for cracking prediction. A laboratory casting experiment was designed and carried out to predict cracking of a cup mold during aluminum casting. The stress-strain behavior and the effective volume of the cup molds were calculated using the finite element analysis code ProCAST®. Furthermore, an energy dispersive spectroscopy fractographic examination of the sand samples confirmed the binder cracking in resin-bonded silica sand.
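    The Weibull weakest-link statistics underlying such a cracking map relate failure probability to both stress and effective volume; a minimal sketch, with illustrative parameters rather than the paper's fitted values:

```python
import math

m = 8.0          # Weibull modulus (assumed)
sigma0 = 4.0     # characteristic strength, MPa, at reference volume (assumed)
V0 = 1.0         # reference effective volume

def failure_probability(sigma, V):
    """Weakest-link form: P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m)."""
    return 1.0 - math.exp(-(V / V0) * (sigma / sigma0) ** m)

# Larger effective volume -> higher cracking risk at the same stress,
# which is why the map pairs fracture stress with effective volume:
print(failure_probability(3.0, 1.0), failure_probability(3.0, 10.0))
```

    A safety line on such a map corresponds to a contour of constant acceptable failure probability in the (stress, effective volume) plane.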

  1. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    USGS Publications Warehouse

    Wildhaber, Mark L.; Lamberson, Peter J.

    2004-01-01

    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.

  2. Fatigue Life Methodology for Bonded Composite Skin/Stringer Configurations

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Paris, Isabelle L.; O'Brien, T. Kevin; Minguet, Pierre J.

    2001-01-01

    A methodology is presented for determining the fatigue life of composite structures based on fatigue characterization data and geometric nonlinear finite element (FE) analyses. To demonstrate the approach, predicted results were compared to fatigue tests performed on specimens which represented a tapered composite flange bonded onto a composite skin. In a first step, tension tests were performed to evaluate the debonding mechanisms between the flange and the skin. In a second step, a 2D FE model was developed to analyze the tests. To predict matrix cracking onset, the relationship between the tension load and the maximum principal stresses transverse to the fiber direction was determined through FE analysis. Transverse tension fatigue life data were used to generate an onset fatigue life P-N curve for matrix cracking. The resulting prediction was in good agreement with data from the fatigue tests. In a third step, a fracture mechanics approach based on FE analysis was used to determine the relationship between the tension load and the critical energy release rate. Mixed-mode energy release rate fatigue life data were used to create a fatigue life onset G-N curve for delamination. The resulting prediction was in good agreement with data from the fatigue tests. Further, the prediction curve for cumulative life to failure was generated from the previous onset fatigue life curves. The results showed that the methodology offers significant potential to predict the cumulative fatigue life of composite structures.

  3. Comparison and optimization of in silico algorithms for predicting the pathogenicity of sodium channel variants in epilepsy.

    PubMed

    Holland, Katherine D; Bouley, Thomas M; Horn, Paul S

    2017-07-01

    Variants in the neuronal voltage-gated sodium channel α-subunit genes SCN1A, SCN2A, and SCN8A are common in early onset epileptic encephalopathies and other autosomal dominant childhood epilepsy syndromes. However, in clinical practice, missense variants are often classified as variants of uncertain significance when heritability cannot be determined. Genetic testing reports often include results of computational tests to estimate pathogenicity and the frequency of that variant in population-based databases. The objective of this work was to enhance clinicians' understanding of results by (1) determining how effectively computational algorithms predict epileptogenicity of sodium channel (SCN) missense variants; (2) optimizing their predictive capabilities; and (3) determining if epilepsy-associated SCN variants are present in population-based databases. This will help clinicians better understand indeterminate SCN test results in people with epilepsy. Pathogenic, likely pathogenic, and benign variants in SCNs were identified using databases of sodium channel variants. Benign variants were also identified from population-based databases. Eight algorithms commonly used to predict pathogenicity were compared. In addition, logistic regression was used to determine if a combination of algorithms could better predict pathogenicity. Based on American College of Medical Genetics criteria, 440 variants were classified as pathogenic or likely pathogenic and 84 were classified as benign or likely benign. Twenty-eight variants previously associated with epilepsy were present in population-based gene databases. The output provided by most computational algorithms had high sensitivity but low specificity, with an accuracy of 0.52-0.77. Accuracy could be improved by adjusting the threshold for pathogenicity. Using this adjustment, the Mendelian Clinically Applicable Pathogenicity (M-CAP) algorithm had an accuracy of 0.90, and a combination of algorithms increased the accuracy to 0.92. Potentially pathogenic variants are present in population-based sources. Most computational algorithms overestimate pathogenicity; however, a weighted combination of several algorithms increased classification accuracy to >0.90. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
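The threshold-adjustment step can be illustrated on synthetic data. The beta-distributed scores below are hypothetical stand-ins for a computational algorithm's pathogenicity outputs (they are not the study's data); the sweep mirrors re-calibrating the cutoff to maximize accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pathogenicity scores in [0, 1]: pathogenic variants tend
# to score high, but benign variants also skew high, so a default 0.5
# cutoff over-calls pathogenicity (high sensitivity, low specificity).
path_scores = rng.beta(6, 2, size=440)   # pathogenic / likely pathogenic
ben_scores = rng.beta(4, 3, size=84)     # benign / likely benign

scores = np.concatenate([path_scores, ben_scores])
labels = np.concatenate([np.ones(440), np.zeros(84)])

def accuracy(threshold):
    # Fraction of variants classified correctly at this cutoff.
    return np.mean((scores >= threshold) == labels)

# Sweep candidate thresholds and keep the one maximizing accuracy.
thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=accuracy)
```

In practice the optimal cutoff would be chosen on a training set and validated on held-out variants to avoid overfitting the threshold.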

  4. Improve SSME power balance model

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1992-01-01

    Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.

  5. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  6. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of the sensor can be effectively compensated through prediction. To address the large random drift error of MEMS (Micro Electro Mechanical System) gyroscopes, an improved learning algorithm for the Radial Basis Function (RBF) Neural Network (NN) based on K-means clustering and Orthogonal Least-Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then refines the candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007 °/s and a prediction time of 2.4169e-006 s.
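The center-selection-plus-fit structure can be sketched as follows. The drift signal is synthetic, the K-means step is a simple 1-D implementation, and plain least squares stands in for the OLS center-pruning step, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical drift record: a slow periodic component plus noise,
# standing in for MEMS gyroscope random drift (not real sensor data).
t = np.linspace(0.0, 1.0, 200)
drift = 0.5 * np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(200)

def kmeans_1d(x, k, iters=20):
    # K-means on scalar inputs: assign points to nearest center,
    # then move each center to the mean of its assigned points.
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean()
    return centers

centers = kmeans_1d(t, k=10)
width = 0.1  # shared Gaussian width, an illustrative choice

# Gaussian RBF design matrix; output weights by least squares.
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, drift, rcond=None)
pred = Phi @ w
rmse = np.sqrt(np.mean((pred - drift) ** 2))
```

The residual error should approach the noise floor of the synthetic signal; the paper's OLS stage would additionally discard centers that contribute little to the fit.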

  7. Point-Mass Aircraft Trajectory Prediction Using a Hierarchical, Highly-Adaptable Software Design

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Woods, Sharon E.; Wing, David J.

    2017-01-01

    A highly adaptable and extensible method for predicting four-dimensional trajectories of civil aircraft has been developed. This method, Behavior-Based Trajectory Prediction, is based on taxonomic concepts developed for the description and comparison of trajectory prediction software. A hierarchical approach to the "behavioral" layer of a point-mass model of aircraft flight, a clear separation between the "behavioral" and "mathematical" layers of the model, and an abstraction of the methods of integrating differential equations in the "mathematical" layer have been demonstrated to support aircraft models of different types (in particular, turbojet vs. turboprop aircraft) using performance models at different levels of detail and in different formats, and promise to be easily extensible to other aircraft types and sources of data. The resulting trajectories predict location, altitude, lateral and vertical speeds, and fuel consumption along the flight path of the subject aircraft accurately and quickly, accounting for local conditions of wind and outside air temperature. The Behavior-Based Trajectory Prediction concept was implemented in NASA's Traffic Aware Planner (TAP) flight-optimizing cockpit software application.
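The separation between the "behavioral" and "mathematical" layers can be sketched as two functions exchanging a dictionary of rates: the behavior supplies rates of change, and a swappable integrator advances the state. The climb numbers below are illustrative only, not drawn from any aircraft performance model:

```python
def euler_step(state, rates, dt):
    # "Mathematical" layer: a generic explicit-Euler update.  It knows
    # nothing about flight; swapping in RK4 would not touch the
    # behavioral layer.
    new = dict(state)
    for key, rate in rates.items():
        new[key] += dt * rate
    return new

def climb_behavior(state):
    # "Behavioral" layer: a hypothetical constant-rate climb segment.
    # Rates: ground speed 120 m/s, climb rate 8 m/s, fuel flow 0.9 kg/s.
    return {"x": 120.0, "alt": 8.0, "fuel": 0.9}

# Integrate the climb segment until a 3000 m level-off altitude.
state = {"t": 0.0, "x": 0.0, "alt": 0.0, "fuel": 0.0}
dt = 1.0
while state["alt"] < 3000.0:
    rates = climb_behavior(state)
    rates["t"] = 1.0
    state = euler_step(state, rates, dt)
```

A full predictor would chain such behavior segments (climb, cruise, descent) and let each query a performance model for its rates, which is where aircraft-type-specific detail enters.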

  8. Thermodynamic heuristics with case-based reasoning: combined insights for RNA pseudoknot secondary structure.

    PubMed

    Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni

    2011-08-01

    The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.

  9. Evaluation of a computational model to predict elbow range of motion

    PubMed Central

    Nishiwaki, Masao; Johnson, James A.; King, Graham J. W.; Athwal, George S.

    2014-01-01

    Computer models capable of predicting elbow flexion and extension range of motion (ROM) limits would be useful for assisting surgeons in improving the outcomes of surgical treatment of patients with elbow contractures. A simple and robust computer-based model was developed that predicts elbow joint ROM using bone geometries calculated from computed tomography image data. The model assumes a hinge-like flexion-extension axis, and that elbow passive ROM limits can be based on terminal bony impingement. The model was validated against experimental results with a cadaveric specimen, and was able to predict the flexion and extension limits of the intact joint to within 0° and 3°, respectively. The model was also able to predict the flexion and extension limits to within 1° and 2°, respectively, when simulated osteophytes were inserted into the joint. Future studies based on this approach will be used for the prediction of elbow flexion-extension ROM in patients with primary osteoarthritis to help identify motion-limiting hypertrophic osteophytes, and will eventually permit real-time computer-assisted navigated excisions. PMID:24841799

  10. Probability Based hERG Blocker Classifiers.

    PubMed

    Wang, Zhi; Mussa, Hamse Y; Lowe, Robert; Glen, Robert C; Yan, Aixia

    2012-09-01

    The US Food and Drug Administration (FDA) requires in vitro human ether-à-go-go-related gene (hERG) ion channel affinity tests for all drug candidates prior to clinical trials. In this study, probabilistic methods were employed to develop prediction models for hERG inhibition, which differ from traditional QSAR models that are mainly based on supervised 'hard point' (HP) classification approaches giving 'yes/no' answers. The obtained models can 'ascertain' whether or not a given set of compounds can block hERG ion channels. The results presented indicate that the proposed probabilistic method can be a valuable tool for ranking compounds with respect to their potential cardiotoxicity and is promising for other toxic property predictions. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    NASA Astrophysics Data System (ADS)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    To address the difficulty of predicting sintered ore quality, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. As the process is simplified in the mechanism models, they cannot describe its high nonlinearity, so errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
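The mechanism-model-plus-ELM structure can be sketched on a synthetic process: a simplified "mechanism" model captures the linear trend, and an ELM (random hidden layer, output weights by least squares) learns the residual. The process and all numbers are hypothetical, and the paper's time weighting is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical process: the true quality index has a nonlinear term
# that the simplified mechanism model misses.
x = rng.uniform(0.0, 1.0, (300, 1))
true_y = 2.0 * x[:, 0] + 0.5 * np.sin(4.0 * x[:, 0])
mech_y = 2.0 * x[:, 0]          # simplified mechanism model
residual = true_y - mech_y      # error the ELM must compensate

# Extreme learning machine: random, untrained hidden layer; only the
# output weights are solved for, by ordinary least squares.
n_hidden = 40
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(x @ W + b)
beta, *_ = np.linalg.lstsq(H, residual, rcond=None)

hybrid_pred = mech_y + H @ beta
rmse_mech = np.sqrt(np.mean((mech_y - true_y) ** 2))
rmse_hybrid = np.sqrt(np.mean((hybrid_pred - true_y) ** 2))
```

The hybrid prediction should beat the mechanism model alone whenever the residual has learnable structure, which is the premise of the compensation scheme.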

  12. A Probabilistic Model of Meter Perception: Simulating Enculturation.

    PubMed

    van der Weij, Bastiaan; Pearce, Marcus T; Honing, Henkjan

    2017-01-01

    Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.

  13. Enhanced NMR Discrimination of Pharmaceutically Relevant Molecular Crystal Forms through Fragment-Based Ab Initio Chemical Shift Predictions.

    PubMed

    Hartman, Joshua D; Day, Graeme M; Beran, Gregory J O

    2016-11-02

    Chemical shift prediction plays an important role in the determination or validation of crystal structures with solid-state nuclear magnetic resonance (NMR) spectroscopy. One of the fundamental theoretical challenges lies in discriminating variations in chemical shifts resulting from different crystallographic environments. Fragment-based electronic structure methods provide an alternative to the widely used plane wave gauge-including projector augmented wave (GIPAW) density functional technique for chemical shift prediction. Fragment methods allow hybrid density functionals to be employed routinely in chemical shift prediction, and we have recently demonstrated appreciable improvements in the accuracy of the predicted shifts when using the hybrid PBE0 functional instead of generalized gradient approximation (GGA) functionals like PBE. Here, we investigate the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone. We demonstrate that the use of the hybrid density functional instead of a GGA provides both higher accuracy in the chemical shifts and increased discrimination among the different crystallographic environments. Finally, these results also provide compelling evidence for the transferability of the linear regression parameters mapping predicted chemical shieldings to chemical shifts that were derived in an earlier study.

  14. Enhanced NMR Discrimination of Pharmaceutically Relevant Molecular Crystal Forms through Fragment-Based Ab Initio Chemical Shift Predictions

    PubMed Central

    2016-01-01

    Chemical shift prediction plays an important role in the determination or validation of crystal structures with solid-state nuclear magnetic resonance (NMR) spectroscopy. One of the fundamental theoretical challenges lies in discriminating variations in chemical shifts resulting from different crystallographic environments. Fragment-based electronic structure methods provide an alternative to the widely used plane wave gauge-including projector augmented wave (GIPAW) density functional technique for chemical shift prediction. Fragment methods allow hybrid density functionals to be employed routinely in chemical shift prediction, and we have recently demonstrated appreciable improvements in the accuracy of the predicted shifts when using the hybrid PBE0 functional instead of generalized gradient approximation (GGA) functionals like PBE. Here, we investigate the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone. We demonstrate that the use of the hybrid density functional instead of a GGA provides both higher accuracy in the chemical shifts and increased discrimination among the different crystallographic environments. Finally, these results also provide compelling evidence for the transferability of the linear regression parameters mapping predicted chemical shieldings to chemical shifts that were derived in an earlier study. PMID:27829821

  15. [Prediction of the side-cut product yield of atmospheric/vacuum distillation unit by NIR crude oil rapid assay].

    PubMed

    Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi

    2014-10-01

    In the present paper, a method to predict the side-cut product yield of the atmospheric and vacuum distillation unit was developed based on the near infrared (NIR) rapid crude oil evaluation technique, combined with H/CAMS software. Firstly, an NIR spectroscopy method for rapidly determining the true boiling point of crude oil was developed. With a commercially available crude oil spectroscopy database and experimental tests from Guangxi Petrochemical Company, a calibration model was established and a topological method was used as the calibration. The model can be employed to predict the true boiling point of crude oil. Secondly, the true boiling point based on the NIR rapid assay was converted to the side-cut product yield of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted yield and the actual yield of distillation products for naphtha, diesel, wax and residual oil were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict the side-cut product yield accurately. The NIR analytic method for predicting yield has the advantages of fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.

  16. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    PubMed

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.
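The empirical side of such studies commonly regresses AOp against the cube-root scaled distance R/Q^(1/3), where R is the monitoring distance and Q the maximum charge per delay. A hedged sketch of that log-linear fit on synthetic data follows; the power-law constants and all measurements are invented, not the Malaysian site values or the paper's "generalized predictor equation" coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monitoring campaign: 95 blasts, distance R (m) and maximum
# charge per delay Q (kg), mirroring the measured influential parameters.
R = rng.uniform(100.0, 800.0, 95)
Q = rng.uniform(50.0, 200.0, 95)
scaled = R / Q ** (1.0 / 3.0)

# Generate AOp (dB) from an assumed power law AOp = a * scaled**b with
# multiplicative noise, then recover the site constants by ordinary
# least squares in log-log coordinates.
aop = 180.0 * scaled ** (-0.12) * np.exp(0.01 * rng.standard_normal(95))
A = np.column_stack([np.ones_like(scaled), np.log(scaled)])
coef, *_ = np.linalg.lstsq(A, np.log(aop), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

The ICA-ANN model in the paper replaces this fixed functional form with a trained network, which is why it can outperform the purely empirical equation.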

  17. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships.

    PubMed

    Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong

    2010-01-18

    The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.

  18. Ares I-X Ascent Base Environments

    NASA Technical Reports Server (NTRS)

    Mobley, B. L.; Bender, R. L.; Canabal, F.; Smith, Sheldon D.

    2011-01-01

    Plume induced base heating environments were measured during the flight of the NASA Constellation Ares I-X developmental launch vehicle, successfully flown on October 28, 2009. The Ares I-X first stage is a four-segment Space Shuttle derived booster with a base consisting of a flared aft skirt, deceleration and tumble motors, and a thermal curtain surrounding the first stage 7.2 area ratio nozzle. Developmental Flight Instrumentation (DFI) consisted of radiometers, calorimeters, pressure transducers and gas temperature probes installed on the aft skirt and nozzle to measure the base environments. In addition, thermocouples were also installed between the layers of the flexible thermal curtain to provide insight into the curtain response to the base environments and to assist in understanding curtain failure during reentry. Plume radiation environment predictions were generated by the Reverse Monte Carlo (RMC) code, and the convective base heating predictions utilized heritage MSFC empirical methods. These predictions were compared to the DFI data and results from the flight videography. Radiation predictions agreed with the flight measured data early in flight, but gauge failures prevented high altitude comparisons. The convective environment comparisons demonstrated the need to improve the prediction methodology, particularly for low altitude, local plume recirculation. The convective comparisons showed relatively good agreement at altitudes greater than 50,000 feet.

  19. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    NASA Astrophysics Data System (ADS)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoids content in green tea rapidly and nondestructively. This paper presented a new and efficient spectral intervals selection method called clustering based partial least square (CL-PLS), which selected informative wavelengths by combining clustering concepts and partial least square (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were carried out to cluster full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection partial least square (VIP-PLS), selectivity ratio partial least square (SR-PLS), interval partial least square (iPLS) models and the full spectra PLS model were investigated and the results were compared. The results showed that CL-PLS presented the best result for flavonoids prediction using synchronous fluorescence spectra.
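The cluster-then-regress idea can be sketched in simplified form: k-means groups wavelengths by their response profiles across samples, and a one-component PLS model is fitted per cluster. The spectra below are synthetic, only one wavelength band is informative by construction, and the paper's gradual accumulation of clusters into a final CL-PLS model is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic spectra: 60 samples x 100 wavelengths; only wavelengths
# 20-39 carry information about the (hypothetical) flavonoid content y.
n, p = 60, 100
y = rng.uniform(0.0, 1.0, n)
X = 0.1 * rng.standard_normal((n, p))
X[:, 20:40] += y[:, None]

def kmeans_cols(data, k, iters=20, seed=0):
    # Generic k-means over row vectors (here: wavelength profiles).
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = data[assign == j].mean(0)
    return assign

assign = kmeans_cols(X.T, k=4)  # cluster wavelengths, not samples

def pls1_r(cols):
    # One-component PLS (single NIPALS step): score t = X w, then
    # regress y on t; returns the correlation R of fit vs. reference.
    Xm = X[:, cols] - X[:, cols].mean(0)
    ym = y - y.mean()
    w = Xm.T @ ym
    w /= np.linalg.norm(w)
    t = Xm @ w
    b = (t @ ym) / (t @ t)
    fitted = y.mean() + b * t
    return np.corrcoef(fitted, y)[0, 1]

rs = [pls1_r(np.flatnonzero(assign == j)) for j in range(4) if np.any(assign == j)]
best_r = max(rs)
```

The cluster containing the informative band should achieve a much higher R than the noise-only clusters, which is the signal CL-PLS uses to rank and accumulate clusters.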

  20. Structural features of microRNA (miRNA) precursors and their relevance to miRNA biogenesis and small interfering RNA/short hairpin RNA design.

    PubMed

    Krol, Jacek; Sobczak, Krzysztof; Wilczynska, Urszula; Drath, Maria; Jasinska, Anna; Kaczynska, Danuta; Krzyzosiak, Wlodzimierz J

    2004-10-01

    We have established the structures of 10 human microRNA (miRNA) precursors using biochemical methods. Eight of these structures turned out to be different from those that were computer-predicted. The differences localized in the terminal loop region and at the opposite side of the precursor hairpin stem. We have analyzed the features of these structures from the perspectives of miRNA biogenesis and active strand selection. We demonstrated the different thermodynamic stability profiles for pre-miRNA hairpins harboring miRNAs at their 5'- and 3'-sides and discussed their functional implications. Our results showed that miRNA prediction based on predicted precursor structures may give ambiguous results, and the success rate is significantly higher for the experimentally determined structures. On the other hand, the differences between the predicted and experimentally determined structures did not affect the stability of termini produced through "conceptual dicing." This result confirms the value of thermodynamic analysis based on mfold as a predictor of strand selection by the RNA-induced silencing complex (RISC).

  1. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra.

    PubMed

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-03-13

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoids content in green tea rapidly and nondestructively. This paper presented a new and efficient spectral intervals selection method called clustering based partial least square (CL-PLS), which selected informative wavelengths by combining clustering concepts and partial least square (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were carried out to cluster full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection partial least square (VIP-PLS), selectivity ratio partial least square (SR-PLS), interval partial least square (iPLS) models and the full spectra PLS model were investigated and the results were compared. The results showed that CL-PLS presented the best result for flavonoids prediction using synchronous fluorescence spectra.

  2. Predicting the spatial distribution of soil profile in Adapazari/Turkey by artificial neural networks using CPT data

    NASA Astrophysics Data System (ADS)

    Arel, Ersin

    2012-06-01

    The infamous soils of Adapazari, Turkey, that failed extensively during the 46-s long magnitude 7.4 earthquake in 1999 have since been the subject of a research program. Boreholes, piezocone soundings and voluminous laboratory testing have enabled researchers to apply sophisticated methods to determine the soil profiles in the city using the existing database. This paper describes the use of the artificial neural network (ANN) model to predict the complex soil profiles of Adapazari, based on cone penetration test (CPT) results. More than 3236 field CPT readings have been collected from 117 soundings spread over an area of 26 km2. An attempt has been made to develop the ANN model using multilayer perceptrons trained with a feed-forward back-propagation algorithm. The results show that the ANN model is fairly accurate in predicting complex soil profiles. Soil identification using CPT test results has principally been based on the Robertson charts. Applying neural network systems using the chart offers a powerful and rapid route to reliable prediction of the soil profiles.
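A multilayer perceptron trained with feed-forward back-propagation, as used in the study, can be sketched on a toy stand-in for CPT features. The two inputs loosely play the roles of cone resistance and friction ratio, and the two-class "soil type" label and its decision rule are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: 400 "soundings" with two normalized CPT-like features and a
# hypothetical binary soil-type label (separable by x0 > x1).
X = rng.uniform(0.0, 1.0, (400, 2))
y = (X[:, 0] > X[:, 1]).astype(float).reshape(-1, 1)

# One hidden layer (tanh), sigmoid output, full-batch gradient descent
# on the cross-entropy loss.
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)          # forward pass, output layer
    d_out = out - y                     # cross-entropy output gradient
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # back-propagated to hidden
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

acc = np.mean((out > 0.5) == y)
```

The real model maps 3236 CPT readings to soil classes in the spirit of the Robertson charts; the mechanics of the feed-forward pass and back-propagated updates are the same.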

  3. Resting energy expenditure prediction in recreational athletes of 18-35 years: confirmation of Cunningham equation and an improved weight-based alternative.

    PubMed

    ten Haaf, Twan; Weijs, Peter J M

    2014-01-01

    Resting energy expenditure (REE) is expected to be higher in athletes because of their relatively high fat-free mass (FFM). Therefore, a REE predictive equation specific to recreational athletes may be required. The aim of this study was to validate existing REE predictive equations and to develop a new recreational-athlete-specific equation. 90 (53 M, 37 F) adult athletes, exercising on average 9.1 ± 5.0 hours a week and 5.0 ± 1.8 times a week, were included. REE was measured using indirect calorimetry (Vmax Encore n29); FFM and FM were measured using air displacement plethysmography. Multiple linear regression analysis was used to develop new FFM-based and weight-based REE predictive equations. The percentage of accurate predictions (within 10% of measured REE), percentage bias, root mean square error, and limits of agreement were calculated. Results: The Cunningham equation, the new weight-based equation REE (kJ/d) = 49.940 * weight (kg) + 2459.053 * height (m) - 34.014 * age (y) + 799.257 * sex (M = 1, F = 0) + 122.502, and the new FFM-based equation REE (kJ/d) = 95.272 * FFM (kg) + 2026.161 performed equally well. De Lorenzo's equation predicted REE less accurately, but better than the other generally used REE predictive equations. Harris-Benedict, WHO, Schofield, Mifflin and Owen all showed less than 50% accuracy. For a population of (Dutch) recreational athletes, REE can be accurately predicted with the existing Cunningham equation. Since body composition measurement is not always possible, and other generally used equations fail, the new weight-based equation is advised for use in sports nutrition.
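The two new equations reported above can be applied directly; a quick sketch (the example subject is hypothetical):

```python
def ree_weight_based(weight_kg, height_m, age_y, male):
    """New weight-based REE equation from the abstract, in kJ/day."""
    return (49.940 * weight_kg + 2459.053 * height_m
            - 34.014 * age_y + 799.257 * (1 if male else 0) + 122.502)

def ree_ffm_based(ffm_kg):
    """New FFM-based REE equation from the abstract, in kJ/day."""
    return 95.272 * ffm_kg + 2026.161

# Hypothetical recreational athlete: 70 kg, 1.80 m, 25-year-old male.
ree = ree_weight_based(70, 1.80, 25, male=True)   # ~7993.5 kJ/day
```

Dividing by 4.184 converts the result to kcal/day (about 1910 kcal for this example).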

  4. Prediction of a Therapeutic Dose for Buagafuran, a Potent Anxiolytic Agent by Physiologically Based Pharmacokinetic/Pharmacodynamic Modeling Starting from Pharmacokinetics in Rats and Human.

    PubMed

    Yang, Fen; Wang, Baolian; Liu, Zhihao; Xia, Xuejun; Wang, Weijun; Yin, Dali; Sheng, Li; Li, Yan

    2017-01-01

    Physiologically based pharmacokinetic (PBPK)/pharmacodynamic (PD) models can contribute to animal-to-human extrapolation and therapeutic dose prediction. Buagafuran is a novel anxiolytic agent, and phase I clinical trials of buagafuran have been completed. In this paper, a potentially effective dose of 30 mg t.i.d. for buagafuran in humans was estimated based on the human brain concentration predicted by PBPK/PD modeling. The software GastroPlus™ was used to build the PBPK/PD model for buagafuran in rats, which related the brain tissue concentration of buagafuran to the number of open-arm entries in the elevated plus-maze pharmacological model. Buagafuran concentrations in human plasma were fitted and brain tissue concentrations were predicted using a human PBPK model in which the predicted plasma profiles were in good agreement with observations. The results provided supportive data for the rational use of buagafuran in the clinic.

  5. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    PubMed Central

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model, and then investigate the structure and learning algorithm of the fuzzy neural network. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896
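The subtractive-clustering step, which fixes the number of fuzzy rules and seeds fuzzy c-means, can be sketched as below. The radii and stopping ratio are illustrative Chiu-style defaults, not the paper's settings, and the follow-on fuzzy c-means refinement is omitted.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=0.75, eps=0.15):
    """Chiu-style subtractive clustering: returns candidate cluster centers.

    Each data point gets a 'potential' from the density of its neighbors;
    the highest-potential point becomes a center, nearby potential is
    suppressed, and the loop repeats until the best remaining potential
    falls below eps times the first one.
    """
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    P = np.exp(-alpha * d2).sum(axis=1)          # potential of each point
    centers, p_first = [], P.max()
    while P.max() >= eps * p_first:
        k = int(P.argmax())
        centers.append(X[k])
        P = P - P[k] * np.exp(-beta * d2[:, k])  # suppress nearby potential
    return np.array(centers)

# Two tight blobs of "demand feature" vectors should yield two centers,
# i.e., two fuzzy rules in the ESFCFNN scheme described above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.03, (30, 2)),
               rng.normal(1.0, 0.03, (30, 2))])
centers = subtractive_clustering(X)
```

The number of centers found here is exactly the rule count the fuzzy neural network would be initialized with.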

  6. A hadoop-based method to predict potential effective drug combination.

    PubMed

    Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing

    2014-01-01

    Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented the data preprocessing and the support vector machine and naïve Bayes classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drug combinations as the number of combinations grows exponentially in the future. The source code and datasets are available upon request.
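The MapReduce structure can be illustrated with a single-process sketch: the mapper emits one record per drug pair and the reducer aggregates by key, which is what Hadoop distributes across nodes. The signatures and the cosine-similarity feature below are hypothetical stand-ins for the paper's gene-expression features and SVM/naïve Bayes scoring.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical gene-expression signatures for three drugs.
signatures = {
    "drugA": [1.2, -0.8, 0.5],
    "drugB": [1.0, -0.9, 0.4],
    "drugC": [-1.1, 0.7, -0.6],
}

def mapper(pair):
    """Emit (pair, cosine similarity): a stand-in for per-pair feature scoring."""
    sa, sb = signatures[pair[0]], signatures[pair[1]]
    dot = sum(a * b for a, b in zip(sa, sb))
    na = sum(a * a for a in sa) ** 0.5
    nb = sum(b * b for b in sb) ** 0.5
    return pair, dot / (na * nb)

def reduce_all(mapped):
    """Group mapped records by key and aggregate, like Hadoop's shuffle/reduce."""
    grouped = defaultdict(list)
    for key, value in mapped:
        grouped[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in grouped.items()}

mapped = [mapper(p) for p in combinations(sorted(signatures), 2)]
scores = reduce_all(mapped)
```

Because mapper calls are independent, the pair-scoring work parallelizes trivially, which is the scalability argument the abstract makes.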

  7. A Hadoop-Based Method to Predict Potential Effective Drug Combination

    PubMed Central

    Xiong, Yi; Xu, Qian; Wei, Dongqing

    2014-01-01

    Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented the data preprocessing and the support vector machine and naïve Bayes classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drug combinations as the number of combinations grows exponentially in the future. The source code and datasets are available upon request. PMID:25147789

  8. Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.

    PubMed

    Decker, Gifford Z; Thomson, Scott L

    2007-05-01

    The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.
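The fixed-ratio separation models compared above can be sketched in a few lines of steady Bernoulli flow; the glottal geometry, the 1.2 area ratio, and the ambient downstream pressure are illustrative assumptions, not the study's configurations.

```python
import numpy as np

# Steady Bernoulli sketch of intraglottal pressure with flow separation at a
# fixed ratio of the minimum glottal area.
rho = 1.2                                   # air density, kg/m^3
p_sub = 800.0                               # subglottal gauge pressure, Pa
x = np.linspace(0.0, 1.0, 101)              # normalized position along the glottis
area = 0.2 + 3.2 * (x - 0.5) ** 2           # channel area with a minimum at x = 0.5

a_min = area.min()
i_min = int(area.argmin())
sep_ratio = 1.2                             # separate where area/a_min exceeds this
i_sep = i_min + int(np.where(area[i_min:] > sep_ratio * a_min)[0][0])
a_sep = area[i_sep]

# Bernoulli from the stagnant subglottal reservoir (p_sub) to the separation
# point (ambient, p = 0) fixes the volume flow rate through the channel.
v_sep = np.sqrt(2.0 * p_sub / rho)
Q = v_sep * a_sep
v = Q / area
p = p_sub - 0.5 * rho * v ** 2              # intraglottal pressure upstream
p[i_sep:] = 0.0                             # jet region: ambient after separation
```

The pressure dips below ambient near the minimum area; that suction on the vocal fold surface is what such quasi-one-dimensional models feed into the structural solver.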

  9. Control surface hinge moment prediction using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Simpson, Christopher David

    The following research determines the feasibility of predicting control surface hinge moments using various computational methods. A detailed analysis is conducted using a 2D GA(W)-1 airfoil with a 20% plain flap. Simple hinge moment prediction methods are tested, including empirical Datcom relations and XFOIL. Steady-state and time-accurate turbulent, viscous, Navier-Stokes solutions are computed using Fun3D. Hinge moment coefficients are computed. Mesh construction techniques are discussed. An adjoint-based mesh adaptation case is also evaluated. An NACA 0012 45-degree swept horizontal stabilizer with a 25% elevator is also evaluated using Fun3D. Results are compared with experimental wind-tunnel data obtained from references. Finally, the costs of various solution methods are estimated. Results indicate that while a steady-state Navier-Stokes solution can accurately predict control surface hinge moments for small angles of attack and deflection angles, a time-accurate solution is necessary to accurately predict hinge moments in the presence of flow separation. The ability to capture the unsteady vortex shedding behavior present in moderate to large control surface deflections is found to be critical to hinge moment prediction accuracy. Adjoint-based mesh adaptation is shown to give hinge moment predictions similar to a globally-refined mesh for a steady-state 2D simulation.

  10. Accurate disulfide-bonding network predictions improve ab initio structure prediction of cysteine-rich proteins

    PubMed Central

    Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin

    2015-01-01

    Motivation: Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for proteins with higher-order connections, e.g., >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified, and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve ab initio structure modeling for cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435

  11. Estimation of the Driving Style Based on the Users’ Activity and Environment Influence

    PubMed Central

    Sysoev, Mikhail; Kos, Andrej; Guna, Jože; Pogačnik, Matevž

    2017-01-01

    New models and methods have been designed to predict the influence of the user's environment and activity information on driving style in standard automotive environments. For these purposes, an experiment was conducted providing two types of analysis: (i) the evaluation of a self-assessment of the driving style; (ii) the prediction of an aggressive driving style based on drivers' activity and environment parameters. Sixty-seven hours of driving data from 10 drivers were collected for analysis in this study. The new parameters used in the experiment are the car door opening and closing manner, which were applied to improve the prediction accuracy. An Android application called Sensoric was developed to collect low-level smartphone data about the users' activity. The driving style was predicted from the user's environment and activity data collected before driving. The prediction was tested against the actual driving style, calculated from objective driving data. The prediction has shown encouraging results, with precision values ranging from 0.727 up to 0.909 for aggressive driving recognition. The obtained results lend support to the hypothesis that the user's environment and activity data can be used to predict an aggressive driving style in advance, before driving starts. PMID:29065476

  12. Prediction of Safety Stock Using Fuzzy Time Series (FTS) and Technology of Radio Frequency Identification (RFID) for Stock Control at Vendor Managed Inventory (VMI)

    NASA Astrophysics Data System (ADS)

    Mashuri, Chamdan; Suryono; Suseno, Jatmiko Endro

    2018-02-01

    This research predicted safety stock using Fuzzy Time Series (FTS) and Radio Frequency Identification (RFID) technology for stock control in Vendor Managed Inventory (VMI). Well-controlled stock influences company revenue and minimizes cost. The paper discusses an information system for safety stock prediction developed in the PHP programming language. Input data consisted of demand figures acquired automatically, online, and in real time using RFID, then sent to a server and stored in an online database. The acquired data were then forecast with the FTS algorithm by defining the universe of discourse and determining the fuzzy sets. The fuzzy-set step was followed by partitioning the universe of discourse to reach the final forecasting step. Prediction results were displayed on the dashboard of the developed information system. Using 60 demand records, the predicted demand was 450.331 and the safety stock was 135.535. The prediction was validated against error deviation using a Mean Square Percent Error of 15%, showing that FTS is good enough at predicting demand and safety stock for stock control. For deeper analysis, the researchers varied the demand data and the universe of discourse U in the FTS to obtain various results based on the test data used.
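The FTS steps described above (define the universe of discourse, determine fuzzy sets, forecast) can be sketched as a first-order, Chen-style model; the interval count, equal-width partition, and demand series below are illustrative choices, not the paper's data.

```python
import numpy as np

def fts_forecast(series, n_intervals=7):
    """Minimal first-order fuzzy time series forecast (Chen-style).

    Partitions the universe of discourse into equal intervals, fuzzifies
    each observation to its interval, builds first-order transition rules,
    and forecasts the next value as the mean midpoint of the rule
    consequents for the current state.
    """
    lo, hi = min(series), max(series)
    pad = 0.1 * (hi - lo)                    # widen the universe of discourse
    edges = np.linspace(lo - pad, hi + pad, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    states = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_intervals - 1)
    rules = {}
    for a, b in zip(states[:-1], states[1:]):
        rules.setdefault(int(a), set()).add(int(b))
    last = int(states[-1])
    if last in rules:
        return float(np.mean([mids[s] for s in rules[last]]))
    return float(mids[last])                 # no rule for this state: stay put

demand = [120, 132, 125, 140, 138, 150, 147, 158, 162, 155]
next_demand = fts_forecast(demand)
```

A safety-stock figure would then be derived from such forecasts together with lead-time and service-level assumptions, which this sketch leaves out.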

  13. LaSVM-based big data learning system for dynamic prediction of air pollution in Tehran.

    PubMed

    Ghaemi, Z; Alimohammadi, A; Farnaghi, M

    2018-04-20

    Due to the critical impacts of air pollution, prediction and monitoring of air quality in urban areas are important tasks. However, because of the dynamic nature and high spatio-temporal variability, prediction of air pollutant concentrations is a complex spatio-temporal problem. The distribution of pollutant concentrations is influenced by various factors such as historical pollution data and weather conditions. Conventional methods such as the support vector machine (SVM) or artificial neural networks (ANN) show some deficiencies when huge amounts of streaming data have to be analyzed for urban air pollution prediction. In order to overcome the limitations of the conventional methods and improve the performance of urban air pollution prediction in Tehran, a spatio-temporal system is designed using a LaSVM-based online algorithm. Pollutant concentration and meteorological data along with geographical parameters are continually fed to the developed online forecasting system. The performance of the system is evaluated by comparing the prediction results for the Air Quality Index (AQI) with those of a traditional SVM algorithm. Results show an outstanding increase in speed by the online algorithm while preserving the accuracy of the SVM classifier. Comparison of the hourly predictions for the next 24 h with the measured pollution data from Tehran's pollution monitoring stations shows an overall accuracy of 0.71, a root mean square error of 0.54, and a coefficient of determination of 0.81. These results are indicators of the practical usefulness of the online algorithm for real-time spatial and temporal prediction of urban air quality.
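The key property of an online learner like LaSVM is that each new observation updates the model immediately instead of triggering a full retrain. A minimal stand-in (hinge-loss SGD on a linear model, not true LaSVM) illustrates the streaming update pattern; the features, labels, and hyperparameters are synthetic assumptions.

```python
import numpy as np

class OnlineLinearClassifier:
    """Tiny online classifier: hinge-loss SGD, one update per streamed sample."""
    def __init__(self, n_features, lr=0.1, reg=0.01):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr, self.reg = lr, reg

    def partial_fit(self, x, y):            # y in {-1, +1}
        margin = y * (self.w @ x + self.b)
        if margin < 1.0:                    # hinge loss active: move toward sample
            self.w += self.lr * (y * x - self.reg * self.w)
            self.b += self.lr * y
        else:                               # only shrink weights (regularization)
            self.w -= self.lr * self.reg * self.w

    def predict(self, x):
        return 1 if self.w @ x + self.b >= 0 else -1

# Stream of toy (pollution-feature, AQI-class) pairs: linearly separable.
rng = np.random.default_rng(0)
clf = OnlineLinearClassifier(n_features=2)
for _ in range(500):
    x = rng.normal(size=2)
    y = 1 if x[0] + x[1] > 0 else -1
    clf.partial_fit(x, y)

# Evaluate on fresh points from the same distribution.
test_rng = np.random.default_rng(1)
hits = 0
for _ in range(200):
    x = test_rng.normal(size=2)
    y = 1 if x[0] + x[1] > 0 else -1
    hits += (clf.predict(x) == y)
accuracy = hits / 200
```

The per-sample cost is constant, which is why such learners keep up with a continuous sensor feed where batch SVM retraining would not.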

  14. Analysis of spatial distribution of land cover maps accuracy

    NASA Astrophysics Data System (ADS)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as the explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, each 10 km × 10 km, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements in AUC of 0.15 or greater.
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
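The per-pixel accuracy interpolation idea (spatial domain, Gaussian weighting) can be sketched as follows; the kernel form, bandwidth, and toy sample are illustrative assumptions, not the paper's tested configurations.

```python
import numpy as np

def accuracy_surface(sample_xy, correct, grid_xy, bandwidth=1.0):
    """Gaussian-kernel interpolation of per-pixel accuracy (spatial domain).

    sample_xy: (n, 2) test-sample coordinates; correct: (n,) 0/1 agreement
    between map and reference; grid_xy: (m, 2) locations to predict.
    Returns a value in [0, 1] per grid point: a wall-to-wall accuracy map.
    """
    d2 = ((grid_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * bandwidth**2))   # nearby samples weigh more
    return (w * correct).sum(axis=1) / w.sum(axis=1)

# Toy map: classification is reliable on the left half, error-prone on the right.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
correct = np.where(pts[:, 0] < 5, rng.random(200) < 0.95, rng.random(200) < 0.55)
grid = np.array([[2.0, 5.0], [8.0, 5.0]])
acc = accuracy_surface(pts, correct.astype(float), grid)
```

Using spectral-band values instead of (x, y) coordinates as `sample_xy` turns this same computation into the spectral-domain variant the study compares against.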

  15. TH-E-BRF-05: Comparison of Survival-Time Prediction Models After Radiotherapy for High-Grade Glioma Patients Based On Clinical and DVH Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magome, T; Haga, A; Igaki, H

    Purpose: Although many outcome prediction models based on dose-volume information have been proposed, it is well known that the prognosis may also be affected by multiple clinical factors. The purpose of this study is to predict the survival time after radiotherapy for high-grade glioma patients based on features including clinical and dose-volume histogram (DVH) information. Methods: A total of 35 patients with high-grade glioma (oligodendroglioma: 2, anaplastic astrocytoma: 3, glioblastoma: 30) were selected in this study. All patients were treated with a prescribed dose of 30–80 Gy after surgical resection or biopsy from 2006 to 2013 at The University of Tokyo Hospital. All cases were randomly separated into a training dataset (30 cases) and a test dataset (5 cases). The survival time after radiotherapy was predicted based on a multiple linear regression analysis and an artificial neural network (ANN) using 204 candidate features. The candidate features included 12 clinical features (tumor location, extent of surgical resection, treatment duration of radiotherapy, etc.) and 192 DVH features (maximum dose, minimum dose, D95, V60, etc.). The effective features for the prediction were selected by a step-wise method using the 30 training cases. The prediction accuracy was evaluated by the coefficient of determination (R²) between the predicted and actual survival time for the training and test datasets. Results: In the multiple regression analysis, R² between the predicted and actual survival time was 0.460 for the training dataset and 0.375 for the test dataset. In the ANN analysis, R² was 0.806 for the training dataset and 0.811 for the test dataset. Conclusion: Although a larger number of patients would be needed for more accurate and robust prediction, our preliminary results showed the potential to predict the outcome in patients with high-grade glioma.
    This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and a Grant-in-aid from the JSPS Fellows.

  16. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.

    PubMed

    Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix was constructed involving time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed with a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
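The wavelet-based feature extraction step can be illustrated with a one-level Haar transform; a real pipeline would use a wavelet library with several decomposition levels to isolate the delta and theta bands, and the EEG trace here is synthetic.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients: the approximation carries
    the slow (low-frequency) content, the detail the fast content, which is
    the kind of band split a wavelet feature matrix is built from.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:
        x = x[:-1]                           # Haar pairs samples two at a time
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Synthetic "EEG": slow 4 Hz (theta-like) wave plus fast noise, 128 Hz sampling.
t = np.arange(256) / 128.0
eeg = np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.default_rng(0).normal(size=256)
approx, detail = haar_dwt(eeg)
features = [float(np.sum(approx**2)), float(np.sum(detail**2))]
```

Because the Haar basis is orthonormal, the two energy features partition the signal energy exactly; the slow theta-like wave shows up almost entirely in the approximation energy.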

  17. Elastic properties of graphene: A pseudo-beam model with modified internal bending moment and its application

    NASA Astrophysics Data System (ADS)

    Xia, Z. M.; Wang, C. G.; Tan, H. F.

    2018-04-01

    A pseudo-beam model with a modified internal bending moment is presented to predict the elastic properties of graphene, including the Young's modulus and Poisson's ratio. In order to overcome a drawback in existing molecular structural mechanics models, which only account for pure bending (a constant bending moment), the presented model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, an analytical prediction of the Young's modulus and Poisson's ratio of graphene is derived from the strain energies using Castigliano's second theorem. The elastic properties of graphene are then calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is utilized to study the twisting wrinkling characteristics of annular graphene. Due to the modifications of the internal bending moment, the wrinkling behavior of the graphene sheet is predicted accurately. The obtained results show that the pseudo-beam model has a good ability to predict the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.

  18. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading-based approach to protein function prediction, is summarized. Then, the results of large-scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  19. A collaborative environment for developing and validating predictive tools for protein biophysical characteristics

    NASA Astrophysics Data System (ADS)

    Johnston, Michael A.; Farrell, Damien; Nielsen, Jens Erik

    2012-04-01

    The exchange of information between experimentalists and theoreticians is crucial to improving the predictive ability of theoretical methods and hence our understanding of the related biology. However, many barriers exist which prevent the flow of information between the two disciplines. Enabling effective collaboration requires that experimentalists can easily apply computational tools to their data and share their data with theoreticians, and that both the experimental data and computational results are accessible to the wider community. We present a prototype collaborative environment for developing and validating predictive tools for protein biophysical characteristics. The environment is built on two central components: a new Python-based integration module, which allows theoreticians to provide and manage remote access to their programs, and PEATDB, a program for storing and sharing experimental data from protein biophysical characterisation studies. We demonstrate our approach by integrating PEATSA, a web-based service for predicting changes in protein biophysical characteristics, into PEATDB. Furthermore, we illustrate how the resulting environment aids method development using the Potapov dataset of experimentally measured ΔΔGfold values, previously employed to validate and train protein stability prediction algorithms.

  20. Early warnings of hazardous thunderstorms over Lake Victoria

    NASA Astrophysics Data System (ADS)

    Thiery, Wim; Gudmundsson, Lukas; Bedka, Kristopher; Semazzi, Fredrick H. M.; Lhermitte, Stef; Willems, Patrick; van Lipzig, Nicole P. M.; Seneviratne, Sonia I.

    2017-07-01

    Weather extremes have harmful impacts on communities around Lake Victoria in East Africa. Every year, intense nighttime thunderstorms cause numerous boating accidents on the lake, resulting in thousands of deaths among fishermen. Operational storm warning systems are therefore crucial. Here we complement ongoing early warning efforts based on numerical weather prediction, by presenting a new satellite data-driven storm prediction system, the prototype Lake Victoria Intense storm Early Warning System (VIEWS). VIEWS derives predictability from the correlation between afternoon land storm activity and nighttime storm intensity on Lake Victoria, and relies on logistic regression techniques to forecast extreme thunderstorms from satellite observations. Evaluation of the statistical model reveals that predictive power is high and independent of the type of input dataset. We then optimise the configuration and show that false alarms also contain valuable information. Our results suggest that regression-based models that are motivated through process understanding have the potential to reduce the vulnerability of local fishing communities around Lake Victoria. The experimental prediction system is publicly available under the MIT licence at http://github.com/wthiery/VIEWS.
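The regression core of such a system can be sketched as plain logistic regression mapping afternoon land storm activity to nighttime storm probability; the single predictor, gradient-descent training loop, and data below are illustrative, not the VIEWS implementation.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression: P(storm) = sigmoid(Xw + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of the log-loss in w
        b -= lr * float(np.mean(p - y))      # gradient of the log-loss in b
    return w, b

# Synthetic training set: higher afternoon land storm activity makes a
# nighttime lake storm more likely.
rng = np.random.default_rng(0)
activity = rng.uniform(0, 1, 300)
storm = (rng.random(300) < activity).astype(float)
w, b = fit_logistic(activity[:, None], storm)

# Predicted nighttime storm probability for calm vs active afternoons.
p_low = 1.0 / (1.0 + np.exp(-(0.1 * w[0] + b)))
p_high = 1.0 / (1.0 + np.exp(-(0.9 * w[0] + b)))
```

An operational system would threshold such probabilities to issue warnings, trading off hit rate against false alarms as the abstract discusses.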
