Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, as with smoking, where expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
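The core of the approach described above is checking whether a theory-based predicted effect size falls inside the confidence interval of the observed effect. A minimal sketch follows; the simplified 1/sqrt(n) standard error is an illustrative assumption, not the paper's actual variance formula, which differs by construct.

```python
import math

def prediction_confirmed(observed_w, n, predicted_w, z=1.96):
    # Build a 95% CI around the observed effect size; the theory-based
    # prediction counts as "confirmed" when it falls inside the interval.
    se = 1.0 / math.sqrt(n)  # simplified SE, for illustration only
    return (observed_w - z * se) <= predicted_w <= (observed_w + z * se)
```

With a large sample such as N = 5407, the interval is narrow, so only predictions close to the observed value are confirmed.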
A New Perspective in the Etiology, Treatment, Prevention and Prediction of Space Motion Sickness
1988-12-01
Anticonvulsant options: Dilantin (first choice, based on previous ground-based efficacy), Dextromethorphan\Dilantin, Carbamazepine, Dextromethorphan\Carbamazepine.
Chaitanya, Lakshmi; Breslin, Krystal; Zuñiga, Sofia; Wirken, Laura; Pośpiech, Ewelina; Kukla-Bartoszek, Magdalena; Sijen, Titia; Knijff, Peter de; Liu, Fan; Branicki, Wojciech; Kayser, Manfred; Walsh, Susan
2018-07-01
Forensic DNA Phenotyping (FDP), i.e. the prediction of human externally visible traits from DNA, has become a fast growing subfield within forensic genetics due to the intelligence information it can provide from DNA traces. FDP outcomes can help focus police investigations in search of unknown perpetrators, who are generally unidentifiable with standard DNA profiling. Therefore, we previously developed and forensically validated the IrisPlex DNA test system for eye colour prediction and the HIrisPlex system for combined eye and hair colour prediction from DNA traces. Here we introduce and forensically validate the HIrisPlex-S DNA test system (S for skin) for the simultaneous prediction of eye, hair, and skin colour from trace DNA. This FDP system consists of two SNaPshot-based multiplex assays targeting a total of 41 SNPs via a novel multiplex assay for 17 skin colour predictive SNPs and the previous HIrisPlex assay for 24 eye and hair colour predictive SNPs, 19 of which also contribute to skin colour prediction. The HIrisPlex-S system further comprises three statistical prediction models, the previously developed IrisPlex model for eye colour prediction based on 6 SNPs, the previous HIrisPlex model for hair colour prediction based on 22 SNPs, and the recently introduced HIrisPlex-S model for skin colour prediction based on 36 SNPs. In the forensic developmental validation testing, the novel 17-plex assay performed in full agreement with the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines, as previously shown for the 24-plex assay. Sensitivity testing of the 17-plex assay revealed complete SNP profiles from as little as 63 pg of input DNA, equalling the previously demonstrated sensitivity threshold of the 24-plex HIrisPlex assay. 
Testing of simulated forensic casework samples (blood, semen, and saliva stains), inhibited DNA samples, low-quantity touch (trace) DNA samples, and artificially degraded DNA samples, together with concordance testing, demonstrated the robustness, efficiency, and forensic suitability of the new 17-plex assay, as previously shown for the 24-plex assay. Finally, we provide an update to the publicly available HIrisPlex website https://hirisplex.erasmusmc.nl/, now allowing the estimation of individual probabilities for 3 eye, 4 hair, and 5 skin colour categories from HIrisPlex-S input genotypes. The HIrisPlex-S DNA test represents the first forensically validated tool for skin colour prediction, and reflects the first forensically validated tool for simultaneous eye, hair and skin colour prediction from DNA. Copyright © 2018 Elsevier B.V. All rights reserved.
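The website described above reports per-category probabilities from input genotypes. A toy multinomial-logistic sketch of that step is shown below; the SNP identifiers, weights, and categories are hypothetical placeholders, not the published HIrisPlex-S model coefficients.

```python
import math

def category_probabilities(genotype, weights, categories):
    # genotype: {snp_id: minor-allele count (0, 1, or 2)}
    # weights:  {category: {"intercept": b0, snp_id: beta, ...}}
    scores = []
    for cat in categories:
        w = weights[cat]
        s = w.get("intercept", 0.0)
        s += sum(w.get(snp, 0.0) * count for snp, count in genotype.items())
        scores.append(s)
    m = max(scores)                       # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return {c: e / total for c, e in zip(categories, exps)}
```

The returned dictionary sums to 1 across categories, mirroring how a phenotype category report assigns a probability to each eye, hair, or skin colour class.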
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that further rate-distortion performance improvements are possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting coefficient selection leads to bit savings of up to 5% compared to the previous self-similarity bi-prediction scheme.
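The weighting idea above can be sketched in a few lines: blend two predictor blocks with candidate weight pairs and keep the pair that best matches the block being coded. This is only an illustration of adaptive weight selection, not the HEVC implementation; the candidate weight pairs are hypothetical.

```python
def blend(b0, b1, w0, w1):
    # Weighted average of two predictor blocks (flattened pixel lists).
    s = float(w0 + w1)
    return [(w0 * x + w1 * y) / s for x, y in zip(b0, b1)]

def best_biprediction(target, b0, b1, weight_pairs=((1, 1), (3, 1), (1, 3))):
    # Adaptive selection: keep the weight pair whose blend has the
    # smallest sum of squared errors against the block being coded.
    def sse(pred):
        return sum((p - t) ** 2 for p, t in zip(pred, target))
    return min((blend(b0, b1, w0, w1) for w0, w1 in weight_pairs), key=sse)
```

Plain averaging corresponds to the fixed (1, 1) pair; the unequal pairs are what give the adaptive scheme its extra rate-distortion headroom.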
An evidential link prediction method and link predictability based on Shannon entropy
NASA Astrophysics Data System (ADS)
Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong
2017-09-01
Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new link prediction method based on similarity and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is also crucial to consider link predictability as an intrinsic property of the network itself. Hence, this paper proposes a new link prediction approach called the evidential measure (EM), based on Dempster-Shafer theory. Moreover, it proposes a new method to measure link predictability via local information and Shannon entropy.
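To make the two ingredients above concrete, here is a minimal sketch of a local similarity measure (common neighbors) and a Shannon-entropy statistic over the degree distribution. This is an illustrative stand-in, not the paper's evidential measure: using degree entropy as a predictability proxy is an assumption for this example.

```python
import math

def common_neighbors(adj, u, v):
    # Local similarity score: number of shared neighbors of u and v.
    return len(adj[u] & adj[v])

def degree_entropy(adj):
    # Shannon entropy of the normalized degree distribution, a crude
    # proxy for structural regularity (lower entropy = more regular).
    degrees = [len(nbrs) for nbrs in adj.values()]
    total = float(sum(degrees))
    return -sum((d / total) * math.log2(d / total) for d in degrees if d)
```

Node pairs with many common neighbors are the usual top candidates for missing links; the entropy term summarizes how much structure there is to exploit in the first place.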
AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.
Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael
2017-06-15
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank . gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. 
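BirgRank is described above as a direct application of PageRank diffusion on the two-layer graph. A self-contained power-iteration sketch of basic PageRank with a uniform restart term is shown below; it illustrates the diffusion step only, not the bi-relational graph construction or AptRank's adaptive mechanism.

```python
def pagerank(adj, alpha=0.85, iters=100):
    # Power iteration for PageRank; adj maps each node to the set of
    # nodes it links to. (1 - alpha) is the restart probability.
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1.0 - alpha) / n for v in adj}
        for v, nbrs in adj.items():
            if nbrs:
                share = alpha * rank[v] / len(nbrs)
                for u in nbrs:
                    nxt[u] += share
            else:  # dangling node: spread its mass uniformly
                for u in adj:
                    nxt[u] += alpha * rank[v] / n
        rank = nxt
    return rank
```

In a function prediction setting, the restart vector would be concentrated on proteins with known annotations rather than uniform, so that scores diffuse outward from labeled nodes.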
Prediction-based dynamic load-sharing heuristics
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.
1993-01-01
The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
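The placement decision described above can be sketched simply: score each host by the summed predicted demand of its current jobs, then place the next job on the least-loaded host. This is a toy illustration of the prediction-based idea, not the paper's heuristics; the demand dictionary stands in for the statistical pattern-recognition predictor.

```python
def pick_host(hosts, predicted_demand):
    # Prediction-based placement: estimate each host's near-future load
    # as the summed predicted demand of the jobs it is running, then
    # choose the least-loaded host for the next job.
    # hosts: {host: [job, ...]}, predicted_demand: {job: demand estimate}
    def load(host):
        return sum(predicted_demand.get(j, 1.0) for j in hosts[host])
    return min(hosts, key=load)
```

A nonprediction heuristic would rank hosts by instantaneous queue length instead; the prediction-based version differs exactly in what the `load` function sums.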
A state-based probabilistic model for tumor respiratory motion prediction
NASA Astrophysics Data System (ADS)
Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth
2010-12-01
This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. 
The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.
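The state-sequence prediction at the heart of the model above can be illustrated with a plain first-order Markov sketch over the three breathing states. This is a simplification of the paper's HMM with k-means-clustered observables: it predicts only the most frequent successor state from counted transitions.

```python
from collections import Counter, defaultdict

def train_transitions(states):
    # Count observed transitions between breathing states.
    counts = defaultdict(Counter)
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, state):
    # Most frequent successor of the current state.
    return counts[state].most_common(1)[0][0]
```

Because each predicted state is associated with a velocity cluster in the full model, predicting the next state also yields a tumor velocity estimate for gating.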
Segev, G; Langston, C; Takada, K; Kass, P H; Cowgill, L D
2016-05-01
A scoring system for outcome prediction in dogs with acute kidney injury (AKI) recently has been developed but has not been validated. Our hypothesis was that the scoring system previously developed for outcome prediction would accurately predict outcome in a validation cohort of dogs with AKI managed with hemodialysis. One hundred fifteen client-owned dogs with AKI. Medical records of dogs with AKI treated by hemodialysis between 2011 and 2015 were reviewed. Dogs were included only if all variables required to calculate the final predictive score were available and the 30-day outcome was known. A predictive score for 3 models was calculated for each dog. Logistic regression was used to evaluate the association of the final predictive score with each model's outcome. Receiver operating characteristic (ROC) analyses were performed to determine sensitivity and specificity for each model based on previously established cut-off values. Higher scores for each model were associated with decreased survival probability (P < .001). Based on previously established cut-off values, the 3 models (A, B, and C) had sensitivities/specificities of 73%/75%, 71%/80%, and 75%/86%, respectively, and correctly classified 74-80% of the dogs. All models were simple to apply and allowed outcome prediction that closely corresponded with actual outcome in an independent cohort. As expected, accuracies were slightly lower than those from the previously reported cohort used initially to develop the models. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
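The sensitivity/specificity computation at a fixed cut-off, as used in the validation above, is straightforward to sketch. The convention below (scores at or above the cut-off predict non-survival) is an assumption for illustration; the published cut-off values are not reproduced here.

```python
def sens_spec(scores, died, cutoff):
    # Sensitivity and specificity at a fixed score cutoff, treating
    # scores >= cutoff as predicting the outcome (non-survival here).
    tp = sum(s >= cutoff and d for s, d in zip(scores, died))
    fn = sum(s < cutoff and d for s, d in zip(scores, died))
    tn = sum(s < cutoff and not d for s, d in zip(scores, died))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, died))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against (1 - specificity) yields the ROC curve used to compare the three models.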
Ge, Shufan; Tu, Yifan; Hu, Ming
2017-01-01
Glucuronidation is the most important phase II metabolic pathway and is responsible for the clearance of many endogenous and exogenous compounds. To better understand the elimination process for compounds undergoing glucuronidation and to identify compounds with desirable in vivo pharmacokinetic properties, many efforts have been made to predict in vivo glucuronidation from in vitro data. In this article, we review typical approaches used in previous predictions and discuss the problems and challenges in the prediction of glucuronidation. Beyond the effect of different incubation conditions on prediction accuracy, other factors, including efflux/uptake transporters, enterohepatic recycling, and deglucuronidation reactions, also contribute to the disposition of glucuronides and make prediction more difficult. PBPK modeling, which can describe more complicated processes in vivo, is a promising strategy that may greatly improve the prediction of glucuronidation and of potential DDIs involving glucuronidation. Based on previous studies, we propose a transport-glucuronidation classification system built on the kinetics of both glucuronidation and transport of the glucuronide. This system could be a very useful tool to achieve better in vivo predictions. PMID:28966903
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with mobile phase composition. In the present study, this preliminary experimental work has been simplified: the analyte pKa values are calculated through equations whose coefficients vary with the functional group. This new approach required further simplifications regarding the retention of the fully neutral and fully ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving substantial time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixation in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvement to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. The model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. The model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
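The fusion of global and local predictions described above can be illustrated as a weighted average of per-level saliency maps. This sketch assumes the maps have already been upsampled to a common resolution; the equal default weights are an assumption, whereas the actual network learns its combination end to end.

```python
def fuse_saliency(levels, weights=None):
    # Weighted average of per-level saliency maps (same-shape 2-D lists
    # with values in [0, 1]), standing in for the skip-layer fusion step.
    if weights is None:
        weights = [1.0] * len(levels)
    total = float(sum(weights))
    rows, cols = len(levels[0]), len(levels[0][0])
    return [[sum(w * lvl[i][j] for w, lvl in zip(weights, levels)) / total
             for j in range(cols)] for i in range(rows)]
```

A coarse map contributes the global layout of salient regions while fine maps sharpen object boundaries; fusing them in one network is what removes the multi-stream redundancy.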
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures.
We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically, and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate to represent the information content of the hydrograph.
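The ABC step above can be sketched with the simplest variant, rejection ABC: draw parameters from the prior, simulate the signature, and keep draws whose simulated signature is within a tolerance of the observed one. The scalar signature and identity simulator below are illustrative assumptions; a real application would simulate streamflow and compute an FDC-based summary.

```python
import random

def abc_rejection(prior, simulate, distance, observed, tol, n=1000, seed=0):
    # Rejection ABC: the accepted parameter draws approximate the
    # posterior under the chosen summary statistic and tolerance.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        theta = prior(rng)
        if distance(simulate(theta, rng), observed) <= tol:
            accepted.append(theta)
    return accepted
```

Shrinking the tolerance trades acceptance rate for posterior accuracy, which is why more elaborate ABC samplers are often used in practice.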
SeqRate: sequence-based protein folding type classification and rates prediction
2010-01-01
Background Protein folding rate is an important property of a protein. Predicting protein folding rate is useful for understanding the protein folding process and guiding protein design. Most previous methods for predicting protein folding rate require the tertiary structure of a protein as input, and most do not distinguish the different kinetic nature (two-state versus multi-state folding) of proteins. Here we developed a method, SeqRate, to predict both protein folding kinetic type (two-state versus multi-state) and real-value folding rate using sequence length, amino acid composition, contact order, contact number, and secondary structure information predicted from protein sequence alone with support vector machines. Results We systematically studied the contributions of individual features to folding rate prediction. On a standard benchmark dataset, the accuracy of folding kinetic type classification is 80%. The Pearson correlation coefficient and the mean absolute difference between predicted and experimental folding rates (sec⁻¹) on the base-10 logarithmic scale are 0.81 and 0.79 for two-state folders, and 0.80 and 0.68 for three-state folders. SeqRate is the first sequence-based method for protein folding type classification, and its accuracy of folding rate prediction is improved over previous sequence-based methods. Its performance can be further enhanced with additional information, such as structure-based geometric contacts, as inputs. Conclusions Both the web server and software for predicting folding rate are publicly available at http://casp.rnet.missouri.edu/fold_rate/index.html. PMID:20438647
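Two of the sequence-only features named above, length and amino acid composition, are simple to compute and illustrate the kind of fixed-length vector an SVM-based predictor consumes. The remaining features (predicted contact order, contact number, secondary structure) come from separate predictors and are not sketched here.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sequence_features(seq):
    # Sequence-only features: chain length plus the fraction of each of
    # the 20 standard amino acids.
    n = len(seq)
    return [float(n)] + [seq.count(a) / n for a in AMINO_ACIDS]
```

Because every protein maps to the same 21-dimensional vector regardless of its length, vectors from different proteins can be fed directly into a standard SVM for classification or regression.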
Warren, Johanna B; Hamilton, Andrew
2015-12-01
Seven validated prospective scoring systems, and one unvalidated system, predict a successful trial of labor after cesarean (TOLAC) based on a variety of clinical factors. Because the systems use different outcome statistics, their predictive accuracy cannot be directly compared.
Neural Differentiation of Incorrectly Predicted Memories.
Kim, Ghootae; Norman, Kenneth A; Turk-Browne, Nicholas B
2017-02-22
When an item is predicted in a particular context but the prediction is violated, memory for that item is weakened (Kim et al., 2014). Here, we explore what happens when such previously mispredicted items are later reencountered. According to prior neural network simulations, this sequence of events (misprediction and subsequent restudy) should lead to differentiation of the item's neural representation from the previous context (on which the misprediction was based). Specifically, misprediction weakens connections in the representation to features shared with the previous context, and restudy allows new features to be incorporated into the representation that are not shared with the previous context. This cycle of misprediction and restudy should have the net effect of moving the item's neural representation away from the neural representation of the previous context. We tested this hypothesis using human fMRI by tracking changes in item-specific BOLD activity patterns in the hippocampus, a key structure for representing memories and generating predictions. In left CA2/3/DG, we found greater neural differentiation for items that were repeatedly mispredicted and restudied compared with items from a control condition that was identical except without misprediction. We also measured prediction strength in a trial-by-trial fashion and found that greater misprediction for an item led to more differentiation, further supporting our hypothesis. Therefore, the consequences of prediction error go beyond memory weakening. If the mispredicted item is restudied, the brain adaptively differentiates its memory representation to improve the accuracy of subsequent predictions and to shield it from further weakening. SIGNIFICANCE STATEMENT Competition between overlapping memories leads to weakening of nontarget memories over time, making it easier to access target memories. However, a nontarget memory in one context might become a target memory in another context.
How do such memories get restrengthened without increasing competition again? Computational models suggest that the brain handles this by reducing neural connections to the previous context and adding connections to new features that were not part of the previous context. The result is neural differentiation away from the previous context. Here, we provide support for this theory, using fMRI to track neural representations of individual memories in the hippocampus and how they change based on learning. Copyright © 2017 the authors.
Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating
Wang, Bingkun; Huang, Yongfeng; Li, Xing
2016-01-01
E-commerce is developing rapidly, and learning to take good advantage of the myriad reviews from online customers has become crucial to success, which calls for increasingly accurate sentiment classification of these reviews. Finer-grained review rating prediction is therefore preferred over rough binary sentiment classification. There are two main types of methods in current review rating prediction. The first comprises methods based on review text content, which focus almost exclusively on textual content and seldom consider the reviewers and items remarked on in other relevant reviews. The second comprises methods based on collaborative filtering, which extract information from previous records in the reviewer-item rating matrix but ignore review textual content. Here we propose a framework for review rating prediction that effectively combines the two, and we further propose three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework performs better than previous methods. PMID:26880879
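The simplest way to combine the two signal sources above is to blend a text-based prediction with a matrix-based one. The sketch below uses a standard global-mean-plus-offsets baseline for the matrix side and a fixed blend weight; both are illustrative assumptions, not the paper's three proposed methods.

```python
def cf_baseline(ratings, user, item):
    # Collaborative-filtering baseline: global mean plus user and item
    # offsets, computed from a {(user, item): rating} dict.
    vals = list(ratings.values())
    mu = sum(vals) / len(vals)
    u = [r for (uu, _), r in ratings.items() if uu == user]
    i = [r for (_, ii), r in ratings.items() if ii == item]
    bu = (sum(u) / len(u) - mu) if u else 0.0
    bi = (sum(i) / len(i) - mu) if i else 0.0
    return mu + bu + bi

def hybrid_rating(text_pred, cf_pred, w=0.5):
    # Linear blend of the text-content and rating-matrix predictions.
    return w * text_pred + (1.0 - w) * cf_pred
```

In practice the blend weight (or a richer combination) would be tuned on held-out reviews, which is where the framework's specific methods differ.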
Blind prediction of noncanonical RNA structure at atomic accuracy.
Watkins, Andrew M; Geniesse, Caleb; Kladwang, Wipapat; Zakrevsky, Paul; Jaeger, Luc; Das, Rhiju
2018-05-01
Prediction of RNA structure from nucleotide sequence remains an unsolved grand challenge of biochemistry and requires distinct concepts from protein structure prediction. Despite extensive algorithmic development in recent years, modeling of noncanonical base pairs of new RNA structural motifs has not been achieved in blind challenges. We report a stepwise Monte Carlo (SWM) method with a unique add-and-delete move set that enables predictions of noncanonical base pairs of complex RNA structures. A benchmark of 82 diverse motifs establishes the method's general ability to recover noncanonical pairs ab initio, including multistrand motifs that have been refractory to prior approaches. In a blind challenge, SWM models predicted nucleotide-resolution chemical mapping and compensatory mutagenesis experiments for three in vitro selected tetraloop/receptors with previously unsolved structures (C7.2, C7.10, and R1). As a final test, SWM blindly and correctly predicted all noncanonical pairs of a Zika virus double pseudoknot during a recent community-wide RNA-Puzzle. Stepwise structure formation, as encoded in the SWM method, enables modeling of noncanonical RNA structure in a variety of previously intractable problems.
Translational Modeling in Schizophrenia: Predicting Human Dopamine D2 Receptor Occupancy.
Johnson, Martin; Kozielska, Magdalena; Pilla Reddy, Venkatesh; Vermeulen, An; Barton, Hugh A; Grimwood, Sarah; de Greef, Rik; Groothuis, Geny M M; Danhof, Meindert; Proost, Johannes H
2016-04-01
To assess the ability of a previously developed hybrid physiology-based pharmacokinetic-pharmacodynamic (PBPKPD) model in rats to predict the dopamine D2 receptor occupancy (D2RO) in human striatum following administration of antipsychotic drugs. A hybrid PBPKPD model, previously developed using information on plasma concentrations, brain exposure and D2RO in rats, was used as the basis for the prediction of D2RO in human. The rat pharmacokinetic and brain physiology parameters were substituted with human population pharmacokinetic parameters and human physiological information. To predict the passive transport across the human blood-brain barrier, apparent permeability values were scaled based on rat and human brain endothelial surface area. Active efflux clearance in brain was scaled from rat to human using both human brain endothelial surface area and MDR1 expression. Binding constants at the D2 receptor were scaled based on the differences between in vitro and in vivo systems of the same species. The predictive power of this physiology-based approach was determined by comparing the D2RO predictions with the observed human D2RO of six antipsychotics at clinically relevant doses. Predicted human D2RO was in good agreement with clinically observed D2RO for five antipsychotics. Models using in vitro information predicted human D2RO well for most of the compounds evaluated in this analysis. However, human D2RO was under-predicted for haloperidol. The rat hybrid PBPKPD model structure, integrated with in vitro information and human pharmacokinetic and physiological information, constitutes a scientific basis to predict the time course of D2RO in man.
Yuta, Atsushi; Ukai, Kotaro; Sakakura, Yasuo; Tani, Hideshi; Matsuda, Fukiko; Yang, Tian-qun; Majima, Yuichi
2002-07-01
We predicted Japanese cedar (Cryptomeria japonica) pollen counts at Tsu city based on the male flower-setting conditions of standard trees. Sixty-nine standard trees from 23 kinds of clones, planted at the Mie Prefecture Science and Technology Promotion Center (Hakusan, Mie) in 1964, were selected. Male flower-setting conditions for 276 faces (69 trees x 4 points of the compass) were scored from 0 to 3. The average scores and total pollen counts from 1988 to 2000 were analyzed. The average scores from the standard trees and the total pollen counts, excluding the two mass pollen-scattering years of 1995 and 2000, showed a positive linear correlation (r = 0.914). In mass pollen-scattering years, pollen counts were influenced by the previous year. Therefore, the score of the present year minus that of the previous year was used for analysis. The average male flower-setting scores and the pollen counts had a strong positive correlation (r = 0.994) when these year-adjusted scores were analyzed. We conclude that prediction of pollen counts is possible based on the male flower-setting conditions of standard trees.
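The year-adjusted scoring and linear fit described above can be sketched as follows. The data values here are hypothetical, chosen only to illustrate the least-squares computation; the study's actual scores and pollen counts are not reproduced.

```python
# Illustrative sketch: ordinary least-squares fit of pollen counts against
# year-adjusted flower-setting scores, with the Pearson correlation r.
def linear_fit(xs, ys):
    """Fit y = a*x + b by least squares; return (slope, intercept, r)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical year-adjusted scores (present-year minus previous-year score)
# and corresponding annual pollen counts:
scores = [0.5, 1.0, 1.5, 2.0, 2.5]
counts = [1100, 2100, 3100, 4100, 5100]
slope, intercept, r = linear_fit(scores, counts)
```

A strong linear relationship (r close to 1) would, as in the study, support predicting a season's pollen count from the standard trees' scores alone.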
Marufuzzaman, M; Reaz, M B I; Ali, M A M; Rahman, L F
2015-01-01
The goal of smart homes is to create an intelligent environment that adapts to the inhabitants' needs and assists people who need special care and safety in their daily lives. This can be achieved by collecting ADL (activities of daily living) data and analyzing them within existing computing elements. In this research, a recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified to include a time component in order to improve accuracy. The modified SPEED, or M-SPEED, is a sequence prediction algorithm that extends the previous SPEED algorithm by using the time durations of appliances' ON-OFF states to decide the next state. M-SPEED discovers periodic episodes of inhabitant behavior, trains on the learned episodes, and makes decisions based on the obtained knowledge. The results showed that M-SPEED achieves 96.8% prediction accuracy, which is better than other time prediction algorithms such as PUBS, ALZ with temporal rules, and the previous SPEED. Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people.
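A minimal, stdlib-only sketch of episode-based next-event prediction in the spirit of SPEED/M-SPEED. The class name, window size, and event labels are illustrative assumptions, not the published algorithm, and M-SPEED's duration handling is omitted here.

```python
from collections import defaultdict

# Illustrative sketch: count which event follows each observed context window
# of ADL events, then predict the most frequent successor.
class EpisodePredictor:
    def __init__(self, window=2):
        self.window = window
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, events):
        # Record successor frequencies for every sliding context window.
        for i in range(len(events) - self.window):
            ctx = tuple(events[i:i + self.window])
            nxt = events[i + self.window]
            self.counts[ctx][nxt] += 1

    def predict(self, recent):
        # Predict the most frequent successor of the latest context, if any.
        ctx = tuple(recent[-self.window:])
        successors = self.counts.get(ctx)
        if not successors:
            return None
        return max(successors, key=successors.get)

p = EpisodePredictor(window=2)
p.train(["lamp_on", "tv_on", "tv_off", "lamp_off",
         "lamp_on", "tv_on", "tv_off", "lamp_off"])
nxt = p.predict(["lamp_on", "tv_on"])   # most frequent successor of this pair
```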
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions that the time series data must conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
Entropy-based link prediction in weighted networks
NASA Astrophysics Data System (ADS)
Xu, Zhongqi; Pu, Cunlai; Ramiz Sharafat, Rajput; Li, Lunbo; Yang, Jian
2017-01-01
Information entropy has been proved to be an effective tool to quantify the structural importance of complex networks. In previous work (Xu et al., 2016), we measured the contribution of a path in link prediction with information entropy. In this paper, we further quantify the contribution of a path with both path entropy and path weight, and propose a weighted prediction index based on the contributions of paths, namely Weighted Path Entropy (WPE), to improve prediction accuracy in weighted networks. Empirical experiments on six weighted real-world networks show that WPE achieves higher prediction accuracy than three typical weighted indices.
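To give a flavor of entropy-based path scoring, the sketch below computes the Shannon entropy of a path's normalized edge weights. This is an illustrative proxy only, not the exact WPE index defined in the paper.

```python
import math

# Illustrative sketch (NOT the published WPE formula): score how evenly a
# path's weight is distributed across its edges using Shannon entropy.
def path_weight_entropy(weights):
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * math.log2(p) for p in probs)

# Uniform edge weights maximize the entropy (log2 of the number of edges);
# a path dominated by one heavy edge scores lower.
h_uniform = path_weight_entropy([1.0, 1.0, 1.0, 1.0])
h_skewed = path_weight_entropy([8.0, 1.0, 1.0, 1.0])
```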
The SIST-M: Predictive validity of a brief structured Clinical Dementia Rating interview
Okereke, Olivia I.; Pantoja-Galicia, Norberto; Copeland, Maura; Hyman, Bradley T.; Wanggaard, Taylor; Albert, Marilyn S.; Betensky, Rebecca A.; Blacker, Deborah
2011-01-01
Background We previously established reliability and cross-sectional validity of the SIST-M (Structured Interview and Scoring Tool–Massachusetts Alzheimer's Disease Research Center), a shortened version of an instrument shown to predict progression to Alzheimer disease (AD), even among persons with very mild cognitive impairment (vMCI). Objective To test predictive validity of the SIST-M. Methods Participants were 342 community-dwelling, non-demented older adults in a longitudinal study. Baseline Clinical Dementia Rating (CDR) ratings were determined by either: 1) clinician interviews or 2) a previously developed computer algorithm based on 60 questions (of a possible 131) extracted from clinician interviews. We developed age+gender+education-adjusted Cox proportional hazards models using CDR-sum-of-boxes (CDR-SB) as the predictor, where CDR-SB was determined by either clinician interview or algorithm; models were run for the full sample (n=342) and among those jointly classified as vMCI using clinician- and algorithm-based CDR ratings (n=156). We directly compared predictive accuracy using time-dependent Receiver Operating Characteristic (ROC) curves. Results AD hazard ratios (HRs) were similar for clinician-based and algorithm-based CDR-SB: for a 1-point increment in CDR-SB, respective HRs (95% CI)=3.1 (2.5,3.9) and 2.8 (2.2,3.5); among those with vMCI, respective HRs (95% CI) were 2.2 (1.6,3.2) and 2.1 (1.5,3.0). Similarly high predictive accuracy was achieved: the concordance probability (weighted average of the area-under-the-ROC curves) over follow-up was 0.78 vs. 0.76 using clinician-based vs. algorithm-based CDR-SB. Conclusion CDR scores based on items from this shortened interview had high predictive ability for AD – comparable to that using a lengthy clinical interview. PMID:21986342
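The reported hazard ratios can be read through the usual Cox-model relation HR_k = exp(k·β). The sketch below uses the abstract's HR of 3.1 per 1-point CDR-SB increment; the 2-point extrapolation is illustrative arithmetic, not a result from the study.

```python
import math

# Sketch: converting a Cox model coefficient to hazard ratios. With a
# reported HR of 3.1 per 1-point CDR-SB increment, the log hazard ratio is
# beta = ln(3.1), and a k-point difference multiplies: HR_k = exp(k * beta).
beta = math.log(3.1)         # per-point log hazard ratio
hr_1 = math.exp(beta)        # 1-point increment -> 3.1
hr_2 = math.exp(2 * beta)    # 2-point increment -> 3.1 squared
```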
Basak, Chandramallika; Voss, Michelle W.; Erickson, Kirk I.; Boot, Walter R.; Kramer, Arthur F.
2015-01-01
Previous studies have found that differences in brain volume among older adults predict performance in laboratory tasks of executive control, memory, and motor learning. In the present study we asked whether regional differences in brain volume as assessed by the application of a voxel-based morphometry technique on high resolution MRI would also be useful in predicting the acquisition of skill in complex tasks, such as strategy-based video games. Twenty older adults were trained for over 20 hours to play Rise of Nations, a complex real-time strategy game. These adults showed substantial improvements over the training period in game performance. MRI scans obtained prior to training revealed that the volume of a number of brain regions, which have been previously associated with subsets of the trained skills, predicted a substantial amount of variance in learning on the complex game. Thus, regional differences in brain volume can predict learning in complex tasks that entail the use of a variety of perceptual, cognitive and motor processes. PMID:21546146
PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages
Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi
2017-01-01
Direct and indirect functional links between proteins as well as their interactions as part of larger protein complexes or common signaling pathways may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrence model-based and normalization methods. Here we also introduce PrePhyloPro, web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
Gazestani, Vahid H; Salavati, Reza
2015-01-01
Trypanosoma brucei is a vector-borne parasite with an intricate life cycle that can cause serious diseases in humans and animals. This pathogen relies on fine regulation of gene expression to respond and adapt to variable environments, with implications for transmission and infectivity. However, the regulatory elements involved and their mechanisms of action are largely unknown. Here, benefiting from a new graph-based approach for finding functional regulatory elements in RNA (GRAFFER), we have predicted 88 new RNA regulatory elements that are potentially involved in the gene regulatory network of T. brucei. We show that many of these newly predicted elements are responsive to both transcriptomic and proteomic changes during the life cycle of the parasite. Moreover, we found that 11 of the predicted elements strikingly resemble previously identified regulatory elements for the parasite. Additionally, comparison with previously predicted motifs for T. brucei suggested the superior performance of our approach given the current limited knowledge of regulatory elements in T. brucei.
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Modelling Influence and Opinion Evolution in Online Collective Behaviour
Gend, Pascal; Rentfrow, Peter J.; Hendrickx, Julien M.; Blondel, Vincent D.
2016-01-01
Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics is tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data are scarce, data from previous participants are used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability, based on a specific control experiment, is proposed. More than two thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection. PMID:27336834
Neel, Maile C; Che-Castaldo, Judy P
2013-04-01
Recovery plans for species listed under the U.S. Endangered Species Act are required to specify measurable criteria that can be used to determine when the species can be delisted. For the 642 listed endangered and threatened plant species that have recovery plans, we applied recursive partitioning methods to test whether the number of individuals or populations required for delisting can be predicted on the basis of distributional and biological traits, previous abundance at multiple time steps, or a combination of traits and previous abundances. We also tested listing status (threatened or endangered) and the year the recovery plan was written as predictors of recovery criteria. We analyzed separately recovery criteria that were stated as number of populations and as number of individuals (population-based and individual-based criteria, respectively). Previous abundances alone were relatively good predictors of population-based recovery criteria. Fewer populations, but a greater proportion of historically known populations, were required to delist species that had few populations at listing compared with species that had more populations at listing. Previous abundances were also good predictors of individual-based delisting criteria when models included both abundances and traits. The physiographic division in which the species occur was also a good predictor of individual-based criteria. Our results suggest managers are relying on previous abundances and patterns of decline as guidelines for setting recovery criteria. This may be justifiable in that previous abundances inform managers of the effects of both intrinsic traits and extrinsic threats that interact and determine extinction risk. © 2013 Society for Conservation Biology.
A new method for the prediction of combustion instability
NASA Astrophysics Data System (ADS)
Flanagan, Steven Meville
This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of the first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. Also, the corrected linear stability theory based upon a rotational unsteady flow field extended to first order in mean flow Mach number has been implemented in two computer programs developed for the Macintosh platform. A quasi one-dimensional version of the program has been developed which is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD) to the solution for the acoustic mode shapes and frequency. GFD is a recently developed numerical method for finding fully three-dimensional solutions for this class of problems. The analysis of complex motor geometries, previously a tedious and time-consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool which is capable of producing more accurate and meaningful predictions than have been possible in the past.
HLPI-Ensemble: Prediction of human lncRNA-protein interactions based on ensemble strategy.
Hu, Huan; Zhang, Li; Ai, Haixin; Zhang, Hui; Fan, Yetian; Zhao, Qi; Liu, Hongsheng
2018-03-27
LncRNA plays an important role in many biological processes and in disease progression by binding to related proteins. However, the experimental methods for studying lncRNA-protein interactions are time-consuming and expensive. Although there are a few models designed to predict ncRNA-protein interactions, they all have some common drawbacks that limit their predictive performance. In this study, we present a model called HLPI-Ensemble designed specifically for human lncRNA-protein interactions. HLPI-Ensemble adopts an ensemble strategy based on three mainstream machine learning algorithms, Support Vector Machines (SVM), Random Forests (RF) and Extreme Gradient Boosting (XGB), to generate HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble, respectively. The results of 10-fold cross-validation show that HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble achieved AUCs of 0.95, 0.96 and 0.96, respectively, in the test dataset. Furthermore, we compared the performance of the HLPI-Ensemble models with the previous models through an external validation dataset. The results show that the false positives (FPs) of the HLPI-Ensemble models are much lower than those of the previous models, and other evaluation indicators of the HLPI-Ensemble models are also higher than those of the previous models. It is further shown that the HLPI-Ensemble models are superior in predicting human lncRNA-protein interactions compared with previous models. HLPI-Ensemble is publicly available at http://ccsipb.lnu.edu.cn/hlpiensemble/.
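The core ensemble idea, majority voting over heterogeneous base classifiers, can be sketched with stand-in models. The threshold functions below are purely illustrative; the actual HLPI-Ensemble combines trained SVM, RF and XGB models.

```python
from collections import Counter

# Illustrative sketch: hard majority voting over several base classifiers.
def majority_vote(classifiers, x):
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy base "models" voting on whether a score crosses their threshold:
clf_a = lambda x: 1 if x > 0.4 else 0
clf_b = lambda x: 1 if x > 0.5 else 0
clf_c = lambda x: 1 if x > 0.6 else 0

label = majority_vote([clf_a, clf_b, clf_c], 0.55)   # two of three vote 1
```

Ensembling this way tends to suppress the idiosyncratic errors (e.g. false positives) of any single base model, which is the behavior the abstract reports.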
Lu, Ruipeng; Mucaki, Eliseos J; Rogan, Peter K
2017-03-17
Data from ChIP-seq experiments can derive the genome-wide binding specificities of transcription factors (TFs) and other regulatory proteins. We analyzed 765 ENCODE ChIP-seq peak datasets of 207 human TFs with a novel motif discovery pipeline based on recursive, thresholded entropy minimization. This approach, while obviating the need to compensate for skewed nucleotide composition, distinguishes true binding motifs from noise, quantifies the strengths of individual binding sites based on computed affinity and detects adjacent cofactor binding sites that coordinate with the targets of primary, immunoprecipitated TFs. We obtained contiguous and bipartite information theory-based position weight matrices (iPWMs) for 93 sequence-specific TFs, discovered 23 cofactor motifs for 127 TFs and revealed six high-confidence novel motifs. The reliability and accuracy of these iPWMs were determined via four independent validation methods, including the detection of experimentally proven binding sites, explanation of effects of characterized SNPs, comparison with previously published motifs and statistical analyses. We also predict previously unreported TF coregulatory interactions (e.g. TF complexes). These iPWMs constitute a powerful tool for predicting the effects of sequence variants in known binding sites, performing mutation analysis on regulatory SNPs and predicting previously unrecognized binding sites and target genes. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
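The information-theoretic quantity underlying such position weight matrices can be illustrated by computing per-position information content against a uniform background. This is a standard textbook formula, used here only for illustration; it is not the paper's full recursive entropy-minimization pipeline.

```python
import math

# Sketch: information content (bits) of one position of a DNA binding motif,
# assuming a uniform background (p = 0.25 per base). Minimizing the entropy h
# is equivalent to maximizing this information content.
def position_information(freqs):
    """freqs: dict of base -> observed frequency at this motif position."""
    h = -sum(p * math.log2(p) for p in freqs.values() if p > 0)
    return 2.0 - h   # 2 bits is the maximum for a 4-letter alphabet

ic_conserved = position_information({"A": 1.0})   # fully conserved position
ic_uniform = position_information({"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25})
```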
Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages
NASA Technical Reports Server (NTRS)
Summers, R. L.
1969-01-01
A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.
NASA Technical Reports Server (NTRS)
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok
2018-06-07
The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.
Previous modelling of the median lethal dose (oral rat LD50) has indicated that local class-based models yield better correlations than global models. We evaluated the hypothesis that dividing the dataset by pesticidal mechanisms would improve prediction accuracy. A linear discri...
Predicting Naming Latencies with an Analogical Model
ERIC Educational Resources Information Center
Chandler, Steve
2008-01-01
Skousen's (1989, Analogical modeling of language, Kluwer Academic Publishers, Dordrecht) Analogical Model (AM) predicts behavior such as spelling pronunciation by comparing the characteristics of a test item (a given input word) to those of individual exemplars in a data set of previously encountered items. While AM and other exemplar-based models…
NASA Astrophysics Data System (ADS)
Totani, Tomonori; Takeuchi, Tsutomu T.
2002-05-01
We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, LIR-Tdust correlation, and infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. 
On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The intergalactic optical depth of TeV gamma rays based on our model is also presented.
An experimental validation of a statistical-based damage detection approach.
DOT National Transportation Integrated Search
2011-01-01
In this work, a previously-developed, statistical-based, damage-detection approach was validated for its ability to : autonomously detect damage in bridges. The damage-detection approach uses statistical differences in the actual and : predicted beha...
Enhanced thermoelectric performance of graphene nanoribbon-based devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Md Sharafat, E-mail: hossain@student.unimelb.edu.au; Huynh, Duc Hau; Nguyen, Phuong Duc
There have been numerous theoretical studies on exciting thermoelectric properties of graphene nano-ribbons (GNRs); however, most of these studies are mainly based on simulations. In this work, we measure and characterize the thermoelectric properties of GNRs and compare the results with theoretical predictions. Our experimental results verify that nano-structuring and patterning graphene into nano-ribbons significantly enhance its thermoelectric power, confirming previous predictions. Although patterning results in lower conductance (G), the overall power factor (S²G) increases for nanoribbons. We demonstrate that edge roughness plays an important role in achieving such an enhanced performance and support it through first-principles simulations. We show that uncontrolled edge roughness, which is considered detrimental in GNR-based electronic devices, leads to enhanced thermoelectric performance of GNR-based thermoelectric devices. The result validates previously reported theoretical studies of GNRs and demonstrates the potential of GNRs for the realization of highly efficient thermoelectric devices.
Walter G. Thies; Douglas J. Westlind
2012-01-01
Fires, whether intentionally or accidentally set, commonly occur in western interior forests of the US. Following fire, managers need the ability to predict mortality of individual trees based on easily observed characteristics. Previously, a two-factor model using crown scorch and bole scorch proportions was developed with data from 3415 trees for predicting the...
CABS-fold: Server for the de novo and consensus-based prediction of protein structure.
Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej
2013-07-01
The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. Except for template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold.
NASA Astrophysics Data System (ADS)
Aissaoui, Tayeb; Benguerba, Yacine; AlNashef, Inas M.
2017-08-01
The in-silico combination mechanism of triethylene glycol-based DESs has been studied. COSMO-RS and the graphical user interface TmoleX software were used to predict the interaction mechanism of hydrogen bond donors (HBDs) with hydrogen bond acceptors (HBAs) to form DESs. The predicted IR results were compared with the previously reported experimental FT-IR analysis for the same studied DESs. The sigma profiles for the HBD, HBAs and formed DESs were interpreted to qualitatively identify molecular properties such as polarity or hydrogen-bond donor and acceptor abilities. The predicted physicochemical properties reported in this study were in good agreement with experimental values.
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case-base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method, called Clustering-Merging CBR (CM-CBR), which produces a level of predictive performance similar to that of conventional CBR while incurring significantly less computational cost.
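The cluster-then-retrieve idea summarized above can be sketched in a few lines. This is a toy illustration of the general approach (not the CM-CBR algorithm itself): the case base is partitioned around fixed centroids, and retrieval is restricted to the query's cluster. The 1-D feature space and all names are hypothetical.

```python
from statistics import mean

def assign_clusters(cases, centroids):
    """Assign each (feature, label) case to its nearest centroid index."""
    def nearest(x):
        return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
    clusters = {i: [] for i in range(len(centroids))}
    for x, y in cases:
        clusters[nearest(x)].append((x, y))
    return clusters, nearest

def predict_cbr(clusters, nearest, query, k=3):
    """Retrieve the k nearest neighbors only inside the query's cluster
    and return the mean of their labels (a toy regression-style CBR)."""
    pool = clusters[nearest(query)]
    neigh = sorted(pool, key=lambda c: abs(c[0] - query))[:k]
    return mean(y for _, y in neigh)
```

Restricting retrieval to one cluster is what saves case-retrieval time; the accuracy cost arises when a query's true neighbors fall in an adjacent cluster.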
Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin
2015-01-01
Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280
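The reported odds ratios (e.g., OR = 3.0 for proximity to water) can be illustrated with an unadjusted odds ratio from a hypothetical 2x2 table of positive/negative swabs near and far from a feature; the study itself used logistic regression, so this is only a back-of-the-envelope sketch with made-up counts.

```python
import math

def odds_ratio(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Unadjusted odds ratio for a 2x2 table, with a 95% Wald CI."""
    or_ = (exposed_pos * unexposed_neg) / (exposed_neg * unexposed_pos)
    # Standard error of log(OR) from the cell counts
    se = math.sqrt(sum(1.0 / n for n in
                       (exposed_pos, exposed_neg, unexposed_pos, unexposed_neg)))
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```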
Predicting disease progression from short biomarker series using expert advice algorithm
NASA Astrophysics Data System (ADS)
Morino, Kai; Hirata, Yoshito; Tomioka, Ryota; Kashima, Hisashi; Yamanishi, Kenji; Hayashi, Norihiro; Egawa, Shin; Aihara, Kazuyuki
2015-05-01
Well-trained clinicians may be able to provide diagnosis and prognosis from very short biomarker series using information and experience gained from previous patients. Although mathematical methods can potentially help clinicians to predict the progression of diseases, there is no method so far that estimates the patient state from very short time-series of a biomarker for making diagnosis and/or prognosis by employing the information of previous patients. Here, we propose a mathematical framework for integrating other patients' datasets to infer and predict the state of the disease in the current patient based on their short history. We extend a machine-learning framework of "prediction with expert advice" to deal with unstable dynamics. We construct this mathematical framework by combining expert advice with a mathematical model of prostate cancer. Our model predicted well the individual biomarker series of patients with prostate cancer that are used as clinical samples.
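The core of "prediction with expert advice" is the exponentially weighted average forecaster: each expert's weight decays exponentially in its accumulated loss, and the combined forecast is the weighted mean. A minimal sketch, assuming squared loss and a fixed learning rate eta (both choices are illustrative, not taken from the paper):

```python
import math

def hedge_predict(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster.
    expert_preds[i][t] is expert i's forecast at time t."""
    n = len(expert_preds)
    weights = [1.0] * n
    combined = []
    for t, y in enumerate(outcomes):
        total = sum(weights)
        combined.append(sum(w * expert_preds[i][t]
                            for i, w in enumerate(weights)) / total)
        for i in range(n):  # multiplicative weight update by squared loss
            loss = (expert_preds[i][t] - y) ** 2
            weights[i] *= math.exp(-eta * loss)
    return combined, weights
```

Experts that track the outcome retain weight; persistently wrong experts are exponentially discounted, so the combined forecast converges toward the best expert.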
NASA Astrophysics Data System (ADS)
Nakatsugawa, M.; Kobayashi, Y.; Okazaki, R.; Taniguchi, Y.
2017-12-01
This research aims to improve the accuracy of water level prediction calculations for more effective river management. In August 2016, Hokkaido was visited by four typhoons, whose heavy rainfall caused severe flooding. In the Tokoro river basin of Eastern Hokkaido, the water level (WL) at the Kamikawazoe gauging station, located in the lower reaches, exceeded the design high-water level, and the water rose to the highest level on record. To predict such flood conditions and mitigate disaster damage, it is necessary to improve the accuracy of prediction as well as to prolong the lead time (LT) required for disaster mitigation measures such as flood-fighting activities and evacuation actions by residents. There is a need to predict the river water level around the peak stage earlier and more accurately. Previous research dealing with WL prediction proposed a method in which the WL at the lower reaches is estimated from its correlation with the WL at the upper reaches (hereinafter: "the water level correlation method"). Additionally, a runoff model-based method has generally been used, in which the discharge is estimated by giving rainfall prediction data to a runoff model, such as a storage function model, and the WL is then estimated from that discharge by using a WL-discharge rating curve (H-Q curve). In this research, an attempt was made to predict WL by applying the Random Forest (RF) method, a machine learning method that can estimate the contribution of explanatory variables. Furthermore, from a practical point of view, we investigated the prediction of WL based on a multiple correlation (MC) method using explanatory variables with high contributions in the RF method, and we examined the proper selection of explanatory variables and the extension of LT.
The following results were found: 1) Based on the RF method tuned by learning from previous floods, the WL for the abnormal flood of August 2016 was properly predicted with a lead time of 6 h. 2) Based on the contributions of the explanatory variables, factors were selected for the MC method. In this way, plausible prediction results were obtained.
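A multiple-correlation (MC) prediction of the kind described above can be sketched as an ordinary least-squares fit of the downstream water level on selected explanatory variables (e.g., upstream WL and rainfall lags). The stdlib solver below is a generic illustration, not the authors' implementation:

```python
def ols_fit(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... via the normal
    equations, solved with Gaussian elimination (pure stdlib)."""
    rows = [[1.0] + list(x) for x in X]    # prepend intercept column
    k = len(rows[0])
    # Build A = X'X and b = X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef  # [intercept, b1, b2, ...]
```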
Model-free and model-based reward prediction errors in EEG.
Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy
2018-05-24
Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain.
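A model-free reward prediction error of the kind discussed here is the temporal-difference error of standard reinforcement learning, while a model-based system computes expected value from known transitions. A minimal sketch with illustrative learning-rate and discount parameters (not values from the study):

```python
def td_update(q, reward, next_value, alpha=0.1, gamma=0.9):
    """One model-free temporal-difference step: the reward prediction
    error is delta = r + gamma * V(s') - Q(s, a)."""
    delta = reward + gamma * next_value - q
    return q + alpha * delta, delta

# Repeated rewards of 1.0 in a terminal state drive Q toward 1.0
q = 0.0
for _ in range(200):
    q, delta = td_update(q, reward=1.0, next_value=0.0)

# A model-based estimate instead computes expected value from a known
# model of the world (hypothetical transition probabilities and rewards)
transitions = {"s1": 0.7, "s2": 0.3}
rewards = {"s1": 1.0, "s2": 0.0}
v_model_based = sum(p * rewards[s] for s, p in transitions.items())
```

The contrast is the point of the paper: the model-free value is learned only from prediction errors, whereas the model-based value is derived from structural knowledge without any reinforcement history.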
Allen, D D; Bond, C A
2001-07-01
Good admissions decisions are essential for identifying successful students and good practitioners. Various parameters have been shown to have predictive power for academic success. Previous academic performance, the Pharmacy College Admissions Test (PCAT), and specific prepharmacy courses have been suggested as academic performance indicators. However, critical thinking abilities have not been evaluated. We evaluated the connection between academic success and each of the following predictive parameters: the California Critical Thinking Skills Test (CCTST) score, PCAT score, interview score, overall academic performance prior to admission at a pharmacy school, and performance in specific prepharmacy courses. We confirmed previous reports but demonstrated intriguing results in predicting practice-based skills. Critical thinking skills predict practice-based course success. Also, the CCTST and PCAT scores (Pearson correlation [pc] = 0.448, p < 0.001) were closely related in our students. The strongest predictors of practice-related courses and clerkship success were PCAT (pc = 0.237, p < 0.001) and CCTST (pc = 0.201, p < 0.001). These findings and other analyses suggest that PCAT may predict critical thinking skills in pharmacy practice courses and clerkships. Further study is needed to confirm this finding and determine which PCAT components predict critical thinking abilities.
Prediction of frozen food properties during freezing using product composition.
Boonsupthip, W; Heldman, D R
2007-06-01
Frozen water fraction (FWF), as a function of temperature, is an important parameter for use in the design of food freezing processes. An FWF-prediction model, based on the concentrations and molecular weights of specific product components, has been developed. Published food composition data were used to determine the identity and composition of key components. The model proposed in this investigation was verified using published experimental FWF data and initial freezing temperature data, and by comparison to outputs from previously published models. It was found that the specific food components with significant influence on the freezing temperature depression of food products were low molecular weight water-soluble compounds with molality of 50 micromol per 100 g food or higher. Based on an analysis of 200 high-moisture food products, nearly 45% of the experimental initial freezing temperature data were within an absolute difference (AD) of +/- 0.15 degrees C and standard error (SE) of +/- 0.65 degrees C when compared to values predicted by the proposed model. The predicted relationship between temperature and FWF for all analyzed food products was in close agreement with experimental data (+/- 0.06 SE). The proposed model provided similar prediction capability for high- and intermediate-moisture food products. In addition, the proposed model provided statistically better prediction of initial freezing temperature and FWF than previously published models.
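The premise of the abstract, freezing-point depression driven by low-molecular-weight solutes, can be sketched with the ideal dilute-solution relation dTf = -Kf * m: as ice forms, solutes concentrate in the remaining water until its depressed freezing point equals the product temperature. This is a generic textbook approximation, not the authors' model:

```python
KF_WATER = 1.86  # cryoscopic constant of water, K*kg/mol

def initial_freezing_point(solute_mol, water_kg):
    """Freezing-point depression of an ideal dilute solution (deg C)."""
    return -KF_WATER * solute_mol / water_kg

def frozen_water_fraction(temp_c, solute_mol, water_kg):
    """Fraction of the initial water frozen at temp_c: as ice forms,
    solutes concentrate in the unfrozen water until its depressed
    freezing point equals temp_c."""
    tf = initial_freezing_point(solute_mol, water_kg)
    if temp_c >= tf:
        return 0.0  # above the initial freezing point: no ice yet
    unfrozen = KF_WATER * solute_mol / (-temp_c)  # kg of liquid water left
    return 1.0 - unfrozen / water_kg
```

In this idealization, holding a product at twice its initial depression freezes exactly half of its water, which is the qualitative shape the FWF-temperature curves in the paper follow.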
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
PharmDock: a pharmacophore-based docking program
2014-01-01
Background Protein-based pharmacophore models are enriched with the information of potential interactions between ligands and the protein target. We have shown in a previous study that protein-based pharmacophore models can be applied for ligand pose prediction and pose ranking. In this publication, we present a new pharmacophore-based docking program PharmDock that combines pose sampling and ranking based on optimized protein-based pharmacophore models with local optimization using an empirical scoring function. Results Tests of PharmDock on ligand pose prediction, binding affinity estimation, compound ranking and virtual screening yielded comparable or better performance to existing and widely used docking programs. The docking program comes with an easy-to-use GUI within PyMOL. Two features have been incorporated in the program suite that allow for user-defined guidance of the docking process based on previous experimental data. Docking with those features demonstrated superior performance compared to unbiased docking. Conclusion A protein pharmacophore-based docking program, PharmDock, has been made available with a PyMOL plugin. PharmDock and the PyMOL plugin are freely available from http://people.pharmacy.purdue.edu/~mlill/software/pharmdock. PMID:24739488
Ghosts in the Machine II: Neural Correlates of Memory Interference from the Previous Trial.
Papadimitriou, Charalampos; White, Robert L; Snyder, Lawrence H
2017-04-01
Previous memoranda interfere with working memory. For example, spatial memories are biased toward locations memorized on the previous trial. We predicted, based on attractor network models of memory, that activity in the frontal eye fields (FEFs) encoding a previous target location can persist into the subsequent trial and that this ghost will then bias the readout of the current target. Contrary to this prediction, we find that FEF memory representations appear biased away from (not toward) the previous target location. The behavioral and neural data can be reconciled by a model in which receptive fields of memory neurons converge toward remembered locations, much as receptive fields converge toward attended locations. Convergence increases the resources available to encode the relevant memoranda and decreases overall error in the network, but the residual convergence from the previous trial can give rise to an attractive behavioral bias on the next trial.
Predicting PDZ domain mediated protein interactions from structure
2013-01-01
Background PDZ domains are structural protein domains that recognize simple linear amino acid motifs, often at protein C-termini, and mediate protein-protein interactions (PPIs) in important biological processes, such as ion channel regulation, cell polarity and neural development. PDZ domain-peptide interaction predictors have been developed based on domain and peptide sequence information. Since domain structure is known to influence binding specificity, we hypothesized that structural information could be used to predict new interactions compared to sequence-based predictors. Results We developed a novel computational predictor of PDZ domain and C-terminal peptide interactions using a support vector machine trained with PDZ domain structure and peptide sequence information. Performance was estimated using extensive cross validation testing. We used the structure-based predictor to scan the human proteome for ligands of 218 PDZ domains and show that the predictions correspond to known PDZ domain-peptide interactions and PPIs in curated databases. The structure-based predictor is complementary to the sequence-based predictor, finding unique known and novel PPIs, and is less dependent on training–testing domain sequence similarity. We used a functional enrichment analysis of our hits to create a predicted map of PDZ domain biology. This map highlights PDZ domain involvement in diverse biological processes, some only found by the structure-based predictor. Based on this analysis, we predict novel PDZ domain involvement in xenobiotic metabolism and suggest new interactions for other processes including wound healing and Wnt signalling. Conclusions We built a structure-based predictor of PDZ domain-peptide interactions, which can be used to scan C-terminal proteomes for PDZ interactions. 
We also show that the structure-based predictor finds many known PDZ mediated PPIs in human that were not found by our previous sequence-based predictor and is less dependent on training–testing domain sequence similarity. Using both predictors, we defined a functional map of human PDZ domain biology and predict novel PDZ domain function. Users may access our structure-based and previous sequence-based predictors at http://webservice.baderlab.org/domains/POW. PMID:23336252
Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.
Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O
2017-08-01
To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
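A widely used population model for chemotherapy-induced myelosuppression (the docetaxel model referenced here belongs to this family) is the Friberg transit-compartment model: a proliferating pool feeds three transit compartments into circulating neutrophils, with feedback (circ0/circ)^gamma that drives the rebound. A minimal Euler-integration sketch with illustrative parameter values, not the published estimates:

```python
def simulate_friberg(circ0=5.0, ktr=0.5, gamma=0.2, drug_effect=None,
                     days=21, step=0.01):
    """Euler simulation of a Friberg-style transit-compartment model.
    drug_effect(t) in [0, 1) scales down proliferation; returns the
    circulating-neutrophil time course (one value per step)."""
    prol = t1 = t2 = t3 = circ = circ0  # system starts at baseline
    series = []
    for n in range(int(days / step)):
        t = n * step
        e = drug_effect(t) if drug_effect else 0.0
        fb = (circ0 / circ) ** gamma          # feedback on proliferation
        d_prol = ktr * prol * (1 - e) * fb - ktr * prol
        d_t1 = ktr * (prol - t1)
        d_t2 = ktr * (t1 - t2)
        d_t3 = ktr * (t2 - t3)
        d_circ = ktr * (t3 - circ)
        prol += d_prol * step; t1 += d_t1 * step; t2 += d_t2 * step
        t3 += d_t3 * step; circ += d_circ * step
        series.append(circ)
    return series
```

With no drug the system stays at baseline; a drug pulse produces the delayed nadir and recovery that the paper forecasts from frequent ANC sampling.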
Formability prediction for AHSS materials using damage models
NASA Astrophysics Data System (ADS)
Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara
2017-05-01
Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, high stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels. This may represent a defying task, both to obtain a successful component and when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and by Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is performed using these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability of the previously defined approaches to predict damage and formability limits.
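The Johnson-Cook damage model named above predicts an equivalent fracture strain as a function of stress triaxiality, strain rate and temperature, and accumulates damage against it until D reaches 1. A sketch of the standard form; the D1..D5 constants below are placeholders, not fitted AHSS parameters:

```python
import math

def johnson_cook_fracture_strain(triaxiality, strain_rate_ratio=1.0,
                                 homologous_temp=0.0,
                                 d=(0.05, 3.44, -2.12, 0.002, 0.61)):
    """Johnson-Cook equivalent fracture strain:
    eps_f = (D1 + D2*exp(D3*eta)) * (1 + D4*ln(rate)) * (1 + D5*T*)
    where eta is the stress triaxiality. D1..D5 here are placeholders."""
    d1, d2, d3, d4, d5 = d
    return ((d1 + d2 * math.exp(d3 * triaxiality))
            * (1 + d4 * math.log(strain_rate_ratio))
            * (1 + d5 * homologous_temp))

def damage_increment(eps_increment, eps_f):
    """Damage accumulates as D += d(eps)/eps_f; failure when D >= 1."""
    return eps_increment / eps_f
```

With D3 < 0, the fracture strain drops as triaxiality rises, capturing the lower ductility of AHSS under biaxial stretching than under uniaxial tension.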
An auxiliary optimization method for complex public transit route network based on link prediction
NASA Astrophysics Data System (ADS)
Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian
2018-02-01
Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
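A typical structural-similarity index used in link prediction experiments like this one is the resource-allocation index; the sketch below scores non-adjacent node pairs of a graph given as adjacency sets. This is a generic illustration, not necessarily one of the three indices the paper selected:

```python
def resource_allocation(adj, u, v):
    """Resource-allocation similarity: sum over common neighbours z of
    1/degree(z). Higher scores suggest a missing (or future) link."""
    common = adj[u] & adj[v]
    return sum(1.0 / len(adj[z]) for z in common)

def rank_missing_links(adj):
    """Score every non-adjacent node pair, highest score first."""
    nodes = sorted(adj)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
             if v not in adj[u]]
    return sorted(pairs, key=lambda p: -resource_allocation(adj, *p))
```

The top-ranked non-adjacent pairs are the candidate "missing" links; symmetrically, existing links with very low scores are candidates for spurious-link identification.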
Prediction and analysis of beta-turns in proteins by support vector machine.
Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao
2003-01-01
The tight turn has long been recognized as one of the three important features of proteins, after the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% of tight turns are beta-turns. Analysis and prediction of beta-turns in particular, and tight turns in general, are very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper, we introduce a support vector machine (SVM) approach to the prediction and analysis of beta-turns. We have investigated two aspects of applying SVMs to the prediction and analysis of beta-turns. First, we developed a new SVM method, called BTSVM, which predicts the beta-turns of a protein from its sequence. The prediction results on a dataset of 426 non-homologous protein chains, obtained by the sevenfold cross-validation technique, showed that our method is superior to previous methods. Second, we analyzed how amino acid positions support (or prevent) the formation of beta-turns based on the "multivariable" classification model of a linear SVM. This model is more general than those of previous statistical methods. Our analysis results are more comprehensive and easier to use than previously published analysis results.
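The input to a sequence-based SVM predictor such as BTSVM is typically a one-hot encoding of a residue window around the candidate position. A minimal sketch of that feature-extraction step; the window size and encoding details are illustrative, not taken from the paper:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(sequence, center, half_width=2):
    """One-hot encode a residue window around `center` (a candidate
    beta-turn position); positions past either end are all-zero.
    The fixed-length vector is the kind of input a linear SVM trains on."""
    vec = []
    for pos in range(center - half_width, center + half_width + 1):
        one_hot = [0] * len(AMINO_ACIDS)
        if 0 <= pos < len(sequence):
            one_hot[AMINO_ACIDS.index(sequence[pos])] = 1
        vec.extend(one_hot)
    return vec
```

With a linear SVM, each learned weight then maps back to one (position, amino acid) pair, which is what makes the positional analysis in the paper's second part possible.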
Network of listed companies based on common shareholders and the prediction of market volatility
NASA Astrophysics Data System (ADS)
Li, Jie; Ren, Da; Feng, Xu; Zhang, Yongjie
2016-11-01
In this paper, we build a network of listed companies in the Chinese stock market based on common shareholding data from 2003 to 2013. We analyze the evolution of topological characteristics of the network (e.g., average degree, diameter, average path length and clustering coefficient) with respect to the time sequence. Additionally, we consider the economic implications of topological characteristic changes on market volatility and use them to make future predictions. Our study finds that the network diameter significantly predicts volatility. After adding control variables used in traditional financial studies (volume, turnover and previous volatility), network topology still significantly influences volatility and improves the predictive ability of the model.
Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka
2018-05-08
Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times the number of PPI predictions of previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer.
MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on docking calculations with biochemical pathways and enables users to easily and quickly assess PPI feasibilities by archiving PPI predictions. MEGADOCK-Web also promotes the discovery of new PPIs and protein functions and is freely available for use at http://www.bi.cs.titech.ac.jp/megadock-web/ .
Next Place Prediction Based on Spatiotemporal Pattern Mining of Mobile Device Logs.
Lee, Sungjun; Lim, Junseok; Park, Jonghun; Kim, Kwanho
2016-01-23
Due to the recent explosive growth of location-aware services based on mobile devices, predicting the next places of a user is of increasing importance for enabling proactive information services. In this paper, we introduce a data-driven framework that aims to predict a user's next places using his/her past visiting patterns analyzed from mobile device logs. Specifically, the notion of the spatiotemporal-periodic (STP) pattern is proposed to capture visits with spatiotemporal periodicity by focusing on a detailed level of location for each individual. Subsequently, we present algorithms that extract the STP patterns from a user's past visiting behaviors and predict the next places based on the patterns. The experiment results obtained using a real-world dataset show that, in most cases, the proposed methods are more effective in predicting the user's next places than previous approaches.
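The STP idea, counting visits at a fine spatiotemporal granularity and predicting the most frequent place for the current slot, can be sketched as follows. The (weekday, hour) slot granularity is an illustrative simplification of the paper's patterns:

```python
from collections import Counter

def build_stp_patterns(visits):
    """Count visits per (weekday, hour, place) from a log of
    (weekday, hour, place) tuples."""
    return Counter(visits)

def predict_next_place(patterns, weekday, hour):
    """Most frequently visited place in the given time slot, if any."""
    slot = {key: n for key, n in patterns.items()
            if key[0] == weekday and key[1] == hour}
    if not slot:
        return None
    return max(slot, key=slot.get)[2]
```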
NASA Astrophysics Data System (ADS)
Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan
2018-02-01
Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques, and clustering information plays an important role in many of them. In the previous literature, the node clustering coefficient appears frequently in link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, achieving especially remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
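A node-clustering-based baseline of the kind ALC improves upon weights each common neighbour by its local clustering coefficient (as in the CCLP index). A minimal sketch over adjacency sets; ALC itself replaces the node coefficient with a link-level, asymmetric quantity, which is not reproduced here:

```python
def clustering_coefficient(adj, z):
    """Local clustering coefficient of node z: the fraction of
    neighbour pairs of z that are themselves linked."""
    neigh = list(adj[z])
    k = len(neigh)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neigh[j] in adj[neigh[i]])
    return 2.0 * links / (k * (k - 1))

def cclp_score(adj, u, v):
    """Link-prediction score for the pair (u, v): sum of clustering
    coefficients of their common neighbours."""
    return sum(clustering_coefficient(adj, z) for z in adj[u] & adj[v])
```

The limitation the paper targets is visible here: a common neighbour z contributes the same coefficient to every candidate pair, regardless of how z clusters with u and v specifically.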
Theil, P K; Flummer, C; Hurley, W L; Kristensen, N B; Labouriau, R L; Sørensen, M T
2014-12-01
The aims of the present study were to quantify colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how the composition of diets fed to gestating sows affected piglet CI, sow colostrum yield (CY), and colostrum composition. In total, 240 piglets from 40 litters were enriched with D2O. The CI measured by D2O from birth until 24 h after the birth of the first-born piglet was on average 443 g (SD 151). Based on measured CI, a mechanistic model to predict CI was developed using piglet characteristics (24-h weight gain [WG, g], BW at birth [BWB, kg], and duration of CI [D, min]): CI (g) = -106 + 2.26 WG + 200 BWB + 0.111 D - 1,414 WG/D + 0.0182 WG/BWB (R2 = 0.944). This model was used to predict the CI for all colostrum-suckling piglets within the 40 litters (n=500, mean=437 g, SD=153 g) and was compared with the CI predicted by a previous empirical predictive model (mean=305 g, SD=140 g). The previous empirical model underestimated the CI by 30% compared with that obtained by the new mechanistic model. The sows were fed 1 of 4 gestation diets (n=10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n=8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil+4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations, and their piglets had greater CI as compared with sows fed potato pulp or the low-fiber diet (P<0.05), and sows fed pectin residue had a greater CY than potato pulp-fed sows (P<0.05).
Prefarrowing diets affected neither CI nor CY, but the prefarrowing diet with coconut oil decreased lactose and increased DM concentrations of colostrum compared with other prefarrowing diets (P<0.05). In conclusion, the new mechanistic predictive model for CI suggests that the previous empirical predictive model underestimates CI of sow-reared piglets by 30%. It was also concluded that nutrition of sows during gestation affected CY and colostrum composition.
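The mechanistic model above is a closed-form function of piglet weight gain, birth weight, and suckling duration, so it can be implemented directly. A minimal sketch using the coefficients reported in the abstract (the function and variable names are mine):

```python
def colostrum_intake_g(wg_g, bwb_kg, d_min):
    """Predicted 24-h colostrum intake (g) from the mechanistic model above.

    wg_g   -- 24-h weight gain (g)
    bwb_kg -- body weight at birth (kg)
    d_min  -- duration of colostrum intake (min)
    """
    return (-106 + 2.26 * wg_g + 200 * bwb_kg + 0.111 * d_min
            - 1414 * wg_g / d_min + 0.0182 * wg_g / bwb_kg)

# A plausible piglet (100 g gain, 1.4 kg at birth, suckling over 24 h)
# yields an intake in the vicinity of the reported litter means.
```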
Kim, Yun Hak; Jeong, Dae Cheon; Pak, Kyoungjune; Goh, Tae Sik; Lee, Chi-Seung; Han, Myoung-Eun; Kim, Ji-Young; Liangwen, Liu; Kim, Chi Dae; Jang, Jeon Yeob; Cha, Wonjae; Oh, Sae-Ock
2017-09-29
Accurate prediction of prognosis is critical for therapeutic decisions regarding cancer patients. Many previously developed prognostic scoring systems have limitations in reflecting recent progress in the field of cancer biology such as microarray, next-generation sequencing, and signaling pathways. To develop a new prognostic scoring system for cancer patients, we used mRNA expression and clinical data in various independent breast cancer cohorts (n=1214) from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and Gene Expression Omnibus (GEO). A new prognostic score that reflects gene network inherent in genomic big data was calculated using Network-Regularized high-dimensional Cox-regression (Net-score). We compared its discriminatory power with those of two previously used statistical methods: stepwise variable selection via univariate Cox regression (Uni-score) and Cox regression via Elastic net (Enet-score). The Net scoring system showed better discriminatory power in prediction of disease-specific survival (DSS) than other statistical methods (p=0 in METABRIC training cohort, p=0.000331, 4.58e-06 in two METABRIC validation cohorts) when accuracy was examined by log-rank test. Notably, comparison of C-index and AUC values in receiver operating characteristic analysis at 5 years showed fewer differences between training and validation cohorts with the Net scoring system than other statistical methods, suggesting minimal overfitting. The Net-based scoring system also successfully predicted prognosis in various independent GEO cohorts with high discriminatory power. In conclusion, the Net-based scoring system showed better discriminative power than previous statistical methods in prognostic prediction for breast cancer patients. This new system will mark a new era in prognosis prediction for cancer patients.
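The C-index used above to compare scoring systems is the fraction of usable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal stand-alone sketch (not the authors' implementation, which handles censoring within the Cox framework):

```python
def c_index(times, events, risks):
    """Concordance index: a higher risk score should pair with an earlier event.

    times  -- observed survival or censoring times
    events -- 1 if the event was observed, 0 if censored
    risks  -- model risk scores (higher = worse prognosis)
    """
    concordant = ties = usable = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # a pair is usable only when the earlier time is an event
        for j in range(n):
            if times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / usable
```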
ENSO-based probabilistic forecasts of March-May U.S. tornado and hail activity
NASA Astrophysics Data System (ADS)
Lepore, Chiara; Tippett, Michael K.; Allen, John T.
2017-09-01
Extended logistic regression is used to predict March-May severe convective storm (SCS) activity based on the preceding December-February (DJF) El Niño-Southern Oscillation (ENSO) state. The spatially resolved probabilistic forecasts are verified against U.S. tornado counts, hail events, and two environmental indices for severe convection. The cross-validated skill is positive for roughly a quarter of the U.S. Overall, indices are predicted with more skill than are storm reports, and hail events are predicted with more skill than tornado counts. Skill is higher in the cool phase of ENSO (La Niña like) when overall SCS activity is higher. SCS forecasts based on the predicted DJF ENSO state from coupled dynamical models initialized in October of the previous year extend the lead time with only a modest reduction in skill compared to forecasts based on the observed DJF ENSO state.
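As a toy illustration of an ENSO-conditioned probabilistic forecast, a logistic link can map the DJF ENSO state to the probability of an active MAM season. The coefficients below are hypothetical; the negative slope merely encodes the abstract's finding that La Niña-like (negative) states favor higher SCS activity:

```python
import math

def p_active_season(nino34_djf, b0=-0.2, b1=-0.8):
    """Hypothetical logistic forecast: P(above-normal MAM SCS activity)
    given the preceding DJF Nino-3.4 index (negative = La Nina-like)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * nino34_djf)))
```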
A vertical handoff decision algorithm based on ARMA prediction model
NASA Astrophysics Data System (ADS)
Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan
2012-01-01
With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks, and during the vertical handoff procedure the handoff decision is a crucial issue for efficient mobility. Based on the autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and previous RSS values, the proposed approach adopts the ARMA model to predict the next RSS, and the predicted RSS is then used to determine whether to trigger the link layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and in the number of handoffs.
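A minimal sketch of this decision rule, with a one-step AR(1) predictor standing in for the paper's full ARMA model (the smoothing coefficient and threshold value are illustrative, not from the paper):

```python
def predict_next_rss(rss_history, phi=0.9):
    """One-step AR(1) prediction of RSS (dBm) around the running mean;
    a simplified stand-in for a fitted ARMA model."""
    mean = sum(rss_history) / len(rss_history)
    return mean + phi * (rss_history[-1] - mean)

def trigger_handoff(rss_history, threshold_dbm=-85.0):
    """Fire the link-layer trigger only if the *predicted* RSS falls below
    the threshold, avoiding unnecessary handoffs on momentary dips."""
    return predict_next_rss(rss_history) < threshold_dbm
```

Predicting the next RSS rather than reacting to the current one is what lets the scheme trigger handoff slightly earlier on a genuinely decaying link while ignoring transient fades.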
Pseudoracemic amino acid complexes: blind predictions for flexible two-component crystals.
Görbitz, Carl Henrik; Dalhus, Bjørn; Day, Graeme M
2010-08-14
Ab initio prediction of the crystal packing in complexes between two flexible molecules is a particularly challenging computational chemistry problem. In this work we present results of single crystal structure determinations as well as theoretical predictions for three 1:1 complexes between hydrophobic l- and d-amino acids (pseudoracemates), known from previous crystallographic work to form structures with one of two alternative hydrogen bonding arrangements. These are accurately reproduced in the theoretical predictions together with a series of patterns that have never been observed experimentally. In this bewildering forest of potential polymorphs, hydrogen bonding arrangements and molecular conformations, the theoretical predictions succeeded, for all three complexes, in finding the correct hydrogen bonding pattern. For two of the complexes, the calculations also reproduce the exact space group and side chain orientations in the best ranked predicted structure. This includes one complex for which the observed crystal packing clearly contradicted previous experience based on experimental data for a substantial number of related amino acid complexes. The results highlight the significant recent advances that have been made in computational methods for crystal structure prediction.
Jeffrey J. Barry; John M. Buffington; Peter Goodwin; John .G. King; William W. Emmett
2008-01-01
Previous studies assessing the accuracy of bed-load transport equations have considered equation performance statistically based on paired observations of measured and predicted bed-load transport rates. However, transport measurements were typically taken during low flows, biasing the assessment of equation performance toward low discharges, and because equation...
The Role of Music Perception in Predicting Phonological Awareness in Five- and Six-Year-Old Children
ERIC Educational Resources Information Center
Lathroum, Linda M.
2011-01-01
The purpose of this study was to examine the role of music perception in predicting phonological awareness in five- and six-year-old children. This study was based on the hypothesis that music perception and phonological awareness appear to have parallel auditory perceptual mechanisms. Previous research investigating the relationship between these…
ERIC Educational Resources Information Center
Alltucker, Kevin W.; Bullis, Michael; Close, Daniel; Yovanoff, Paul
2006-01-01
We examined the differences between early and late start juvenile delinquents in a sample of 531 previously incarcerated youth in Oregon's juvenile justice system. Data were analyzed with logistic regression to predict early start delinquency based on four explanatory variables: foster care experience, family criminality, special education…
Albitar, Maher; Ma, Wanlong; Lund, Lars; Shahbaba, Babak; Uchio, Edward; Feddersen, Søren; Moylan, Donald; Wojno, Kirk; Shore, Neal
2018-03-01
Distinguishing between low- and high-grade prostate cancers (PCa) is important, but biopsy may underestimate the actual grade of cancer. We have previously shown that urine/plasma-based prostate-specific biomarkers can predict high grade PCa. Our objective was to determine the accuracy of a test using cell-free RNA levels of biomarkers in predicting prostatectomy results. This multicenter community-based prospective study was conducted using urine/blood samples collected from 306 patients. All recruited patients were treatment-naïve, without metastases, and had been biopsied, designated a Gleason Score (GS) based on biopsy, and assigned to prostatectomy prior to participation in the study. The primary outcome measure was the urine/plasma test accuracy in predicting high grade PCa on prostatectomy compared with biopsy findings. Sensitivity and specificity were calculated using standard formulas, while comparisons between groups were performed using the Wilcoxon Rank Sum, Kruskal-Wallis, Chi-Square, and Fisher's exact test. GS as assigned by standard 10-12 core biopsies was 3 + 3 in 90 (29.4%), 3 + 4 in 122 (39.8%), 4 + 3 in 50 (16.3%), and > 4 + 3 in 44 (14.4%) patients. The urine/plasma assay confirmed a previous validation and was highly accurate in predicting the presence of high-grade PCa (Gleason ≥3 + 4) with sensitivity between 88% and 95% as verified by prostatectomy findings. GS was upgraded after prostatectomy in 27% of patients and downgraded in 12% of patients. This plasma/urine biomarker test accurately predicts high grade cancer as determined by prostatectomy with a sensitivity at 92-97%, while the sensitivity of core biopsies was 78%. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Totani, T.; Takeuchi, T. T.
2001-12-01
A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big grain dust temperature T_dust is calculated based on a physical consideration of energy balance, rather than using the empirical relation between T_dust and total infrared luminosity L_IR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. We then make predictions for faint infrared counts (at 15, 60, 90, 170, 450, and 850 μm) and the CBR with this model. We found considerably different results from most previous works based on the empirical L_IR-T_dust relation; in particular, the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making up the FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The authors thank the Japan Society for the Promotion of Science for financial support.
Evaluation of free modeling targets in CASP11 and ROLL.
Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V
2016-09-01
We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested that the use of automated scores similar to previous CASPs would provide a good system for evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.
Discrepancy-based and anticipated emotions in behavioral self-regulation.
Brown, Christina M; McConnell, Allen R
2011-10-01
Discrepancies between one's current and desired states evoke negative emotions, which presumably guide self-regulation. In the current work we evaluated the function of discrepancy-based emotions in behavioral self-regulation. Contrary to classic theories of self-regulation, discrepancy-based emotions did not predict the degree to which people engaged in self-regulatory behavior. Instead, expectations about how future self-discrepancies would make one feel (i.e., anticipated emotions) predicted self-regulation. However, anticipated emotions were influenced by previous discrepancy-based emotional experiences, suggesting that the latter do not directly motivate self-regulation but rather guide expectations. These findings are consistent with the perspective that emotions do not necessarily direct immediate behavior, but rather have an indirect effect by guiding expectations, which in turn predict goal-directed action.
Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.
2013-01-01
Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
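The "adjusted" accuracy metric reported above, mean absolute error after removing the average (systematic) prediction error, is straightforward to compute from the per-eye refraction errors. A minimal sketch (not the authors' code):

```python
def adjusted_mae(pred_errors_d):
    """Mean absolute refraction prediction error (diopters) after
    subtracting the mean error, i.e. with systematic bias removed."""
    mean_err = sum(pred_errors_d) / len(pred_errors_d)
    return sum(abs(e - mean_err) for e in pred_errors_d) / len(pred_errors_d)
```

Removing the mean first separates a formula's constant offset (which a surgeon can correct for) from its eye-to-eye scatter, which is what actually limits predictability.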
Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.
Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2018-01-01
An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
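The discrimination measure used in this validation, AUC-ROC, equals the probability that a randomly chosen ICU-AW patient receives a higher predicted risk than a randomly chosen non-ICU-AW patient. A minimal rank-based sketch (not the study's code):

```python
def auc_roc(labels, scores):
    """AUC via pairwise comparison: labels are 1 (ICU-AW) / 0 (no ICU-AW),
    scores are predicted risks. Tied scores count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale the reported 0.60 is close to the 0.5 of a coin flip, which is why the original model's discrimination is described as poor.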
Learning receptive fields using predictive feedback.
Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H
2006-01-01
Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
Xia, Junfeng; Yue, Zhenyu; Di, Yunqiang; Zhu, Xiaolei; Zheng, Chun-Hou
2016-01-01
The identification of hot spots, a small subset of protein interface residues that accounts for the majority of binding free energy, is becoming more important for research in drug design and cancer development. Based on our previous methods (APIS and KFC2), here we propose a novel hot spot prediction method. For each hot spot residue, we first constructed a wide variety of 108 sequence, structural, and neighborhood features to characterize potential hot spot residues, including conventional ones and a new one (pseudo hydrophobicity) exploited in this study. We then selected the 3 top-ranking features that contribute the most to the classification by a two-step feature selection process consisting of the minimal-redundancy-maximal-relevance algorithm and an exhaustive search method. We used support vector machines to build our final prediction model. When testing our model on an independent test set, our method showed the highest F1-score of 0.70 and MCC of 0.46 compared with existing state-of-the-art hot spot prediction methods. Our results indicate that these features are more effective than the conventional features considered previously, and that the combination of our and traditional features may support the creation of a discriminative feature set for efficient prediction of hot spots in protein interfaces. PMID:26934646
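The headline metrics above (F1 and MCC) follow directly from a 2x2 confusion matrix of predicted versus actual hot spots. A minimal sketch of both formulas:

```python
import math

def f1_and_mcc(tp, fp, fn, tn):
    """F1 score and Matthews correlation coefficient from confusion counts
    (tp = hot spots correctly predicted as hot spots, etc.)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return f1, mcc
```

MCC is often preferred alongside F1 for hot spot prediction because interface residues are heavily imbalanced and MCC also rewards correct negatives.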
Xu, Yifang; Collins, Leslie M
2005-06-01
This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, N_ucl. However, no specific rule for determining N_ucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to N_ucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
Results on three predictions for July 2012 federal elections in Mexico based on past regularities.
Hernández-Saldaña, H
2013-01-01
The Presidential Election in Mexico of July 2012 was the third time that the PREP (Previous Electoral Results Program) has operated. PREP gives voting outcomes based on the electoral certificates of each polling station that arrive at capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) The distribution of this party's vote is well described by a smooth function named a Daisy model. A Gamma distribution, but one compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes, as in the 2000 and 2006 electoral processes. The three measured regularities appeared regardless of the political environment.
Gong, Ping; Nan, Xiaofei; Barker, Natalie D; Boyd, Robert E; Chen, Yixin; Wilkins, Dawn E; Johnson, David R; Suedel, Burton C; Perkins, Edward J
2016-03-08
Chemical bioavailability is an important dose metric in environmental risk assessment. Although many approaches have been used to evaluate bioavailability, no single approach is free from limitations. Previously, we developed a new genomics-based approach that integrated microarray technology and regression modeling for predicting the bioavailability (tissue residue) of explosive compounds in exposed earthworms. In the present study, we further compared 18 different regression models and performed variable selection simultaneously with parameter estimation. This refined approach was applied to both previously collected and newly acquired earthworm microarray gene expression datasets for three explosive compounds. Our results demonstrate that a prediction accuracy of R2 = 0.71-0.82 was achievable at relatively low model complexity with as few as 3-10 predictor genes per model. These results are much more encouraging than our previous ones. This study demonstrates that our approach is promising for bioavailability measurement, which warrants further studies of mixed contamination scenarios in field settings.
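A toy illustration of the variable-selection idea, ranking candidate predictor genes by the strength of their correlation with measured tissue residue (the actual study embeds selection inside regularized regression; the gene names and data here are hypothetical):

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def top_predictor_genes(expression, tissue_residue, k=3):
    """Rank genes by |r| with tissue residue and keep the k strongest."""
    return sorted(expression,
                  key=lambda g: -abs(pearson_r(expression[g], tissue_residue)))[:k]
```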
NASA Astrophysics Data System (ADS)
Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha
2014-03-01
Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method (Zhou et al.) with a novel MR-based multivariate morphometric surface map of the hippocampus (Shi et al.) to predict future cognitive scores of patients. Previous work by Zhou et al. has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity and temporal smoothness. They showed that this can be used to predict cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information, and ApoE status. While volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36, and 48 months from baseline.
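The multi-task formulation penalizes both sparsity and abrupt changes in feature weights between consecutive prediction horizons. A sketch of just the two penalty terms (the data-fit loss is omitted, and this is a simplified reading of temporal-group-Lasso-style objectives, not the authors' exact formulation):

```python
def temporal_penalty(W, lam_sparse=1.0, lam_smooth=1.0):
    """W[t][j] = weight of feature j for the task predicting time point t.
    Returns the L1 sparsity term plus the squared temporal-smoothness term,
    which together encourage few features with slowly varying weights."""
    sparsity = sum(abs(w) for task in W for w in task)
    smoothness = sum((W[t][j] - W[t + 1][j]) ** 2
                     for t in range(len(W) - 1)
                     for j in range(len(W[0])))
    return lam_sparse * sparsity + lam_smooth * smoothness
```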
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
When high working memory capacity is and is not beneficial for predicting nonlinear processes.
Fischer, Helen; Holt, Daniel V
2017-04-01
Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.
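The contrast the authors draw, well-calibrated rules for exponential trajectories versus simple linear extrapolation, can be made concrete with a two-point fit. A minimal sketch (illustrative, not the study's materials):

```python
import math

def exp_extrapolate(t0, v0, t1, v1, t_pred):
    """Fit v = a * exp(b * t) through two observations and extrapolate."""
    b = math.log(v1 / v0) / (t1 - t0)
    a = v0 / math.exp(b * t0)
    return a * math.exp(b * t_pred)

def linear_extrapolate(t0, v0, t1, v1, t_pred):
    """Naive straight-line extrapolation, for comparison."""
    slope = (v1 - v0) / (t1 - t0)
    return v0 + slope * (t_pred - t0)
```

For a doubling process observed at (0, 1) and (1, 2), the exponential rule predicts 8 at t = 3 while the linear rule predicts only 4, mirroring why simple strategies suffice in near-linear regions but fail on strongly nonlinear trajectories.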
Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J
2017-04-01
Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk, 35% of Americans are not up to date with screening, and the incidence of CRC in younger patients is growing. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, including an expanded definition of high-risk polyps (≥3 nonadvanced adenomas), to identify patients at higher-than-average risk. We also compared results with previously created calculators. Patients aged 40 to 59 years, undergoing first-time average-risk screening or diagnostic colonoscopies were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas, and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistic results, although one performed similarly. Our model compares favorably to previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. It also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized risk assessment of CRC.
ERIC Educational Resources Information Center
Basak, Chandramallika; Voss, Michelle W.; Erickson, Kirk I.; Boot, Walter R.; Kramer, Arthur F.
2011-01-01
Previous studies have found that differences in brain volume among older adults predict performance in laboratory tasks of executive control, memory, and motor learning. In the present study we asked whether regional differences in brain volume as assessed by the application of a voxel-based morphometry technique on high resolution MRI would also…
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...
ERIC Educational Resources Information Center
Imfeld, Thomas N.; And Others
1995-01-01
A method for predicting high dental caries increments for children, based on previous research, is presented. Three clinical findings were identified as predictors: number of sound primary molars, number of discolored pits/fissures on first permanent molars, and number of buccal and lingual smooth surfaces of first permanent molars with white…
Ma, Xin; Guo, Jing; Sun, Xiao
2016-01-01
DNA-binding proteins are fundamentally important in cellular processes. Several computational methods have been developed in recent years to improve the prediction of DNA-binding proteins. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.
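The mRMR selection step can be illustrated with a minimal greedy sketch. The Pearson-correlation scoring below is one common instantiation (mutual information is also used in practice), and the scoring details are an assumption for illustration, not the DNABP implementation.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr_select(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximizing relevance
    to the labels minus mean redundancy with already-selected features.
    Incremental feature selection (IFS) would then evaluate the model
    on the first 1, 2, ..., k ranked features and keep the best subset."""
    remaining = list(range(len(features)))
    selected = []
    while remaining and len(selected) < k:
        def score(i):
            rel = abs(pearson(features[i], labels))
            red = (sum(abs(pearson(features[i], features[j]))
                       for j in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```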
Dodds, James N; May, Jody C; McLean, John A
2017-11-21
Here we examine the relationship among resolving power (Rp), resolution (Rpp), and collision cross section (CCS) for compounds analyzed in previous ion mobility (IM) experiments representing a wide variety of instrument platforms and IM techniques. Our previous work indicated these three variables effectively describe and predict separation efficiency for drift tube ion mobility spectrometry experiments. In this work, we seek to determine if our previous findings are a general reflection of IM behavior that can be applied to various instrument platforms and mobility techniques. Results suggest IM distributions are well characterized by a Gaussian model and separation efficiency can be predicted on the basis of the empirical difference in the gas-phase CCS and a CCS-based resolving power definition (CCS/ΔCCS). Notably, traveling wave (TWIMS) was found to operate at resolutions substantially higher than a single-peak resolving power suggested. When a CCS-based Rp definition was utilized, TWIMS was found to operate at a resolving power between 40 and 50, confirming the previous observations by Giles and co-workers. After the separation axis (and corresponding resolving power) is converted to cross section space, it is possible to effectively predict separation behavior for all mobility techniques evaluated (i.e., uniform field, trapped ion mobility, traveling wave, cyclic, and overtone instruments) using the equations described in this work. Finally, we are able to establish for the first time that the current state-of-the-art ion mobility separations benchmark at a CCS-based resolving power of >300 that is sufficient to differentiate analyte ions with CCS differences as small as 0.5%.
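For Gaussian peaks of equal width, CCS-based resolving power and two-peak resolution are linked by a simple conversion. The 0.589 factor below follows from the standard FWHM-to-baseline-width relation for Gaussians and is our reading of the framework, not a value quoted in the abstract.

```python
def two_peak_resolution(rp, ccs1, ccs2):
    """Two-peak resolution for Gaussian IM peaks of equal width.

    rp: CCS-based resolving power, CCS / FWHM(CCS).
    For a Gaussian, baseline width (4*sigma) = FWHM / 0.589, so
    R_pp = 0.589 * rp * (delta CCS / mean CCS).
    """
    ccs_mean = 0.5 * (ccs1 + ccs2)
    delta = abs(ccs2 - ccs1)
    return 0.589 * rp * delta / ccs_mean

# At rp = 300, a 0.5% CCS difference gives R_pp close to 0.9,
# i.e., near-baseline separation, consistent with the >300 benchmark.
```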
A Hybrid RANS/LES Approach for Predicting Jet Noise
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.
Neural network based short-term load forecasting using weather compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, T.W.S.; Leung, C.T.
This paper presents a novel technique for electric load forecasting based on neural weather compensation. The proposed method is a nonlinear generalization of Box and Jenkins approach for nonstationary time-series prediction. A weather compensation neural network is implemented for one-day ahead electric load forecasting. The weather compensation neural network can accurately predict the change of actual electric load consumption from the previous day. The results, based on Hong Kong Island historical load demand, indicate that this methodology is capable of providing a more accurate load forecast with a 0.9% reduction in forecast error.
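The weather-compensation idea reduces to a toy sketch: forecast tomorrow's load as today's profile adjusted by a weather-driven correction. The paper's correction term is a neural network; the linear stand-in and the sensitivity value below are hypothetical.

```python
def weather_compensated_forecast(prev_day_load, delta_temp_c, sens=0.012):
    """One-day-ahead electric load forecast (sketch).

    prev_day_load: hourly loads from the previous day (MW)
    delta_temp_c:  forecast temperature minus previous day's (deg C)
    sens: fractional load change per degree of warming -- a
          hypothetical value; cooling-dominated systems such as
          Hong Kong see demand rise with temperature.
    """
    factor = 1.0 + sens * delta_temp_c
    return [load * factor for load in prev_day_load]
```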
Development of estrogen receptor beta binding prediction model using large sets of chemicals.
Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao
2017-11-03
We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross validations. Prediction confidence was analyzed using predictions from the cross validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross validations. 1000 permutations were conducted to assess the chance correlation. The average accuracy of 5-fold cross validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing results revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to the data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals could be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model could be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
Sensorimotor Grounding of Musical Embodiment and the Role of Prediction: A Review
Maes, Pieter-Jan
2016-01-01
In a previous article, we reviewed empirical evidence demonstrating action-based effects on music perception to substantiate the musical embodiment thesis (Maes et al., 2014). Evidence was largely based on studies demonstrating that music perception automatically engages motor processes, or that body states/movements influence music perception. Here, we argue that more rigorous evidence is needed before any decisive conclusion in favor of a “radical” musical embodiment thesis can be posited. In the current article, we provide a focused review of recent research to collect further evidence for the “radical” embodiment thesis that music perception is a dynamic process firmly rooted in the natural disposition of sounds and the human auditory and motor system. We emphasize, though, that on top of these natural dispositions, long-term processes operate, rooted in repeated sensorimotor experiences and leading to learning, prediction, and error minimization. This approach sheds new light on the development of musical repertoires, and may refine our understanding of action-based effects on music perception as discussed in our previous article (Maes et al., 2014). Additionally, we discuss two of our recent empirical studies demonstrating that music performance relies on similar principles of sensorimotor dynamics and predictive processing. PMID:26973587
Predicting RNA folding thermodynamics with a reduced chain representation model
CAO, SONG; CHEN, SHI-JIE
2005-01-01
Based on the virtual bond representation for the nucleotide backbone, we develop a reduced conformational model for RNA. We use the experimentally measured atomic coordinates to model the helices and use self-avoiding walks in a diamond lattice to model the loop conformations. The atomic coordinates of the helices and the lattice representation for the loops are matched at the loop–helix junction, where steric viability is accounted for. Unlike the previous simplified lattice-based models, the present virtual bond model can account for the atomic details of realistic three-dimensional RNA structures. Based on the model, we develop a statistical mechanical theory for RNA folding energy landscapes and folding thermodynamics. Tests against experiments show that the theory gives markedly better predictions for the native structures, the thermal denaturation curves, and the equilibrium folding/unfolding pathways than the previous models. The application of the model to the P5abc region of the Tetrahymena group I ribozyme reveals the misfolded intermediates as well as the native-like intermediates in the equilibrium folding process. Moreover, based on the free energy landscape analysis for each and every loop mutation, the model predicts five lethal mutations that can completely alter the free energy landscape and the folding stability of the molecule. PMID:16251382
Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit
2015-01-01
The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy to use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. First tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
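A second-tier tree of the kind described reduces to nested threshold rules. The split points and leaf increments below are invented for illustration; they are not the fitted values from the study.

```python
HYPERTHERMIA_C = 38.0  # upper threshold of hyperthermia

def predict_core_temp(baseline_temp_c, duration_min, prev_crossed):
    """Regression-tree-style prediction of end-of-scenario core body
    temperature. Splits and leaf values are illustrative only."""
    if prev_crossed:          # second-tier models use the previous outcome
        return baseline_temp_c + 1.2
    if duration_min > 20.0:
        return baseline_temp_c + 0.9
    return baseline_temp_c + 0.5

def will_cross_threshold(baseline_temp_c, duration_min, prev_crossed):
    """Categorical outcome: will the firefighter enter hyperthermia?"""
    predicted = predict_core_temp(baseline_temp_c, duration_min, prev_crossed)
    return predicted >= HYPERTHERMIA_C
```

Comparing a regression tree's quantitative prediction to the 38°C threshold, as in the second function, is exactly how the study converted regression-tree output into the yes/no outcome.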
Catching What We Can't See: Manual Interception of Occluded Fly-Ball Trajectories
Bosco, Gianfranco; Delle Monache, Sergio; Lacquaniti, Francesco
2012-01-01
Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to the elaboration of a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g) at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different types of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a priori knowledge of gravity.
Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories. PMID:23166653
Investigation of an Activity-Based Text-Processing Strategy in Mixed-Age Child Dyads
ERIC Educational Resources Information Center
Marley, Scott C.; Szabo, Zsuzsanna; Levin, Joel R.; Glenberg, Arthur M.
2011-01-01
The authors examined an activity-based listening strategy with first- and third-grade children in mixed-grade dyads. On the basis of theories of cognitive development and previous research, the authors predicted the following: (a) children in an activity-based strategy would recall more story events compared with those in a repetition strategy and…
A MELD-based model to determine risk of mortality among patients with acute variceal bleeding.
Reverter, Enric; Tandon, Puneeta; Augustin, Salvador; Turon, Fanny; Casu, Stefania; Bastiampillai, Ravin; Keough, Adam; Llop, Elba; González, Antonio; Seijo, Susana; Berzigotti, Annalisa; Ma, Mang; Genescà, Joan; Bosch, Jaume; García-Pagán, Joan Carles; Abraldes, Juan G
2014-02-01
Patients with cirrhosis with acute variceal bleeding (AVB) have high mortality rates (15%-20%). Previously described models are seldom used to determine prognoses of these patients, partially because they have not been validated externally and because they include subjective variables, such as bleeding during endoscopy and Child-Pugh score, which are evaluated inconsistently. We aimed to improve determination of risk for patients with AVB. We analyzed data collected from 178 patients with cirrhosis (Child-Pugh scores of A, B, and C: 15%, 57%, and 28%, respectively) and esophageal AVB who received standard therapy from 2007 through 2010. We tested the performance (discrimination and calibration) of previously described models, including the model for end-stage liver disease (MELD), and developed a new MELD calibration to predict the mortality of patients within 6 weeks of presentation with AVB. MELD-based predictions were validated in cohorts of patients from Canada (n = 240) and Spain (n = 221). Among study subjects, the 6-week mortality rate was 16%. MELD was the best model in terms of discrimination; it was recalibrated to predict the 6-week mortality rate with logistic regression (logit = -5.312 + 0.207 × MELD; bootstrapped R² = 0.3295). MELD values of 19 or greater predicted 20% or greater mortality, whereas MELD scores less than 11 predicted less than 5% mortality. The model performed well for patients from Canada at all risk levels. In the Spanish validation set, in which all patients were treated with banding ligation, MELD predictions were accurate up to the 20% risk threshold. We developed a MELD-based model that accurately predicts mortality among patients with AVB, based on objective variables available at admission. This model could be useful to evaluate the efficacy of new therapies and stratify patients in randomized trials. Copyright © 2014 AGA Institute. Published by Elsevier Inc. All rights reserved.
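The recalibrated model is fully specified by the logit reported in the abstract, so its risk thresholds can be checked directly:

```python
import math

def mortality_6wk(meld):
    """Six-week mortality probability after acute variceal bleeding,
    using the recalibration reported in the abstract:
    logit = -5.312 + 0.207 * MELD."""
    logit = -5.312 + 0.207 * meld
    return 1.0 / (1.0 + math.exp(-logit))

# MELD >= 19 corresponds to >= 20% predicted mortality;
# MELD < 11 corresponds to < 5%.
```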
Evaluation of Contextual Variability in Prediction of Reinforcer Effectiveness
ERIC Educational Resources Information Center
Pino, Olimpia; Dazzi, Carla
2005-01-01
Previous research has shown that stimulus preference assessments based on caregiver-opinion did not coincide with results of a more systematic method of assessing reinforcing value unless stimuli that were assessed to represent preferences were also preferred on paired stimulus presentation format, and that the relative preference based on the…
Predicting Plywood Properties with Wood-based Composite Models
Christopher Adam Senalik; Robert J. Ross
2015-01-01
Previous research revealed that stress wave nondestructive testing techniques could be used to evaluate the tensile and flexural properties of wood-based composite materials. Regression models were developed that related stress wave transmission characteristics (velocity and attenuation) to modulus of elasticity and strength. The developed regression models accounted...
Recent literature has shown that bioavailability-based techniques, such as Tenax extraction, can estimate sediment exposure to benthos. In a previous study by the authors,Tenax extraction was used to create and validate a literature-based Tenax model to predict oligochaete bioac...
Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness
NASA Astrophysics Data System (ADS)
Tumac, Deniz
2014-03-01
Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited. Also, rather little research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for only cutting the stone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, the normal force has a weak or moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted by Shore hardness values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tran, A; Ruan, D; Woods, K
Purpose: The predictive power of knowledge based planning (KBP) has considerable potential in the development of automated treatment planning. Here, we examine the predictive capabilities and accuracy of previously reported KBP methods, as well as an artificial neural network (ANN) method. Furthermore, we compare the predictive accuracy of these methods on coplanar volumetric-modulated arc therapy (VMAT) and non-coplanar 4π radiotherapy. Methods: 30 liver SBRT patients previously treated using coplanar VMAT were selected for this study. The patients were re-planned using 4π radiotherapy, which involves 20 optimally selected non-coplanar IMRT fields. ANNs were used to incorporate enhanced geometric information including liver and PTV size, prescription dose, patient girth, and proximity to beams. The performance of the ANN was compared to three methods from statistical voxel dose learning (SVDL), wherein the doses of voxels sharing the same distance to the PTV are approximated by either taking the median of the distribution, non-parametric fitting, or skew-normal fitting. These three methods were shown to be capable of predicting DVH, but only median approximation can predict 3D dose. Prediction methods were tested using leave-one-out cross-validation tests and evaluated using residual sum of squares (RSS) for DVH and 3D dose predictions. Results: DVH prediction using non-parametric fitting had the lowest average RSS with 0.1176(4π) and 0.1633(VMAT), compared to 0.4879(4π) and 1.8744(VMAT) RSS for ANN. 3D dose prediction with median approximation had lower RSS with 12.02(4π) and 29.22(VMAT), compared to 27.95(4π) and 130.9(VMAT) for ANN. Conclusion: Paradoxically, although the ANNs included geometric features in addition to the distances to the PTV, they did not perform better in predicting DVH or 3D dose compared to simpler, faster methods based on the distances alone.
The study further confirms that predictions for 4π non-coplanar plans were more accurate than for VMAT. NIH R43CA183390 and R01CA188300.
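The median-approximation variant of SVDL is simple enough to sketch directly: pool (distance-to-PTV, dose) pairs from prior plans, bin by distance, and predict the bin median for new voxels. The 2 mm bin width is an assumption for illustration.

```python
from collections import defaultdict
from statistics import median

BIN_MM = 2.0  # assumed distance-bin width

def svdl_median_model(training_voxels):
    """Statistical voxel dose learning, median approximation (sketch).

    training_voxels: iterable of (distance_to_ptv_mm, dose_gy) pairs
    pooled from previously treated plans. Returns a lookup table
    mapping distance bin -> median dose.
    """
    bins = defaultdict(list)
    for dist, dose in training_voxels:
        bins[int(dist // BIN_MM)].append(dose)
    return {b: median(doses) for b, doses in bins.items()}

def predict_dose(model, dist):
    """Predict the dose of a new voxel from its distance to the PTV."""
    return model.get(int(dist // BIN_MM))
```

Because every voxel at a given distance gets the bin median, this method predicts a full 3D dose, whereas the curve-fitting variants only summarize the per-distance dose distributions.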
NASA Astrophysics Data System (ADS)
Kong, Lingxin; Yang, Bin; Xu, Baoqiang; Li, Yifu
2014-09-01
Based on the molecular interaction volume model (MIVM), the activities of components of Sn-Sb, Sb-Bi, Sn-Zn, Sn-Cu, and Sn-Ag alloys were predicted. The predicted values are in good agreement with the experimental data, indicating that the MIVM is stable and reliable owing to its sound physical basis. A significant advantage of the MIVM lies in its ability to predict the thermodynamic properties of liquid alloys using only two parameters. The phase equilibria of Sn-Sb and Sn-Bi alloys were calculated based on the properties of the pure components and the activity coefficients, which indicates that Sn-Sb and Sn-Bi alloys can be separated thoroughly by vacuum distillation. This study extends previous investigations and provides an effective and convenient model on which to base refining simulations for Sn-based alloys.
Knowledge of Previous Tasks: Task Similarity Influences Bias in Task Duration Predictions
Thomas, Kevin E.; König, Cornelius J.
2018-01-01
Bias in predictions of task duration has been attributed to misremembering previous task duration and using previous task duration as a basis for predictions. This research sought to further examine how previous task information affects prediction bias by manipulating task similarity and assessing the role of previous task duration feedback. Task similarity was examined through participants performing two tasks 1 week apart that were the same or different. Duration feedback was provided to all participants (Experiment 1), its recall was manipulated (Experiment 2), and its provision was manipulated (Experiment 3). In all experiments, task similarity influenced bias on the second task, with predictions being less biased when the first task was the same task. However, duration feedback did not influence bias. The findings highlight the pivotal role of knowledge about previous tasks in task duration prediction and are discussed in relation to the theoretical accounts of task duration prediction bias. PMID:29881362
Phillips, A M B; Depaola, A; Bowers, J; Ladner, S; Grimes, D J
2007-04-01
The U.S. Food and Drug Administration recently published a Vibrio parahaemolyticus risk assessment for consumption of raw oysters that predicts V. parahaemolyticus densities at harvest based on water temperature. We retrospectively compared archived remotely sensed measurements (sea surface temperature, chlorophyll, and turbidity) with previously published data from an environmental study of V. parahaemolyticus in Alabama oysters to assess the utility of the former data for predicting V. parahaemolyticus densities in oysters. Remotely sensed sea surface temperature correlated well with previous in situ measurements (R² = 0.86) of bottom water temperature, supporting the notion that remotely sensed sea surface temperature data are a sufficiently accurate substitute for direct measurement. Turbidity and chlorophyll levels were not determined in the previous study, but in comparison with the V. parahaemolyticus data, remotely sensed values for these parameters may explain some of the variation in V. parahaemolyticus levels. More accurate determination of these effects and the temporal and spatial variability of these parameters may further improve the accuracy of prediction models. To illustrate the utility of remotely sensed data as a basis for risk management, predictions based on the U.S. Food and Drug Administration V. parahaemolyticus risk assessment model were integrated with remotely sensed sea surface temperature data to display graphically variations in V. parahaemolyticus density in oysters associated with spatial variations in water temperature. We believe images such as these could be posted in near real time, and that the availability of such information in a user-friendly format could be the basis for timely and informed risk management decisions.
Matrix factorization-based data fusion for gene function prediction in baker's yeast and slime mold.
Zitnik, Marinka; Zupan, Blaž
2014-01-01
The development of effective methods for the characterization of gene functions that are able to combine diverse data sources in a sound and easily extensible way is an important goal in computational biology. We have previously developed a general matrix factorization-based data fusion approach for gene function prediction. In this manuscript, we show that this data fusion approach can be applied to gene function prediction and that it can fuse various heterogeneous data sources, such as gene expression profiles, known protein annotations, interaction and literature data. The fusion is achieved by simultaneous matrix tri-factorization that shares matrix factors between sources. We demonstrate the effectiveness of the approach by evaluating its performance on predicting ontological annotations in slime mold D. discoideum and on recognizing proteins of baker's yeast S. cerevisiae that participate in the ribosome or are located in the cell membrane. Our approach achieves predictive performance comparable to that of the state-of-the-art kernel-based data fusion, but requires fewer data preprocessing steps.
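Where the abstract above describes fusion by simultaneous matrix tri-factorization, a minimal single-relation sketch may help fix ideas. This is not the authors' multi-source method, which shares factors across several relation matrices; it factorizes one nonnegative relation matrix R ≈ G1 S G2ᵀ with standard multiplicative updates, and all function names are illustrative.

```python
import numpy as np

def tri_factorize(R, k1, k2, iters=200, eps=1e-9, seed=0):
    """Approximate R ~= G1 @ S @ G2.T with nonnegative factors,
    using standard multiplicative updates (single-relation sketch)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    G1 = rng.random((n, k1))
    G2 = rng.random((m, k2))
    S = rng.random((k1, k2))
    for _ in range(iters):
        # Each update rescales a factor by the ratio of the gradient's
        # positive and negative parts, keeping entries nonnegative.
        G1 *= (R @ G2 @ S.T) / (G1 @ S @ G2.T @ G2 @ S.T + eps)
        G2 *= (R.T @ G1 @ S) / (G2 @ S.T @ G1.T @ G1 @ S + eps)
        S *= (G1.T @ R @ G2) / (G1.T @ G1 @ S @ G2.T @ G2 + eps)
    return G1, S, G2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    R = rng.random((30, 20))
    G1, S, G2 = tri_factorize(R, k1=5, k2=4)
    err = np.linalg.norm(R - G1 @ S @ G2.T) / np.linalg.norm(R)
    print(round(err, 3))
```

In the full data-fusion setting, one such tri-factorization is written per data source and the G factors for shared object types are constrained to be equal, which is how information flows between sources.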
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG2000) to "residual"-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics and in the spatial shape of the imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS datasets with different wavelength ranges. The proposed method outperforms existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
Predicting Patterns of Grammatical Complexity across Language Exam Task Types and Proficiency Levels
ERIC Educational Resources Information Center
Biber, Douglas; Gray, Bethany; Staples, Shelley
2016-01-01
In the present article, we explore the extent to which previous research on register variation can be used to predict spoken/written task-type variation as well as differences across score levels in the context of a major standardized language exam (TOEFL iBT). Specifically, we carry out two sets of linguistic analyses based on a large corpus of…
Lin, Lei; Wang, Qian; Sadek, Adel W
2016-06-01
The duration of freeway traffic accidents is an important factor, which affects traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normally distributed, the distribution for a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction, which improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification, and avoids the incorrect assumption of normality for traffic accident durations. The proposed model was then tested on two freeway accident datasets. For each dataset, the first 500 records were used to train the following three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM, and the remainder of the data was used for testing. The results show that the proposed M5P-HBDM managed to identify more significant and meaningful variables than either M5P or HBDMs. 
Moreover, the M5P-HBDM had the lowest overall mean absolute percentage error (MAPE). Copyright © 2016 Elsevier Ltd. All rights reserved.
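Since the abstract contrasts linear-regression leaves with hazard-based duration models, a minimal sketch of fitting one parametric hazard model, a Weibull with survival function S(t) = exp(-(t/scale)^shape), to uncensored durations may be useful. This is not the authors' M5P-HBDM; it is a plain maximum-likelihood fit using the standard fixed-point iteration for the shape parameter, with illustrative names.

```python
import numpy as np

def fit_weibull(durations, iters=100):
    """MLE for a Weibull duration model S(t) = exp(-(t/scale)^shape).

    The shape parameter solves a fixed-point equation derived from the
    log-likelihood; the scale then has a closed form given the shape."""
    t = np.asarray(durations, dtype=float)
    logt = np.log(t)
    k = 1.0
    for _ in range(iters):
        tk = t ** k
        k_new = 1.0 / (np.sum(tk * logt) / np.sum(tk) - logt.mean())
        k = 0.5 * (k + k_new)  # damped update for robust convergence
    scale = np.mean(t ** k) ** (1.0 / k)
    return k, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = 30.0 * rng.weibull(1.5, size=5000)  # true shape 1.5, scale 30
    shape, scale = fit_weibull(sample)
    print(round(shape, 2), round(scale, 1))  # close to 1.5 and 30
```

In an M5P-HBDM-style model, a fit like this would be performed separately within each leaf of the tree, so each homogeneous subset of accidents gets its own duration distribution.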
Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime
2015-05-01
There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Prediction of Protein Configurational Entropy (Popcoen).
Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel
2018-03-13
A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast compared to previous approaches because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect prediction. Integrating the predicted entropy can yield a significant accuracy increase, as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.
Santillan, Arturo O; Cutanda-Henríquez, Vicente
2008-11-01
An investigation of the resonance frequency shift for a plane-wave mode in a cylindrical cavity produced by a rigid sphere is reported in this paper. This change of the resonance frequency has previously been considered a cause of oscillational instabilities in single-mode acoustic levitation devices. It is shown that the use of the Boltzmann-Ehrenfest principle of adiabatic invariance allows the derivation of an expression for the resonance frequency shift in a simpler and more direct way than a method based on a Green's function reported in the literature. The position of the sphere can be any point along the axis of the cavity. Predictions of the resonance frequency shift obtained with the deduced equation agree quite well with numerical simulations based on the boundary element method. The results are also confirmed by experiments. The equation derived from the Boltzmann-Ehrenfest principle appears to be more general, and for large spheres it gives a better approximation than the equation previously reported.
Percolation of binary disk systems: Modeling and theory
Meeks, Kelsey; Tencer, John; Pantoya, Michelle L.
2017-01-12
The dispersion and connectivity of particles with a high degree of polydispersity is relevant to problems involving composite material properties and reaction decomposition prediction and has been the subject of much study in the literature. This paper utilizes Monte Carlo models to predict percolation thresholds for two-dimensional systems containing disks of two different radii. Monte Carlo simulations and spanning probability are used to extend prior models into regions of higher polydispersity than those previously considered. A correlation to predict the percolation threshold for binary disk systems is proposed based on the extended dataset presented in this work and compared to previously published correlations. Finally, a set of boundary conditions necessary for a good fit is presented, and a condition for maximizing the percolation threshold for binary disk systems is suggested.
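The Monte Carlo spanning-probability procedure described above can be sketched for a binary disk mixture in the unit square: two disks are connected when their center distance is at most the sum of their radii, clusters are tracked with union-find, and a trial "spans" when one cluster touches both vertical edges. This is a simplified sketch, not the authors' code; disk counts and radii below are illustrative.

```python
import numpy as np

def spans(centers, radii):
    """True if a connected cluster of disks touches both the left and
    right edges of the unit square (union-find over disk overlaps)."""
    n = len(radii)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        # vectorized overlap test of disk i against all later disks
        d2 = np.sum((centers[i + 1:] - centers[i]) ** 2, axis=1)
        for j in np.nonzero(d2 <= (radii[i + 1:] + radii[i]) ** 2)[0]:
            parent[find(i)] = find(i + 1 + j)

    left = {find(i) for i in range(n) if centers[i][0] <= radii[i]}
    right = {find(i) for i in range(n) if centers[i][0] >= 1 - radii[i]}
    return bool(left & right)

def spanning_probability(n_small, n_large, r_small, r_large,
                         trials=20, seed=0):
    """Monte Carlo estimate of spanning probability for a binary disk mix."""
    rng = np.random.default_rng(seed)
    radii = np.array([r_small] * n_small + [r_large] * n_large)
    hits = sum(spans(rng.random((len(radii), 2)), radii)
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    print(spanning_probability(300, 100, 0.04, 0.08))  # dense: near 1
    print(spanning_probability(30, 10, 0.04, 0.08))    # sparse: near 0
```

The percolation threshold is then located by sweeping the disk density and finding where the spanning probability crosses a chosen level (often 0.5) as a function of system size.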
Roe, D A
1985-01-01
Drug-nutrient interactions and their adverse outcomes have previously been identified by observation, investigation, and literature reports. Given knowledge of the attributes of the drugs, the availability of knowledge-base management systems for microcomputers can facilitate prediction of the mechanisms and effects of drug-nutrient interactions. Examples used to illustrate this approach are prediction of lactose intolerance in drug-induced malabsorption, and prediction of the mechanism responsible for drug-induced flush reactions. In the future there may be many opportunities to use this system further in the investigation of complex drug-nutrient interactions.
Tramontano, A; Macchiato, M F
1986-01-01
An algorithm to determine the probability that a reading frame codes for a protein is presented. It is based on the results of our previous studies on the thermodynamic characteristics of a translated reading frame. We also develop a prediction procedure to distinguish between coding and non-coding reading frames. The procedure is based on the characteristics of the putative product of the DNA sequence and not on periodicity characteristics of the sequence, so the prediction is not biased by the presence of overlapping translated reading frames or by the presence of translated reading frames on the complementary DNA strand. PMID:3753761
Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L
2017-05-07
In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.
Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon
2016-01-01
This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while using the previous annual records for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
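Two of the predictive features mentioned above are easy to illustrate: the Gutenberg-Richter b-value (here via Aki's maximum-likelihood estimator) and a trailing moving average of yearly event counts. This is a sketch under simplifying assumptions (complete catalog above the completeness magnitude, no magnitude-binning correction), not the study's feature set; names are illustrative.

```python
import numpy as np

def b_value(mags, m_c):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value
    for magnitudes at or above the completeness magnitude m_c:
    b = log10(e) / (mean(M) - m_c)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

def yearly_count_features(counts, window=3):
    """Trailing moving average of yearly event counts, a simple
    seismic-activity indicator."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(counts, dtype=float), kernel, mode="valid")

if __name__ == "__main__":
    # Synthetic catalog: Gutenberg-Richter with b = 1 implies magnitudes
    # above m_c are exponential with scale log10(e) / b.
    rng = np.random.default_rng(0)
    mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=10000)
    print(round(b_value(mags, 2.0), 2))  # close to the true b = 1.0
    print(yearly_count_features([10, 12, 8, 14, 16]))
```

Features like these, computed per region and per year, would then feed a classifier predicting whether next year's maximum magnitude exceeds the regional median.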
Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C
2018-04-03
High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. 
In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg
2017-09-18
Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. 
This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
2013-01-01
Background The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods We analyzed a previous dataset based on a population sample of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN analysis. Performances of these prediction models were evaluated in the validation set. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732–0.793) for the prediction model developed using ANN analysis. The mean sensitivity, specificity, and positive and negative predictive values of the prediction model were 0.751, 0.665, 0.330, and 0.924, respectively. All HL statistics were less than 15.0. Conclusion ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction among the general population. PMID:23902963
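The evaluation metrics reported above (AUC, sensitivity, specificity, positive and negative predictive values) can be computed in a few lines; a generic sketch, not the study's code.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count one half); equivalent to the Mann-Whitney statistic."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=bool)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    equal = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * equal) / (len(pos) * len(neg))

def confusion_metrics(pred, labels):
    """Sensitivity, specificity, and positive/negative predictive values
    from binary predictions and true labels."""
    p = np.asarray(pred, dtype=bool)
    y = np.asarray(labels, dtype=bool)
    tp, tn = (p & y).sum(), (~p & ~y).sum()
    fp, fn = (p & ~y).sum(), (~p & y).sum()
    return (tp / (tp + fn),   # sensitivity
            tn / (tn + fp),   # specificity
            tp / (tp + fp),   # positive predictive value
            tn / (tn + fn))   # negative predictive value

if __name__ == "__main__":
    print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))      # perfect ranking -> 1.0
    print(confusion_metrics([1, 1, 0, 0], [1, 0, 1, 0]))
```

Note that AUC is threshold-free, while the four confusion metrics depend on the cutoff used to binarize the model's output, which is why studies report them together.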
Silvetti, Massimo; Wiersema, Jan R; Sonuga-Barke, Edmund; Verguts, Tom
2013-10-01
Attention Deficit/Hyperactivity Disorder (ADHD) is a pathophysiologically complex and heterogeneous condition with both cognitive and motivational components. We propose a novel computational hypothesis of motivational deficits in ADHD, drawing together recent evidence on the role of anterior cingulate cortex (ACC) and associated mesolimbic dopamine circuits in both reinforcement learning and ADHD. Based on findings of dopamine dysregulation and ACC involvement in ADHD we simulated a lesion in a previously validated computational model of ACC (Reward Value and Prediction Model, RVPM). We explored the effects of the lesion on the processing of reinforcement signals. We tested specific behavioral predictions about the profile of reinforcement-related deficits in ADHD in three experimental contexts; probability tracking task, partial and continuous reward schedules, and immediate versus delayed rewards. In addition, predictions were made at the neurophysiological level. Behavioral and neurophysiological predictions from the RVPM-based lesion-model of motivational dysfunction in ADHD were confirmed by data from previously published studies. RVPM represents a promising model of ADHD reinforcement learning suggesting that ACC dysregulation might play a role in the pathogenesis of motivational deficits in ADHD. However, more behavioral and neurophysiological studies are required to test core predictions of the model. In addition, the interaction with different brain networks underpinning other aspects of ADHD neuropathology (i.e., executive function) needs to be better understood. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Milos, Frank S.
2011-01-01
In most previous work at NASA Ames Research Center, ablation predictions for carbonaceous materials were obtained using a species thermodynamics database developed by Aerotherm Corporation. This database is derived mostly from the JANAF thermochemical tables. However, the CEA thermodynamics database, also used by NASA, is considered more up to date. In this work, the FIAT code was modified to use CEA-based curve fits for species thermodynamics, then analyses using both the JANAF and CEA thermodynamics were performed for carbon and carbon phenolic materials over a range of test conditions. The ablation predictions are comparable at lower heat fluxes where the dominant mechanism is carbon oxidation. However, the predictions begin to diverge in the sublimation regime, with the CEA model predicting lower recession. The disagreement is more significant for carbon phenolic than for carbon, and this difference is attributed to hydrocarbon species that may contribute to the ablation rate.
Chemical and protein structural basis for biological crosstalk between PPAR α and COX enzymes
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2015-02-01
We have previously validated a probabilistic framework that combined computational approaches for predicting the biological activities of small molecule drugs. Molecule comparison methods included molecular structural similarity metrics and similarity computed from lexical analysis of text in drug package inserts. Here we present an analysis of novel drug/target predictions, focusing on those that were not obvious based on known pharmacological crosstalk. Considering those cases where the predicted target was an enzyme with known 3D structure allowed incorporation of information from molecular docking and protein binding pocket similarity in addition to ligand-based comparisons. Taken together, the combination of orthogonal information sources led to investigation of a surprising predicted relationship between a transcription factor and an enzyme, specifically, PPAR α and the cyclooxygenase enzymes. These predictions were confirmed by direct biochemical experiments which validate the approach and show for the first time that PPAR α agonists are cyclooxygenase inhibitors.
Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks
Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei
2017-01-01
Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
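The grid-representation step described above, mapping network-wide link speeds onto static image frames and building (history window → next frame) training pairs, can be sketched without the deep network itself. The grid size and link-to-cell assignments below are illustrative, not the paper's Beijing network layout.

```python
import numpy as np

def to_frames(speeds, grid, link_cells):
    """Map per-link speed vectors onto a 2D grid, one frame per time step.

    `speeds` has shape (T, L); `link_cells` gives each link's (row, col)
    cell in the grid; cells with no link stay 0."""
    T, L = speeds.shape
    frames = np.zeros((T,) + grid)
    rows, cols = zip(*link_cells)
    frames[:, rows, cols] = speeds
    return frames

def sliding_windows(frames, history):
    """Build (history frames -> next frame) supervised pairs, the input
    format consumed by a spatiotemporal sequence model."""
    X = np.stack([frames[i:i + history]
                  for i in range(len(frames) - history)])
    y = frames[history:]
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speeds = rng.uniform(20, 80, size=(50, 3))  # 50 time steps, 3 links
    frames = to_frames(speeds, grid=(4, 4),
                       link_cells=[(0, 1), (2, 2), (3, 0)])
    X, y = sliding_windows(frames, history=5)
    print(X.shape, y.shape)  # (45, 5, 4, 4) (45, 4, 4)
```

In the SRCN architecture, each (history, rows, cols) window would pass through convolutional layers for spatial structure and an LSTM over the history axis for temporal dynamics.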
NASA Technical Reports Server (NTRS)
Rejmankova, E.; Roberts, D. R.; Pawley, A.; Manguin, S.; Polanco, J.
1995-01-01
Remote sensing is particularly helpful for assessing the location and extent of vegetation formations, such as herbaceous wetlands, that are difficult to examine on the ground. Marshes that are sparsely populated with emergent macrophytes and dense cyanobacterial mats have previously been identified as very productive Anopheles albimanus larval habitats. This type of habitat was detectable on a classified multispectral Système Probatoire d'Observation de la Terre (SPOT) image of northern Belize as a mixture of two isoclasses. A similar spectral signature is characteristic for vegetation of river margins consisting of aquatic grasses and water hyacinth, which constitutes another productive larval habitat. Based on the distance between human settlements (sites) of various sizes and the nearest marsh/river exhibiting this particular class combination, we selected two groups of sites: those located closer than 500 m and those located more than 1,500 m from such habitats. Based on previous adult collections near larval habitats, we defined a landing rate of 0.5 mosquitoes/human/min from 6:30 PM to 8:00 PM as the threshold for high (> or = 0.5 mosquitoes/human/min) versus low (< 0.5 mosquitoes/human/min) densities of An. albimanus. Sites located less than 500 m from the habitat were predicted as having values higher than this threshold, while lower values were predicted for sites located greater than 1,500 m from the habitat. Predictions were verified by collections of mosquitoes landing on humans. The predictions were 100% accurate for sites in the > 1,500-m category and 89% accurate for sites in the < 500-m category.
Chuang, Gwo-Yu; Liou, David; Kwong, Peter D.; Georgiev, Ivelin S.
2014-01-01
Delineation of the antigenic site, or epitope, recognized by an antibody can provide clues about functional vulnerabilities and resistance mechanisms, and can therefore guide antibody optimization and epitope-based vaccine design. Previously, we developed an algorithm for antibody-epitope prediction based on antibody neutralization of viral strains with diverse sequences and validated the algorithm on a set of broadly neutralizing HIV-1 antibodies. Here we describe the implementation of this algorithm, NEP (Neutralization-based Epitope Prediction), as a web-based server. The users must supply as input: (i) an alignment of antigen sequences of diverse viral strains; (ii) neutralization data for the antibody of interest against the same set of antigen sequences; and (iii) (optional) a structure of the unbound antigen, for enhanced prediction accuracy. The prediction results can be downloaded or viewed interactively on the antigen structure (if supplied) from the web browser using a JSmol applet. Since neutralization experiments are typically performed as one of the first steps in the characterization of an antibody to determine its breadth and potency, the NEP server can be used to predict antibody-epitope information at no additional experimental costs. NEP can be accessed on the internet at http://exon.niaid.nih.gov/nep. PMID:24782517
Salomon, Joshua A
2003-01-01
Background: In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods: Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results: Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively.
Conclusions: Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
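The random utility model behind the conditional logit regression can be illustrated with the "exploded logit" (Plackett-Luce) decomposition, in which an observed ranking is treated as a sequence of best-choice selections. This is a generic sketch of the likelihood, not the paper's estimation code; the utilities below are arbitrary.

```python
import math

def ranking_probability(utilities, ranking):
    """Probability of an observed ranking under a Plackett-Luce /
    'exploded logit' random utility model.

    utilities: dict mapping health state -> latent utility
    ranking: list of states from best to worst
    The ranking decomposes into a sequence of first-choice problems:
    at each step the top remaining state is chosen with probability
    exp(u_i) / sum_j exp(u_j) over the states not yet ranked.
    """
    remaining = list(ranking)
    prob = 1.0
    for state in ranking:
        denom = sum(math.exp(utilities[s]) for s in remaining)
        prob *= math.exp(utilities[state]) / denom
        remaining.remove(state)
    return prob

u = {"A": 1.0, "B": 0.0, "C": -1.0}
p = ranking_probability(u, ["A", "B", "C"])
```

Maximizing this likelihood over utilities modeled as a linear function of EQ-5D domain scores is what the conditional logit regression does in aggregate.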
Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong
2015-01-01
Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.
Defense Waste Processing Facility Nitric- Glycolic Flowsheet Chemical Process Cell Chemistry: Part 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamecnik, J.; Edwards, T.
The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by Savannah River National Laboratory (SRNL) from 2011 to 2016. The goal of this work was to develop empirical correlation models to predict these values from measurable variables from the chemical process so that these quantities could be predicted a priori from the sludge or simulant composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the work on these correlations based on the aforementioned data. Previous work on these correlations was documented in a technical report covering data from 2011-2015. This current report supersedes that previous report. Further refinement of the models as additional data are collected is recommended.
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach.
In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
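AveRNA's actual combination scheme is more sophisticated than simple voting, but the basic idea of ensembling base-pair predictions, and of a threshold that trades false negatives against false positives, can be sketched as follows (all names and the majority-vote rule are illustrative assumptions):

```python
from collections import Counter

def ensemble_predict(predictions, tau=0.5):
    """Combine base-pair sets from several secondary structure predictors.

    predictions: list of sets of (i, j) base pairs, one per predictor.
    A pair is kept if the fraction of predictors proposing it is >= tau.
    Lowering tau admits more pairs (fewer false negatives, more false
    positives); raising tau does the opposite.
    """
    votes = Counter(p for pred in predictions for p in pred)
    n = len(predictions)
    return {pair for pair, v in votes.items() if v / n >= tau}

# Three hypothetical component predictions for the same sequence:
preds = [{(1, 10), (2, 9)}, {(1, 10), (3, 8)}, {(1, 10), (2, 9), (4, 7)}]
consensus = ensemble_predict(preds, tau=0.5)
```

The single parameter tau gives the intuitive control over the false negative / false positive trade-off that the abstract describes.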
Arc Jet Facility Test Condition Predictions Using the ADSI Code
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda
2015-01-01
The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamic solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated using a Navier-Stokes flow solver using previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting mass flow and current to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.
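A minimal stand-in for the database interpolation step is a bilinear lookup over the (mass flow, arc current) grid. ADSI's real database structure and interpolation scheme are not described in detail above, so the axes, array layout, and numbers below are all assumptions for illustration.

```python
import numpy as np

def interp_stagnation(db_mdot, db_amps, db_value, mdot, amps):
    """Bilinearly interpolate a precomputed CFD database.

    db_mdot: 1D sorted array of mass flow rates (first database axis)
    db_amps: 1D sorted array of arc currents (second database axis)
    db_value: 2D array of a stagnation quantity (pressure or heat flux),
              shape (len(db_mdot), len(db_amps))
    Returns the interpolated value at the requested operating point.
    """
    i = np.clip(np.searchsorted(db_mdot, mdot) - 1, 0, len(db_mdot) - 2)
    j = np.clip(np.searchsorted(db_amps, amps) - 1, 0, len(db_amps) - 2)
    tx = (mdot - db_mdot[i]) / (db_mdot[i + 1] - db_mdot[i])
    ty = (amps - db_amps[j]) / (db_amps[j + 1] - db_amps[j])
    return ((1 - tx) * (1 - ty) * db_value[i, j]
            + tx * (1 - ty) * db_value[i + 1, j]
            + (1 - tx) * ty * db_value[i, j + 1]
            + tx * ty * db_value[i + 1, j + 1])

# Toy database: heat flux tabulated at 3 mass flows x 2 currents.
mdot_ax = np.array([0.1, 0.2, 0.3])
amps_ax = np.array([1000.0, 2000.0])
heat = np.array([[100.0, 200.0], [150.0, 250.0], [200.0, 300.0]])
q = interp_stagnation(mdot_ax, amps_ax, heat, 0.15, 1500.0)
```

The inverse direction mentioned in the abstract (finding mass flow and current for a target pressure and heating rate) would amount to a root-finding search over this same surface.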
Ward, Keith W; Erhardt, Paul; Bachmann, Kenneth
2005-01-01
Previous publications from GlaxoSmithKline and University of Toledo laboratories convey our independent attempts to predict the half-lives of xenobiotics in humans using data obtained from rats. The present investigation was conducted to compare the performance of our published models against a common dataset obtained by merging the two sets of rat versus human half-life (hHL) data previously used by each laboratory. After combining data, mathematical analyses were undertaken by deploying both of our previous models, namely the use of an empirical algorithm based on a best-fit model and the use of rat-to-human liver blood flow ratios as a half-life correction factor. Both qualitative and quantitative analyses were performed, as well as evaluation of the impact of molecular properties on predictability. The merged dataset was remarkably diverse with respect to physiochemical and pharmacokinetic (PK) properties. Application of both models revealed similar predictability, depending upon the measure of stipulated accuracy. Certain molecular features, particularly rotatable bond count and pK(a), appeared to influence the accuracy of prediction. This collaborative effort has resulted in an improved understanding and appreciation of the value of rats to serve as a surrogate for the prediction of xenobiotic half-lives in humans when clinical pharmacokinetic studies are not possible or practicable.
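One of the two published approaches, using the rat-to-human liver blood flow ratio as a half-life correction factor, can be sketched in one line. The default flow values below are rough per-kilogram figures inserted for illustration only, not the numbers used in either laboratory's model.

```python
def predict_human_half_life(rat_t_half_h, rat_lbf=55.2, human_lbf=20.7):
    """Scale a rat half-life (hours) to a human prediction using the
    ratio of liver blood flows (mL/min/kg).

    The default flow values are illustrative stand-ins. Because hepatic
    blood flow per kilogram is lower in humans than in rats, the
    predicted human half-life is longer than the rat half-life.
    """
    return rat_t_half_h * (rat_lbf / human_lbf)
```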
Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.
Ko, Chien-Ho
2013-01-01
Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required in FL and NNs. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
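A multi-cut-point crossover over a float-encoded chromosome, of the kind described above, might be sketched as follows; the chromosome length, cut count, and encoding details are assumptions, since the paper does not specify them here.

```python
import random

def multi_cut_crossover(parent_a, parent_b, n_cuts=3, rng=None):
    """Multi-cut-point crossover over float-encoded chromosomes.

    parent_a, parent_b: equal-length lists of floats encoding FL/NN
    parameters. Segments between randomly chosen cut points are swapped
    alternately, producing two offspring of the same length; encoding
    legality is preserved because only whole positions are exchanged.
    """
    rng = rng or random.Random()
    n = len(parent_a)
    cuts = sorted(rng.sample(range(1, n), n_cuts))
    child_a, child_b = [], []
    swap = False
    prev = 0
    for cut in cuts + [n]:
        seg_a, seg_b = parent_a[prev:cut], parent_b[prev:cut]
        if swap:
            seg_a, seg_b = seg_b, seg_a
        child_a.extend(seg_a)
        child_b.extend(seg_b)
        swap = not swap
        prev = cut
    return child_a, child_b

a = [0.0] * 8
b = [1.0] * 8
ca, cb = multi_cut_crossover(a, b, n_cuts=3, rng=random.Random(42))
```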
Improved regulatory element prediction based on tissue-specific local epigenomic signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yupeng; Gorkin, David U.; Dickel, Diane E.
2017-02-13
Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.
Behavioral-Based Predictors of Workplace Violence in the Army STARRS
2014-10-01
Dawes RM, Faust D, Meehl PE. Clinical versus actuarial judgment. Science. 1989;243(4899):1668-1674. Grove WM, Zald DH, Lebow BS, Snitz BE, Nelson... develop an actuarial risk algorithm predicting suicide in the 12 months after US Army soldier inpatient treatment of a psychiatric disorder to target... generate an actuarial post-hospitalization suicide risk algorithm. Previous research has revealed that actuarial suicide prediction is much more
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP descriptors derived from the ground state (GMAX) structures of the molecules were used to build those QSAR models. In this study we generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least squares coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state models constructed for the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GEMAX categorical models are also more significant and predictive than the corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.
2016-12-01
Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) [1] to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle [2]. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information.
[1] Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). [2] Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
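The feature extraction feeding the Random Forest can be sketched as sliding-window statistics over the continuous AE signal. The study's actual feature list is far larger and its window parameters are not given above; the four features and window sizes here are illustrative assumptions.

```python
import numpy as np

def window_features(signal, window, step):
    """Statistical features of a continuous AE-like time series, computed
    over sliding windows. Each row of the returned array (one window)
    would be one training example for a failure-time regression model.
    """
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = np.asarray(signal[start:start + window], dtype=float)
        feats.append([w.mean(),                # central tendency
                      w.std(),                 # signal variance / energy
                      w.max() - w.min(),       # peak-to-peak amplitude
                      np.percentile(w, 90)])   # upper-tail level
    return np.array(feats)

rng = np.random.default_rng(0)
sig = rng.normal(size=1000)          # stand-in for a recorded AE trace
X = window_features(sig, window=100, step=50)
```

A regressor trained on rows of X against time-to-failure labels is the essence of the approach; the short-window property is what allows prediction "at any time in the cycle".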
NASA Astrophysics Data System (ADS)
Ionita, M.; Grosfeld, K.; Scholz, P.; Lohmann, G.
2016-12-01
Sea ice in both Polar Regions is an important indicator for the expression of global climate change and its polar amplification. Consequently, a broad information interest exists on sea ice, its coverage, variability and long term change. Knowledge on sea ice requires high quality data on ice extent, thickness and its dynamics. However, its predictability depends on various climate parameters and conditions. In order to provide insights into the potential development of a monthly/seasonal signal, we developed a robust statistical model based on ocean heat content, sea surface temperature and atmospheric variables to calculate an estimate of the September minimum sea ice extent for every year. Although previous statistical attempts at monthly/seasonal forecasts of the September sea ice minimum show a relatively reduced skill, here it is shown that more than 97% (r = 0.98) of the September sea ice extent can be predicted three months in advance by using the previous months' conditions via a multiple linear regression model based on global sea surface temperature (SST), mean sea level pressure (SLP), air temperature at 850hPa (TT850), surface winds and sea ice extent persistence. The statistical model is based on the identification of regions with stable teleconnections between the predictors (climatological parameters) and the predictand (here sea ice extent). The results based on our statistical model contribute to the sea ice prediction network for the sea ice outlook report (https://www.arcus.org/sipn) and could provide a tool for identifying relevant regions and climate parameters that are important for the sea ice development in the Arctic and for detecting sensitive and critical regions in global coupled climate models with focus on sea ice formation.
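A multiple linear regression of the kind described can be sketched with ordinary least squares on synthetic data. The real model uses gridded climate fields averaged over teleconnection-selected regions; the two-predictor toy example below is an assumption for illustration only.

```python
import numpy as np

def fit_predict_ols(X_train, y_train, X_new):
    """Ordinary least squares with an intercept, as a stand-in for the
    multiple linear regression linking climate predictors (SST, SLP,
    TT850, winds, persistence) to September sea ice extent.
    """
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    A_new = np.column_stack([np.ones(len(X_new)), X_new])
    return A_new @ coef

# Synthetic, noise-free example: extent = 10 - 2*x1 + 0.5*x2
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = 10.0 - 2.0 * X[:, 0] + 0.5 * X[:, 1]
pred = fit_predict_ols(X, y, np.array([[1.0, 2.0]]))
```

With noise-free data the regression recovers the generating coefficients exactly; with real climate predictors, the skill reported above (r = 0.98) depends on restricting predictors to regions with stable teleconnections.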
Prediction of β-turns in proteins from multiple alignment using neural network
Kaur, Harpreet; Raghava, Gajendra Pal Singh
2003-01-01
A neural network-based method has been developed for the prediction of β-turns in proteins by using multiple sequence alignment. Two feed-forward back-propagation networks with a single hidden layer are used where the first-sequence structure network is trained with the multiple sequence alignment in the form of PSI-BLAST–generated position-specific scoring matrices. The initial predictions from the first network and PSIPRED-predicted secondary structure are used as input to the second structure-structure network to refine the predictions obtained from the first net. A significant improvement in prediction accuracy has been achieved by using evolutionary information contained in the multiple sequence alignment. The final network yields an overall prediction accuracy of 75.5% when tested by sevenfold cross-validation on a set of 426 nonhomologous protein chains. The corresponding Qpred, Qobs, and Matthews correlation coefficient values are 49.8%, 72.3%, and 0.43, respectively, and are the best among all the previously published β-turn prediction methods. The Web server BetaTPred2 (http://www.imtech.res.in/raghava/betatpred2/) has been developed based on this approach. PMID:12592033
New Equation for Prediction of Martensite Start Temperature in High Carbon Ferrous Alloys
NASA Astrophysics Data System (ADS)
Park, Jihye; Shim, Jae-Hyeok; Lee, Seok-Jae
2018-02-01
Since previous equations fail to predict the MS temperature of high carbon ferrous alloys, we first propose an equation for prediction of the MS temperature of ferrous alloys containing > 2 wt pct C. The presence of carbides (Fe3C and Cr-rich M7C3) is thermodynamically considered to estimate the C concentration in austenite. In particular, equations individually specialized for lean and high Cr alloys very accurately reproduce experimental results. The chemical driving force for martensitic transformation is quantitatively analyzed based on the calculation of the T0 temperature.
Neural network-based run-to-run controller using exposure and resist thickness adjustment
NASA Astrophysics Data System (ADS)
Geary, Shane; Barry, Ronan
2003-06-01
This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
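The EWMA baseline against which the neural-network controller is compared can be stated compactly; the weight value below is illustrative, not the one used in the paper.

```python
def ewma_predict(observations, weight=0.3, initial=None):
    """Exponentially weighted moving average prediction for the next lot.

    Each prediction is a convex combination of the latest observation
    and the previous prediction:
        p[t+1] = weight * y[t] + (1 - weight) * p[t]
    A larger weight reacts faster to process shifts but passes more
    measurement noise into the control action.
    """
    pred = observations[0] if initial is None else initial
    for y in observations:
        pred = weight * y + (1 - weight) * pred
    return pred

# Critical dimension measurements from previous lots (hypothetical units):
cds = [100.0, 102.0, 101.0, 103.0]
next_cd = ewma_predict(cds, weight=0.5)
```

A neural-network run-to-run controller replaces this linear recursion with a learned nonlinear map from recent lot history to the next-lot prediction.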
Wolfe, Marnin D; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc
2016-08-31
In clonally propagated crops, non-additive genetic effects can be effectively exploited by the identification of superior genetic individuals as varieties. Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop that feeds hundreds of millions. We quantified the amount and nature of non-additive genetic variation for three key traits in a breeding population of cassava from sub-Saharan Africa using additive and non-additive genome-wide marker-based relationship matrices. We then assessed the accuracy of genomic prediction for total (additive plus non-additive) genetic value. We confirmed previous findings, based on diallel populations, that non-additive genetic variation is significant for key cassava traits. Specifically, we found that dominance is particularly important for root yield and that epistasis contributes strongly to variation in CMD resistance. Further, we showed that total genetic value predicted observed phenotypes more accurately than additive-only models for root yield, but not for dry matter content, which is mostly additive, or for CMD resistance, which has high narrow-sense heritability. We address the implications of these results for cassava breeding and put our work in the context of previous results in cassava and other plant and animal species.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus
2011-01-01
In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and changing the depth of field. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
Evidence-based pathology in its second decade: toward probabilistic cognitive computing.
Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R
2017-03-01
Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
Peterson, Lenna X; Shin, Woong-Hee; Kim, Hyungrae; Kihara, Daisuke
2018-03-01
We report our group's performance for protein-protein complex structure prediction and scoring in Round 37 of the Critical Assessment of PRediction of Interactions (CAPRI), an objective assessment of protein-protein complex modeling. We demonstrated noticeable improvement in both prediction and scoring compared to previous rounds of CAPRI, with our human predictor group near the top of the rankings and our server scorer group at the top. This is the first time in CAPRI that a server has been the top scorer group. To predict protein-protein complex structures, we used both multi-chain template-based modeling (TBM) and our protein-protein docking program, LZerD. LZerD represents protein surfaces using 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. Because 3DZD are a soft representation of the protein surface, LZerD is tolerant to small conformational changes, making it well suited to docking unbound and TBM structures. The key to our improved performance in CAPRI Round 37 was to combine multi-chain TBM and docking. As opposed to our previous strategy of performing docking for all target complexes, we used TBM when multi-chain templates were available and docking otherwise. We also describe the combination of multiple scoring functions used by our server scorer group, which achieved the top rank for the scorer phase. © 2017 Wiley Periodicals, Inc.
Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams
NASA Technical Reports Server (NTRS)
Davis, Brian A.
2005-01-01
Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.
NASA Astrophysics Data System (ADS)
Qattan, I. A.
2017-06-01
I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form factor discrepancy while being consistent with the known Q2→0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.
Improved Method for Prediction of Attainable Wing Leading-Edge Thrust
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; McElroy, Marcus O.; Lessard, Wendy B.; McCullers, L. Arnold
1996-01-01
Prediction of the loss of wing leading-edge thrust and the accompanying increase in drag due to lift, when flow is not completely attached, presents a difficult but commonly encountered problem. A method (called the previous method) for the prediction of attainable leading-edge thrust and the resultant effect on airplane aerodynamic performance has been in use for more than a decade. Recently, the method has been revised to enhance its applicability to current airplane design and evaluation problems. The improved method (called the present method) provides for a greater range of airfoil shapes from very sharp to very blunt leading edges. It is also based on a wider range of Reynolds numbers than was available for the previous method. The present method, when employed in computer codes for aerodynamic analysis, generally results in improved correlation with experimental wing-body axial-force data and provides reasonable estimates of the measured drag.
Administrative database code accuracy did not vary notably with changes in disease prevalence.
van Walraven, Carl; English, Shane; Austin, Peter C
2016-11-01
Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly by disease prevalence. This study determined if the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably within clinically sensible disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
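The prevalence dependence described in this abstract follows directly from Bayes' theorem: sensitivity and specificity are properties of the test itself, while predictive values also depend on how common the disease is. A minimal sketch (the accuracy and prevalence figures below are illustrative, not the study's data):

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive values from test accuracy and prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# The same diagnostic code accuracy yields very different predictive values
# at 1% versus 20% disease prevalence (illustrative numbers):
low_prev = predictive_values(0.85, 0.95, 0.01)
high_prev = predictive_values(0.85, 0.95, 0.20)
```

Holding sensitivity and specificity fixed, PPV rises sharply with prevalence while NPV falls, which is exactly the pattern the study reports for administrative diagnostic codes.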
Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A.; Wang, Xiangfeng
2014-01-01
Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning–based differential network analysis (mlDNA) and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. The mlDNA first used a ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive “noninformative” genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained “informative” genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing–based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress–related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes. PMID:24520154
Frequency, probability, and prediction: easy solutions to cognitive illusions?
Griffin, D; Buehler, R
1999-02-01
Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: Both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base rate neglect in categorical prediction underline a general conclusion. Frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability. Copyright 1999 Academic Press.
2016-01-01
The renewed urgency to develop new treatments for Mycobacterium tuberculosis (Mtb) infection has resulted in large-scale phenotypic screening and thousands of new active compounds in vitro. The next challenge is to identify candidates to pursue in a mouse in vivo efficacy model as a step to predicting clinical efficacy. We previously analyzed over 70 years of this mouse in vivo efficacy data, which we used to generate and validate machine learning models. Curation of 60 additional small molecules with in vivo data published in 2014 and 2015 was undertaken to further test these models. This represents a much larger test set than for the previous models. Several computational approaches have now been applied to analyze these molecules and compare their molecular properties beyond those attempted previously. Our previous machine learning models have been updated, and a novel aspect has been added in the form of mouse liver microsomal half-life (MLM t1/2) and in vitro-based Mtb models incorporating cytotoxicity data that were used to predict in vivo activity for comparison. Our best Mtb in vivo models possess fivefold ROC values > 0.7, sensitivity > 80%, and concordance > 60%, while the best specificity value is >40%. Use of an MLM t1/2 Bayesian model affords comparable results for scoring the 60 compounds tested. Combining MLM stability and in vitro Mtb models in a novel consensus workflow in the best cases has a positive predictive value (hit rate) > 77%. Our results indicate that Bayesian models constructed with literature in vivo Mtb data generated by different laboratories in various mouse models can have predictive value and may be used alongside MLM t1/2 and in vitro-based Mtb models to assist in selecting antitubercular compounds with desirable in vivo efficacy. We demonstrate for the first time that consensus models of any kind can be used to predict in vivo activity for Mtb.
In addition, we describe a new clustering method for data visualization and apply this to the in vivo training and test data, ultimately making the method accessible in a mobile app. PMID:27335215
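The Bayesian models referred to in this abstract are, in cheminformatics practice, typically Laplace-corrected naive Bayes classifiers over binary molecular fingerprints. The sketch below is a generic illustration of that technique, not the authors' trained models; the feature layout and training data are hypothetical:

```python
import math

def train_bernoulli_nb(X, y):
    """Laplace-smoothed Bernoulli naive Bayes over binary fingerprint vectors."""
    n_feat = len(X[0])
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        # P(feature j = 1 | class c), with add-one (Laplace) smoothing
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(n_feat)]
        model[c] = (math.log(prior), probs)
    return model

def predict_nb(model, x):
    """Return the class with the highest log posterior for fingerprint x."""
    best_class, best_lp = None, -math.inf
    for c, (log_prior, probs) in model.items():
        lp = log_prior + sum(math.log(p if xi else 1.0 - p)
                             for xi, p in zip(x, probs))
        if lp > best_lp:
            best_class, best_lp = c, lp
    return best_class
```

A consensus workflow of the kind described would simply require agreement between two such models (e.g., an MLM-stability model and an in vitro Mtb model) before nominating a compound.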
Ding, Feng; Sharma, Shantanu; Chalasani, Poornima; Demidov, Vadim V.; Broude, Natalia E.; Dokholyan, Nikolay V.
2008-01-01
RNA molecules with novel functions have revived interest in the accurate prediction of RNA three-dimensional (3D) structure and folding dynamics. However, existing methods are inefficient in automated 3D structure prediction. Here, we report a robust computational approach for rapid folding of RNA molecules. We develop a simplified RNA model for discrete molecular dynamics (DMD) simulations, incorporating base-pairing and base-stacking interactions. We demonstrate correct folding of 150 structurally diverse RNA sequences. The majority of DMD-predicted 3D structures have <4 Å deviations from experimental structures. The secondary structures corresponding to the predicted 3D structures consist of 94% native base-pair interactions. Folding thermodynamics and kinetics of tRNAPhe, pseudoknots, and mRNA fragments in DMD simulations are in agreement with previous experimental findings. Folding of RNA molecules features transient, non-native conformations, suggesting non-hierarchical RNA folding. Our method allows rapid conformational sampling of RNA folding, with computational time increasing linearly with RNA length. We envision this approach as a promising tool for RNA structural and functional analyses. PMID:18456842
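The DMD force field itself cannot be reproduced from an abstract, but the base-pairing objective that RNA structure methods build on can be illustrated with the classic Nussinov dynamic program, which maximizes the number of nested complementary pairs. This is a simplified stand-in for intuition, not the authors' model:

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum number of nested Watson-Crick/wobble base pairs (Nussinov DP)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j left unpaired
            for k in range(i, j - min_loop):     # case 2: j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1]
```

The `min_loop` parameter enforces the minimum hairpin-loop size of three unpaired nucleotides; real energy-based methods replace the pair count with stacking energies like those in the DMD model.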
Matrix factorization-based data fusion for gene function prediction in baker's yeast and slime mold
Žitnik, Marinka; Zupan, Blaž
2014-01-01
The development of effective methods for the characterization of gene functions that are able to combine diverse data sources in a sound and easily extensible way is an important goal in computational biology. We have previously developed a general matrix factorization-based data fusion approach for gene function prediction. In this manuscript, we show that this data fusion approach can be applied to gene function prediction and that it can fuse various heterogeneous data sources, such as gene expression profiles, known protein annotations, interaction and literature data. The fusion is achieved by simultaneous matrix tri-factorization that shares matrix factors between sources. We demonstrate the effectiveness of the approach by evaluating its performance on predicting ontological annotations in slime mold D. discoideum and on recognizing proteins of baker’s yeast S. cerevisiae that participate in the ribosome or are located in the cell membrane. Our approach achieves predictive performance comparable to that of the state-of-the-art kernel-based data fusion, but requires fewer data preprocessing steps. PMID:24297565
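Matrix tri-factorization approximates a relation matrix R as F S Gᵀ, with the factors F and G shared across sources in the full fusion scheme. A single-relation sketch using standard multiplicative updates (the cross-source factor sharing that defines the fusion is omitted here for brevity):

```python
import numpy as np

def tri_factorize(R, k1, k2, iters=200, eps=1e-9, seed=0):
    """Nonnegative matrix tri-factorization R ~= F S G^T via multiplicative
    updates; each update keeps factors nonnegative and does not increase
    the squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    F = rng.random((n, k1))
    S = rng.random((k1, k2))
    G = rng.random((m, k2))
    for _ in range(iters):
        F *= (R @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ R @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (R.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G
```

In the fusion setting, several relation matrices (expression, annotations, interactions) are factorized simultaneously with shared F and G blocks, so that evidence from one source constrains the factors of the others.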
Krogh-Jespersen, Sheila; Woodward, Amanda L
2014-01-01
Previous research has shown that young infants perceive others' actions as structured by goals. One open question is whether the recruitment of this understanding when predicting others' actions imposes a cognitive challenge for young infants. The current study explored infants' ability to utilize their knowledge of others' goals to rapidly predict future behavior in complex social environments and distinguish goal-directed actions from other kinds of movements. Fifteen-month-olds (N = 40) viewed videos of an actor engaged in either a goal-directed (grasping) or an ambiguous (brushing the back of her hand) action on a Tobii eye-tracker. At test, critical elements of the scene were changed and infants' predictive fixations were examined to determine whether they relied on goal information to anticipate the actor's future behavior. Results revealed that infants reliably generated goal-based visual predictions for the grasping action, but not for the back-of-hand behavior. Moreover, response latencies were longer for goal-based predictions than for location-based predictions, suggesting that goal-based predictions are cognitively taxing. Analyses of areas of interest indicated that heightened attention to the overall scene, as opposed to specific patterns of attention, was the critical indicator of successful judgments regarding an actor's future goal-directed behavior. These findings shed light on the processes that support "smart" social behavior in infants, as it may be a challenge for young infants to use information about others' intentions to inform rapid predictions.
Model-based analysis of N-glycosylation in Chinese hamster ovary cells
Krambeck, Frederick J.; Bennun, Sandra V.; Betenbaugh, Michael J.
2017-01-01
The Chinese hamster ovary (CHO) cell is the gold standard for manufacturing of glycosylated recombinant proteins for production of biotherapeutics. The similarity of its glycosylation patterns to the human versions gives the products of this cell line favorable pharmacokinetic properties and a lower likelihood of causing immunogenic responses. Because glycan structures are the product of the concerted action of intracellular enzymes, it is difficult to predict a priori how genetic manipulations alter the glycan structures of cells and the therapeutic properties of products. For that reason, quantitative models able to predict glycosylation have emerged as promising tools to deal with the complexity of glycosylation processing. For example, an earlier version of the same model used in this study was used by others to successfully predict changes in enzyme activities that could produce a desired change in glycan structure. In this study we utilize an updated version of this model to provide a comprehensive analysis of N-glycosylation in ten Chinese hamster ovary (CHO) cell lines that include a wild type parent and nine mutants of CHO, through interpretation of previously published mass spectrometry data. The updated N-glycosylation mathematical model contains up to 50,605 glycan structures. Adjusting the enzyme activities in this model to match N-glycan mass spectra produces detailed predictions of the glycosylation process, enzyme activity profiles and complete glycosylation profiles of each of the cell lines. These profiles are consistent with biochemical and genetic data reported previously. The model-based results also predict glycosylation features of the cell lines not previously published, indicating more complex changes in glycosylation enzyme activities than just those resulting directly from gene mutations.
The model predicts that the CHO cell lines possess regulatory mechanisms that allow them to adjust glycosylation enzyme activities to mitigate side effects of the primary loss or gain of glycosylation function known to exist in these mutant cell lines. Quantitative models of CHO cell glycosylation have the potential for predicting how glycoengineering manipulations might affect glycoform distributions to improve the therapeutic performance of glycoprotein products. PMID:28486471
O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.
2016-01-01
The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of their modern relatives, particularly with regards to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT) -based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = −2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo.
CT-based reconstructions provide a means of objectively estimating mass and body segment properties of extinct species using whole articulated skeletons. PMID:26788418
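The predictive model reported above is a log-log regression of body mass on convex hull volume. A sketch of how such a model is applied, using the reported coefficients but treating the log base (10) and the volume units as assumptions, since neither is specified in the abstract:

```python
import math

# Best-fit coefficients reported in the abstract for the log-log regression of
# body mass on convex hull volume; log base and units are assumptions here.
A, B = -2.31, 0.90

def mass_from_hull_volume(volume):
    """Predict body mass assuming log10(mass) = A + B * log10(volume)."""
    return 10 ** (A + B * math.log10(volume))
```

Because the exponent B is below 1, predicted mass grows slightly less than proportionally with hull volume; back-transforming from log space is what allows a single regression to cover the large size range from small pigeons to the dodo.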
Memory Binding Test Predicts Incident Amnestic Mild Cognitive Impairment.
Mowrey, Wenzhu B; Lipton, Richard B; Katz, Mindy J; Ramratan, Wendy S; Loewenstein, David A; Zimmerman, Molly E; Buschke, Herman
2016-07-14
The Memory Binding Test (MBT), previously known as Memory Capacity Test, has demonstrated discriminative validity for distinguishing persons with amnestic mild cognitive impairment (aMCI) and dementia from cognitively normal elderly. We aimed to assess the predictive validity of the MBT for incident aMCI. In a longitudinal, community-based study of adults aged 70+, we administered the MBT to 246 cognitively normal elderly adults at baseline and followed them annually. Based on previous work, a subtle reduction in memory binding at baseline was defined by a Total Items in the Paired (TIP) condition score of ≤22 on the MBT. Cox proportional hazards models were used to assess the predictive validity of the MBT for incident aMCI accounting for the effects of covariates. The hazard ratio of incident aMCI was also assessed for different prediction time windows ranging from 4 to 7 years of follow-up, separately. Among 246 controls who were cognitively normal at baseline, 48 developed incident aMCI during follow-up. A baseline MBT reduction was associated with an increased risk for developing incident aMCI (hazard ratio (HR) = 2.44, 95% confidence interval: 1.30-4.56, p = 0.005). When varying the prediction window from 4-7 years, the MBT reduction remained significant for predicting incident aMCI (HR range: 2.33-3.12, p: 0.0007-0.04). Persons with poor performance on the MBT are at significantly greater risk for developing incident aMCI. High hazard ratios up to seven years of follow-up suggest that the MBT is sensitive to early disease.
NASA Astrophysics Data System (ADS)
Hao, Zhenhua; Drake, V. Alistair; Sidhu, Leesa; Taylor, John R.
2017-12-01
Based on previous investigations, adult Australian plague locusts are believed to migrate on warm nights (with evening temperatures >25 °C), provided daytime flight is suppressed by surface winds greater than the locusts' flight speed, which has been shown to be 3.1 m/s. Moreover, adult locusts are believed to undertake briefer 'dispersal' flights on nights with evening temperature >20 °C. To reassess the utility of these conditions for forecasting locust flight, contingency tests were conducted comparing the nights selected on these bases (predicted nights) for the months of November, January, and March and the nights when locust migration was detected with an insect monitoring radar (actual nights) over a 7-year period. In addition, the wind direction distributions and mean wind directions on all predicted nights and actual nights were compared. Observations at around 395 m above ground level (AGL), the height at which radar observations have shown that the greatest number of locusts fly, were used to determine the actual nights. Tests and comparisons were also made for a second height, 990 m AGL, as this was used in the previous investigation. Our analysis shows that the proposed criteria are successful in predicting migratory flight only in March, when the surface temperature is effective as a predictor. Surface wind speed has no predictive power. It is suggested that the strong daytime surface wind speed requirement should be dropped and that other meteorological variables need to be added to the requirement of a warm surface temperature around dusk for the predictions to have much utility.
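Contingency tests of this kind compare predicted and actual migration nights in a 2x2 table; standard forecast-verification scores can then be computed as follows (the night lists are toy data, not the radar record):

```python
def forecast_skill(predicted, actual):
    """Verify nightly migration forecasts against radar detections using a
    2x2 contingency table. Inputs are parallel boolean lists, one per night."""
    hits = sum(p and a for p, a in zip(predicted, actual))
    misses = sum((not p) and a for p, a in zip(predicted, actual))
    false_alarms = sum(p and (not a) for p, a in zip(predicted, actual))
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false-alarm ratio
    return pod, far

# Five example nights: forecast migration vs. radar-detected migration.
pod, far = forecast_skill([True, True, False, True, False],
                          [True, False, False, True, True])
```

A criterion with predictive power should raise the probability of detection without inflating the false-alarm ratio; by the abstract's account, the temperature threshold achieves this only in March.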
RandomForest4Life: a Random Forest for predicting ALS disease progression.
Hothorn, Torsten; Jung, Hans H
2014-09-01
We describe a method for predicting disease progression in amyotrophic lateral sclerosis (ALS) patients. The method was developed as a submission to the DREAM Phil Bowen ALS Prediction Prize4Life Challenge of summer 2012. Based on repeated patient examinations over a three-month period, we used a random forest algorithm to predict future disease progression. The procedure was set up and internally evaluated using data from 1197 ALS patients. External validation by an expert jury was based on undisclosed information of an additional 625 patients; all patient data were obtained from the PRO-ACT database. In terms of prediction accuracy, the approach described here ranked third best. Our interpretation of the prediction model confirmed previous reports suggesting that past disease progression is a strong predictor of future disease progression measured on the ALS functional rating scale (ALSFRS). We also found that larger variability in initial ALSFRS scores is linked to faster future disease progression. The results reported here furthermore suggested that approaches taking the multidimensionality of the ALSFRS into account promise some potential for improved ALS disease prediction.
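The random forest itself is not reproducible from an abstract, but its strongest reported predictor, past disease progression, can be illustrated by the ALSFRS slope over the observation window and its linear extrapolation (a deliberately simple baseline for intuition, not the submitted model):

```python
def alsfrs_slope(times_months, scores):
    """Ordinary least-squares slope of ALSFRS score versus time (points/month)."""
    n = len(times_months)
    mean_t = sum(times_months) / n
    mean_s = sum(scores) / n
    num = sum((t - mean_t) * (s - mean_s) for t, s in zip(times_months, scores))
    den = sum((t - mean_t) ** 2 for t in times_months)
    return num / den

def predict_future_score(times_months, scores, horizon_months):
    """Extrapolate linearly: past progression as a predictor of future progression."""
    slope = alsfrs_slope(times_months, scores)
    return scores[-1] + slope * (horizon_months - times_months[-1])
```

A random forest improves on this baseline by combining the slope with other features, such as the within-window variability of the scores that the abstract also reports as predictive.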
Near Real-Time Optimal Prediction of Adverse Events in Aviation Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander; Das, Santanu
2010-01-01
The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we demonstrate how to recast the anomaly prediction problem into a form whose solution is accessible as a level-crossing prediction problem. The level-crossing prediction problem has an elegant, optimal, yet untested solution under certain technical constraints, and only when the appropriate modeling assumptions are made. As such, we will thoroughly investigate the resilience of these modeling assumptions, and show how they affect final performance. Finally, the predictive capability of this method will be assessed by quantitative means, using both validation and test data containing anomalies or adverse events from real aviation data sets that have previously been identified as operationally significant by domain experts. It will be shown that the formulation proposed yields a lower false alarm rate on average than competing methods based on similarly advanced concepts, and a higher correct detection rate than a standard method based upon exceedances that is commonly used for prediction.
Depta, Jeremiah P; Patel, Jayendrakumar S; Novak, Eric; Gage, Brian F; Masrani, Shriti K; Raymer, David; Facey, Gabrielle; Patel, Yogesh; Zajarias, Alan; Lasala, John M; Amin, Amit P; Kurz, Howard I; Singh, Jasvindar; Bach, Richard G
2015-02-21
Although lesions for which revascularization is deferred following fractional flow reserve (FFR) assessment have a low risk of adverse cardiac events, variability in risk for deferred lesion intervention (DLI) has not been previously evaluated. The aim of this study was to develop a prediction model to estimate 1-year risk of DLI for coronary lesions where revascularization was not performed following FFR assessment. A prediction model for DLI was developed from a cohort of 721 patients with 882 coronary lesions where revascularization was deferred based on FFR between 10/2002 and 7/2010. Deferred lesion intervention was defined as any revascularization of a lesion previously deferred following FFR. The final DLI model was developed using stepwise Cox regression and validated using bootstrapping techniques. An algorithm was constructed to predict the 1-year risk of DLI. During a mean (±SD) follow-up period of 4.0 ± 2.3 years, 18% of lesions deferred after FFR underwent DLI; the 1-year incidence of DLI was 5.3%, while the predicted risk of DLI varied from 1 to 40%. The final Cox model included the FFR value, age, current or former smoking, history of coronary artery disease (CAD) or prior percutaneous coronary intervention, multi-vessel CAD, and serum creatinine. The c statistic for the DLI prediction model was 0.66 (95% confidence interval, CI: 0.61-0.70). Patients for whom revascularization is deferred based on FFR vary in their risk for DLI. A clinical prediction model consisting of five clinical variables and the FFR value can help predict the risk of DLI in the first year following FFR assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
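A Cox-model risk algorithm of the kind described converts a patient's covariates into a linear predictor and combines it with a baseline survival estimate. The coefficients and baseline survival below are hypothetical placeholders, since the fitted values are not reported in the abstract, and a real implementation would also center covariates:

```python
import math

# Hypothetical coefficients for the six reported predictors, for illustration
# only; the published model's fitted values are not given in the abstract.
COEFS = {"ffr": -4.0, "age_per_10yr": 0.10, "smoker": 0.40,
         "prior_cad_or_pci": 0.50, "multivessel_cad": 0.35, "creatinine": 0.20}
BASELINE_SURVIVAL_1YR = 0.95  # assumed S0(1 year) at all-zero covariates

def one_year_dli_risk(covariates):
    """Cox-model risk at 1 year: 1 - S0(1)^exp(linear predictor)."""
    lp = sum(COEFS[name] * value for name, value in covariates.items())
    return 1.0 - BASELINE_SURVIVAL_1YR ** math.exp(lp)
```

With a negative FFR coefficient, a lower (more ischemic) FFR value raises the linear predictor and hence the predicted 1-year risk, matching the direction of effect one would expect clinically.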
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
NASA Astrophysics Data System (ADS)
Hao, Ming; Bryant, Stephen H.; Wang, Yanli
2017-01-01
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends on not only the proposed objective function, but also the used nonlinear diffusion technique, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset for increasing the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
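The core of a logistic matrix factorization predictor is a sigmoid over latent drug and target factors, with new predictions smoothed by kernel neighbors. The sketch below substitutes a simple linear kernel combination for the paper's nonlinear diffusion step; factor matrices and kernels are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_interactions(U, V, k_drug, k_target, alpha=0.5):
    """Drug-target interaction scores from logistic matrix factorization,
    smoothed by neighbor kernels (a simplified linear combination, not the
    paper's nonlinear diffusion). U: drug factors, V: target factors;
    kernels are row-normalized similarity matrices."""
    p = sigmoid(U @ V.T)                 # latent-factor interaction probabilities
    smoothed = k_drug @ p @ k_target.T   # borrow scores from similar drugs/targets
    return alpha * p + (1 - alpha) * smoothed
```

Neighbor smoothing is what lets the model score new drugs or targets with no known interactions: their rows of `p` are uninformative, but their kernel neighbors' rows are not.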
Nolan, Richard C; Richmond, Peter; Prescott, Susan L; Mallon, Dominic F; Gong, Grace; Franzmann, Annkathrin M; Naidoo, Rama; Loh, Richard K S
2007-05-01
Peanut allergy is transient in some children, but it is not clear whether quantitating peanut-specific IgE by skin prick test (SPT) adds additional information to fluorescent-enzyme immunoassay (FEIA) in discriminating between allergic and tolerant children. To investigate whether SPT with a commercial extract or fresh foods adds additional predictive information for peanut challenge in children with a low FEIA (<10 kUA/L) who were previously sensitized, or allergic, to peanuts. Children from a hospital-based allergy service who were previously sensitized or allergic to peanuts were invited to undergo a peanut challenge unless they had a serum peanut-specific IgE >10 kUA/L, a previous severe reaction, or a recent reaction to peanuts (within two years). SPT with a commercial extract and with raw and roasted saline-soaked peanuts was performed immediately prior to an open challenge in hospital, with increasing quantities of peanut given until a total of 26.7 g had been consumed. A positive challenge consisted of an objective IgE-mediated reaction occurring during the observation period. 54 children (median age 6.3 years) were admitted for a challenge. Nineteen challenges were positive, 27 negative, five were indeterminate and three did not proceed after SPT. Commercial and fresh food extracts provided similar diagnostic information. A wheal diameter of ≥7 mm with the commercial extract predicted an allergic outcome with specificity 97%, positive predictive value 93% and sensitivity 83%. There was a tendency for an increase in SPT wheal size since initial diagnosis in children who remained allergic to peanuts, while it decreased in those with a negative challenge. The outcome of a peanut challenge in peanut-sensitized or previously allergic children with a low FEIA can be predicted by SPT. In this cohort, not challenging children with an SPT wheal of ≥7 mm would have avoided 15 of 18 positive challenges and denied a challenge to one out of 27 tolerant children.
Carlsson, Sigrid V; Peltola, Mari T; Sjoberg, Daniel; Schröder, Fritz H; Hugosson, Jonas; Pettersson, Kim; Scardino, Peter T; Vickers, Andrew J; Lilja, Hans; Roobol, Monique J
2013-09-01
To explore whether a panel of kallikrein markers in blood: total, free and intact prostate-specific antigen (PSA) and kallikrein-related peptidase 2, could be used as a non-invasive alternative for predicting prostate cancer on biopsy in a screening setting. The study cohort comprised previously unscreened men who underwent sextant biopsy owing to elevated PSA (≥3 ng/mL) in two different centres of the European Randomized Study of Screening for Prostate Cancer, Rotterdam (n = 2914) and Göteborg (n = 740). A statistical model, based on kallikrein markers, was compared with one based on established clinical factors for the prediction of biopsy outcome. The clinical tests were found to be no better than blood markers, with an area under the curve in favour of the blood measurements of 0.766 vs. 0.763 in Rotterdam and 0.809 vs. 0.774 in Göteborg. Adding digital rectal examination (DRE) or DRE plus transrectal ultrasonography (TRUS) volume to the markers improved discrimination, although the increases were small. Results were similar for predicting high-grade cancer. There was a strong correlation between the blood measurements and TRUS-estimated prostate volume (Spearman's correlation 0.60 in Rotterdam and 0.57 in Göteborg). In previously unscreened men, each with indication for biopsy, a statistical model based on kallikrein levels was similar to a clinical model in predicting prostate cancer in a screening setting, outside the day-to-day clinical practice. Whether a clinical approach can be replaced by laboratory analyses or used in combination with decision models (nomograms) is a clinical judgment that may vary from clinician to clinician depending on how they weigh the different advantages and disadvantages (harms, costs, time, invasiveness) of both approaches.
Iyer, Lakshminarayan M; Zhang, Dapeng; Burroughs, A Maxwell; Aravind, L
2013-09-01
Discovery of the TET/JBP family of dioxygenases that modify bases in DNA has sparked considerable interest in novel DNA base modifications and their biological roles. Using sensitive sequence and structure analyses combined with contextual information from comparative genomics, we computationally characterize over 12 novel biochemical systems for DNA modifications. We predict previously unidentified enzymes, such as the kinetoplastid J-base generating glycosyltransferase (and its homolog GREB1), the catalytic specificity of bacteriophage TET/JBP proteins and their role in complex DNA base modifications. We also predict the enzymes involved in synthesis of hypermodified bases such as alpha-glutamylthymine and alpha-putrescinylthymine that have remained enigmatic for several decades. Moreover, the current analysis suggests that bacteriophages and certain nucleo-cytoplasmic large DNA viruses contain an unexpectedly diverse range of DNA modification systems, in addition to those using previously characterized enzymes such as Dam, Dcm, TET/JBP, pyrimidine hydroxymethylases, Mom and glycosyltransferases. These include enzymes generating modified bases such as deazaguanines related to queuine and archaeosine, pyrimidines comparable with lysidine, those derived using modified S-adenosyl methionine derivatives and those using TET/JBP-generated hydroxymethyl pyrimidines as biosynthetic starting points. We present evidence that some of these modification systems are also widely dispersed across prokaryotes and certain eukaryotes such as basidiomycetes, chlorophyte and stramenopile algae, where they could serve as novel epigenetic marks for regulation or discrimination of self from non-self DNA. Our study extends the role of the PUA-like fold domains in recognition of modified nucleic acids and predicts versions of the ASCH and EVE domains to be novel 'readers' of modified bases in DNA.
These results open opportunities for the investigation of the biology of these systems and their use in biotechnology.
Predicting clicks of PubMed articles.
Mao, Yuqing; Lu, Zhiyong
2013-01-01
Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.
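A log-normal trend of the kind described above relates accesses to time since the article became available. A minimal sketch, assuming a standard log-normal curve shape and using a coarse least-squares grid search instead of the paper's maximum-likelihood estimation (the data below are synthetic):

```python
import math

def lognormal_trend(t, a, mu, sigma):
    """Expected accesses at time t (t > 0, e.g. days since availability)
    under a log-normal trend curve scaled by a."""
    return a / (t * sigma * math.sqrt(2 * math.pi)) * math.exp(
        -(math.log(t) - mu) ** 2 / (2 * sigma ** 2))

def fit(ts, ys):
    """Coarse grid search for (a, mu, sigma) minimizing squared error;
    the scale a is solved in closed form for each (mu, sigma)."""
    best, best_err = None, float("inf")
    for mu in [x / 10 for x in range(1, 40)]:
        for sigma in [x / 10 for x in range(2, 20)]:
            basis = [lognormal_trend(t, 1.0, mu, sigma) for t in ts]
            denom = sum(b * b for b in basis)
            a = sum(b * y for b, y in zip(basis, ys)) / denom if denom else 0.0
            err = sum((a * b - y) ** 2 for b, y in zip(basis, ys))
            if err < best_err:
                best, best_err = (a, mu, sigma), err
    return best

# Synthetic click series: a peak shortly after publication, then decay.
ts = list(range(1, 30))
ys = [lognormal_trend(t, 500, 2.0, 0.8) for t in ts]
a, mu, sigma = fit(ts, ys)
```

On this noiseless synthetic series the grid search recovers the generating parameters, illustrating how a fitted curve can then extrapolate future accesses.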
Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM
NASA Astrophysics Data System (ADS)
Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan
2018-02-01
Traditional prediction services for emergency or non-periodic events usually cannot generate satisfying results or fulfill their prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, by combining RNN-LSTM with a priori information about public events. In prediction tasks, the model reliably determines trends, and its accuracy was validated. The model delivers better performance and prediction results than the previous one. Using a priori information increases prediction accuracy; LSTM adapts better to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task as well as to other prediction tasks involving time sequences.
Wang, Xun-Heng; Jiao, Yun; Li, Lihua
2017-10-24
Attention deficit hyperactivity disorder (ADHD) is a common brain disorder with high prevalence in school-age children. Previously developed machine learning-based methods have discriminated patients with ADHD from normal controls by providing label information of the disease for individuals. Inattention and impulsivity are the two most significant clinical symptoms of ADHD. However, predicting clinical symptoms (i.e., inattention and impulsivity) is a challenging task based on neuroimaging data. The goal of this study is twofold: to build predictive models for clinical symptoms of ADHD based on resting-state fMRI and to mine brain networks for predictive patterns of inattention and impulsivity. To achieve this goal, a cohort of 74 boys with ADHD and a cohort of 69 age-matched normal controls were recruited from the ADHD-200 Consortium. Both structural and resting-state fMRI images were obtained for each participant. Temporal patterns between and within intrinsic connectivity networks (ICNs) were applied as raw features in the predictive models. Specifically, sample entropy was taken as an intra-ICN feature, and phase synchronization (PS) was used as an inter-ICN feature. The predictive models were based on the least absolute shrinkage and selection operator (LASSO) algorithm. The performance of the predictive model for inattention is r = 0.79 (p < 10^-8), and the performance of the predictive model for impulsivity is r = 0.48 (p < 10^-8). The ICN-related predictive patterns may provide valuable information for investigating the brain network mechanisms of ADHD. In summary, the predictive models for clinical symptoms could be beneficial for personalizing ADHD medications.
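The intra-ICN feature named above, sample entropy, quantifies the irregularity of a time series: regular signals score low, irregular ones high. A minimal stdlib sketch (parameter choices m = 2, r = 0.2 are common defaults, and the signals below are synthetic stand-ins for fMRI time courses):

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal: -log(A/B), where B counts template
    matches of length m and A matches of length m+1, within r * std."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if all(abs(x[i + k] - x[j + k]) <= tol for k in range(mm)):
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a and b else float("inf")

regular = [0, 1] * 50                       # perfectly repeating signal
rng = random.Random(1)
irregular = [rng.random() for _ in range(100)]
```

Features like this for each ICN would then feed a LASSO regression against the symptom scores.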
Gender differences in the causal direction between workplace harassment and drinking.
Freels, Sally A; Richman, Judith A; Rospenda, Kathleen M
2005-08-01
Data from a longitudinal study of university employees across four waves is used to determine the extent to which workplace harassment predicts drinking or conversely the extent to which drinking predicts workplace harassment, and to address gender differences in these relationships. Mixed effects regression models are used to test the effects of 1) harassment at the previous wave on drinking at the current wave, adjusting for drinking at the previous wave, and 2) drinking at the previous wave on harassment at the current wave, adjusting for harassment at the previous wave. For males, drinking at the previous wave predicts sexual harassment at the current wave, whereas for females, sexual harassment at the previous wave predicts drinking at the current wave.
Semantic contextual cuing and visual attention.
Goujon, Annabelle; Didierjean, André; Marmèche, Evelyne
2009-02-01
Since M. M. Chun and Y. Jiang's (1998) original study, a large body of research based on the contextual cuing paradigm has shown that the visuocognitive system is capable of capturing certain regularities in the environment in an implicit way. The present study investigated whether regularities based on the semantic category membership of the context can be learned implicitly and whether that learning depends on attention. The contextual cuing paradigm was used with lexical displays in which the semantic category of the contextual words either did or did not predict the target location. Experiments 1 and 2 revealed that implicit contextual cuing effects can be extended to semantic category regularities. Experiments 3 and 4 indicated an implicit contextual cuing effect when the predictive context appeared in an attended color but not when the predictive context appeared in an ignored color. However, when the previously ignored context suddenly became attended, it immediately facilitated performance. In contrast, when the previously attended context suddenly became ignored, no benefit was observed. Results suggest that the expression of implicit semantic knowledge depends on attention but that latent learning can nevertheless take place outside the attentional field.
A call for tiger management using "reserves" of genetic diversity.
Bay, Rachael A; Ramakrishnan, Uma; Hadly, Elizabeth A
2014-01-01
Tigers (Panthera tigris), like many large carnivores, are threatened by anthropogenic impacts, primarily habitat loss and poaching. Current conservation plans for tigers focus on population expansion, with the goal of doubling census size in the next 10 years. Previous studies have shown that because the demographic decline was recent, tiger populations still retain a large amount of genetic diversity. Although maintaining this diversity is extremely important to avoid deleterious effects of inbreeding, management plans have yet to consider predictive genetic models. We used coalescent simulations based on previously sequenced mitochondrial fragments (n = 125) from 5 of 6 extant subspecies to predict the population growth needed to maintain current genetic diversity over the next 150 years. We found that the level of gene flow between populations has a large effect on the local population growth necessary to maintain genetic diversity, without which tigers may face decreases in fitness. In the absence of gene flow, we demonstrate that maintaining genetic diversity is impossible based on known demographic parameters for the species. Thus, managing for the genetic diversity of the species should be prioritized over the riskier preservation of distinct subspecies. These predictive simulations provide unique management insights, hitherto not possible using existing analytical methods.
Discriminative Prediction of A-To-I RNA Editing Events from DNA Sequence
Sun, Jiangming; Singh, Pratibha; Bagge, Annika; Valtat, Bérengère; Vikman, Petter; Spégel, Peter; Mulder, Hindrik
2016-01-01
RNA editing is a post-transcriptional alteration of RNA sequences that, via insertions, deletions or base substitutions, can affect protein structure as well as RNA and protein expression. Recently, it has been suggested that RNA editing may be more frequent than previously thought. A great impediment, however, to a deeper understanding of this process is the paramount sequencing effort that needs to be undertaken to identify RNA editing events. Here, we describe an in silico approach, based on machine learning, that ameliorates this problem. Using 41 nucleotide long DNA sequences, we show that novel A-to-I RNA editing events can be predicted from known A-to-I RNA editing events intra- and interspecies. The validity of the proposed method was verified in an independent experimental dataset. Using our approach, 203,202 putative A-to-I RNA editing events were predicted in the whole human genome. Out of these, 9% were previously reported. The remaining sites require further validation, e.g., by targeted deep sequencing. In conclusion, the approach described here is a useful tool to identify potential A-to-I RNA editing events without the requirement of extensive RNA sequencing. PMID:27764195
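Classifying 41-nucleotide windows can be sketched as one-hot encoding plus a linear classifier. Everything below is illustrative: the abstract does not specify the model or features used, the "editing rule" is a toy construct, and a simple perceptron stands in for the paper's machine-learning method:

```python
import random

BASES = "ACGT"

def one_hot(seq):
    """Flatten a DNA window into a 4 * len(seq) binary feature vector."""
    v = [0] * (4 * len(seq))
    for i, b in enumerate(seq):
        v[4 * i + BASES.index(b)] = 1
    return v

def train_perceptron(data, epochs=20):
    """Tiny linear classifier standing in for the paper's model."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != y:
                sign = 1 if y else -1
                w = [wi + sign * xi for wi, xi in zip(w, x)]
    return w

# Toy rule: windows carrying 'TAG' around the centre adenosine are "edited".
rng = random.Random(0)

def window(edited):
    s = [rng.choice(BASES) for _ in range(41)]
    s[19:22] = list("TAG") if edited else list("CCC")
    return "".join(s)

seqs = [(window(y), y) for y in [0, 1] * 50]
data = [(one_hot(s), y) for s, y in seqs]
w = train_perceptron(data)
classify = lambda seq: 1 if sum(
    wi * xi for wi, xi in zip(w, one_hot(seq))) > 0 else 0
```

Because the toy rule is linearly separable in the one-hot encoding, the perceptron learns it from the training windows.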
Tian, Weidong; Zhang, Lan V; Taşan, Murat; Gibbons, Francis D; King, Oliver D; Park, Julie; Wunderlich, Zeba; Cherry, J Michael; Roth, Frederick P
2008-01-01
Background: Learning the function of genes is a major goal of computational genomics. Methods for inferring gene function have typically fallen into two categories: 'guilt-by-profiling', which exploits correlation between function and other gene characteristics; and 'guilt-by-association', which transfers function from one gene to another via biological relationships. Results: We have developed a strategy ('Funckenstein') that performs guilt-by-profiling and guilt-by-association and combines the results. Using a benchmark set of functional categories and input data for protein-coding genes in Saccharomyces cerevisiae, Funckenstein was compared with a previous combined strategy. Subsequently, we applied Funckenstein to 2,455 Gene Ontology terms. In the process, we developed 2,455 guilt-by-profiling classifiers based on 8,848 gene characteristics and 12 functional linkage graphs based on 23 biological relationships. Conclusion: Funckenstein outperforms a previous combined strategy using a common benchmark dataset. The combination of 'guilt-by-profiling' and 'guilt-by-association' gave significant improvement over the component classifiers, showing the greatest synergy for the most specific functions. Performance was evaluated by cross-validation and by literature examination of the top-scoring novel predictions. These quantitative predictions should help prioritize experimental study of yeast gene functions. PMID:18613951
NASA Astrophysics Data System (ADS)
Foster, Regina
Online education has exploded in popularity. While there is ample research on predictors of traditional college student success, little research has been done on effective methods of predicting student success in online education. In this study, a number of demographic variables including GPA, ACT, gender, age and others were examined to determine what, if any, role they play in successfully predicting student success in an online, lab-based biology course for non-majors. Within-course variables such as participation in specific categories of assignments and frequency of online visits were also examined. Groups of students, including Native American/non-Native American students and digital immigrants/digital natives, were also examined to determine whether overall course success differed significantly between groups. Good predictors of online success were found to be GPA, ACT, previous course experience and frequency of online visits to the course materials. Additionally, students who completed more of the online assignments within the course were more successful. Native American and non-Native American students were also found to differ significantly in overall course success. Findings indicate that student academic background, previous college experience and time spent with course materials are the most important factors in course success. Recommendations include having enrollment advisors counsel students that academic background, previous course experience and time spent with course materials can affect success in, and should inform the choice of, online courses. A need for additional research in several areas is indicated, including Native American and non-Native American differences. A more detailed examination of students' previous coursework would also be valuable. A study involving more courses, a larger number of students and surveys from faculty who teach online courses would help improve the generalizability of the conclusions.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
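The PSO step in the pipeline above searches for SVM hyperparameters that minimize a validation error. A minimal particle swarm optimizer can be sketched as follows; here a smooth stand-in objective replaces the cross-validated SVM error, and all parameter values (inertia, acceleration constants, bounds) are conventional defaults, not the paper's settings:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for cross-validated SVM error over (log C, log gamma);
# the true optimum of this toy objective is at (1.0, -2.0).
objective = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, val = pso(objective, [(-5, 5), (-5, 5)])
```

In the paper's setting, `objective` would train and validate an SVM per prediction region for each candidate parameter pair.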
Optimal Growth in Hypersonic Boundary Layers
NASA Technical Reports Server (NTRS)
Paredes, Pedro; Choudhari, Meelan M.; Li, Fei; Chang, Chau-Lyan
2016-01-01
The linear form of the parabolized linear stability equations is used in a variational approach to extend the previous body of results for the optimal, nonmodal disturbance growth in boundary-layer flows. This paper investigates the optimal growth characteristics in the hypersonic Mach number regime without any high-enthalpy effects. The influence of wall cooling is studied, with particular emphasis on the role of the initial disturbance location and the value of the spanwise wave number that leads to the maximum energy growth up to a specified location. Unlike previous predictions that used a basic state obtained from a self-similar solution to the boundary-layer equations, mean flow solutions based on the full Navier-Stokes equations are used in select cases to help account for the viscous-inviscid interaction near the leading edge of the plate and for the weak shock wave emanating from that region. Using the full Navier-Stokes mean flow is shown to result in further reduction with Mach number in the magnitude of optimal growth relative to the predictions based on the self-similar approximation to the base flow.
Product Deformulation to Inform High-throughput Exposure Predictions (SOT)
The health risks posed by the thousands of chemicals in our environment depends on both chemical hazard and exposure. However, relatively few chemicals have estimates of exposure intake, limiting the understanding of risks. We have previously developed a heuristics-based exposur...
COPRED: prediction of fold, GO molecular function and functional residues at the domain level.
López, Daniel; Pazos, Florencio
2013-07-15
Only recently have the first resources devoted to the functional annotation of proteins at the domain level started to appear. The next step is to develop specific methodologies for predicting function at the domain level based on these resources, and to implement them in web servers to be used by the community. In this work, we present COPRED, a web server for the concomitant prediction of fold, molecular function and functional sites at the domain level, based on a methodology for domain molecular function prediction and a resource of domain functional annotations previously developed and benchmarked. COPRED can be freely accessed at http://csbg.cnb.csic.es/copred. The interface works in all standard web browsers. WebGL (natively supported by most browsers) is required for the in-line preview and manipulation of protein 3D structures. The website includes a detailed help section and usage examples. pazos@cnb.csic.es.
Ontology-based prediction of surgical events in laparoscopic surgery
NASA Astrophysics Data System (ADS)
Katić, Darko; Wekerle, Anna-Laura; Gärtner, Fabian; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie
2013-03-01
Context-aware technologies have great potential to help surgeons during laparoscopic interventions. Their underlying idea is to create systems which can adapt their assistance functions automatically to the situation in the OR, thus relieving surgeons from the burden of managing computer-assisted surgery devices manually. To this end, a certain level of understanding of the current situation in the OR is essential. Beyond that, anticipatory knowledge of upcoming events is beneficial, e.g. for early warnings of imminent risk situations. To achieve the goal of predicting surgical events based on previously observed ones, we developed a language to describe surgeries and surgical events using Description Logics and integrated it with methods from computational linguistics. Using n-grams to compute probabilities of follow-up events, we are able to make sensible predictions of upcoming events in real time. The system was evaluated on professionally recorded and labeled surgeries and showed an average prediction rate of 80%.
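The n-gram idea can be sketched with a bigram model over event sequences; the event names and logs below are hypothetical simplifications, not data from the study:

```python
from collections import Counter, defaultdict

def build_bigram_model(sequences):
    """Count event-to-event transitions across a set of event sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, event):
    """Most likely follow-up event given the current one, or None."""
    if event not in counts:
        return None
    return counts[event].most_common(1)[0][0]

def probability(counts, event, nxt):
    total = sum(counts[event].values())
    return counts[event][nxt] / total if total else 0.0

# Hypothetical simplified event logs from laparoscopic procedures.
logs = [
    ["incision", "dissection", "clipping", "cutting", "retraction"],
    ["incision", "dissection", "coagulation", "clipping", "cutting"],
    ["incision", "dissection", "clipping", "cutting", "suction"],
]
model = build_bigram_model(logs)
```

Higher-order n-grams (trigrams and beyond) condition on longer event histories in the same way, at the cost of needing more recorded surgeries to estimate the probabilities.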
Binding ligand prediction for proteins using partial matching of local surface patches.
Sael, Lee; Kihara, Daisuke
2010-01-01
Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group.
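The weighted bipartite matching between surface patches can be sketched as a maximum-weight assignment over a patch-similarity matrix. Here an exhaustive search stands in for a proper assignment algorithm (feasible for the handful of patches in a pocket), and the similarity scores are made up rather than derived from 3D Zernike descriptors:

```python
from itertools import permutations

def best_matching(score):
    """Exhaustive maximum-weight bipartite matching: rows are patches of a
    query pocket, columns are patches of a known pocket, score[i][j] is
    their similarity (e.g. negative descriptor distance)."""
    n = len(score)
    best_perm, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best_total:
            best_perm, best_total = perm, total
    return list(best_perm), best_total

# Toy pairwise patch similarities between two pockets.
score = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.1, 0.4, 0.7],
]
match, total = best_matching(score)
```

For larger patch sets a polynomial-time method such as the Hungarian algorithm would replace the brute-force loop; the total matching weight then serves as the pocket-to-pocket similarity used for ranking candidate ligands.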
Concomitant prediction of function and fold at the domain level with GO-based profiles.
Lopez, Daniel; Pazos, Florencio
2013-01-01
Predicting the function of newly sequenced proteins is crucial given the pace at which raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves; rather, the databases of functional annotations from which these methods transfer functional terms to new proteins record annotations on a whole-chain basis. Nevertheless, domains are the basic evolutionary, and often functional, units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent of each other. For that reason, resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources, are required. We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated with the predicted function.
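The profile-based transfer the abstract describes can be sketched as nearest-profile lookup: represent each annotated domain as a GO-based profile vector and hand the query domain the terms of its most similar neighbour. The data structures and scoring below are illustrative assumptions, not the authors' exact method:

```python
import math

def cosine(u, v):
    # Cosine similarity between two profile vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def predict_domain_function(query_profile, annotated_domains):
    """Transfer GO terms from the annotated domain whose GO-based profile
    is most similar to the query domain's profile.

    annotated_domains: dict name -> (profile_vector, set_of_go_terms).
    Illustrative nearest-profile transfer; fold prediction would follow
    from the fold label attached to the winning domain.
    """
    best = max(annotated_domains,
               key=lambda name: cosine(annotated_domains[name][0], query_profile))
    return best, annotated_domains[best][1]
```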
Brown, Gregory P; Phillips, Benjamin L; Shine, Richard
2011-02-01
Predicting which species will be affected by an invasive taxon is critical to developing conservation priorities, but this is a difficult task. A previous study on the impact of invasive cane toads (Bufo marinus) on Australian snakes attempted to predict vulnerability a priori based on the assumptions that any snake species that eats frogs, and is vulnerable to toad toxins, may be at risk from the toad invasion. We used time-series analyses to evaluate the accuracy of that prediction, based on >3600 standardized nocturnal surveys over a 138-month period on 12 species of snakes and lizards on a floodplain in the Australian wet-dry tropics, bracketing the arrival of cane toads at this site. Contrary to prediction, encounter rates with most species were unaffected by toad arrival, and some taxa predicted to be vulnerable to toads increased rather than declined (e.g., death adder Acanthophis praelongus; Children's python Antaresia childreni). Indirect positive effects of toad invasion (perhaps mediated by toad-induced mortality of predatory varanid lizards) and stochastic weather events outweighed effects of toad invasion for most snake species. Our study casts doubt on the ability of a priori desktop studies, or short-term field surveys, to predict or document the ecological impact of invasive species.
Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models
2017-01-01
We describe a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture consisting of two recurrent neural networks, an approach that has previously shown great success in other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, spanning 10 broad reaction types commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
NASA Astrophysics Data System (ADS)
Junker, Philipp; Jaeger, Stefanie; Kastner, Oliver; Eggeler, Gunther; Hackl, Klaus
2015-07-01
In this work, we present simulations of shape memory alloys which serve as first examples demonstrating the predicting character of energy-based material models. We begin with a theoretical approach for the derivation of the caloric parts of the Helmholtz free energy. Afterwards, experimental results for DSC measurements are presented. Then, we recall a micromechanical model based on the principle of the minimum of the dissipation potential for the simulation of polycrystalline shape memory alloys. The previously determined caloric parts of the Helmholtz free energy close the set of model parameters without the need of parameter fitting. All quantities are derived directly from experiments. Finally, we compare finite element results for tension tests to experimental data and show that the model identified by thermal measurements can predict mechanically induced phase transformations and thus rationalize global material behavior without any further assumptions.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration, based on the "failure unit model", was presented for predicting the lifetime of metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
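The series ("weakest link") aspect of a failure-unit model can be illustrated with a small Monte Carlo: a line of N units in series fails as soon as its first unit fails, so MTTF drops as lines get longer. This sketch covers only the series case with lognormal unit lifetimes and made-up parameters; the paper's model additionally combines units in parallel and distinguishes single-grain from polygrain segments:

```python
import math
import random
import statistics

def line_lifetime(n_units, rng, median=100.0, sigma=0.4):
    """Series ('weakest link') failure: the line fails when its first
    failure unit fails. Unit lifetimes are drawn lognormally; the
    parameter values are illustrative, not fitted to data."""
    return min(rng.lognormvariate(math.log(median), sigma)
               for _ in range(n_units))

def simulate_mttf_dttf(n_units, n_lines=2000, seed=0):
    """Monte Carlo estimate of MTTF (median lifetime) and DTTF (spread)."""
    rng = random.Random(seed)
    lifetimes = [line_lifetime(n_units, rng) for _ in range(n_lines)]
    return statistics.median(lifetimes), statistics.stdev(lifetimes)
```

Sweeping `n_units`, `median`, and `sigma` mimics the paper's study of line length, median grain size, and grain-size distribution.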
USDA-ARS?s Scientific Manuscript database
Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...
USDA-ARS?s Scientific Manuscript database
We have shown previously that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...
Retreatment Predictions in Odontology by means of CBR Systems.
Campo, Livia; Aliaga, Ignacio J; De Paz, Juan F; García, Alvaro Enrique; Bajo, Javier; Villarubia, Gabriel; Corchado, Juan M
2016-01-01
The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure, such as endodontic therapy, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determining whether dental retreatment is required. In this work, we present a decision support system that applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. The system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent; false negatives here refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising.
Retreatment Predictions in Odontology by means of CBR Systems
Campo, Livia; Aliaga, Ignacio J.; García, Alvaro Enrique; Villarubia, Gabriel; Corchado, Juan M.
2016-01-01
The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure, such as endodontic therapy, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determining whether dental retreatment is required. In this work, we present a decision support system that applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. The system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent; false negatives here refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising. PMID:26884749
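The core CBR step, retrieving similar past cases and reusing their outcomes, can be sketched as k-nearest-neighbour retrieval with a decision threshold shifted to suppress the false negatives the records above target. The features, distance, and `caution` parameter are assumptions for illustration, not the published combination of methods:

```python
import math

def cbr_predict(new_case, case_base, k=3, caution=0.1):
    """k-nearest-neighbour case retrieval with a shifted decision threshold.

    case_base: list of (feature_vector, outcome), where outcome 1 means the
    retreatment was effective and 0 means it was ineffective. Requiring a
    vote strictly above 0.5 + caution before recommending retreatment
    reduces false negatives (recommending a retreatment that would fail).
    """
    nearest = sorted(case_base, key=lambda c: math.dist(c[0], new_case))[:k]
    vote = sum(outcome for _, outcome in nearest) / k
    return 1 if vote > 0.5 + caution else 0
```

Raising `caution` trades recall for fewer futile retreatments, which is the kind of precision/recall trade-off the abstract's optimization problem tunes.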
Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction
Cruz-Cano, Raul; Chew, David S.H.; Kwok-Pui, Choi; Ming-Ying, Leung
2010-01-01
Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach by least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped find the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications. PMID:20729987
Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction.
Cruz-Cano, Raul; Chew, David S H; Kwok-Pui, Choi; Ming-Ying, Leung
2010-06-01
Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach by least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped find the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications.
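The LS-SVM training step described in the two records above reduces to solving a single linear system rather than a quadratic programme. A minimal sketch with an RBF kernel and toy numeric features in place of the DNA-sequence features used for replication-origin prediction:

```python
import math

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(u, v, sigma):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train a least-squares SVM by solving one linear system:
        [0, 1^T; 1, K + I/gamma] [b; alpha] = [0; y].
    Returns a predict(x) function giving labels in {-1, +1}.
    Hyperparameters here are illustrative, not tuned values.
    """
    n = len(X)
    K = [[rbf(X[i], X[j], sigma) for j in range(n)] for i in range(n)]
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [K[i][j] + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve_linear(A, [0.0] + list(y))
    b, alpha = sol[0], sol[1:]

    def predict(x):
        s = b + sum(alpha[i] * rbf(X[i], x, sigma) for i in range(n))
        return 1 if s >= 0 else -1

    return predict
```

The recursive feature elimination mentioned in the abstract would wrap this training step, repeatedly dropping the feature whose removal degrades the model least.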
Størset, Elisabet; Holford, Nick; Hennig, Stefanie; Bergmann, Troels K; Bergan, Stein; Bremer, Sara; Åsberg, Anders; Midtvedt, Karsten; Staatz, Christine E
2014-09-01
The aim was to develop a theory-based population pharmacokinetic model of tacrolimus in adult kidney transplant recipients and to externally evaluate this model and two previous empirical models. Data were obtained from 242 patients with 3100 tacrolimus whole blood concentrations. External evaluation was performed by examining model predictive performance using Bayesian forecasting. Pharmacokinetic disposition parameters were estimated based on tacrolimus plasma concentrations, predicted from whole blood concentrations, haematocrit and literature values for tacrolimus binding to red blood cells. Disposition parameters were allometrically scaled to fat free mass. Tacrolimus whole blood clearance/bioavailability standardized to haematocrit of 45% and fat free mass of 60 kg was estimated to be 16.1 l h⁻¹ [95% CI 12.6, 18.0 l h⁻¹]. Tacrolimus clearance was 30% higher (95% CI 13, 46%) and bioavailability 18% lower (95% CI 2, 29%) in CYP3A5 expressers compared with non-expressers. An Emax model described decreasing tacrolimus bioavailability with increasing prednisolone dose. The theory-based model was superior to the empirical models during external evaluation displaying a median prediction error of −1.2% (95% CI −3.0, 0.1%). Based on simulation, Bayesian forecasting led to 65% (95% CI 62, 68%) of patients achieving a tacrolimus average steady-state concentration within a suggested acceptable range. A theory-based population pharmacokinetic model was superior to two empirical models for prediction of tacrolimus concentrations and seemed suitable for Bayesian prediction of tacrolimus doses early after kidney transplantation.
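The Emax relationship mentioned in the abstract, bioavailability falling with prednisolone dose, has the standard saturating form F = F_base · (1 − Emax · D / (ED50 + D)). A sketch with placeholder parameter values (the published estimates are not reproduced here):

```python
def bioavailability(prednisolone_dose_mg, f_base=0.25, emax=0.5, ed50=10.0):
    """Emax model of tacrolimus bioavailability decreasing with
    prednisolone dose: F = F_base * (1 - Emax * D / (ED50 + D)).
    All parameter values are illustrative placeholders."""
    d = prednisolone_dose_mg
    return f_base * (1.0 - emax * d / (ed50 + d))
```

At D = 0 the expression reduces to F_base, and as D grows the reduction saturates at the fraction Emax, which is what makes the model suitable for steroid-tapering regimens.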
Holland, E Penelope; James, Alex; Ruscoe, Wendy A; Pech, Roger P; Byrom, Andrea E
2015-01-01
Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts) are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT) for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus) outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.
Predictability of uncontrollable multifocal seizures - towards new treatment options
NASA Astrophysics Data System (ADS)
Lehnertz, Klaus; Dickten, Henning; Porz, Stephan; Helmstaedter, Christoph; Elger, Christian E.
2016-04-01
Drug-resistant, multifocal, non-resectable epilepsies are among the most difficult epileptic disorders to manage. An approach to control previously uncontrollable seizures in epilepsy patients would consist of identifying seizure precursors in critical brain areas combined with delivering a counteracting influence to prevent seizure generation. Predictability of seizures with acceptable levels of sensitivity and specificity, even in an ambulatory setting, has been repeatedly shown, however, in patients with a single seizure focus only. We did a study to assess feasibility of state-of-the-art, electroencephalogram-based seizure-prediction techniques in patients with uncontrollable multifocal seizures. We obtained significant predictive information about upcoming seizures in more than two thirds of patients. Unexpectedly, the emergence of seizure precursors was confined to non-affected brain areas. Our findings clearly indicate that epileptic networks, spanning lobes and hemispheres, underlie generation of seizures. Our proof-of-concept study is an important milestone towards new therapeutic strategies based on seizure-prediction techniques for clinical practice.
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include the power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This, combined with the recent capability of predicting losses in spur gears of nonstandard proportions, allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
Prediction of Turbulent Temperature Fluctuations in Hot Jets
NASA Technical Reports Server (NTRS)
Debonis, James R.
2017-01-01
Large-eddy simulations were used to investigate turbulent temperature fluctuations and turbulent heat flux in hot jets. A high-resolution finite-difference Navier-Stokes solver, WRLES, was used to compute the flow from a 2-inch round nozzle. Several different flow conditions, consisting of different jet Mach numbers and temperature ratios, were examined. Predictions of mean and fluctuating velocities were compared to previously obtained particle image velocimetry data. Predictions of mean and fluctuating temperature were compared to new data obtained using Raman spectroscopy. Based on the good agreement with experimental data for the individual quantities, the combined quantity turbulent heat flux was examined.
Correlation and prediction of gaseous diffusion coefficients.
NASA Technical Reports Server (NTRS)
Marrero, T. R.; Mason, E. A.
1973-01-01
A new correlation method for binary gaseous diffusion coefficients from very low temperatures to 10,000 K is proposed, based on an extended principle of corresponding states and having greater range and accuracy than previous correlations. There are two correlation parameters, which are related to other physical quantities and are predictable in the absence of diffusion measurements. Quantum effects and composition dependence are included, but high-pressure effects are not. The results are directly applicable to multicomponent mixtures.
Ramis, Mary-Anne; Chang, Anne; Nissen, Lisa
2018-04-01
Incorporating evidence-based practice (EBP) into clinical decision making and professional practice is a requirement for many health disciplines, yet research across health disciplines on factors that influence and predict student intention to use EBP following graduation has not previously been synthesized. To synthesize research on factors that influence development of EBP behaviors and subsequently predict undergraduate students' intention toward EBP uptake, a systematic review of prediction modeling studies was conducted according to a protocol previously published on the Prospero database: https://www.crd.york.ac.uk/PROSPERO/. The outcome variable was undergraduate students' future use or intention to use EBP. Evidence synthesis methods were guided by resources from the Cochrane Methods Prognosis Group Web site (https://prognosismethods.cochrane.org). Only three studies met the inclusion criteria for the review. Factors relating to EBP capability, EBP attitudes, and clinical and academic support were identified as influential toward students' intention to use evidence in practice. Heterogeneity limited data pooling; consequently, results are presented in narrative and tabular form. Although using a developing method, this review presents a unique contribution to further discussions regarding students' intention to use EBP following graduation. Despite limitations, consideration of the identified factors in undergraduate curricula could support students' intention to use EBP in their respective clinical environments. © 2017 Sigma Theta Tau International.
“Pathotyping” Multiplex PCR Assay for Haemophilus parasuis: a Tool for Prediction of Virulence
Weinert, Lucy A.; Peters, Sarah E.; Wang, Jinhong; Hernandez-Garcia, Juan; Chaudhuri, Roy R.; Luan, Shi-Lu; Angen, Øystein; Aragon, Virginia; Williamson, Susanna M.; Rycroft, Andrew N.; Wren, Brendan W.; Maskell, Duncan J.; Tucker, Alexander W.
2017-01-01
Haemophilus parasuis is a diverse bacterial species that is found in the upper respiratory tracts of pigs and can also cause Glässer's disease and pneumonia. A previous pangenome study of H. parasuis identified 48 genes that were associated with clinical disease. Here, we describe the development of a generalized linear model (termed a pathotyping model) to predict the potential virulence of isolates of H. parasuis based on a subset of 10 genes from the pangenome. A multiplex PCR (mPCR) was constructed based on these genes, the results of which were entered into the pathotyping model to yield a prediction of virulence. This new diagnostic mPCR was tested on 143 field isolates of H. parasuis that had previously been whole-genome sequenced and a further 84 isolates from the United Kingdom from cases of H. parasuis-related disease in pigs collected between 2013 and 2014. The combination of the mPCR and the pathotyping model predicted the virulence of an isolate with 78% accuracy for the original isolate collection and 90% for the additional isolate collection, providing an overall accuracy of 83% (81% sensitivity and 93% specificity) compared with that of the “current standard” of detailed clinical metadata. This new pathotyping assay has the potential to aid surveillance and disease control in addition to serotyping data. PMID:28615466
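A pathotyping model of the kind described, a generalized linear model over gene presence/absence calls from the mPCR, amounts to a logistic score. The gene names and coefficients below are hypothetical placeholders, not the study's fitted 10-gene model:

```python
import math

# Hypothetical marker genes and coefficients -- illustrative only; the
# real model uses 10 pangenome genes with coefficients fitted in the study.
GENE_WEIGHTS = {"geneA": 1.8, "geneB": 1.2, "geneC": -0.9, "geneD": 0.6}
INTERCEPT = -1.5

def predict_virulence(mpcr_bands):
    """Logistic (GLM) prediction of virulence from multiplex-PCR calls.

    mpcr_bands: dict mapping gene name -> True if the band is present.
    Returns the predicted probability that the isolate is virulent.
    """
    score = INTERCEPT + sum(w for gene, w in GENE_WEIGHTS.items()
                            if mpcr_bands.get(gene, False))
    return 1.0 / (1.0 + math.exp(-score))
```

Thresholding the probability (e.g. at 0.5) gives the virulent/non-virulent call whose sensitivity and specificity the abstract reports.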
Route Optimization for Offloading Congested Meter Fixes
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon
2016-01-01
The Optimized Route Capability (ORC) concept proposed by the FAA helps traffic managers identify and resolve arrival flight delays caused by bottlenecks formed at arrival meter fixes when there is an imbalance between arrival fixes and runways. ORC makes use of the prediction capability of existing automation tools, monitors traffic delays based on these predictions, and, when delays exceed a predefined threshold, searches for the best reroutes upstream of the meter fixes based on the predictions and estimated arrival schedules. Initial implementation and evaluation of the ORC concept considered only reroutes available at the time arrival congestion was first predicted. This work extends that work by introducing an additional dimension in the reroute options, so that ORC can find the best time to reroute and overcome the 'first-come-first-reroute' phenomenon. To deal with the enlarged reroute solution space, a genetic algorithm was developed. Experiments were conducted using the same traffic scenario as in previous work, in which an arrival rush was created for one of the four arrival meter fixes at George Bush Intercontinental Houston Airport. Results showed the new approach further improved delay savings. The suggested route changes were on average 30 minutes later than those from other approaches, and fewer reroutes were required. Fewer reroutes reduce operational complexity, and later reroutes help decision makers deal with uncertain situations.
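Searching the enlarged (route × reroute-time) space with a genetic algorithm can be sketched as follows. Everything here is an assumption for illustration: `delay` stands in for the trajectory-prediction-based delay estimate, the routes and times are small discrete sets, and the operators are the simplest elitist selection, crossover, and mutation:

```python
import random

def genetic_search(delay, routes, times, pop=30, gens=60, seed=1):
    """Toy genetic algorithm over (route, reroute-time) individuals.

    delay(route, t): hypothetical cost model returning the predicted
    arrival delay if `route` is issued at time `t`. Returns the best
    individual found.
    """
    rng = random.Random(seed)
    popn = [(rng.choice(routes), rng.choice(times)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ind: delay(*ind))
        elite = popn[: pop // 2]              # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = (a[0], b[1])              # crossover: route from a, time from b
            if rng.random() < 0.2:            # mutate route
                child = (rng.choice(routes), child[1])
            if rng.random() < 0.2:            # mutate time
                child = (child[0], rng.choice(times))
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda ind: delay(*ind))
```

Treating the reroute time as a gene alongside the route is what lets the search discover that a later reroute may cost less than the first one available.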
NASA Astrophysics Data System (ADS)
Aghakouchak, Amir; Tourian, Mohammad J.
2015-04-01
Development of reliable drought monitoring, prediction and recovery assessment tools is fundamental to water resources management. This presentation focuses on how gravimetry information can improve drought assessment. First, we provide an overview of the Global Integrated Drought Monitoring and Prediction System (GIDMaPS) which offers near real-time drought information using remote sensing observations and model simulations. Then, we present a framework for integration of satellite gravimetry information for improving drought prediction and recovery assessment. The input data include satellite-based and model-based precipitation, soil moisture estimates and equivalent water height. Previous studies show that drought assessment based on a single indicator may not be sufficient. For this reason, GIDMaPS provides drought information based on multiple drought indicators including the Standardized Precipitation Index (SPI), Standardized Soil Moisture Index (SSI) and the Multivariate Standardized Drought Index (MSDI), which combines SPI and SSI probabilistically. MSDI incorporates the meteorological and agricultural drought conditions and provides composite multi-index drought information for overall characterization of droughts. GIDMaPS includes a seasonal prediction component based on a statistical persistence-based approach. The prediction component of GIDMaPS provides the empirical probability of drought for different severity levels. In this presentation we present a new component in which the drought prediction information based on SPI, SSI and MSDI is conditioned on equivalent water height obtained from the Gravity Recovery and Climate Experiment (GRACE). Using a Bayesian approach, GRACE information is used to evaluate the persistence of drought. Finally, the deficit equivalent water height based on GRACE is used for assessing drought recovery.
In this presentation, both monitoring and prediction components of GIDMaPS will be discussed, and the results from 2014 California Drought will be presented. Further Reading: Hao Z., AghaKouchak A., Nakhjiri N., Farahmand A., 2014, Global Integrated Drought Monitoring and Prediction System, Scientific Data, 1:140001, 1-10, doi: 10.1038/sdata.2014.1.
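The probabilistic combination behind MSDI can be sketched directly: estimate the joint empirical probability that precipitation and soil moisture were both at or below their current values, then map it to a z-score. The plotting-position formula below is a simplified assumption, not the operational GIDMaPS code:

```python
from statistics import NormalDist

def msdi(precip_now, soil_now, precip_hist, soil_hist):
    """Toy Multivariate Standardized Drought Index.

    Joint empirical probability that historical precipitation AND soil
    moisture were both at or below the current values, transformed to a
    standard-normal z-score (negative = drier than usual on both axes).
    """
    n = len(precip_hist)
    m = sum(1 for p, s in zip(precip_hist, soil_hist)
            if p <= precip_now and s <= soil_now)
    joint_p = (m + 0.5) / (n + 1.0)   # keeps the probability inside (0, 1)
    return NormalDist().inv_cdf(joint_p)
```

Conditioning such an index on GRACE equivalent water height, as the abstract proposes, would amount to replacing the empirical `joint_p` with a probability updated by the gravimetry observation.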
Online Bayesian Learning with Natural Sequential Prior Distribution Used for Wind Speed Prediction
NASA Astrophysics Data System (ADS)
Cheggaga, Nawal
2017-11-01
Predicting wind speed is one of the most important and critical tasks in a wind farm. All approaches that directly describe the stochastic dynamics of the meteorological data face problems related to its non-Gaussian statistics and the presence of seasonal effects. In this paper, online Bayesian learning is successfully applied to three-layer perceptrons used for wind speed prediction. First, a conventional transition model based on the squared norm of the difference between the current parameter vector and the previous parameter vector is used. We noticed that this transition model does not adequately consider the difference between the current and the previous wind speed measurements. To consider this difference adequately, we use a natural sequential prior: the proposed transition model uses a Fisher information matrix to account for the difference between the observation models more naturally. The obtained results showed good agreement between the measured and predicted series. The mean relative error over the whole data set does not exceed 5%.
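The conventional transition model described above, penalising the squared norm of the parameter change, can be sketched as one online gradient step. This is an illustrative linear-model stand-in for the paper's three-layer perceptron, and it uses the identity metric; the natural sequential prior would replace that metric with a Fisher information matrix:

```python
def online_step(theta, theta_prev, x, y, lr=0.1, lam=0.5):
    """One online-learning step for a linear predictor.

    Gradient step on squared prediction error plus the transition penalty
    lam * ||theta - theta_prev||^2 (identity-metric version). theta, x,
    theta_prev are equal-length sequences; y is the target.
    """
    pred = sum(t * xi for t, xi in zip(theta, x))
    err = pred - y
    return [t - lr * (err * xi + lam * (t - tp))
            for t, xi, tp in zip(theta, x, theta_prev)]
```

In the Fisher-metric variant, the penalty term would weight parameter changes by how strongly they alter the predictive distribution rather than by their raw Euclidean size.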
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations, and also more accurate than prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Palmer-Hague, Jaime L; Zilioli, Samuele; Watson, Neil V
2013-08-13
Previous research has identified physical and behavioral differences between parents who produce sons and those who produce daughters. However, the possibility that men and women have predictions about the sexes of their offspring based on these differences, or any other interoceptive cues, has not been investigated. We compared the dominance, sociosexual orientation, estradiol, testosterone, and 2D:4D ratios of men and women who predicted they would conceive a boy as their first child with those who predicted a girl. Women who predicted they would have a boy were more dominant and less sociosexually restricted than those who predicted they would have a girl. Men who predicted they would have a girl had higher salivary estradiol and higher (more feminine) 2D:4D ratios than those who predicted they would have a boy. Possible implications of these results are discussed in the context of evolutionary theory.
Not just the norm: exemplar-based models also predict face aftereffects.
Ross, David A; Deroche, Mickael; Palmeri, Thomas J
2014-02-01
The face recognition literature has considered two competing accounts of how faces are represented within the visual system: exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.
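An exemplar-based account can produce an aftereffect without any explicit norm: model adaptation as reduced gain on exemplars near the adaptor, so an ambiguous test face is subsequently classified away from the adapted identity. The toy 2-D face space, single exemplars, and gain values below are illustrative assumptions, not the simulations of the paper:

```python
import math

def exemplar_response(face, exemplars, gains, tau=1.0):
    """Summed similarity of `face` to an identity's stored exemplars,
    scaled by per-exemplar gains (which adaptation reduces)."""
    return sum(g * math.exp(-math.dist(face, e) / tau)
               for e, g in zip(exemplars, gains))

def classify(face, identities, gains):
    """Report the identity whose exemplar pool responds most strongly."""
    return max(identities,
               key=lambda name: exemplar_response(face, identities[name],
                                                  gains[name]))
```

With fresh gains an ambiguous face slightly nearer identity A is called "A"; after adapting to A (lower gain on A's exemplars) the same face is called "B", which is the identity aftereffect.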
Predicting the Unbeaten Path through Syntactic Priming
ERIC Educational Resources Information Center
Arai, Manabu; Nakamura, Chie; Mazuka, Reiko
2015-01-01
A number of previous studies showed that comprehenders make use of lexically based constraints such as subcategorization frequency in processing structurally ambiguous sentences. One piece of such evidence is lexically specific syntactic priming in comprehension; following the costly processing of a temporarily ambiguous sentence, comprehenders…
A FASTER METHOD OF MEASURING RECREATIONAL WATER QUALITY FOR BETTER PROTECTION OF SWIMMERS' HEALTH
We previously reported that a faster method (< 2 hours) of measuring fecal indicator bacteria (FIB), based on Quantitative Polymerase Chain Reaction (QPCR), was predictive of swimming associated gastrointestinal illness. Using data from two additional beaches, we examined the re...
Ohtomo, Shoji; Hirose, Yukio
2014-02-01
This study examined, within a dual-process model, the psychological processes that determined consumers' hoarding and avoidant purchasing behaviors after the Tohoku earthquake. The model hypothesized that both intentional motivation, based on reflective decisions, and reactive motivation, based on non-reflective decisions, predicted the behaviors. This study assumed that attitude, subjective norm, and descriptive norm in relation to hoarding and avoidant purchasing were determinants of the motivations. Residents in the Tokyo metropolitan area (n = 667) completed internet longitudinal surveys at three time points (April, June, and November 2011). The results indicated that both intentional and reactive motivation determined avoidant purchasing behaviors in June, whereas only intentional motivation determined the behaviors in November. Attitude was a main determinant of the motivations at each time point. Moreover, previous behaviors predicted future behaviors. In conclusion, purchasing behaviors were intentional rather than reactive. Furthermore, attitude and previous behaviors were important determinants in the dual-process model. Attitudes and behaviors formed in April continued to strengthen subsequent purchasing decisions.
NASA Astrophysics Data System (ADS)
Guo, Yiqing; Jia, Xiuping; Paull, David
2018-06-01
The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data were insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
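The two-stage idea above (extrapolate the classifier from its temporal trend, then fine-tune on a few new samples) can be sketched as follows, assuming linear classifiers and a simple least-squares trend; the trend fit and the hinge-style update rule are illustrative stand-ins for the paper's SCT-SVM, not its actual implementation.

```python
import numpy as np

def fit_trend(weight_history):
    """Least-squares linear trend of each weight across past time steps,
    extrapolated one step ahead to predict the next classifier."""
    W = np.asarray(weight_history, dtype=float)   # shape (T, d)
    t = np.arange(len(W), dtype=float)
    A = np.column_stack([t, np.ones_like(t)])
    (slope, intercept), *_ = np.linalg.lstsq(A, W, rcond=None)
    return slope * len(W) + intercept             # predicted weights at t = T

def fine_tune(w, X, y, lr=0.1, epochs=20):
    """A few margin-based updates on the small new training sample."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) < 1.0:               # margin violation
                w += lr * yi * xi
    return w
```

For example, a weight drifting as 1.0, 1.2, 1.4 over three past images would be extrapolated to 1.6 for the incoming image before fine-tuning.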
Improved regulatory element prediction based on tissue-specific local epigenomic signatures
He, Yupeng; Gorkin, David U.; Dickel, Diane E.; Nery, Joseph R.; Castanon, Rosa G.; Lee, Ah Young; Shen, Yin; Visel, Axel; Pennacchio, Len A.; Ren, Bing; Ecker, Joseph R.
2017-01-01
Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types. REPTILE is available at https://github.com/yupenghe/REPTILE/. PMID:28193886
NASA Astrophysics Data System (ADS)
Winder, Anthony J.; Siemonsen, Susanne; Flottmann, Fabian; Fiehler, Jens; Forkert, Nils D.
2017-03-01
Voxel-based tissue outcome prediction in acute ischemic stroke patients is highly relevant for both clinical routine and research. Previous research has shown that features extracted from baseline multi-parametric MRI datasets have a high predictive value and can be used for the training of classifiers, which can generate tissue outcome predictions for both intravenous and conservative treatments. However, with the recent advent and popularization of intra-arterial thrombectomy treatment, novel research specifically addressing the utility of predictive classifiers for thrombectomy intervention is necessary for a holistic understanding of current stroke treatment options. The aim of this work was to develop three clinically viable tissue outcome prediction models using approximate nearest-neighbor, generalized linear model, and random decision forest approaches and to evaluate the accuracy of predicting tissue outcome after intra-arterial treatment. Therefore, the three machine learning models were trained, evaluated, and compared using datasets of 42 acute ischemic stroke patients treated with intra-arterial thrombectomy. Classifier training utilized eight voxel-based features extracted from baseline MRI datasets and five global features. Evaluation of classifier-based predictions was performed via comparison to the known tissue outcome, which was determined in follow-up imaging, using the Dice coefficient and leave-one-patient-out cross validation. The random decision forest prediction model led to the best tissue outcome predictions with a mean Dice coefficient of 0.37. The approximate nearest-neighbor and generalized linear model performed comparably but worse, with average Dice coefficients of 0.28 and 0.27, respectively, suggesting that both non-linearity and machine learning are desirable properties of a classifier well-suited to the intra-arterial tissue outcome prediction problem.
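The Dice coefficient used above to compare a predicted lesion mask against the follow-up ground truth is a simple overlap measure; a minimal implementation for binary voxel masks:

```python
import numpy as np

# Dice coefficient between a predicted and a follow-up lesion mask:
# 2 * |P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1 (identical).
def dice(pred, truth):
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * overlap / total if total else 1.0  # both empty: perfect
```

For instance, masks [1,1,0,0] and [1,0,1,0] share one voxel out of two apiece, giving a Dice coefficient of 0.5.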
Learning Temporal Statistics for Sensory Predictions in Aging.
Luft, Caroline Di Bernardi; Baker, Rosalind; Goldstone, Aimee; Zhang, Yang; Kourtzi, Zoe
2016-03-01
Predicting future events based on previous knowledge about the environment is critical for successful everyday interactions. Here, we ask which brain regions support our ability to predict the future based on implicit knowledge about the past in young and older age. Combining behavioral and fMRI measurements, we test whether training on structured temporal sequences improves the ability to predict upcoming sensory events; we then compare brain regions involved in learning predictive structures between young and older adults. Our behavioral results demonstrate that exposure to temporal sequences without feedback facilitates the ability of young and older adults to predict the orientation of an upcoming stimulus. Our fMRI results provide evidence for the involvement of corticostriatal regions in learning predictive structures in both young and older learners. In particular, we showed learning-dependent fMRI responses for structured sequences in frontoparietal regions and the striatum (putamen) for young adults. However, for older adults, learning-dependent activations were observed mainly in subcortical (putamen, thalamus) regions but were weaker in frontoparietal regions. Significant correlations of learning-dependent behavioral and fMRI changes in these regions suggest a strong link between brain activations and behavioral improvement rather than general overactivation. Thus, our findings suggest that predicting future events based on knowledge of temporal statistics engages brain regions involved in implicit learning in both young and older adults.
Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A
2008-01-18
The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH). The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficients in four two-phase systems (polyethylene glycol (PEG) 4000/phosphate, sulfate, citrate, and dextran), considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible even though the quality of the prediction depends strongly on the ATPS and its operational conditions, such as the NaCl concentration. The ATPS 0 model, which uses the three-dimensional structure, obtains results similar to those given by previous models based on variables measured in the laboratory. In addition, it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model, which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e. its correlation coefficients improve as the NaCl concentration increases in the system and, therefore, the effect of the protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration. An increase in the predictive capacity of at least 54.4% with respect to the models which use the three-dimensional structure of the protein was obtained for that system.
In addition, the ATPS I model exhibits high correlation coefficients in that system, higher than 0.88 on average. The ATPS I model exhibited correlation coefficients higher than 0.67 for the rest of the ATPS at high NaCl concentration. Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacity of the ATPS I model is better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.
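The ATPS I assumption above (every residue fully exposed) reduces the average surface hydrophobicity to a composition-weighted mean; a minimal sketch, using the Kyte-Doolittle scale as an illustrative hydrophobicity scale and a hypothetical linear mapping from ASH to the log partition coefficient (the slope and intercept are not the paper's fitted coefficients):

```python
# Kyte-Doolittle hydrophobicity values for the 20 standard amino acids.
KYTE_DOOLITTLE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def average_surface_hydrophobicity(sequence):
    """Composition-weighted mean hydrophobicity, all residues exposed."""
    values = [KYTE_DOOLITTLE[aa] for aa in sequence]
    return sum(values) / len(values)

def predict_log_k(sequence, slope=0.5, intercept=0.0):
    """Hypothetical linear model: log K = slope * ASH + intercept."""
    return slope * average_surface_hydrophobicity(sequence) + intercept
```

Because only composition enters, no three-dimensional structure is needed, which is what makes the ATPS I model attractive despite its crude exposure assumption.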
Risk factors predict post-traumatic stress disorder differently in men and women
Christiansen, Dorte M; Elklit, Ask
2008-01-01
Background About twice as many women as men develop post-traumatic stress disorder (PTSD), even though men as a group are exposed to more traumatic events. Exposure to different trauma types does not sufficiently explain why women are more vulnerable. Methods The present work examines the effect of age, previous trauma, negative affectivity (NA), anxiety, depression, persistent dissociation, and social support on PTSD separately in men and women. Subjects were exposed to either a series of explosions in a firework factory near a residential area or to a high school stabbing incident. Results Some gender differences were found in the predictive power of well-known risk factors for PTSD. Anxiety predicted PTSD in men, but not in women, whereas the opposite was found for depression. Dissociation was a better predictor for PTSD in women than in men in the explosion sample but not in the stabbing sample. Initially, NA predicted PTSD better in women than men in the explosion sample, but when compared only to other significant risk factors, it significantly predicted PTSD for both men and women in both studies. Previous traumatic events and age did not significantly predict PTSD in either gender. Conclusion Gender differences in the predictive value of social support on PTSD appear to be very complex, and no clear conclusions can be made based on the two studies included in this article. PMID:19017412
Thermodynamic characterization of tandem mismatches found in naturally occurring RNA
Christiansen, Martha E.; Znosko, Brent M.
2009-01-01
Although all sequence symmetric tandem mismatches and some sequence asymmetric tandem mismatches have been thermodynamically characterized and a model has been proposed to predict the stability of previously unmeasured sequence asymmetric tandem mismatches [Christiansen,M.E. and Znosko,B.M. (2008) Biochemistry, 47, 4329–4336], experimental thermodynamic data for frequently occurring tandem mismatches is lacking. Since experimental data is preferred over a predictive model, the thermodynamic parameters for 25 frequently occurring tandem mismatches were determined. These new experimental values, on average, are 1.0 kcal/mol different from the values predicted for these mismatches using the previous model. The data for the sequence asymmetric tandem mismatches reported here were then combined with the data for 72 sequence asymmetric tandem mismatches that were published previously, and the parameters used to predict the thermodynamics of previously unmeasured sequence asymmetric tandem mismatches were updated. The average absolute difference between the measured values and the values predicted using these updated parameters is 0.5 kcal/mol. This updated model improves the prediction for tandem mismatches that were predicted rather poorly by the previous model. This new experimental data and updated predictive model allow for more accurate calculations of the free energy of RNA duplexes containing tandem mismatches, and, furthermore, should allow for improved prediction of secondary structure from sequence. PMID:19509311
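The thermodynamic parameters discussed above relate through the standard free-energy equation at 37 °C; a minimal helper, with illustrative ΔH°/ΔS° values rather than measured tandem-mismatch parameters:

```python
# dG37 = dH - T * dS, evaluated at 37 C (310.15 K).
# dH is in kcal/mol, dS in cal/(mol*K), so dS is divided by 1000.
def delta_g(delta_h_kcal, delta_s_cal, temp_k=310.15):
    """Standard free energy change in kcal/mol at temp_k."""
    return delta_h_kcal - temp_k * delta_s_cal / 1000.0
```

For example, an illustrative ΔH° of -10.0 kcal/mol and ΔS° of -27.0 cal/(mol·K) give ΔG°37 ≈ -1.63 kcal/mol, the scale on which the 1.0 and 0.5 kcal/mol prediction errors above are measured.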
Improving consensus contact prediction via server correlation reduction.
Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming
2009-05-06
Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we found that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrated average accuracies of 13.0%, 10.8%, 25.8%, and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.
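The top-L/5 accuracy metric used above can be implemented directly: keep the L/5 highest-confidence predicted contacts for a protein of length L and report the fraction that are true. The toy scores and contact set in the example are illustrative, not CASP data.

```python
# Top-L/5 contact accuracy: rank predicted residue pairs by confidence,
# keep the best L // 5, and count how many are real contacts.
def top_l5_accuracy(scores, true_contacts, length):
    """scores: {(i, j): confidence}; true_contacts: set of (i, j) pairs."""
    k = max(1, length // 5)
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    hits = sum(1 for pair in ranked if pair in true_contacts)
    return hits / k
```

For a hypothetical length-10 protein, the two highest-scoring pairs are evaluated; if one of them is a true contact, the accuracy is 0.5.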
Decadal prediction of Sahel rainfall: where does the skill (or lack thereof) come from?
NASA Astrophysics Data System (ADS)
Mohino, Elsa; Keenlyside, Noel; Pohlmann, Holger
2016-12-01
Previous work suggests that decadal predictions of Sahel rainfall could be skillful. However, the sources of such skill are still under debate. In addition, previous results are based on short validation periods (i.e. less than 50 years). In this work we propose a framework based on multi-linear regression analysis to study the potential sources of skill for predicting Sahel trends several years ahead. We apply it to an extended decadal hindcast performed with the MPI-ESM-LR model that spans 1901 to 2010 with a 1-year sampling interval. Our results show that the skill mainly depends on how well we can predict the timing of the global warming (GW), the Atlantic multidecadal variability (AMV) and, to a lesser extent, the inter-decadal Pacific oscillation signals, and on how well the system simulates the associated SST and West African rainfall response patterns. In the case of the MPI-ESM-LR extended decadal hindcast, the observed timing is well reproduced only for the GW and AMV signals. However, only the West African rainfall response to the AMV is correctly reproduced. Thus, for most of the lead times the main source of skill in the decadal hindcast of West African rainfall is the AMV. The GW signal degrades skill because the response of West African rainfall to GW is incorrectly captured. Our results also suggest that initialized decadal predictions of West African rainfall can be further improved by better simulating the response of global SST to GW and AMV. Furthermore, our approach may be applied to understand and attribute prediction skill for other variables and regions.
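The multi-linear regression framework described above amounts to expressing a rainfall hindcast as a linear combination of the GW, AMV and IPO index time series; a minimal sketch with synthetic indices and illustrative coefficients standing in for the observed and modelled signals:

```python
import numpy as np

# Synthetic stand-ins for the three climate signals over ~110 years.
rng = np.random.default_rng(1)
years = 110
gw = np.linspace(0.0, 1.0, years)                    # monotonic warming trend
amv = np.sin(np.linspace(0.0, 4.0 * np.pi, years))   # multidecadal swings
ipo = np.cos(np.linspace(0.0, 6.0 * np.pi, years))   # inter-decadal swings

# A toy "hindcast" built from known contributions plus noise.
rainfall = 0.3 * gw + 1.0 * amv + 0.1 * ipo + 0.05 * rng.standard_normal(years)

# Multi-linear regression attributes the hindcast variance to each signal.
X = np.column_stack([gw, amv, ipo, np.ones(years)])
coef, *_ = np.linalg.lstsq(X, rainfall, rcond=None)
# coef[:3] recovers the contribution of GW, AMV and IPO respectively
```

Comparing the fitted contributions against their observed counterparts is what lets the framework say, as above, which signal is the source of skill and which degrades it.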
Webb, Christian A.; Olson, Elizabeth A.; Killgore, William D.S.; Pizzagalli, Diego A.; Rauch, Scott L.; Rosso, Isabelle M.
2018-01-01
Background Rostral and subgenual anterior cingulate cortex (rACC and sgACC) activity and, to a lesser extent, volume have been shown to predict depressive symptom improvement across different antidepressant treatments. This study extends prior work by examining whether rACC and/or sgACC morphology predicts treatment response to internet-based cognitive behavioral therapy (iCBT) for major depressive disorder (MDD). This is the first study to examine neural predictors of response to iCBT. Methods Hierarchical linear modeling tested whether pre-treatment rACC and sgACC volumes predicted depressive symptom improvement during a 6-session (10-week) randomized clinical trial of iCBT (n = 35) vs. a monitored attention control (MAC; n = 38). Analyses also tested whether pre-treatment rACC and sgACC volumes differed between patients who achieved depression remission versus those who did not remit. Results Larger pre-treatment right rACC volume was a significant predictor of greater depressive symptom improvement in iCBT, even when controlling for demographic (age, gender, race) and clinical (baseline depression, anhedonia and anxiety) variables previously linked to treatment response. In addition, pre-treatment right rACC volume was larger among iCBT patients whose depression eventually remitted relative to those who did not remit. Corresponding analyses in the MAC group and for the sgACC were not significant. Conclusions rACC volume prior to iCBT demonstrated incremental predictive validity beyond clinical and demographic variables previously found to predict symptom improvement. Such findings may help inform our understanding of the mediating anatomy of iCBT and, if replicated, may suggest neural targets to augment treatment response (e.g., via modulation of rACC function). ClinicalTrials.gov Identifier NCT01598922 PMID:29486867
Accurate Binding Free Energy Predictions in Fragment Optimization.
Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody
2015-11-23
Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD): improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R² values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.
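An RMS error of ~1.1 kcal/mol can be put in affinity terms through the standard thermodynamic relation ΔΔG = -RT ln(K1/K2); the small helper below shows the conversion at an assumed 298.15 K (the paper does not state this temperature explicitly).

```python
import math

# Convert a free-energy error into the implied fold-change in binding constant.
R = 0.0019872  # gas constant, kcal/(mol*K)
T = 298.15     # assumed temperature, K

def fold_error(ddg_kcal):
    """Fold-change in binding constant implied by a ddG error (kcal/mol)."""
    return math.exp(abs(ddg_kcal) / (R * T))
```

At this temperature, `fold_error(1.1)` is about 6.4, i.e. a 1.1 kcal/mol RMS error corresponds to roughly a six-fold uncertainty in predicted affinity.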
RKNNMDA: Ranking-based KNN for MiRNA-Disease Association prediction.
Chen, Xing; Wu, Qiao-Feng; Yan, Gui-Ying
2017-07-03
Cumulative verified experimental studies have demonstrated that microRNAs (miRNAs) could be closely related with the development and progression of human complex diseases. Based on the assumption that functionally similar miRNAs may have a strong correlation with phenotypically similar diseases and vice versa, researchers developed various effective computational models which combine heterogeneous biologic data sets, including disease similarity networks, miRNA similarity networks, and known disease-miRNA association networks, to identify potential relationships between miRNAs and diseases in biomedical research. Considering the limitations of previous computational studies, we introduced a novel computational method of Ranking-based KNN for miRNA-Disease Association prediction (RKNNMDA) to predict potential related miRNAs for diseases, and our method obtained an AUC of 0.8221 based on leave-one-out cross validation. In addition, RKNNMDA was applied to 3 kinds of important human cancers for further performance evaluation. The results showed that 96%, 80% and 94% of the predicted top 50 potential related miRNAs for Colon Neoplasms, Esophageal Neoplasms, and Prostate Neoplasms have been confirmed by the experimental literature, respectively. Moreover, RKNNMDA could be used to predict potential miRNAs for diseases without any known miRNAs, and it is anticipated that RKNNMDA would be of great use for novel miRNA-disease association identification.
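The similarity-based ranking idea above can be sketched as a KNN vote: score a candidate miRNA for a disease by the similarity-weighted known associations of its k nearest miRNA neighbors. The tiny similarity matrix and association map in the example are made-up toy data, and the scoring rule is a simplified stand-in for RKNNMDA's full ranking procedure.

```python
import numpy as np

def knn_score(candidate, disease, mirna_sim, known_assoc, k=2):
    """Similarity-weighted KNN vote for one candidate miRNA and one disease.

    mirna_sim:   (m, m) float similarity matrix between miRNAs.
    known_assoc: {disease_id: set of miRNA ids with known associations}.
    """
    sims = mirna_sim[candidate].copy()
    sims[candidate] = -np.inf                 # exclude the candidate itself
    neighbors = np.argsort(sims)[::-1][:k]    # k most similar miRNAs
    return float(sum(mirna_sim[candidate, n]
                     for n in neighbors if n in known_assoc[disease]))
```

Ranking all candidate miRNAs for a disease by this score yields a prioritized list like the top-50 lists evaluated above.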
Rekik, Islem; Li, Gang; Lin, Weili; Shen, Dinggang
2016-02-01
Longitudinal neuroimaging analysis methods have remarkably advanced our understanding of early postnatal brain development. However, learning predictive models to trace the evolution trajectories of both normal and abnormal cortical shapes remains broadly absent. To fill this critical gap, we pioneered the first prediction model for longitudinal developing cortical surfaces in infants using a spatiotemporal current-based learning framework solely from the baseline cortical surface. In this paper, we detail this prediction model and further improve its performance by introducing two key variants. First, we use the varifold metric to overcome the limitations of the current metric for surface registration that was used in our preliminary study. We also extend the conventional varifold-based surface registration model for pairwise registration to a spatiotemporal surface regression model. Second, we propose a morphing process of the baseline surface using its topographic attributes such as normal direction and principal curvature sign. Specifically, our method learns from longitudinal data both the geometric (vertex positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we use the proposed varifold-based shape regression model to estimate geodesic cortical shape evolution trajectories for each training subject. We then build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we select the best learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints, based on similarity metrics between this baseline surface and the learnt baseline population average surface atlas. We used a leave-one-out cross validation method to predict the inner cortical surface shape at 3, 6, 9 and 12 months of age from the baseline cortical surface shape at birth.
Our method attained a higher prediction accuracy and better captured the spatiotemporal dynamic change of the highly folded cortical surface than the previously proposed prediction method.
Personality, Demographics, and Acculturation in North American Refugees.
ERIC Educational Resources Information Center
Smither, Robert; Rodriquez-Giegling, Marta
This study predicts willingness of refugees to acculturate to North American society based on selected demographic and psychological variables. The hypothesis is that most previous research on refugee adaptation has overemphasized sociological variables such as age, time in the country, and level of education and underemphasized psychological…
Jo, ByungWan
2018-01-01
The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, to reporting events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. This system is comprised of sensor modules, communication protocols, and a base station, running Azure Machine Learning (AML) Studio over it. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH4, CO, SO2, and H2S as the most influencing gases significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML studio, which enabled the prediction of MEI. An optimum number of neurons were determined for both actual input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R2 and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality. PMID:29561777
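The PCA-then-predict pipeline above can be sketched end to end: synthetic sensor readings stand in for the mine data, the first four "gases" are given the dominant variance so PCA selects them, and a linear least-squares readout replaces the AML-hosted neural network. All numbers below are illustrative.

```python
import numpy as np

# Synthetic 8-gas sensor data; the first four gases dominate the variance.
rng = np.random.default_rng(2)
n, n_gases = 200, 8
X = rng.standard_normal((n, n_gases))
X[:, :4] *= 3.0                              # dominant, "influencing" gases
mei = X[:, :4] @ np.array([0.5, 0.3, 0.2, 0.1]) + 0.05 * rng.standard_normal(n)

# PCA via SVD on centered data; keep the top 4 principal components.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ vt[:4].T

# Linear readout from components to the mine environment index (MEI).
A = np.column_stack([components, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, mei, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((mei - pred) ** 2) / np.sum((mei - mei.mean()) ** 2)
```

Because the influential gases carry most of the variance, the PCA-reduced inputs retain nearly all the predictive signal, mirroring the reported advantage of PCA-based inputs.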
A multi-model framework for simulating wildlife population response to land-use and climate change
McRae, B.H.; Schumaker, N.H.; McKane, R.B.; Busing, R.T.; Solomon, A.M.; Burdick, C.A.
2008-01-01
Reliable assessments of how human activities will affect wildlife populations are essential for making scientifically defensible resource management decisions. A principal challenge of predicting effects of proposed management, development, or conservation actions is the need to incorporate multiple biotic and abiotic factors, including land-use and climate change, that interact to affect wildlife habitat and populations through time. Here we demonstrate how models of land-use, climate change, and other dynamic factors can be integrated into a coherent framework for predicting wildlife population trends. Our framework starts with land-use and climate change models developed for a region of interest. Vegetation changes through time under alternative future scenarios are predicted using an individual-based plant community model. These predictions are combined with spatially explicit animal habitat models to map changes in the distribution and quality of wildlife habitat expected under the various scenarios. Animal population responses to habitat changes and other factors are then projected using a flexible, individual-based animal population model. As an example application, we simulated animal population trends under three future land-use scenarios and four climate change scenarios in the Cascade Range of western Oregon. We chose two birds with contrasting habitat preferences for our simulations: winter wrens (Troglodytes troglodytes), which are most abundant in mature conifer forests, and song sparrows (Melospiza melodia), which prefer more open, shrubby habitats. We used climate and land-use predictions from previously published studies, as well as previously published predictions of vegetation responses using FORCLIM, an individual-based forest dynamics simulator. Vegetation predictions were integrated with other factors in PATCH, a spatially explicit, individual-based animal population simulator. 
Through incorporating effects of landscape history and limited dispersal, our framework predicted population changes that typically exceeded those expected based on changes in mean habitat suitability alone. Although land-use had greater impacts on habitat quality than did climate change in our simulations, we found that small changes in vital rates resulting from climate change or other stressors can have large consequences for population trajectories. The ability to integrate bottom-up demographic processes like these with top-down constraints imposed by climate and land-use in a dynamic modeling environment is a key advantage of our approach. The resulting framework should allow researchers to synthesize existing empirical evidence, and to explore complex interactions that are difficult or impossible to capture through piecemeal modeling approaches. © 2008 Elsevier B.V.
Aircraft Noise Prediction Program (ANOPP) Fan Noise Prediction for Small Engines
NASA Technical Reports Server (NTRS)
Hough, Joe W.; Weir, Donald S.
1996-01-01
The Fan Noise Module of ANOPP is used to predict the broadband noise and pure tones for axial flow compressors or fans. The module, based on the method developed by M. F. Heidmann, uses empirical functions to predict fan noise spectra as a function of frequency and polar directivity. Previous studies have determined the need to modify the module to better correlate measurements of fan noise from engines in the 3000- to 6000-pound thrust class. Additional measurements made by AlliedSignal have confirmed the need to revise the ANOPP fan noise method for smaller engines. This report describes the revisions to the fan noise method which have been verified with measured data from three separate AlliedSignal fan engines. Comparisons of the revised prediction show a significant improvement in overall and spectral noise predictions.
Immediate effects of form-class constraints on spoken word recognition
Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.
2008-01-01
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408
Chuang, Gwo-Yu; Liou, David; Kwong, Peter D; Georgiev, Ivelin S
2014-07-01
Delineation of the antigenic site, or epitope, recognized by an antibody can provide clues about functional vulnerabilities and resistance mechanisms, and can therefore guide antibody optimization and epitope-based vaccine design. Previously, we developed an algorithm for antibody-epitope prediction based on antibody neutralization of viral strains with diverse sequences and validated the algorithm on a set of broadly neutralizing HIV-1 antibodies. Here we describe the implementation of this algorithm, NEP (Neutralization-based Epitope Prediction), as a web-based server. The users must supply as input: (i) an alignment of antigen sequences of diverse viral strains; (ii) neutralization data for the antibody of interest against the same set of antigen sequences; and (iii) (optional) a structure of the unbound antigen, for enhanced prediction accuracy. The prediction results can be downloaded or viewed interactively on the antigen structure (if supplied) from the web browser using a JSmol applet. Since neutralization experiments are typically performed as one of the first steps in the characterization of an antibody to determine its breadth and potency, the NEP server can be used to predict antibody-epitope information at no additional experimental costs. NEP can be accessed on the internet at http://exon.niaid.nih.gov/nep. Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
New theory of transport due to like-particle collisions
NASA Technical Reports Server (NTRS)
Oneil, T. M.
1985-01-01
Cross-magnetic-field transport due to like-particle collisions is discussed for the parameter regime λ_D ≫ r_L, where λ_D is the Debye length and r_L is the characteristic Larmor radius of the colliding particles. A new theory based on collisionally produced E × B drifts predicts a particle flux which exceeds the previously predicted flux by the factor (λ_D/r_L)² ≫ 1.
Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity.
Kim, Hui Kwon; Min, Seonwoo; Song, Myungjae; Jung, Soobin; Choi, Jae Woo; Kim, Younggwang; Lee, Sangeun; Yoon, Sungroh; Kim, Hyongbum Henry
2018-03-01
We present two algorithms to predict the activity of AsCpf1 guide RNAs. Indel frequencies for 15,000 target sequences were used in a deep-learning framework based on a convolutional neural network to train Seq-deepCpf1. We then incorporated chromatin accessibility information to create the better-performing DeepCpf1 algorithm for cell lines for which such information is available and show that both algorithms outperform previous machine learning algorithms on our own and published data sets.
SU-E-T-629: Prediction of the ViewRay Radiotherapy Treatment Time for Clinical Logistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S; Wooten, H; Wu, Y
Purpose: An algorithm was developed in our clinic to predict, given a new treatment plan, the treatment delivery time for radiation therapy (RT) treatments of patients on the ViewRay magnetic resonance-image guided radiation therapy (MR-IGRT) delivery system. This algorithm is necessary for managing patient treatment appointments, and is useful as an indicator of treatment plan complexity. Methods: A patient's total treatment delivery time, not including time required for localization, may be described as the sum of four components: (1) the treatment initialization time; (2) the total beam-on time; (3) the gantry rotation time; and (4) the multileaf collimator (MLC) motion time. Each of the four components is predicted separately. The total beam-on time can be calculated using both the planned beam-on time and the decay-corrected delivery dose rate. To predict the remaining components, we quantitatively analyzed the patient treatment delivery record files. The initialization time is demonstrated to be random since it depends on the final gantry angle and MLC leaf positions of the previous treatment. Based on modeling the relationships between the gantry rotation angles and the corresponding rotation time, and between the furthest MLC leaf moving distance and the corresponding MLC motion time, the total delivery time is predicted using linear regression. Results: The proposed algorithm has demonstrated the feasibility of predicting the ViewRay treatment delivery time for any treatment plan of any patient. The average prediction error is 0.89 minutes or 5.34%, and the maximal prediction error is 2.09 minutes or 13.87%. Conclusion: We have developed a treatment delivery time prediction algorithm based on the analysis of previous patients' treatment delivery records. The accuracy of our prediction is sufficient for guiding and arranging patient treatment appointments on a daily basis. The predicted delivery time could also be used as an indicator to assess the treatment plan complexity. This work was supported by a research grant from ViewRay Inc.
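The four-component sum described in the abstract can be sketched as a simple function. The coefficients below are hypothetical placeholders; the paper fits them by linear regression on previous delivery records and does not publish their values here.

```python
# Hypothetical fitted coefficients (the study derives these by regression
# on treatment delivery record files; values here are illustrative only).
GANTRY_SEC_PER_DEG = 0.10   # seconds per degree of gantry rotation
MLC_SEC_PER_CM = 0.50       # seconds per cm of furthest MLC leaf travel
INIT_SEC = 60.0             # mean initialization time (random in practice)

def predict_delivery_minutes(planned_beam_on_sec, dose_rate_factor,
                             gantry_degrees, furthest_leaf_cm):
    """Sum the four components from the abstract: initialization time,
    decay-corrected beam-on time, gantry rotation time, MLC motion time."""
    beam_on = planned_beam_on_sec / dose_rate_factor  # decay-corrected
    gantry = GANTRY_SEC_PER_DEG * sum(gantry_degrees)
    mlc = MLC_SEC_PER_CM * sum(furthest_leaf_cm)
    return (INIT_SEC + beam_on + gantry + mlc) / 60.0
```

For example, a plan with 300 s of beam-on, two 90-degree gantry moves, and two 5 cm leaf travels predicts (60 + 300 + 18 + 5)/60 ≈ 6.4 minutes under these assumed coefficients.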
NASA Technical Reports Server (NTRS)
Succi, G. P.
1983-01-01
The techniques of helicopter rotor noise prediction attempt to describe precisely the details of the noise field and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The Farassat noise prediction technique was studied, and high speed helicopter noise prediction using more detailed representations of the thickness and loading noise sources was investigated. These predictions were based on the measured blade surface pressures on an AH-1G rotor and compared to the measured sound field. Although refinements in the representation of the thickness and loading noise sources improve the calculation, there are still discrepancies between the measured and predicted sound field. Analysis of the blade surface pressure data indicates shocks on the blades, which are probably responsible for these discrepancies.
Postoperative refraction in the second eye having cataract surgery.
Leffler, Christopher T; Wilkes, Martin; Reeves, Juliana; Mahmood, Muneera A
2011-01-01
Introduction. Previous cataract surgery studies assumed that first-eye predicted and observed postoperative refractions are equally important for predicting second-eye postoperative refraction. Methods. In a retrospective analysis of 173 patients having bilateral sequential phacoemulsification, multivariable linear regression was used to predict the second-eye postoperative refraction based on refractions predicted by the SRK-T formula for both eyes, the first-eye postoperative refraction, and the difference in IOL selected between eyes. Results. The first-eye observed postoperative refraction was an independent predictor of the second eye postoperative refraction (P < 0.001) and was weighted more heavily than the first-eye predicted refraction. Compared with the SRK-T formula, this model reduced the root-mean-squared (RMS) error of the predicted refraction by 11.3%. Conclusions. The first-eye postoperative refraction is an independent predictor of the second-eye postoperative refraction. The first-eye predicted refraction is less important. These findings may be due to interocular symmetry.
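The multivariable regression described above can be sketched with ordinary least squares. The data below are synthetic placeholders, not the study's 173-patient cohort; the point is only to show the regressors named in the abstract and the RMS-error comparison against the SRK-T prediction alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 173  # same cohort size as the study, but the data here are synthetic

# Predictors named in the abstract: SRK-T predicted refractions for both
# eyes, first-eye observed postoperative refraction, and IOL difference.
srkt_eye2 = rng.normal(-0.5, 0.7, n)
srkt_eye1 = srkt_eye2 + rng.normal(0, 0.3, n)
obs_eye1 = srkt_eye1 + rng.normal(0, 0.4, n)
iol_diff = rng.normal(0, 0.5, n)

# Synthetic "observed" second-eye refraction, weighting the first-eye
# observed result more heavily than its prediction, as the study found.
obs_eye2 = (0.6 * srkt_eye2 + 0.3 * obs_eye1 + 0.05 * srkt_eye1
            - 0.1 * iol_diff + rng.normal(0, 0.2, n))

# Multivariable linear regression by ordinary least squares.
X = np.column_stack([np.ones(n), srkt_eye2, srkt_eye1, obs_eye1, iol_diff])
coef, *_ = np.linalg.lstsq(X, obs_eye2, rcond=None)

rms_model = float(np.sqrt(np.mean((X @ coef - obs_eye2) ** 2)))
rms_srkt = float(np.sqrt(np.mean((srkt_eye2 - obs_eye2) ** 2)))
```

In-sample, the fitted model's RMS error cannot exceed that of the SRK-T prediction alone, since the latter is one candidate within the model's span; the paper's reported 11.3% RMS reduction is a property of its real data.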
Brosowsky, Nicholaus P; Crump, Matthew J C
2016-08-01
Recent work suggests that environmental cues associated with previous attentional control settings can rapidly and involuntarily adjust attentional priorities. The current study tests predictions from adaptive-learning and memory-based theories of contextual control about the role of intentions for setting attentional priorities. To extend the empirical boundaries of contextual control phenomena, and to determine whether theoretical principles of contextual control are generalizable we used a novel bi-dimensional stimulus sampling task. Subjects viewed briefly presented arrays of letters and colors presented above or below fixation, and identified specific stimuli according to a dimensional (letter or color) and positional cue. Location was predictive of the cued dimension, but not the position or identity. In contrast to previous findings, contextual control failed to develop through automatic, adaptive-learning processes. Instead, previous experience with intentionally changing attentional sampling priorities between different contexts was required for contextual control to develop. Copyright © 2016 Elsevier Inc. All rights reserved.
Tang, Haiming; Thomas, Paul D
2016-07-15
PANTHER-PSEP is a new software tool for predicting non-synonymous genetic variants that may play a causal role in human disease. Several previous variant pathogenicity prediction methods have been proposed that quantify evolutionary conservation among homologous proteins from different organisms. PANTHER-PSEP employs a related but distinct metric based on 'evolutionary preservation': homologous proteins are used to reconstruct the likely sequences of ancestral proteins at nodes in a phylogenetic tree, and the history of each amino acid can be traced back in time from its current state to estimate how long that state has been preserved in its ancestors. Here, we describe the PSEP tool, and assess its performance on standard benchmarks for distinguishing disease-associated from neutral variation in humans. On these benchmarks, PSEP outperforms not only previous tools that utilize evolutionary conservation, but also several highly used tools that include multiple other sources of information as well. For predicting pathogenic human variants, the trace back of course starts with a human 'reference' protein sequence, but the PSEP tool can also be applied to predicting deleterious or pathogenic variants in reference proteins from any of the ∼100 other species in the PANTHER database. PANTHER-PSEP is freely available on the web at http://pantherdb.org/tools/csnpScoreForm.jsp Users can also download the command-line based tool at ftp://ftp.pantherdb.org/cSNP_analysis/PSEP/ CONTACT: pdthomas@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Psachoulias, Dimitrios; Vertzoni, Maria; Butler, James; Busby, David; Symillides, Moira; Dressman, Jennifer; Reppas, Christos
2012-12-01
To develop an in vitro methodology for prediction of concentrations and potential precipitation of highly permeable, lipophilic weak bases in fasted upper small intestine based on ketoconazole and dipyridamole luminal data. Evaluate usefulness of methodology in predicting luminal precipitation of AZD0865 and SB705498 based on plasma data. A three-compartment in vitro setup was used. Depending on the dosage form administered in in vivo studies, a solution or a suspension was placed in the gastric compartment. A medium simulating the luminal environment (FaSSIF-V2plus) was initially placed in the duodenal compartment. Concentrated FaSSIF-V2plus was placed in the reservoir compartment. In vitro ketoconazole and dipyridamole concentrations and precipitated fractions adequately reflected luminal data. Unlike luminal precipitates, in vitro ketoconazole precipitates were crystalline. In vitro AZD0865 data confirmed previously published human pharmacokinetic data suggesting that absorption rates are not affected by luminal precipitation. In vitro SB705498 data predicted that significant luminal precipitation occurs after a 100 mg or 400 mg but not after a 10 mg dose, consistent with human pharmacokinetic data. An in vitro methodology for predicting concentrations and potential precipitation in fasted upper small intestine, after administration of highly permeable, lipophilic weak bases in fasted upper small intestine was developed and evaluated for its predictability in regard to luminal precipitation.
Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo
2013-01-01
Background This study aimed to develop the artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and compare the prediction models using the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. Performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, but a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values in the prediction models between the LR and ANN analyses. Conclusion The prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593
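The LR-versus-ANN comparison above can be sketched as follows. The data are synthetic (the study's Chinese cohort is not reproduced here), and the network architecture is an assumption; only the workflow — fit both models, compare test-set AUCs — mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for 14 risk factors and a binary CA-dysfunction label.
X = rng.normal(size=(1000, 14))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=1000) > 0).astype(int)

# Exploratory (training) and validation split, as in the abstract.
X_tr, X_te, y_tr, y_te = X[:700], X[700:], y[:700], y[700:]

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# Compare discrimination via area under the ROC curve.
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
auc_ann = roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1])
```

On the study's real data the two AUCs were nearly identical (0.758 vs 0.762), which is the kind of comparison this sketch produces.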
Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran
NASA Astrophysics Data System (ADS)
Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa
2017-02-01
The main purpose of this study is to introduce the geological controlling factors in improving an intelligence-based model to estimate shear wave velocity from seismic attributes. The proposed method includes three main steps in the framework of geological events in a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine previous predictions into an enhanced solution. In order to show the geological effect on improving the prediction, the main classes of predominate lithofacies in the reservoir of interest including shale, sand, and carbonate were selected and then the proposed algorithm was performed with and without lithofacies constraint. The results showed a good agreement between real and predicted shear wave velocity in the lithofacies-based model compared to the model without lithofacies especially in sand and carbonate.
Long-Term Prediction of the Arctic Ionospheric TEC Based on Time-Varying Periodograms
Liu, Jingbin; Chen, Ruizhi; Wang, Zemin; An, Jiachun; Hyyppä, Juha
2014-01-01
Knowledge of the polar ionospheric total electron content (TEC) and its future variations is of scientific and engineering relevance. In this study, a new method is developed to predict Arctic mean TEC on the scale of a solar cycle using previous data covering 14 years. The Arctic TEC is derived from global positioning system measurements using the spherical cap harmonic analysis mapping method. The study indicates that the variability of the Arctic TEC results in highly time-varying periodograms, which are utilized for prediction in the proposed method. The TEC time series is divided into two components of periodic oscillations and the average TEC. The newly developed method of TEC prediction is based on an extrapolation method that requires no input of physical observations of the time interval of prediction, and it is performed in both temporally backward and forward directions by summing the extrapolation of the two components. The backward prediction indicates that the Arctic TEC variability includes a 9 years period for the study duration, in addition to the well-established periods. The long-term prediction has an uncertainty of 4.8–5.6 TECU for different period sets. PMID:25369066
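The decomposition-and-extrapolation idea above — model the TEC series as an average level plus periodic oscillations, then evaluate the fitted harmonics at future epochs — can be sketched with a harmonic least-squares fit. The series and the period set below are synthetic assumptions, not the study's GPS-derived Arctic TEC or its spherical cap harmonic mapping.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily "mean TEC" series spanning 14 years with an annual and
# an ~11-year cycle; the real Arctic TEC series is not reproduced here.
t = np.arange(14 * 365)
tec = (20 + 8 * np.sin(2 * np.pi * t / 365.25)
         + 5 * np.sin(2 * np.pi * t / (11 * 365.25))
         + rng.normal(scale=1.0, size=t.size))

def fit_and_extrapolate(t_fit, y, periods, t_new):
    """Fit average level plus sin/cos terms at the given periods by least
    squares, then extrapolate by evaluating the fit at new epochs."""
    def design(tt):
        cols = [np.ones(len(tt))]
        for p in periods:
            cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design(t_fit), y, rcond=None)
    return design(t_new) @ coef

# Forward prediction for the following year.
future = np.arange(t[-1] + 1, t[-1] + 366)
forecast = fit_and_extrapolate(t, tec, [365.25, 11 * 365.25], future)
```

The same call with past epochs in `t_new` gives the backward prediction the abstract mentions; the study's actual method uses time-varying periodograms to select the periods rather than fixing them in advance.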
A Survey of Computational Intelligence Techniques in Protein Function Prediction
Tiwari, Arvind Kumar; Srivastava, Rajeev
2014-01-01
In the past decades, there has been massive growth in knowledge of previously unknown proteins with the advancement of high-throughput microarray technologies. Protein function prediction is one of the most challenging problems in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they failed when a new protein was different from previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, used in wide areas of applications such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers to solve these problems by using computational intelligence techniques with appropriate datasets to improve the prediction performance. The summary shows that ensemble classifiers and integration of multiple heterogeneous data are useful for protein function prediction. PMID:25574395
Martens, Astrid L; Bolte, John F B; Beekhuizen, Johan; Kromhout, Hans; Smid, Tjabe; Vermeulen, Roel C H
2015-10-01
Epidemiological studies on the potential health effects of RF-EMF from mobile phone base stations require efficient and accurate exposure assessment methods. Previous studies have demonstrated that the 3D geospatial model NISMap is able to rank locations by indoor and outdoor RF-EMF exposure levels. This study extends on previous work by evaluating the suitability of using NISMap to estimate indoor RF-EMF exposure levels at home as a proxy for personal exposure to RF-EMF from mobile phone base stations. For 93 individuals in the Netherlands we measured personal exposure to RF-EMF from mobile phone base stations during a 24h period using an EME-SPY 121 exposimeter. Each individual kept a diary from which we extracted the time spent at home and in the bedroom. We used NISMap to model exposure at the home address of the participant (at bedroom height). We then compared model predictions with measurements for the 24h period, when at home, and in the bedroom by the Spearman correlation coefficient (rsp) and by calculating specificity and sensitivity using the 90th percentile of the exposure distribution as a cutpoint for high exposure. We found a low to moderate rsp of 0.36 for the 24h period, 0.51 for measurements at home, and 0.41 for measurements in the bedroom. The specificity was high (0.9) but with a low sensitivity (0.3). These results indicate that a meaningful ranking of personal RF-EMF can be achieved, even though the correlation between model predictions and 24h personal RF-EMF measurements is lower than with at home measurements. However, the use of at home RF-EMF field predictions from mobile phone base stations in epidemiological studies leads to significant exposure misclassification that will result in a loss of statistical power to detect health effects. Copyright © 2015 Elsevier Inc. All rights reserved.
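The model-versus-measurement comparison described above uses a Spearman rank correlation plus sensitivity/specificity at a 90th-percentile "high exposure" cutpoint. A minimal sketch on synthetic exposure data (not the study's exposimeter measurements or NISMap predictions):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Synthetic modelled vs measured RF-EMF exposure for 93 individuals,
# mimicking the cohort size only; distributions are assumptions.
modelled = rng.lognormal(mean=-2.0, sigma=0.8, size=93)
measured = modelled * rng.lognormal(mean=0.0, sigma=0.8, size=93)

rsp, _ = spearmanr(modelled, measured)

# "High exposure" = above the 90th percentile, as in the abstract.
hi_meas = measured > np.percentile(measured, 90)
hi_mod = modelled > np.percentile(modelled, 90)

sensitivity = (hi_mod & hi_meas).sum() / hi_meas.sum()
specificity = (~hi_mod & ~hi_meas).sum() / (~hi_meas).sum()
```

The study reports rsp of 0.36–0.51 with high specificity (0.9) but low sensitivity (0.3); the values here depend entirely on the synthetic data.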
A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development
Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li
2014-01-01
In the fierce market environment, an enterprise that wants to meet customer needs and boost its market profit and share must focus on new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case-based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use actual but averaged influencing-factor effectiveness in simulation; at the same time, C&M-CVPM uses dynamic customer transition probabilities, which is closer to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a simulation experiment. PMID:25162050
Ungvari, Gabor S; Xiang, Yu-Tao; Tang, Wai-Kwong; Shum, David
2008-09-01
Prospective memory (PM) is the ability to remember to do something in the future without explicit prompts. Extending the number of subjects and the scope of our previously published study, this investigation examined the relationship between PM and socio-demographic and clinical factors, activities of daily living (ADL) and frontal lobe functions in patients with chronic schizophrenia. One hundred and ten Chinese schizophrenia patients, 60 from the previous study and 50 additional patients recruited for this study, and 110 matched healthy comparison subjects (HC) formed the study sample. Patients' clinical condition and activities of daily living were evaluated with the Brief Psychiatric Rating Scale (BPRS) and the Functional Needs Assessment (FNA). Time- and event-based PM tasks and three tests of prefrontal lobe functions (Design Fluency Test [DFT], Tower of London [TOL], Wisconsin Card Sorting Test [WCST]) were also administered. Patients' level of ADL and psychopathology were not associated with PM functions, and only anticholinergic medications (ACM) showed a significant negative correlation with PM tasks. Confirming the findings of the previous study, patients performed significantly more poorly on both PM tasks than HC. Performance on the time-based PM task significantly correlated with age, education level and DFT in HC and with age, DFT, TOL and WCST in patients. Patients' performance on the event-based PM task correlated with DFT and one measure of WCST. In patients, TOL and age predicted performance on the time-based PM task; DFT and WCST predicted the event-based task. Involving a large sample of patients with matched controls, this study confirmed that PM is impaired in chronic schizophrenia. Deficient PM functions were related to prefrontal lobe dysfunction in both HC and patients but not to the patients' clinical condition, nor did they significantly affect ADL. ACMs determined certain aspects of PM.
Das, Anirban; Trehan, Amita; Oberoi, Sapna; Bansal, Deepak
2017-06-01
The study aims to validate a score predicting risk of complications in pediatric patients with chemotherapy-related febrile neutropenia (FN) and evaluate the performance of previously published models for risk stratification. Children diagnosed with cancer and presenting with FN were evaluated in a prospective single-center study. A score predicting the risk of complications, previously derived in the unit, was validated on a prospective cohort. Performance of six predictive models published from geographically distinct settings was assessed on the same cohort. Complications were observed in 109 (26.3%) of 414 episodes of FN over 15 months. A risk score based on undernutrition (two points), time from last chemotherapy (<7 days = two points), presence of a nonupper respiratory focus of infection (two points), C-reactive protein (>60 mg/l = five points), and absolute neutrophil count (<100 per μl = two points) was used to stratify patients into "low risk" (score <7, n = 208) and "high risk" groups, and was assessed using the following parameters: overall performance (Nagelkerke R² = 34.4%), calibration (calibration slope = 0.39; P = 0.25 in Hosmer-Lemeshow test), discrimination (c-statistic = 0.81), overall sensitivity (86%), negative predictive value (93%), and clinical net benefit (0.43). Six previously published rules demonstrated inferior performance in this cohort. An indigenous decision rule using five simple predefined variables was successful in identifying children at risk for complications. Prediction models derived in developed nations may not be appropriate for low-middle-income settings and need to be validated before use. © 2016 Wiley Periodicals, Inc.
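The five-variable score above is simple enough to write out directly. The point values and the "low risk" threshold (score <7) are taken from the abstract; the function signature and names are illustrative.

```python
def fn_risk_score(undernourished, days_since_chemo, nonupper_focus,
                  crp_mg_per_l, anc_per_ul):
    """Risk score from the abstract: undernutrition (2 points), last
    chemotherapy <7 days ago (2), non-upper-respiratory focus of
    infection (2), CRP >60 mg/l (5), ANC <100 per microlitre (2).
    Score <7 -> "low risk", otherwise "high risk"."""
    score = 0
    score += 2 if undernourished else 0
    score += 2 if days_since_chemo < 7 else 0
    score += 2 if nonupper_focus else 0
    score += 5 if crp_mg_per_l > 60 else 0
    score += 2 if anc_per_ul < 100 else 0
    return score, ("low risk" if score < 7 else "high risk")
```

For example, an undernourished child 3 days from chemotherapy with CRP 80 mg/l and ANC 50 per microlitre scores 2 + 2 + 5 + 2 = 11, i.e. high risk.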
NASA Astrophysics Data System (ADS)
Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.
2018-02-01
The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity, based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account, and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data was available: (1) a two-parameter model, based on total burnup and cooling time, previously used by the safeguards inspectors, (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories, and (3) the latter gamma-spectrum-based model with the addition of accounting for contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory compared to the two-parameter model, lowering the standard deviation between relative measured and predicted intensities from 15.2 % to 8.1 % and 7.8 %, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways are discussed to address such differences, which may allow for even higher prediction capabilities. 
Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, defined as a 30% discrepancy between measured and predicted intensities, while some false detections occur with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30% discrepancy limit could potentially be lowered.
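The partial-defect criterion above is a simple relative-difference test between measured and predicted intensity; a minimal sketch, with the 30% detection limit as a parameter:

```python
def relative_discrepancy(measured, predicted):
    """Relative difference between measured and predicted
    Cherenkov-light intensity for an assembly."""
    return abs(measured - predicted) / predicted

def flag_assembly(measured, predicted, limit=0.30):
    """Flag an assembly for follow-up when the discrepancy exceeds
    the partial-defect detection limit (30% in the abstract)."""
    return relative_discrepancy(measured, predicted) > limit
```

Lowering `limit`, as the abstract suggests may become possible, tightens the verification at the cost of more false detections when the prediction model is imprecise.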
The development of appropriate equilibrium sorption relationships for anthropogenic organic contaminants with soils and sediments is essential to predicting the extents and rates of solid-water interactions in the environment. In this context, we previously reported results that ...
Mental Layout Extrapolations Prime Spatial Processing of Scenes
ERIC Educational Resources Information Center
Gottesman, Carmela V.
2011-01-01
Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…
Difficulties associated with predicting forage intake by grazing beef cows
USDA-ARS?s Scientific Manuscript database
The current National Research Council (NRC) model is based on a single equation that relates dry matter intake (DMI) to metabolic size and net energy density of the diet offered and was a significant improvement over previous models. However, observed DMI by grazing animals can be conceptualized by...
Wheat mill stream properties for discrete element method modeling
USDA-ARS?s Scientific Manuscript database
A discrete phase approach based on individual wheat kernel characteristics is needed to overcome the limitations of previous statistical models and accurately predict the milling behavior of wheat. As a first step to develop a discrete element method (DEM) model for the wheat milling process, this s...
Personality and Sociodemographic Variables as Sources of Variation in Environmental Perception.
ERIC Educational Resources Information Center
Feimer, Nickolaus R.
This research paper examines the relationship between individual differences in environmental perception and variables that may be important in predicting, if not explaining, those variations. The analyses reported were based upon an environmental perception research study previously conducted at the University of California at Berkeley during…
Relapse and Recurrence Prevention in the Treatment for Adolescents with Depression Study
ERIC Educational Resources Information Center
Simons, Anne D.; Rohde, Paul; Kennard, Betsy D.; Robins, Michele
2005-01-01
Relapse and recurrence in adolescent depression are important problems. Much less is known about relapse prevention compared to the acute treatment of depression in adolescents. Based on previous research, theoretical predictions, and clinical experience, the Treatment for Adolescents With Depression Study (TADS) protocol was designed to determine…
Imitation in Fragile X Syndrome: Implications for Autism
ERIC Educational Resources Information Center
Macedoni-Luksic, Marta; Greiss-Hess, Laura; Rogers, Sally J.; Gosar, David; Lemons-Chitwood, Kerrie; Hagerman, Randi
2009-01-01
To address the specific impairment of imitation in autism, the imitation abilities of 22 children with fragile X syndrome (FXS) with and without autism were compared. Based on previous research, we predicted that children with FXS and autism would have significantly more difficulty with non-meaningful imitation tasks. After controlling for…
QSPR using MOLGEN-QSPR: the challenge of fluoroalkane boiling points.
Rücker, Christoph; Meringer, Markus; Kerber, Adalbert
2005-01-01
By means of the new software MOLGEN-QSPR, a multilinear regression model for the boiling points of lower fluoroalkanes is established. The model is based exclusively on simple descriptors derived directly from molecular structure and nevertheless describes a broader set of data more precisely than previous attempts that used either more demanding (quantum chemical) descriptors or more demanding (nonlinear) statistical methods such as neural networks. The model's internal consistency was confirmed by leave-one-out cross-validation. The model was used to predict all unknown boiling points of fluorobutanes, and the quality of predictions was estimated by means of comparison with boiling point predictions for fluoropentanes.
Thermodynamic properties of gases dissolved in electrolyte solutions.
NASA Technical Reports Server (NTRS)
Tiepel, E. W.; Gubbins, K. E.
1973-01-01
A method based on perturbation theory for mixtures is applied to the prediction of thermodynamic properties of gases dissolved in electrolyte solutions. The theory is compared with experimental data for the dependence of the solute activity coefficient on concentration, temperature, and pressure; calculations are included for partial molal enthalpy and volume of the dissolved gas. The theory is also compared with previous theories for salt effects and found to be superior. The calculations are best for salting-out systems. The qualitative feature of salting-in is predicted by the theory, but quantitative predictions are not satisfactory for such systems; this is attributed to approximations made in evaluating the perturbation terms.
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
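The empirical models above are multiple linear regressions relating measured sediment volumes to storm and watershed variables. A generic ordinary-least-squares sketch (the predictor set is illustrative; the published coefficients are not reproduced here):

```python
import numpy as np

def fit_yield_model(X, y):
    """Ordinary least squares fit of sediment yield on watershed
    predictors (e.g. peak 1-hour rainfall, burned area, time since
    fire, watershed area, gradient, relief ratio). Returns the
    coefficient vector, intercept first."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_yield(coef, x):
    """Predicted yield for one watershed/storm predictor vector."""
    return coef[0] + np.dot(coef[1:], x)
```

Fitting on a test dataset of measured yields, as the authors did with 65 Southern California measurements, is also how competing published models can be compared.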
Intrusion-based reasoning and depression: cross-sectional and prospective relationships.
Berle, David; Moulds, Michelle L
2014-01-01
Intrusion-based reasoning refers to the tendency to form interpretations about oneself or a situation based on the occurrence of a negative intrusive autobiographical memory. Intrusion-based reasoning characterises post-traumatic stress disorder, but has not yet been investigated in depression. We report two studies that aimed to investigate this. In Study 1 both high (n = 42) and low (n = 28) dysphoric participants demonstrated intrusion-based reasoning. High-dysphoric individuals engaged in self-referent intrusion-based reasoning to a greater extent than did low-dysphoric participants. In Study 2 there were no significant differences in intrusion-based reasoning between currently depressed (n = 27) and non-depressed (n = 51) participants, and intrusion-based reasoning did not predict depressive symptoms at 6-month follow-up. Interestingly, previously (n = 26) but not currently (n = 27) depressed participants engaged in intrusion-based reasoning to a greater extent than never-depressed participants (n = 25), indicating the possibility that intrusion-based reasoning may serve as a "scar" from previous episodes. The implications of these findings are discussed.
Carpenter, Gail A; Gaddam, Sai Chaitanya
2010-04-01
Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.
How glitter relates to gold: similarity-dependent reward prediction errors in the human striatum.
Kahnt, Thorsten; Park, Soyoung Q; Burke, Christopher J; Tobler, Philippe N
2012-11-14
Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel computational model to inform the question of how stimulus generalization is implemented in the human brain. Behavioral responses during an intradimensional discrimination task showed similarity-dependent generalization. Moreover, a peak shift occurred, i.e., the peak of the behavioral generalization gradient was displaced from the rewarded conditioned stimulus in the direction away from the unrewarded conditioned stimulus. To account for the behavioral responses, we designed a similarity-based reinforcement learning model wherein prediction errors generalize across similar stimuli and update their value. We show that this model predicts a similarity-dependent neural generalization gradient in the striatum as well as changes in responding during extinction. Moreover, across subjects, the width of generalization was negatively correlated with functional connectivity between the striatum and the hippocampus. This result suggests that hippocampus-striatal connections contribute to stimulus-specific value updating by controlling the width of generalization. In summary, our results shed light onto the neurobiology of a fundamental, similarity-dependent learning principle that allows learning the value of stimuli that have never been encountered.
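The similarity-based reinforcement learning model described above can be sketched as a TD-style update in which each prediction error also updates similar stimuli, weighted by a Gaussian similarity kernel (the learning rate and kernel width below are illustrative, not the fitted values):

```python
import math

def gaussian_similarity(x, y, width):
    """Similarity between two stimuli on a single dimension."""
    return math.exp(-((x - y) ** 2) / (2 * width ** 2))

def update_values(values, stimulus, reward, alpha=0.1, width=1.0):
    """Generalizing prediction-error update: the error computed for
    the experienced stimulus also updates similar stimuli, weighted
    by their similarity to it."""
    delta = reward - values[stimulus]
    for s in values:
        values[s] += alpha * gaussian_similarity(s, stimulus, width) * delta
    return values
```

Repeated rewarded trials at one stimulus then raise the values of nearby stimuli more than distant ones, producing a similarity-dependent generalization gradient; the kernel width plays the role the abstract attributes to hippocampus-striatal control of generalization.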
NASA Astrophysics Data System (ADS)
Kallepalli, Akhil; Kakani, Nageswara Rao; James, David B.; Richardson, Mark A.
2017-07-01
Coastal regions are highly vulnerable to rising sea levels due to global warming. Previous Intergovernmental Panel on Climate Change (2013) predictions of 26 to 82 cm of global sea level rise are now considered conservative. Subsequent investigations predict much higher levels, which would displace the 10% of the world's population living less than 10 m above sea level. Remote sensing and GIS technologies form the mainstay of models of coastal retreat and inundation under future sea-level rise. This study estimates the varying trends along the Krishna-Godavari (K-G) delta region. The rate of shoreline shift along the 330-km long K-G delta coast was estimated using satellite images between 1977 and 2008. With reference to a baseline selected along an inland position, the end point rate and net shoreline movement were calculated using a GIS-based digital shoreline analysis system. The results indicated a net loss of about 42.1 km2 of area during this 31-year period, which is in agreement with previous literature. Considering the nature of landforms and the EPR, the future hazard line (or coastline) is predicted for the area; the prediction indicates a net erosion of about 57.6 km2 along the K-G delta coast by 2050 AD.
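The two shoreline statistics used above have simple definitions: net shoreline movement (NSM) is the distance between the oldest and newest shoreline positions along a transect, and the end point rate (EPR) is that distance divided by the elapsed time. A minimal sketch (positions in metres from the baseline, seaward positive; the sample values are made up):

```python
def net_shoreline_movement(oldest_m, newest_m):
    """Distance between the earliest and latest shoreline positions
    along a transect (negative = erosion toward the baseline)."""
    return newest_m - oldest_m

def end_point_rate(oldest_m, newest_m, years):
    """Annualized rate of shoreline shift in m/yr."""
    return net_shoreline_movement(oldest_m, newest_m) / years

def project_position(newest_m, epr_m_per_yr, years_ahead):
    """Linear extrapolation of a future shoreline position from the
    EPR, as done for the 2050 hazard-line prediction."""
    return newest_m + epr_m_per_yr * years_ahead
```

For example, a transect that retreated 62 m over the 31-year study window has an EPR of -2 m/yr, which extrapolates to a further 84 m of retreat by 2050.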
Wilson, Preston S; Dunton, Kenneth H
2009-04-01
Previous in situ investigations of seagrass have revealed acoustic phenomena that depend on plant density, tissue gas content, and free bubbles produced by photosynthetic activity, but corresponding predictive models that could be used to optimize acoustic remote sensing, shallow water sonar, and mine hunting applications have not appeared. To begin to address this deficiency, low frequency (0.5-2.5 kHz) acoustic laboratory experiments were conducted on three freshly collected Texas Gulf Coast seagrass species. A one-dimensional acoustic resonator technique was used to assess the biomass and effective acoustic properties of the leaves and rhizomes of Thalassia testudinum (turtle grass), Syringodium filiforme (manatee grass), and Halodule wrightii (shoal grass). Independent biomass and gas content estimates were obtained via microscopic cross-section imagery. The acoustic results were compared to model predictions based on Wood's equation for a two-phase medium. The effective sound speed in the plant-filled resonator was strongly dependent on plant biomass, but the Wood's equation model (based on tissue gas content alone) could not predict the effective sound speed for the low irradiance conditions of the experiment, in which no free bubbles were generated by photosynthesis. The results corroborate previously published results obtained in situ for another seagrass species, Posidonia oceanica.
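The Wood's-equation prediction referenced above treats the medium as a two-phase mixture whose effective compressibility and density are volume-weighted averages of the phase properties. A minimal sketch (the default gas and water properties are rough textbook values, not the measured tissue parameters):

```python
import math

def woods_speed(void_fraction, c_gas=340.0, rho_gas=1.2,
                c_water=1500.0, rho_water=1000.0):
    """Effective sound speed of a two-phase gas/water mixture via
    Wood's equation: volume-weighted compressibility and density,
    with c = 1/sqrt(rho_mix * k_mix)."""
    k_gas = 1.0 / (rho_gas * c_gas ** 2)        # gas compressibility
    k_water = 1.0 / (rho_water * c_water ** 2)  # water compressibility
    k_mix = void_fraction * k_gas + (1.0 - void_fraction) * k_water
    rho_mix = void_fraction * rho_gas + (1.0 - void_fraction) * rho_water
    return 1.0 / math.sqrt(rho_mix * k_mix)
```

Even a gas fraction of 0.1% drops the mixture sound speed far below that of water, which is why the model is so sensitive to tissue gas content and why it fails, as the abstract reports, when biomass rather than free gas dominates the acoustics.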
InterProSurf: a web server for predicting interacting sites on protein surfaces
Negi, Surendra S.; Schein, Catherine H.; Oezguen, Numan; Power, Trevor D.; Braun, Werner
2009-01-01
A new web server, InterProSurf, predicts interacting amino acid residues in proteins that are most likely to interact with other proteins, given the 3D structures of subunits of a protein complex. The prediction method is based on solvent accessible surface area of residues in the isolated subunits, a propensity scale for interface residues and a clustering algorithm to identify surface regions with residues of high interface propensities. Here we illustrate the application of InterProSurf to determine which areas of Bacillus anthracis toxins and measles virus hemagglutinin protein interact with their respective cell surface receptors. The computationally predicted regions overlap with those regions previously identified as interface regions by sequence analysis and mutagenesis experiments. PMID:17933856
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality.
Kuniya, Toshikazu; Sano, Hideki
2016-05-10
In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems for partial differential equations. In engineering, on the other hand, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of a threshold criterion for predicting an increase or decrease in the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015, and its accuracy was 0.81 (404 correct out of 500 predictions). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing, and moving-average process. In addition, a proposed method for estimating the number of reported cases, consistent with our prediction method, performed better than the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. Our prediction method based on the backstepping method can thus be simplified to a comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction of the spread of influenza in Japan.
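Under the simplifying assumption above, the forecast reduces to comparing consecutive case counts, and accuracy is the fraction of correct up/down calls; a minimal sketch (the series values below are made up, not the sentinel data):

```python
def predict_trend(previous_cases, current_cases):
    """Predict an increase if cases rose from the previous step."""
    return "increase" if current_cases > previous_cases else "decrease"

def prediction_accuracy(series):
    """Fraction of correct one-step-ahead trend predictions over a
    case-count series (e.g. weekly reported cases per sentinel)."""
    correct = total = 0
    for t in range(1, len(series) - 1):
        predicted = predict_trend(series[t - 1], series[t])
        actual = "increase" if series[t + 1] > series[t] else "decrease"
        correct += predicted == actual
        total += 1
    return correct / total
```

On a seasonal series this naive rule only errs at turning points, which is consistent with the high accuracy the abstract reports for influenza seasons.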
Predicting survival across chronic interstitial lung disease: the ILD-GAP model.
Ryerson, Christopher J; Vittinghoff, Eric; Ley, Brett; Lee, Joyce S; Mooney, Joshua J; Jones, Kirk D; Elicker, Brett M; Wolters, Paul J; Koth, Laura L; King, Talmadge E; Collard, Harold R
2014-04-01
Risk prediction is challenging in chronic interstitial lung disease (ILD) because of heterogeneity in disease-specific and patient-specific variables. Our objective was to determine whether mortality is accurately predicted in patients with chronic ILD using the GAP model, a clinical prediction model based on sex, age, and lung physiology, that was previously validated in patients with idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis (n=307), chronic hypersensitivity pneumonitis (n=206), connective tissue disease-associated ILD (n=281), idiopathic nonspecific interstitial pneumonia (n=45), or unclassifiable ILD (n=173) were selected from an ongoing database (N=1,012). Performance of the previously validated GAP model was compared with novel prediction models in each ILD subtype and the combined cohort. Patients with follow-up pulmonary function data were used for longitudinal model validation. The GAP model had good performance in all ILD subtypes (c-index, 74.6 in the combined cohort), which was maintained at all stages of disease severity and during follow-up evaluation. The GAP model had similar performance compared with alternative prediction models. A modified ILD-GAP Index was developed for application across all ILD subtypes to provide disease-specific survival estimates using a single risk prediction model. This was done by adding a disease subtype variable that accounted for better adjusted survival in connective tissue disease-associated ILD, chronic hypersensitivity pneumonitis, and idiopathic nonspecific interstitial pneumonia. The GAP model accurately predicts risk of death in chronic ILD. The ILD-GAP model accurately predicts mortality in major chronic ILD subtypes and at all stages of disease.
Memory and prediction in natural gaze control
Diaz, Gabriel; Cooper, Joseph; Hayhoe, Mary
2013-01-01
In addition to stimulus properties and task factors, memory is an important determinant of the allocation of attention and gaze in the natural world. One way that the role of memory is revealed is by predictive eye movements. Both smooth pursuit and saccadic eye movements demonstrate predictive effects based on previous experience. We have previously shown that unskilled subjects make highly accurate predictive saccades to the anticipated location of a ball prior to a bounce in a virtual racquetball setting. In this experiment, we examined this predictive behaviour. We asked whether the period after the bounce provides subjects with visual information about the ball trajectory that is used to programme the pursuit movement initiated when the ball passes through the fixation point. We occluded a 100 ms period of the ball's trajectory immediately after the bounce, and found very little effect on the subsequent pursuit movement. Subjects did not appear to modify their strategy to prolong the fixation. Neither were we able to find an effect on interception performance. Thus, it is possible that the occluded trajectory information is not critical for subsequent pursuit, and subjects may use an estimate of the ball's trajectory to programme pursuit. These results provide further support for the role of memory in eye movements. PMID:24018726
Cross-validation of Peak Oxygen Consumption Prediction Models From OMNI Perceived Exertion.
Mays, R J; Goss, F L; Nagle, E F; Gallagher, M; Haile, L; Schafer, M A; Kim, K H; Robertson, R J
2016-09-01
This study cross-validated statistical models for the prediction of peak oxygen consumption using ratings of perceived exertion from the Adult OMNI Cycle Scale of Perceived Exertion. 74 participants (men: n=36; women: n=38) completed a graded cycle exercise test. Ratings of perceived exertion for the overall body, legs, and chest/breathing were recorded at each test stage and entered into previously developed 3-stage peak oxygen consumption prediction models. There were no significant differences (p>0.05) between measured and predicted peak oxygen consumption from ratings of perceived exertion for the overall body, legs, and chest/breathing in men (mean±standard deviation: 3.16±0.52 vs. 2.92±0.33 vs. 2.90±0.29 vs. 2.90±0.26 L·min(-1)) or in women (2.17±0.29 vs. 2.02±0.22 vs. 2.03±0.19 vs. 2.01±0.19 L·min(-1)). Predictions of peak oxygen consumption from previously developed statistical models based on subpeak OMNI ratings of perceived exertion were similar to measured peak oxygen consumption in a separate group of participants. These findings provide practical support for the use of the original statistical models in standard health-fitness settings. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Walby, Fredrik A; Odegaard, Erik; Mehlum, Lars
2006-06-01
To investigate the differential impact of DSM-IV axis-I and axis-II disorders on completed suicide and to study whether psychiatric comorbidity increases the risk of suicide in currently and previously hospitalized psychiatric patients. A nested case-control design based on case notes from 136 suicides and 166 matched controls. All cases and controls were rediagnosed using the SCID-CV for axis-I and the DSM-IV criteria for axis-II disorders, and inter-rater reliability was satisfactory. Raters were blind to case and control status and to the original hospital diagnoses. Depressive disorders and bipolar disorders were associated with an increased risk of suicide. No such effect was found for comorbidity between axis-I disorders or for comorbidity between axis-I and axis-II disorders. Psychiatric diagnoses, although made using a structured and criteria-based approach, were based on information recorded in case notes. Axis-II comorbidity could only be investigated at an aggregated level. Psychiatric comorbidity did not predict suicide in this sample. Mood disorders did, however, increase the risk significantly, independent of history of previous suicide attempts. Both findings can inform the identification and treatment of patients at high risk for completed suicide.
Pekkala, Timo; Hall, Anette; Lötjönen, Jyrki; Mattila, Jussi; Soininen, Hilkka; Ngandu, Tiia; Laatikainen, Tiina; Kivipelto, Miia; Solomon, Alina
2017-01-01
This study aimed to develop a late-life dementia prediction model using a novel validated supervised machine learning method, the Disease State Index (DSI), in the Finnish population-based CAIDE study. The CAIDE study was based on previous population-based midlife surveys. CAIDE participants were re-examined twice in late-life, and the first late-life re-examination was used as baseline for the present study. The main study population included 709 cognitively normal subjects at first re-examination who returned to the second re-examination up to 10 years later (incident dementia n = 39). An extended population (n = 1009, incident dementia 151) included non-participants/non-survivors (national registers data). DSI was used to develop a dementia index based on first re-examination assessments. Performance in predicting dementia was assessed as area under the ROC curve (AUC). AUCs for DSI were 0.79 and 0.75 for main and extended populations. Included predictors were cognition, vascular factors, age, subjective memory complaints, and APOE genotype. The supervised machine learning method performed well in identifying comprehensive profiles for predicting dementia development up to 10 years later. DSI could thus be useful for identifying individuals who are most at risk and may benefit from dementia prevention interventions.
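The AUC values reported above can be computed directly from index scores via the Mann-Whitney formulation: the probability that a randomly chosen incident case outranks a randomly chosen non-case. A minimal sketch (this is the generic AUC statistic, not the DSI itself):

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve as the Mann-Whitney probability that
    a case (e.g. incident dementia) scores higher than a control,
    with ties counted as one half."""
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))
```

An AUC of 0.79, as reported for the main population, means a randomly drawn incident-dementia participant had a higher dementia index than a randomly drawn cognitively stable participant 79% of the time.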
Peters, Rachel L.; Gurrin, Lyle C.; Dharmage, Shyamali C.; Koplin, Jennifer J.; Allen, Katrina J.
2013-01-01
IgE-mediated food allergy is a transient condition for some children, however there are few indices to predict when and in whom food allergy will resolve. Skin prick test (SPT) and serum-specific IgE levels (sIgE) are usually monitored in the management of food allergy and are used to predict the development of tolerance or persistence of food allergy. The aim of this article is to review the published literature that investigated the predictive value of SPT and sIgE in development of tolerance in children with a previous diagnosis of peanut, egg and milk allergy. A systematic search identified twenty-six studies, of which most reported SPT or sIgE thresholds which predicted persistent or resolved allergy. However, results were inconsistent between studies. Previous research was hampered by several limitations including the absence of gold standard test to diagnose food allergy or tolerance, biased samples in retrospective audits and lack of systematic protocols for triggering re-challenges. There is a need for population-based, prospective studies that use the gold standard oral food challenge (OFC) to diagnose food allergy at baseline and follow-up to develop SPT and sIgE thresholds that predict the course of food allergy. PMID:24132133
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
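The switched prediction scheme described above picks the best predictor from a preset group per region of the data. A minimal sketch of that idea, with a two-predictor group (the actual APE/ACE predictor families are richer and are not reproduced here):

```python
# Minimal sketch of a switched linear prediction scheme: for each block,
# try a preset group of linear predictors and keep the one with the
# smallest total absolute residual. The predictor set here is invented.
def predict_prev(x, i):
    # order-0 predictor: repeat the previous sample
    return x[i - 1]

def predict_linear(x, i):
    # order-1 predictor: linear extrapolation from the last two samples
    return 2 * x[i - 1] - x[i - 2]

PREDICTORS = [predict_prev, predict_linear]

def best_predictor(block):
    """Return the index of the predictor minimizing total |residual| on a block."""
    costs = [
        sum(abs(block[i] - p(block, i)) for i in range(2, len(block)))
        for p in PREDICTORS
    ]
    return costs.index(min(costs))

ramp = [1.0, 2.0, 3.0, 4.0, 5.0]  # perfectly linear data
print(best_predictor(ramp))       # 1: linear extrapolation is residual-free
```

In a real compressor the chosen predictor index is signaled to the decoder and the residuals, which are small for a well-matched predictor, are entropy coded.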
Vaidya, Avinash R; Fellows, Lesley K
2015-09-16
Adaptively interacting with our environment requires extracting information that will allow us to successfully predict reward. This can be a challenge, particularly when there are many candidate cues, and when rewards are probabilistic. Recent work has demonstrated that visual attention is allocated to stimulus features that have been associated with reward on previous trials. The ventromedial frontal lobe (VMF) has been implicated in learning in dynamic environments of this kind, but the mechanism by which this region influences this process is not clear. Here, we hypothesized that the VMF plays a critical role in guiding attention to reward-predictive stimulus features based on feedback. We tested the effects of VMF damage in human subjects on a visual search task in which subjects were primed to attend to task-irrelevant colors associated with different levels of reward, incidental to the search task. Consistent with previous work, we found that distractors had a greater influence on reaction time when they appeared in colors associated with high reward in the previous trial compared with colors associated with low reward in healthy control subjects and patients with prefrontal damage sparing the VMF. However, this reward modulation of attentional priming was absent in patients with VMF damage. Thus, an intact VMF is necessary for directing attention based on experience with cue-reward associations. We suggest that this region plays a role in selecting reward-predictive cues to facilitate future learning. There has been a swell of interest recently in the ventromedial frontal cortex (VMF), a brain region critical to associative learning. However, the underlying mechanism by which this region guides learning is not well understood. Here, we tested the effects of damage to this region in humans on a task in which rewards were linked incidentally to visual features, resulting in trial-by-trial attentional priming. 
Controls and subjects with prefrontal damage sparing the VMF showed normal reward priming, but VMF-damaged patients did not. This work sheds light on a potential mechanism through which this region influences behavior. We suggest that the VMF is necessary for directing attention to reward-predictive visual features based on feedback, facilitating future learning and decision-making. Copyright © 2015 the authors 0270-6474/15/3512813-11$15.00/0.
Effects of nonverbal behavior on perceptions of a female employee's power bases.
Aguinis, H; Henle, C A
2001-08-01
The authors extended a previous examination of the effects of nonverbal behavior on perceptions of a male employee's power bases (H. Aguinis, M. M. Simonsen, & C. A. Pierce, 1998) by examining the effects of nonverbal behavior on perceptions of a female employee's power bases. U.S. undergraduates read vignettes describing a female employee engaging in 3 types of nonverbal behavior (i.e., eye contact, facial expression, body posture) and rated their perceptions of the woman's power bases (i.e., reward, coercive, legitimate, referent, expert, credibility). As predicted, (a) direct eye contact increased perceptions of coercive power, and (b) a relaxed facial expression decreased perceptions of all 6 power bases. Also as predicted, the present results differed markedly from those of Aguinis et al. (1998) regarding a male employee. The authors discuss implications for theory, future research, and the advancement of female employees.
Condorhuamán-Alvarado, Patricia Ysabel; Menéndez-Colino, Rocío; Mauleón-Ladrero, Coro; Díez-Sebastián, Jesús; Alarcón, Teresa; González-Montalvo, Juan Ignacio
To compare baseline characteristics and those found during hospitalisation as predictors of functional decline at discharge (FDd) in elderly patients hospitalised due to acute illness. A review was made of the computerized records of patients admitted to a Geriatric Acute Unit of a tertiary hospital over a 10 year period. A record was made of demographic, clinical, functional and health-care variables. Functional decline at discharge (FDd) was defined by the difference between the previous Barthel Index (pBI) and the discharge Barthel Index (dBI). The percentage of FDd (%FDd=(pBI-dBI/pBI)×100) was calculated. The variables associated with greater %FDd in the bivariate analysis were included in multivariate logistic regression models. The predictive capacity of each model was assessed using the area under the ROC curve. The factors associated with greater %FDd were advanced age, female gender, living in a nursing home, cognitive impairment, better baseline functional status and worse functional status at admission, number of diagnoses, and prolonged stay. The area under the ROC curve for the predictive models of %FDd was 0.638 (95% CI: 0.615-0.662) based on the previous situation, 0.756 (95% CI: 0.736-0.776) based on the situation during admission, and 0.952 (95% CI: 0.944-0.959) based on a combination of these factors. The overall assessment of patient characteristics, both during admission and baseline, may have greater value in prediction of FDd than analysis of factors separately in elderly patients hospitalised due to acute illness. Copyright © 2017. Published by Elsevier España, S.L.U.
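The %FDd measure defined in the abstract is a simple ratio of Barthel Index values. A sketch with hypothetical scores:

```python
# The functional-decline-at-discharge measure from the abstract:
# %FDd = (pBI - dBI) / pBI * 100. Barthel Index values below are hypothetical.
def pct_fdd(previous_bi, discharge_bi):
    """Percentage functional decline at discharge."""
    return (previous_bi - discharge_bi) / previous_bi * 100

# A hypothetical patient admitted with pBI = 90 and discharged with dBI = 60
# has lost one third of their previous functional capacity.
print(round(pct_fdd(90, 60), 1))  # 33.3
```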
NASA Astrophysics Data System (ADS)
Tahani, Masoud; Askari, Amir R.
2014-09-01
In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has been presented yet which can predict pull-in voltage based on a geometrically non-linear and distributed parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental and some previous geometric linear and closed-form findings published in the literature.
2012-01-01
Background The first draft assembly and gene prediction of the grapevine genome (8X base coverage) was made available to the scientific community in 2007, and functional annotation was developed on this gene prediction. Since then additional Sanger sequences were added to the 8X sequences pool and a new version of the genomic sequence with superior base coverage (12X) was produced. Results In order to more efficiently annotate the function of the genes predicted in the new assembly, it is important to build on as much of the previous work as possible, by transferring 8X annotation of the genome to the 12X version. The 8X and 12X assemblies and gene predictions of the grapevine genome were compared to answer the question, “Can we uniquely map 8X predicted genes to 12X predicted genes?” The results show that while the assemblies and gene structure predictions are too different to make a complete mapping between them, most genes (18,725) showed a one-to-one relationship between 8X predicted genes and the last version of 12X predicted genes. In addition, reshuffled genomic sequence structures appeared. These highlight regions of the genome where the gene predictions need to be taken with caution. Based on the new grapevine gene functional annotation and in-depth functional categorization, twenty-eight new molecular networks have been created for VitisNet while the existing networks were updated. Conclusions The outcomes of this study provide a functional annotation of the 12X genes, an update of VitisNet, the system of the grapevine molecular networks, and a new functional categorization of genes. Data are available at the VitisNet website (http://www.sdstate.edu/ps/research/vitis/pathways.cfm). PMID:22554261
Rotor Broadband Noise Prediction with Comparison to Model Data
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Burley, Casey L.
2001-01-01
This paper reports an analysis and prediction development of rotor broadband noise. The two primary components of this noise are Blade-Wake Interaction (BWI) noise, due to the blades' interaction with the turbulent wakes of the preceding blades, and "Self" noise, due to the development and shedding of turbulence within the blades' boundary layers. Emphasized in this report is the new code development for Self noise. The analysis and validation employ data from the HART program, a model BO-105 rotor wind tunnel test conducted in the German-Dutch Wind Tunnel (DNW). The BWI noise predictions are based on measured pressure response coherence functions using cross-spectral methods. The Self noise predictions are based on previously reported semiempirical modeling of Self noise obtained from isolated airfoil sections and the use of CAMRAD.Mod1 to define rotor performance and local blade segment flow conditions. Both BWI and Self noise from individual blade segments are Doppler shifted and summed at the observer positions. Prediction comparisons with measurements show good agreement for a range of rotor operating conditions from climb to steep descent. The broadband noise predictions, along with those of harmonic and impulsive Blade-Vortex Interaction (BVI) noise predictions, demonstrate a significant advance in predictive capability for main rotor noise.
Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches.
Rawassizadeh, Reza; Tomitsch, Martin; Nourizadeh, Manouchehr; Momeni, Elaheh; Peery, Aaron; Ulanova, Liudmila; Pazzani, Michael
2015-09-08
As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.
McBride, Devin W.; Rodgers, Victor G. J.
2013-01-01
The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct the non-ideality observed in thermodynamic systems such as osmotic pressure. Here, the activity coefficient of free-solvent is related to physically realistic parameters and a mathematical expression is developed to directly predict the activity coefficients of free-solvent, for aqueous protein solutions up to near-saturation concentrations. The model is based on the free-solvent model, which has previously been shown to provide excellent prediction of the osmotic pressure of concentrated and crowded globular proteins in aqueous solutions up to near-saturation concentrations. Thus, this model uses only the independently determined, physically realizable quantities: mole fraction, solvent accessible surface area, and ion binding, in its prediction. Predictions are presented for the activity coefficients of free-solvent for near-saturated protein solutions containing either bovine serum albumin or hemoglobin. As a verification step, the predictability of the model for the activity coefficient of sucrose solutions was evaluated. The predicted activity coefficients of free-solvent are compared to the calculated activity coefficients of free-solvent based on osmotic pressure data. It is observed that the predicted activity coefficients are increasingly dependent on the solute-solvent parameters as the protein concentration increases to near-saturation concentrations. PMID:24324733
Illeghems, Koen; De Vuyst, Luc; Papalexandratou, Zoi; Weckx, Stefan
2012-01-01
This is the first report on the phylogenetic analysis of the community diversity of a single spontaneous cocoa bean box fermentation sample through a metagenomic approach involving 454 pyrosequencing. Several sequence-based and composition-based taxonomic profiling tools were used and evaluated to avoid software-dependent results and their outcome was validated by comparison with previously obtained culture-dependent and culture-independent data. Overall, this approach revealed a wider bacterial (mainly γ-Proteobacteria) and fungal diversity than previously found. Further, the use of a combination of different classification methods, in a software-independent way, helped to understand the actual composition of the microbial ecosystem under study. In addition, bacteriophage-related sequences were found. The bacterial diversity depended partially on the methods used, as composition-based methods predicted a wider diversity than sequence-based methods, and as classification methods based solely on phylogenetic marker genes predicted a more restricted diversity compared with methods that took all reads into account. The metagenomic sequencing analysis identified Hanseniaspora uvarum, Hanseniaspora opuntiae, Saccharomyces cerevisiae, Lactobacillus fermentum, and Acetobacter pasteurianus as the prevailing species. Also, the presence of occasional members of the cocoa bean fermentation process was revealed (such as Erwinia tasmaniensis, Lactobacillus brevis, Lactobacillus casei, Lactobacillus rhamnosus, Lactococcus lactis, Leuconostoc mesenteroides, and Oenococcus oeni). Furthermore, the sequence reads associated with viral communities were of a restricted diversity, dominated by Myoviridae and Siphoviridae, and reflecting Lactobacillus as the dominant host. To conclude, an accurate overview of all members of a cocoa bean fermentation process sample was revealed, indicating the superiority of metagenomic sequencing over previously used techniques.
Building a profile of subjective well-being for social media users.
Chen, Lushi; Gong, Tao; Kosinski, Michal; Stillwell, David; Davidson, Robert L
2017-01-01
Subjective well-being includes 'affect' and 'satisfaction with life' (SWL). This study proposes a unified approach to construct a profile of subjective well-being based on social media language in Facebook status updates. We apply sentiment analysis to generate users' affect scores, and train a random forest model to predict SWL using affect scores and other language features of the status updates. Results show that: the computer-selected features resemble the key predictors of SWL as identified in early studies; the machine-predicted SWL is moderately correlated with the self-reported SWL (r = 0.36, p < 0.01), indicating that language-based assessment can constitute valid SWL measures; the machine-assessed affect scores resemble those reported in a previous experimental study; and the machine-predicted subjective well-being profile can also reflect other psychological traits like depression (r = 0.24, p < 0.01). This study provides important insights for psychological prediction using multiple, machine-assessed components and longitudinal or dense psychological assessment using social media language.
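The pipeline above trains a random forest on affect scores and language features to predict SWL. A hedged sketch of that modeling step on synthetic data (the actual Facebook-derived features and targets are not reproduced here):

```python
# Sketch of the paper's modeling step: predict an SWL score from an affect
# score plus other language features with a random forest regressor.
# All features and targets below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # column 0 ~ affect score, columns 1-4 ~ other features
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)  # SWL driven mostly by affect

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:1]).shape)  # (1,)
```

With real data, the moderate correlation reported in the abstract (r = 0.36 between predicted and self-reported SWL) would be computed on held-out users, not on the training set as in this toy sketch.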
Prediction of quantitative intrathoracic fluid volume to diagnose pulmonary oedema using LabVIEW.
Urooj, Shabana; Khan, M; Ansari, A Q; Lay-Ekuakille, Aimé; Salhan, Ashok K
2012-01-01
Pulmonary oedema is a life-threatening disease that requires special attention in the area of research and clinical diagnosis. Computer-based techniques are rarely used to quantify the intrathoracic fluid volume (IFV) for diagnostic purposes. This paper discusses a software program developed to detect and diagnose pulmonary oedema using LabVIEW. The software operates on anthropometric dimensions and physiological parameters, mainly transthoracic electrical impedance (TEI). This technique is accurate and faster than existing manual techniques. The LabVIEW software was used to compute the parameters required to quantify IFV. An equation relating per cent control and IFV was obtained. The results of predicted TEI and measured TEI were compared with previously reported data to validate the developed program. It was found that the predicted values of TEI obtained from the computer-based technique were much closer to the measured values of TEI. Six new subjects were enrolled to measure and predict transthoracic impedance and hence to quantify IFV. A similar difference was also observed in the measured and predicted values of TEI for the new subjects.
NASA Astrophysics Data System (ADS)
Johnston, Michael A.; Farrell, Damien; Nielsen, Jens Erik
2012-04-01
The exchange of information between experimentalists and theoreticians is crucial to improving the predictive ability of theoretical methods and hence our understanding of the related biology. However many barriers exist which prevent the flow of information between the two disciplines. Enabling effective collaboration requires that experimentalists can easily apply computational tools to their data, share their data with theoreticians, and that both the experimental data and computational results are accessible to the wider community. We present a prototype collaborative environment for developing and validating predictive tools for protein biophysical characteristics. The environment is built on two central components; a new python-based integration module which allows theoreticians to provide and manage remote access to their programs; and PEATDB, a program for storing and sharing experimental data from protein biophysical characterisation studies. We demonstrate our approach by integrating PEATSA, a web-based service for predicting changes in protein biophysical characteristics, into PEATDB. Furthermore, we illustrate how the resulting environment aids method development using the Potapov dataset of experimentally measured ΔΔGfold values, previously employed to validate and train protein stability prediction algorithms.
Predicting chroma from luma with frequency domain intra prediction
NASA Astrophysics Data System (ADS)
Egge, Nathan E.; Valin, Jean-Marc
2015-03-01
This paper describes a technique for performing intra prediction of the chroma planes based on the reconstructed luma plane in the frequency domain. This prediction exploits the fact that while RGB to YUV color conversion has the property that it decorrelates the color planes globally across an image, there is still some correlation locally at the block level. Previous proposals compute a linear model of the spatial relationship between the luma plane (Y) and the two chroma planes (U and V). In codecs that use lapped transforms this is not possible since transform support extends across the block boundaries and thus neighboring blocks are unavailable during intra-prediction. We design a frequency domain intra predictor for chroma that exploits the same local correlation with lower complexity than the spatial predictor and which works with lapped transforms. We then describe a low-complexity algorithm that directly uses luma coefficients as a chroma predictor based on gain-shape quantization and band partitioning. An experiment is performed that compares these two techniques inside the experimental Daala video codec and shows the lower complexity algorithm to be a better chroma predictor.
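The spatial linear model the abstract contrasts itself against fits chroma as an affine function of co-located luma per block. A minimal sketch of that fit, with synthetic pixel values (the paper's own frequency-domain predictor is not reimplemented here):

```python
# Sketch of the linear "chroma from luma" model referenced in the abstract:
# fit chroma ≈ alpha * luma + beta per block by least squares.
# Pixel values below are synthetic toy data.
import numpy as np

def fit_cfl(luma, chroma):
    """Least-squares fit of chroma = alpha * luma + beta over a block."""
    A = np.column_stack([luma, np.ones_like(luma)])
    (alpha, beta), *_ = np.linalg.lstsq(A, chroma, rcond=None)
    return alpha, beta

y = np.array([16.0, 32.0, 64.0, 128.0])
u = 0.5 * y + 10.0                       # exactly linear toy relationship
alpha, beta = fit_cfl(y, u)
print(round(alpha, 6), round(beta, 6))   # 0.5 10.0
```

The frequency-domain approach in the paper avoids needing the reconstructed spatial neighbors that this fit requires, which is what makes it compatible with lapped transforms.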
Creasy, Arch; Reck, Jason; Pabst, Timothy; Hunter, Alan; Barker, Gregory; Carta, Giorgio
2018-05-29
A previously developed empirical interpolation (EI) method is extended to predict highly overloaded multicomponent elution behavior on a cation exchange (CEX) column based on batch isotherm data. Instead of a fully mechanistic model, the EI method employs an empirically modified multicomponent Langmuir equation to correlate two-component adsorption isotherm data at different salt concentrations. Piecewise cubic interpolating polynomials are then used to predict competitive binding at intermediate salt concentrations. The approach is tested for the separation of monoclonal antibody monomer and dimer mixtures by gradient elution on the cation exchange resin Nuvia HR-S. Adsorption isotherms are obtained over a range of salt concentrations with varying monomer and dimer concentrations. Coupled with a lumped kinetic model, the interpolated isotherms predict the column behavior for highly overloaded conditions. Predictions based on the EI method showed good agreement with experimental elution curves for protein loads up to 40 mg/mL column or about 50% of the column binding capacity. The approach can be extended to other chromatographic modalities and to more than two components. This article is protected by copyright. All rights reserved.
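The interpolation step described above joins isotherm measurements at a few salt concentrations with piecewise cubic polynomials to predict binding at intermediate salt levels. A sketch of that step using SciPy's shape-preserving PCHIP interpolant; the salt and capacity numbers are invented for illustration:

```python
# Sketch of the interpolation step from the abstract: binding capacities
# measured at a few salt concentrations are joined by a piecewise cubic
# interpolant to predict capacity at intermediate salt levels.
# All numbers below are hypothetical, not the paper's Nuvia HR-S data.
import numpy as np
from scipy.interpolate import PchipInterpolator

salt = np.array([20.0, 50.0, 100.0, 200.0])   # mM, hypothetical
capacity = np.array([80.0, 60.0, 30.0, 5.0])  # mg/mL resin, hypothetical

q_of_salt = PchipInterpolator(salt, capacity)
print(float(q_of_salt(50.0)))  # 60.0: exact at a measured point
```

PCHIP is a reasonable stand-in here because it passes through every measured point and preserves the monotonic decrease of capacity with salt, avoiding the overshoot a plain cubic spline can introduce.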
Airfoil self-noise and prediction
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Pope, D. Stuart; Marcolini, Michael A.
1989-01-01
A prediction method is developed for the self-generated noise of an airfoil blade encountering smooth flow. The prediction methods for the individual self-noise mechanisms are semiempirical and are based on previous theoretical studies and data obtained from tests of two- and three-dimensional airfoil blade sections. The self-noise mechanisms are due to specific boundary-layer phenomena, that is, the boundary-layer turbulence passing the trailing edge, separated-boundary-layer and stalled flow over an airfoil, vortex shedding due to laminar boundary layer instabilities, vortex shedding from blunt trailing edges, and the turbulent vortex flow existing near the tip of lifting blades. The predictions are compared successfully with published data from three self-noise studies of different airfoil shapes. An application of the prediction method is reported for a large scale-model helicopter rotor, and the predictions compared well with experimental broadband noise measurements. A computer code of the method is given.
Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E
2014-05-01
To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. 
Two-step strategies with the new ultrasound-based models predict high-risk endometrial cancers with good accuracy, and do so better than previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
Olfactory discrimination predicts cognitive decline among community-dwelling older adults
Sohrabi, H R; Bates, K A; Weinborn, M G; Johnston, A N B; Bahramian, A; Taddei, K; Laws, S M; Rodrigues, M; Morici, M; Howard, M; Martins, G; Mackay-Sim, A; Gandy, S E; Martins, R N
2012-01-01
The presence of olfactory dysfunction in individuals at higher risk of Alzheimer's disease has significant diagnostic and screening implications for preventive and ameliorative drug trials. Olfactory threshold, discrimination and identification can be reliably recorded in the early stages of neurodegenerative diseases. The current study has examined the ability of various olfactory functions in predicting cognitive decline in a community-dwelling sample. A group of 308 participants, aged 46–86 years old, were recruited for this study. After 3 years of follow-up, participants were divided into cognitively declined and non-declined groups based on their performance on a neuropsychological battery. Assessment of olfactory functions using the Sniffin' Sticks battery indicated that, contrary to previous findings, olfactory discrimination, but not olfactory identification, significantly predicted subsequent cognitive decline (odds ratio=0.869; P<0.05; 95% confidence interval=0.764−0.988). The current study findings confirm previously reported associations between olfactory and cognitive functions, and indicate that impairment in olfactory discrimination can predict future cognitive decline. These findings further our current understanding of the association between cognition and olfaction, and support olfactory assessment in screening those at higher risk of dementia. PMID:22832962
Frontal Theta Reflects Uncertainty and Unexpectedness during Exploration and Exploitation
Figueroa, Christina M.; Cohen, Michael X; Frank, Michael J.
2012-01-01
In order to understand the exploitation/exploration trade-off in reinforcement learning, previous theoretical and empirical accounts have suggested that increased uncertainty may precede the decision to explore an alternative option. To date, the neural mechanisms that support the strategic application of uncertainty-driven exploration remain underspecified. In this study, electroencephalography (EEG) was used to assess trial-to-trial dynamics relevant to exploration and exploitation. Theta-band activities over middle and lateral frontal areas have previously been implicated in EEG studies of reinforcement learning and strategic control. It was hypothesized that these areas may interact during top-down strategic behavioral control involved in exploratory choices. Here, we used a dynamic reward–learning task and an associated mathematical model that predicted individual response times. This reinforcement-learning model generated value-based prediction errors and trial-by-trial estimates of exploration as a function of uncertainty. Mid-frontal theta power correlated with unsigned prediction error, although negative prediction errors had greater power overall. Trial-to-trial variations in response-locked frontal theta were linearly related to relative uncertainty and were larger in individuals who used uncertainty to guide exploration. This finding suggests that theta-band activities reflect prefrontal-directed strategic control during exploratory choices. PMID:22120491
Decomposing experience-driven attention: Opposite attentional effects of previously predictive cues.
Lin, Zhicheng; Lu, Zhong-Lin; He, Sheng
2016-10-01
A central function of the brain is to track the dynamic statistical regularities in the environment - such as what predicts what over time. How does this statistical learning process alter sensory and attentional processes? Drawing upon animal conditioning and predictive coding, we developed a learning procedure that revealed two distinct components through which prior learning-experience controls attention. During learning, a visual search task was used in which the target randomly appeared at one of several locations but always inside an enclosure of a particular color - the learned color served to direct attention to the target location. During test, the color no longer predicted the target location. When the same search task was used in the subsequent test, we found that the learned color continued to attract attention despite the behavior being counterproductive for the task and despite the presence of a completely predictive cue. However, when tested with a flanker task that had minimal location uncertainty - the target was at fixation, surrounded by a distractor - participants were better at ignoring distractors in the learned color than in other colors. Evidently, previously predictive cues capture attention in the same search task but can be better suppressed in a flanker task. These results demonstrate opposing components - capture and inhibition - in experience-driven attention, with their manifestations crucially dependent on task context. We conclude that associative learning enhances context-sensitive top-down modulation while it reduces bottom-up sensory drive and facilitates suppression, supporting a learning-based predictive coding account.
NASA Astrophysics Data System (ADS)
Kalayeh, Mahdi M.; Marin, Thibault; Pretorius, P. Hendrik; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.
2011-03-01
In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical imaging, image quality should be assessed by evaluating the human observer accuracy for a specific diagnostic task. This approach is known as task-based assessment. Such evaluations are important for optimizing and testing imaging devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-demanding. To address this problem, numerical observers have been developed as a surrogate for human readers to predict human diagnostic performance. The channelized Hotelling observer (CHO) with internal noise model has been found to predict human performance well in some situations, but does not always generalize well to unseen data. We have argued in the past that finding a model to predict human observers could be viewed as a machine learning problem. Following this approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar generalization accuracy, while dramatically reducing model complexity and computation time.
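The channelized Hotelling observer that this work takes as its baseline can be sketched on synthetic data as follows. Channels, image size and the defect profile are all toy stand-ins, and this is the generic CHO, not the proposed CRVM.

```python
import numpy as np

# Generic channelized Hotelling observer (CHO) sketch on synthetic patches.
rng = np.random.default_rng(0)
n_pix, n_ch, n_img = 16, 3, 200

U = rng.standard_normal((n_ch, n_pix))            # hypothetical channel matrix
signal = np.zeros(n_pix)
signal[5:7] = 3.0                                 # hypothetical defect profile

absent = rng.standard_normal((n_img, n_pix))      # defect-absent images
present = rng.standard_normal((n_img, n_pix)) + signal

va, vp = absent @ U.T, present @ U.T              # channel outputs
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))           # pooled channel covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))   # Hotelling template

ta, tp = va @ w, vp @ w                           # decision variables
auc = (tp[:, None] > ta[None, :]).mean()          # Mann-Whitney AUC estimate
```

The AUC computed from the two sets of decision variables is the usual figure of merit against which human-observer scores (or a learned surrogate such as the CSVM/CRVM) are compared.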
Unsupervised user similarity mining in GSM sensor networks.
Shad, Shafqat Ali; Chen, Enhong
2013-01-01
Mobility data has attracted researchers in recent years because of its rich context and spatiotemporal nature; this information can be used for potential applications such as early warning systems, route prediction, traffic management, advertisement, social networking, and community finding. All of these applications rest on mobility profile building and user trend analysis, where mobility profile building is done through significant-place extraction, prediction of the user's actual movement, and context awareness. However, extracting significant places and predicting a user's actual movement for mobility profile building is a nontrivial task. In this paper, we present a user similarity mining methodology built on user mobility profiles, using the semantic tagging information provided by users and basic GSM network architecture properties in an unsupervised clustering approach. As the mobility information is in low-level raw form, our proposed methodology converts it to high-level meaningful information by using cell-Id location information, rather than previously used location-capturing methods such as GPS, infrared, and Wi-Fi, for profile mining and user similarity mining.
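A minimal version of the significant-place step can be sketched as ranking cells of a cell-Id trace by dwell share and keeping the dominant ones. The trace and threshold below are hypothetical; the paper's unsupervised clustering is more elaborate than this frequency cut.

```python
from collections import Counter

# Minimal significant-places sketch over a GSM cell-Id trace (hypothetical data).
def significant_places(cell_trace, min_share=0.2):
    """Keep cells that account for at least min_share of all observations."""
    counts = Counter(cell_trace)
    total = len(cell_trace)
    return {cell for cell, n in counts.items() if n / total >= min_share}

trace = ["home"] * 10 + ["work"] * 8 + ["cafe"] * 2
places = significant_places(trace)
```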
Ridderinkhof, K. Richard; van Wouwe, Nelleke C.; Band, Guido P. H.; Wylie, Scott A.; Van der Stigchel, Stefan; van Hees, Pieter; Buitenweg, Jessika; van de Vijver, Irene; van den Wildenberg, Wery P. M.
2012-01-01
Reward-based decision-learning refers to the process of learning to select those actions that lead to rewards while avoiding actions that lead to punishments. This process, known to rely on dopaminergic activity in striatal brain regions, is compromised in Parkinson’s disease (PD). We hypothesized that such decision-learning deficits are alleviated by induced positive affect, which is thought to incur transient boosts in midbrain and striatal dopaminergic activity. Computational measures of probabilistic reward-based decision-learning were determined for 51 patients diagnosed with PD. Previous work has shown these measures to rely on the nucleus caudatus (outcome evaluation during the early phases of learning) and the putamen (reward prediction during later phases of learning). We observed that induced positive affect facilitated learning, through its effects on reward prediction rather than outcome evaluation. Viewing a few minutes of comedy clips served to remedy dopamine-related problems associated with frontostriatal circuitry and, consequently, learning to predict which actions will yield reward. PMID:22707944
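The kind of computational measure described above rests on a delta-rule update driven by a reward prediction error. The sketch below is a generic two-choice learner in that spirit, not the authors' exact caudate/putamen model; the reward probabilities are hypothetical.

```python
import random

# Generic reward-prediction-error learner for a two-choice probabilistic task.
random.seed(1)
p_reward = {"A": 0.8, "B": 0.2}           # hypothetical reward probabilities
Q = {"A": 0.0, "B": 0.0}                  # learned action values
alpha = 0.1                               # learning rate

for _ in range(2000):
    # epsilon-greedy choice: mostly exploit, occasionally explore
    choice = max(Q, key=Q.get) if random.random() > 0.1 else random.choice("AB")
    reward = 1.0 if random.random() < p_reward[choice] else 0.0
    delta = reward - Q[choice]            # reward prediction error
    Q[choice] += alpha * delta            # outcome-driven value update
```

After training, the learned values approximate the underlying reward probabilities, so the model comes to predict which action will yield reward; the prediction-error term delta is the quantity linked to striatal dopaminergic activity in this literature.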
Mesolimbic confidence signals guide perceptual learning in the absence of external feedback
Guggenmos, Matthias; Wilbertz, Gregor; Hebart, Martin N; Sterzer, Philipp
2016-01-01
It is well established that learning can occur without external feedback, yet normative reinforcement learning theories have difficulties explaining such instances of learning. Here, we propose that human observers are capable of generating their own feedback signals by monitoring internal decision variables. We investigated this hypothesis in a visual perceptual learning task using fMRI and confidence reports as a measure for this monitoring process. Employing a novel computational model in which learning is guided by confidence-based reinforcement signals, we found that mesolimbic brain areas encoded both anticipation and prediction error of confidence—in remarkable similarity to previous findings for external reward-based feedback. We demonstrate that the model accounts for choice and confidence reports and show that the mesolimbic confidence prediction error modulation derived through the model predicts individual learning success. These results provide a mechanistic neurobiological explanation for learning without external feedback by augmenting reinforcement models with confidence-based feedback. DOI: http://dx.doi.org/10.7554/eLife.13388.001 PMID:27021283
Atmospheric Dispersion Modeling of the February 2014 Waste Isolation Pilot Plant Release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasstrom, John; Piggott, Tom; Simpson, Matthew
2015-07-22
This report presents the results of a simulation of the atmospheric dispersion and deposition of radioactivity released from the Waste Isolation Pilot Plant (WIPP) site in New Mexico in February 2014. These simulations were made by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL), and supersede NARAC simulation results published in a previous WIPP report (WIPP, 2014). The results presented in this report use additional, more detailed data from WIPP on the specific radionuclides released, radioactivity release amounts and release times. Compared to the previous NARAC simulations, the new simulation results in this report are based on more detailed modeling of the winds, turbulence, and particle dry deposition. In addition, the initial plume rise from the exhaust vent was considered in the new simulations, but not in the previous NARAC simulations. The new model results show some small differences compared to previous results, but do not change the conclusions in the WIPP (2014) report. Presented are the data and assumptions used in these model simulations, as well as the model-predicted dose and deposition on and near the WIPP site. A comparison of predicted and measured radionuclide-specific air concentrations is also presented.
Coutinho, Eduardo; Cangelosi, Angelo
2011-08-01
We maintain that the structure of affect elicited by music is largely dependent on dynamic temporal patterns in low-level music structural parameters. In support of this claim, we have previously provided evidence that spatiotemporal dynamics in psychoacoustic features resonate with two psychological dimensions of affect underlying judgments of subjective feelings: arousal and valence. In this article we extend our previous investigations in two aspects. First, we focus on the emotions experienced rather than perceived while listening to music. Second, we evaluate the extent to which peripheral feedback in music can account for the predicted emotional responses, that is, the role of physiological arousal in determining the intensity and valence of musical emotions. Akin to our previous findings, we will show that a significant part of the listeners' reported emotions can be predicted from a set of six psychoacoustic features--loudness, pitch level, pitch contour, tempo, texture, and sharpness. Furthermore, the accuracy of those predictions is improved with the inclusion of physiological cues--skin conductance and heart rate. The interdisciplinary work presented here provides a new methodology to the field of music and emotion research based on the combination of computational and experimental work, which aids the analysis of the emotional responses to music, while offering a platform for the abstract representation of those complex relationships. Future developments may aid specific areas, such as psychology and music therapy, by providing coherent descriptions of the emotional effects of specific music stimuli. (c) 2011 APA, all rights reserved
Yoshida, Kazuhiro; Umeda, Yuzo; Takaki, Akinobu; Nagasaka, Takeshi; Yoshida, Ryuichi; Nobuoka, Daisuke; Kuise, Takashi; Takagi, Kosei; Yasunaka, Tetsuya; Okada, Hiroyuki; Yagi, Takahito; Fujiwara, Toshiyoshi
2017-10-01
Determining the indications for and timing of liver transplantation (LT) for acute liver failure (ALF) is essential. The King's College Hospital (KCH) guidelines and Japanese guidelines are used to predict the need for LT and the outcomes in ALF. These guidelines' accuracy when applied to ALF in different regional and etiological backgrounds may differ. Here we compared the accuracy of new (2010) Japanese guidelines that use a simple scoring system with the 1996 Japanese guidelines and the KCH criteria for living donor liver transplantation (LDLT). We retrospectively analyzed 24 adult ALF patients (18 acute type, 6 sub-acute type) who underwent LDLT in 1998-2009 at our institution. We assessed the accuracies of the 3 guidelines' criteria for ALF. The overall 1-year survival rate was 87.5%. The new and previous Japanese guidelines were superior to the KCH criteria for accurately predicting LT for acute-type ALF (72% vs. 17%). The new Japanese guidelines could identify 13 acute-type ALF patients for LT, based on the timing of encephalopathy onset. Using the previous Japanese guidelines, although the same 13 acute-type ALF patients (72%) had indications for LT, only 4 patients were indicated at the 1st step, and it took an additional 5 days to decide the indication at the 2nd step in the other 9 cases. Our findings showed that the new Japanese guidelines can predict the indications for LT and provide a reliable alternative to the previous Japanese and KCH guidelines.
Computational Methods in Drug Discovery
Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens
2014-01-01
Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand data bases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236
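The simplest ligand-based method the review covers is similarity searching on molecular fingerprints, conventionally scored with the Tanimoto coefficient. The fingerprints below are hypothetical bit sets, not derived from real molecules.

```python
# Tanimoto similarity on binary fingerprints: |A & B| / |A | B| over "on" bits.
def tanimoto(fp_a, fp_b):
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Hypothetical query fingerprint and candidate library:
query = {1, 4, 7, 9}
actives = {"lig1": {1, 4, 7, 8}, "lig2": {2, 3, 5, 6}}

# Rank candidates by similarity to the query, most similar first.
ranked = sorted(actives, key=lambda k: tanimoto(query, actives[k]), reverse=True)
```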
A multiscale strength model for tantalum over an extended range of strain rates
NASA Astrophysics Data System (ADS)
Barton, N. R.; Rhee, M.
2013-09-01
A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].
Howell, Lydia Pleotis; Joad, Jesse P; Callahan, Edward; Servis, Gregg; Bonham, Ann C
2009-08-01
Multigenerational teams are essential to the missions of academic health centers (AHCs). Generational forecasting using Strauss and Howe's predictive model, "the generational diagonal," can be useful for anticipating and addressing issues so that each generation is effective. Forecasts are based on the observation that cyclical historical events are experienced by all generations, but the response of each generation differs according to its phase of life and previous defining experiences. This article relates Strauss and Howe's generational forecasts to AHCs. Predicted issues such as work-life balance, indebtedness, and succession planning have existed previously, but they now have different causes or consequences because of the unique experiences and life stages of current generations. Efforts to address these issues at the authors' AHC include a work-life balance workgroup, expanded leave, and intramural grants.
Selective exposure: the impact of collectivism and individualism.
Kastenmüller, Andreas; Greitemeyer, Tobias; Jonas, Eva; Fischer, Peter; Frey, Dieter
2010-12-01
Previous research has found that people prefer information that supports rather than conflicts with their decisions (selective exposure). In the present three studies, we investigated the impact of collectivism and individualism on this bias. First, based on previous findings showing that collectivists compared to individualists are inclined to seek the 'middle way' and tend towards self-criticism, we predicted and found that the confirmation bias was more negative among collectivists compared to individualists. Second, we assumed that the difference between selected supporting versus conflicting information would move more in favour of conflicting information among both collectivists and individualists when the domain was important to them. As predicted (chronic and primed), collectivists and individualists, respectively, sought more conflicting (compared to supporting) information depending on whether collectivistic (e.g., the family) or individualistic (e.g., one's own uniqueness) attributes were important.
A new software for prediction of femoral neck fractures.
Testi, Debora; Cappello, Angelo; Sgallari, Fiorella; Rumpf, Martin; Viceconti, Marco
2004-08-01
Femoral neck fractures are an important clinical, social and economic problem. Although many attempts have been made to improve the accuracy of fracture risk prediction, retrospective studies have demonstrated that the standard clinical protocol achieves an accuracy of about 65%. A new procedure was developed that includes in the prediction not only bone mineral density but also geometric and femoral strength information, achieving an accuracy of about 80% in a previous retrospective study. The aim of the present work was to re-engineer these research-based procedures and develop real-time software for the prediction of femoral fracture risk. The result was efficient, repeatable and easy-to-use software for the evaluation of femoral neck fracture risk, suitable for insertion into daily clinical practice and providing a useful tool for improved fracture prediction.
Soft Computing Methods for Disulfide Connectivity Prediction.
Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S
2015-01-01
The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.
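Once pair scores are available, connectivity prediction reduces to picking the highest-scoring perfect pairing of the cysteines. In practice the soft-computing predictors (ANN/SVM) supply the scores; the positions and scores below are made up, and the brute-force enumeration is only practical for the small cysteine counts typical of this problem.

```python
# Brute-force maximum-score pairing of an even number of cysteines.
def pairings(items):
    """All ways to split an even-length list into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def best_pairing(cysteines, score):
    return max(pairings(list(cysteines)),
               key=lambda p: sum(score[frozenset(e)] for e in p))

cys = [3, 12, 25, 40]                       # hypothetical sequence positions
score = {frozenset(e): s for e, s in [      # hypothetical predictor scores
    ((3, 12), 0.2), ((3, 25), 0.9), ((3, 40), 0.1),
    ((12, 25), 0.3), ((12, 40), 0.8), ((25, 40), 0.4)]}
bonds = best_pairing(cys, score)
```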
Assessment of Geometry and In-Flow Effects on Contra-Rotating Open Rotor Broadband Noise Predictions
NASA Technical Reports Server (NTRS)
Zawodny, Nikolas S.; Nark, Douglas M.; Boyd, D. Douglas, Jr.
2015-01-01
Application of previously formulated semi-analytical models for the prediction of broadband noise due to turbulent rotor wake interactions and rotor blade trailing edges is performed on the historical baseline F31/A31 contra-rotating open rotor configuration. Simplified two-dimensional blade element analysis is performed on cambered NACA 4-digit airfoil profiles, which are meant to serve as substitutes for the actual rotor blade sectional geometries. Rotor in-flow effects such as induced axial and tangential velocities are incorporated into the noise prediction models based on supporting computational fluid dynamics (CFD) results and simplified in-flow velocity models. Emphasis is placed on the development of simplified rotor in-flow models for the purpose of performing accurate noise predictions independent of CFD information. The broadband predictions are found to compare favorably with experimental acoustic results.
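The substitute profiles mentioned above come from the standard published NACA 4-digit equations, which can be evaluated directly. The helper below is a generic implementation of those public formulas, shown for orientation rather than taken from the paper.

```python
import math

# Standard NACA 4-digit section equations (e.g. "2412": 2% camber at 40% chord,
# 12% thick). Returns camber line yc and half-thickness yt at station x in [0, 1].
def naca4(code, x):
    m = int(code[0]) / 100.0           # maximum camber
    p = int(code[1]) / 10.0            # chordwise position of maximum camber
    t = int(code[2:]) / 100.0          # thickness-to-chord ratio
    yt = 5.0 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x ** 2
                    + 0.2843 * x ** 3 - 0.1015 * x ** 4)
    if m == 0.0 or p == 0.0:           # symmetric section
        yc = 0.0
    elif x < p:
        yc = m / p ** 2 * (2.0 * p * x - x ** 2)
    else:
        yc = m / (1.0 - p) ** 2 * ((1.0 - 2.0 * p) + 2.0 * p * x - x ** 2)
    return yc, yt
```

For a "2412" section the camber line peaks at exactly m = 0.02 at x = p = 0.4, which is a convenient sanity check on any implementation.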
Using Conversation Topics for Predicting Therapy Outcomes in Schizophrenia
Howes, Christine; Purver, Matthew; McCabe, Rose
2013-01-01
Previous research shows that aspects of doctor-patient communication in therapy can predict patient symptoms, satisfaction and future adherence to treatment (a significant problem with conditions such as schizophrenia). However, automatic prediction has so far shown success only when based on low-level lexical features, and it is unclear how well these can generalize to new data, or whether their effectiveness is due to their capturing aspects of style, structure or content. Here, we examine the use of topic as a higher-level measure of content, more likely to generalize and to have more explanatory power. Investigations show that while topics predict some important factors such as patient satisfaction and ratings of therapy quality, they lack the full predictive power of lower-level features. For some factors, unsupervised methods produce models comparable to manual annotation. PMID:23943658
Contrasting cue-density effects in causal and prediction judgments.
Vadillo, Miguel A; Musca, Serban C; Blanco, Fernando; Matute, Helena
2011-02-01
Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.
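The normative standard such judgments are usually compared against is the one-way contingency index Delta-P, and the cue-density manipulation changes the proportion of cue-present trials while holding Delta-P fixed. The trial counts below are illustrative, not taken from the experiment.

```python
# Delta-P contingency: P(outcome | cue present) - P(outcome | cue absent).
def delta_p(a, b, c, d):
    """a: cue&outcome, b: cue&no-outcome, c: no-cue&outcome, d: neither."""
    return a / (a + b) - c / (c + d)

# Same (null) contingency under two cue densities -- the bias under study is
# any systematic judgment difference between these two conditions:
low_density = delta_p(2, 2, 8, 8)     # cue on 4 of 20 trials, Delta-P = 0
high_density = delta_p(8, 8, 2, 2)    # cue on 16 of 20 trials, Delta-P = 0
```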
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from the special sensor microwave/imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav
2017-01-01
Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
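The single-slice OLS predictor described above can be sketched with ordinary least squares on one regressor. All numbers below are made up for illustration; the abstract does not reproduce the published coefficients, and the paper's SVMR variant is not shown.

```python
# Whole-body adipose tissue (AT) volume regressed on the AT area of a single
# L4-L5 slice, in the spirit of the paper's best-performing equation.
def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

slice_area_cm2 = [150, 200, 250, 300, 350]        # hypothetical L4-L5 AT areas
total_at_liters = [18.0, 24.0, 30.0, 36.0, 42.0]  # hypothetical AT volumes
a, b = fit_ols(slice_area_cm2, total_at_liters)
predicted_liters = a + b * 275                    # predict from a new slice
```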
Can air temperature be used to project influences of climate change on stream temperature?
Arismendi, Ivan; Safeeq, Mohammad; Dunham, Jason B.; Johnson, Sherri L.
2014-01-01
Worldwide, a lack of data on stream temperature has motivated the use of regression-based statistical models to predict stream temperatures based on more widely available data on air temperatures. Such models have been widely applied to project responses of stream temperatures under climate change, but the performance of these models has not been fully evaluated. To address this knowledge gap, we examined the performance of two widely used linear and nonlinear regression models that predict stream temperatures based on air temperatures. We evaluated model performance and temporal stability of model parameters in a suite of regulated and unregulated streams with 11–44 years of stream temperature data. Although such models may have validity when predicting stream temperatures within the span of time that corresponds to the data used to develop them, model predictions did not transfer well to other time periods. Validation of model predictions of the most recent stream temperatures, based on air temperature–stream temperature relationships from previous time periods, often showed poor performance when compared with observed stream temperatures. Overall, model predictions were less robust in regulated streams, and they frequently failed to detect the coldest and warmest temperatures at all sites. In many cases, the magnitude of errors in these predictions falls within a range that equals or exceeds the magnitude of future projections of climate-related changes in stream temperatures reported for the region we studied (between 0.5 and 3.0 °C by 2080). The limited ability of regression-based statistical models to accurately project stream temperatures over time likely stems from the fact that the underlying processes at play, namely the heat budgets of air and water, are distinctive in each medium and vary among localities and through time.
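Nonlinear air-to-stream temperature regressions of the kind evaluated in this study often take a logistic form; the sketch below uses that common form, but the specific parameter values are illustrative assumptions, not the study's fitted values.

```python
import math

# Hedged sketch of a logistic air-to-stream temperature regression.
# Parameter values are illustrative assumptions, not fits from the study.

def stream_temp(air_temp, alpha=25.0, beta=13.0, gamma=0.18):
    """Logistic model: Ts = alpha / (1 + exp(gamma * (beta - Ta))).

    alpha: asymptotic maximum stream temperature (deg C)
    beta:  air temperature at the inflection point (deg C)
    gamma: steepness of the response
    """
    return alpha / (1.0 + math.exp(gamma * (beta - air_temp)))
```

A central point of the abstract is that even a good fit within the calibration period may not transfer to other time periods, so such a model should be validated against data outside its training span.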
Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer
NASA Astrophysics Data System (ADS)
Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana
2017-03-01
Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would benefit from optimized treatment, thereby minimizing the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.
Vandermolen, Brooke I; Hezelgrave, Natasha L; Smout, Elizabeth M; Abbott, Danielle S; Seed, Paul T; Shennan, Andrew H
2016-10-01
Quantitative fetal fibronectin testing has demonstrated accuracy for prediction of spontaneous preterm birth in asymptomatic women with a history of preterm birth. Predictive accuracy in women with previous cervical surgery (a potentially different risk mechanism) is not known. We sought to compare the predictive accuracy of cervicovaginal fluid quantitative fetal fibronectin and cervical length testing in asymptomatic women with previous cervical surgery to that in women with 1 previous preterm birth. We conducted a prospective blinded secondary analysis of a larger observational study of cervicovaginal fluid quantitative fetal fibronectin concentration in asymptomatic women measured with a Hologic 10Q system (Hologic, Marlborough, MA). Prediction of spontaneous preterm birth (<30, <34, and <37 weeks) with cervicovaginal fluid quantitative fetal fibronectin concentration in primiparous women who had undergone at least 1 invasive cervical procedure (n = 473) was compared with prediction in women who had previous spontaneous preterm birth, preterm prelabor rupture of membranes, or late miscarriage (n = 821). Relationship with cervical length was explored. The rate of spontaneous preterm birth <34 weeks in the cervical surgery group was 3% compared with 9% in previous spontaneous preterm birth group. Receiver operating characteristic curves comparing quantitative fetal fibronectin for prediction at all 3 gestational end points were comparable between the cervical surgery and previous spontaneous preterm birth groups (34 weeks: area under the curve, 0.78 [95% confidence interval 0.64-0.93] vs 0.71 [95% confidence interval 0.64-0.78]; P = .39). 
Cervical length and quantitative fetal fibronectin offered similar prediction of spontaneous preterm birth <34 weeks of gestation (area under the curve, 0.88 [95% confidence interval 0.79-0.96] vs 0.77 [95% confidence interval 0.62-0.92], P = .12 in the cervical surgery group; and 0.77 [95% confidence interval 0.70-0.84] vs 0.74 [95% confidence interval 0.67-0.81], P = .32 in the previous spontaneous preterm birth group). Prediction of spontaneous preterm birth using cervicovaginal fluid quantitative fetal fibronectin in asymptomatic women with cervical surgery is valid, and has comparative accuracy to that in women with a history of spontaneous preterm birth. Copyright © 2016 Elsevier Inc. All rights reserved.
A Probabilistic Model of Meter Perception: Simulating Enculturation.
van der Weij, Bastiaan; Pearce, Marcus T; Honing, Henkjan
2017-01-01
Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.
MHC2NNZ: A novel peptide binding prediction approach for HLA DQ molecules
NASA Astrophysics Data System (ADS)
Xie, Jiang; Zeng, Xu; Lu, Dongfang; Liu, Zhixiang; Wang, Jiao
2017-07-01
The major histocompatibility complex class II (MHC-II) molecule plays a crucial role in immunology. Computational prediction of MHC-II binding peptides can help researchers understand the mechanisms of the immune system and design vaccines. Most MHC-II prediction algorithms to date have focused on human leukocyte antigen (HLA, the name of MHC in humans) molecules encoded in the DR locus. However, HLA DQ molecules are equally important, yet less progress has been made on them because they are more difficult to handle experimentally. In this study, we propose an artificial neural network-based approach called MHC2NNZ to predict peptides binding to HLA DQ molecules. Unlike previous artificial neural network-based methods, MHC2NNZ not only considers sequence similarity features but also captures chemical and physical properties, and a novel method incorporating these properties is proposed to represent peptide flanking regions (PFR). Furthermore, MHC2NNZ improves prediction accuracy by incorporating amino acid preferences at more specific positions of the peptide binding core. In an evaluation on 3549 peptides binding to the six most frequent HLA DQ molecules, MHC2NNZ is shown to outperform other state-of-the-art MHC-II prediction methods.
Phan, Andy; Mailey, Katherine; Saeki, Jessica; Gu, Xiaobo
2017-01-01
Accurate thermodynamic parameters improve RNA structure predictions and thus accelerate understanding of RNA function and the identification of RNA drug binding sites. Many viral RNA structures, such as internal ribosome entry sites, have internal loops and bulges that are potential drug target sites. Current models used to predict internal loops are biased toward small, symmetric purine loops, and thus poorly predict asymmetric, pyrimidine-rich loops with >6 nucleotides (nt) that occur frequently in viral RNA. This article presents new thermodynamic data for 40 pyrimidine loops, many of which can form UU or protonated CC base pairs. Uracil and protonated cytosine base pairs stabilize asymmetric internal loops. Accurate prediction rules are presented that account for all thermodynamic measurements of RNA asymmetric internal loops. New loop initiation terms for loops with >6 nt are presented that do not follow previous assumptions that increasing asymmetry destabilizes loops. Since the last 2004 update, 126 new loops with asymmetry or sizes greater than 2 × 2 have been measured. These new measurements significantly deepen and diversify the thermodynamic database for RNA. These results will help better predict internal loops that are larger, pyrimidine-rich, and occur within viral structures such as internal ribosome entry sites. PMID:28213527
Prediction of Fat-Free Mass in Kidney Transplant Recipients.
Størset, Elisabet; von Düring, Marit Elizabeth; Godang, Kristin; Bergan, Stein; Midtvedt, Karsten; Åsberg, Anders
2016-08-01
Individualization of drug doses is essential in kidney transplant recipients. For many drugs, the individual dose is better predicted when using fat-free mass (FFM) as a scaling factor. Multiple equations have been developed to predict FFM based on healthy subjects. These equations have not been evaluated in kidney transplant recipients. The objectives of this study were to develop a kidney transplant specific equation for FFM prediction and to evaluate its predictive performance compared with previously published equations. Ten weeks after transplantation, FFM was measured by dual-energy X-ray absorptiometry. Data from a consecutive cohort of 369 kidney transplant recipients were randomly assigned to an equation development data set (n = 245) or an evaluation data set (n = 124). Prediction equations were developed using linear and nonlinear regression analysis. The predictive performance of the developed equation and previously published equations in the evaluation data set was assessed. The following equation was developed: FFM (kg) = {FFMmax × body weight (kg)/[81.3 + body weight (kg)]} × [1 + height (cm) × 0.052] × [1-age (years) × 0.0007], where FFMmax was estimated to be 11.4 in males and 10.2 in females. This equation provided an unbiased, precise prediction of FFM in the evaluation data set: mean error (ME) (95% CI), -0.71 kg (-1.60 to 0.19 kg) in males and -0.36 kg (-1.52 to 0.80 kg) in females, root mean squared error 4.21 kg (1.65-6.77 kg) in males and 3.49 kg (1.15-5.84 kg) in females. Using previously published equations, FFM was systematically overpredicted in kidney-transplanted males [ME +1.33 kg (0.40-2.25 kg) to +5.01 kg (4.06-5.95 kg)], but not in females [ME -2.99 kg (-4.07 to -1.90 kg) to +3.45 kg (2.29-4.61) kg]. A new equation for FFM prediction in kidney transplant recipients has been developed. The equation may be used for population pharmacokinetic modeling and clinical dose selection in kidney transplant recipients.
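The fat-free mass equation reported above can be applied directly; a minimal sketch follows, where the example inputs (weight, height, age) are arbitrary and for illustration only.

```python
# Direct implementation of the kidney-transplant-specific FFM equation reported
# above: FFM (kg) = {FFMmax * W / (81.3 + W)} * (1 + H * 0.052) * (1 - age * 0.0007)
# with FFMmax = 11.4 (males) or 10.2 (females). Example inputs are arbitrary.

def predict_ffm(weight_kg, height_cm, age_years, male):
    """Predicted fat-free mass (kg) for a kidney transplant recipient."""
    ffm_max = 11.4 if male else 10.2
    return (ffm_max * weight_kg / (81.3 + weight_kg)
            * (1 + height_cm * 0.052)
            * (1 - age_years * 0.0007))

ffm = predict_ffm(70.0, 170.0, 50.0, male=True)  # about 50.1 kg
```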
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
Thomas, Reuben; Thomas, Russell S.; Auerbach, Scott S.; Portier, Christopher J.
2013-01-01
Background: Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Objectives: To develop a molecular pathway-based prediction model of long term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Methods: Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 were positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Results: Using 5-fold cross validation, the developed prediction model had reasonable predictive performance with the area under receiver-operator curve (AUC) equal to 0.66. The developed prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both extrapolated species (AUC value of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Conclusions: Pathway-based prediction models estimated from sub-chronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species. PMID:23737943
The influence of weather on migraine – are migraine attacks predictable?
Hoffmann, Jan; Schirra, Tonio; Lo, Hendra; Neeb, Lars; Reuter, Uwe; Martus, Peter
2015-01-01
Objective: The study aimed at elucidating a potential correlation between specific meteorological variables and the prevalence and intensity of migraine attacks as well as exploring a potential individual predictability of a migraine attack based on meteorological variables and their changes. Methods: Attack prevalence and intensity of 100 migraineurs were correlated with atmospheric pressure, relative air humidity, and ambient temperature in 4-h intervals over 12 consecutive months. For each correlation, meteorological parameters at the time of the migraine attack as well as their variation within the preceding 24 h were analyzed. For migraineurs showing a positive correlation, logistic regression analysis was used to assess the predictability of a migraine attack based on meteorological information. Results: In a subgroup of migraineurs, a significant weather sensitivity could be observed. In contrast, pooled analysis of all patients did not reveal a significant association. An individual prediction of a migraine attack based on meteorological data was not possible, mainly as a result of the small prevalence of attacks. Interpretation: The results suggest that only a subgroup of migraineurs is sensitive to specific weather conditions. Our findings may provide an explanation as to why previous studies, which commonly rely on a pooled analysis, show inconclusive results. The lack of individual attack predictability indicates that the use of preventive measures based on meteorological conditions is not feasible. PMID:25642431
Prediction of near-term breast cancer risk using a Bayesian belief network
NASA Astrophysics Data System (ADS)
Zheng, Bin; Ramalingam, Pandiyarajan; Hariharan, Harishwaran; Leader, Joseph K.; Gur, David
2013-03-01
Accurately predicting near-term breast cancer risk is an important prerequisite for establishing an optimal personalized breast cancer screening paradigm. In previous studies, we investigated and tested the feasibility of developing a unique near-term breast cancer risk prediction model based on a new risk factor associated with bilateral mammographic density asymmetry between the left and right breasts of a woman using a single feature. In this study we developed a multi-feature based Bayesian belief network (BBN) that combines bilateral mammographic density asymmetry with three other popular risk factors, namely (1) age, (2) family history, and (3) average breast density, to further increase the discriminatory power of our cancer risk model. A dataset involving "prior" negative mammography examinations of 348 women was used in the study. Among these women, 174 had breast cancer detected and verified in the next sequential screening examinations, and 174 remained negative (cancer-free). A BBN was applied to predict the risk of each woman having cancer detected six to 18 months later following the negative screening mammography. The prediction results were compared with those using single features. The prediction accuracy was significantly increased when using the BBN. The area under the ROC curve increased from an AUC=0.70 to 0.84 (p<0.01), while the positive predictive value (PPV) and negative predictive value (NPV) also increased from a PPV=0.61 to 0.78 and an NPV=0.65 to 0.75, respectively. This study demonstrates that a multi-feature based BBN can more accurately predict the near-term breast cancer risk than with a single feature.
The Effect of Empathy in Proenvironmental Attitudes and Behaviors
ERIC Educational Resources Information Center
Berenguer, Jaime
2007-01-01
Previous studies have pointed out the importance of empathy in improving attitudes toward stigmatized groups and toward the environment. In the present article, it is argued that environmental behaviors and attitudes can be improved using empathic perspective-taking for inducing empathy. Based on Batson's Model of Altruism, it was predicted that…
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends on previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
Determinants of Literacy Proficiency: A Lifelong-Lifewide Learning Perspective
ERIC Educational Resources Information Center
Desjardins, Richard
2003-01-01
The aim of this article is to investigate the predictive capacity of major determinants of literacy proficiency that are associated with a variety of contexts including school, home, work, community and leisure. An identical structural model based on previous research is fitted to data for 18 countries. The results show that even after accounting…
Affective and Motivational Outcomes of Working in Collaborative Groups
ERIC Educational Resources Information Center
Boekaerts, Monique; Minnaert, Alexander
2006-01-01
The Quality of Working in Groups Instrument (QWIGI) was used in this research to measure students' fluctuating psychological need states as well as their situational interest online. Based on previous research with the QWIGI, it was predicted that the variance in university sophomores' situational interest in each of the five different topics of…
ERIC Educational Resources Information Center
Lee, Christina S.; Hayes, Rashelle B.; McQuaid, Elizabeth L.; Borrelli, Belinda
2010-01-01
Introduction. Only one previous study on minority retention in smoking cessation treatment has been conducted (Nevid JS, Javier RA, Moulton JL III. "Factors predicting participant attrition in a community-based, culturally specific smoking cessation program for Hispanic smokers." "Health Psychol" 1996; 15: 226-29). We investigated predictors of…
Quinney, Sara K; Zhang, Xin; Lucksiri, Aroonrut; Gorski, J Christopher; Li, Lang; Hall, Stephen D
2010-02-01
The prediction of clinical drug-drug interactions (DDIs) due to mechanism-based inhibitors of CYP3A is complicated when the inhibitor itself is metabolized by CYP3A, as in the case of clarithromycin. Previous attempts to predict the effects of clarithromycin on CYP3A substrates, e.g., midazolam, failed to account for nonlinear metabolism of clarithromycin. A semiphysiologically based pharmacokinetic model was developed for clarithromycin and midazolam metabolism, incorporating hepatic and intestinal metabolism by CYP3A and non-CYP3A mechanisms. CYP3A inactivation by clarithromycin occurred at both sites. K(I) and k(inact) values for clarithromycin obtained from in vitro sources were unable to accurately predict the clinical effect of clarithromycin on CYP3A activity. An iterative approach determined the optimum values to predict in vivo effects of clarithromycin on midazolam to be 5.3 microM for K(I) and 0.4 and 4 h(-1) for k(inact) in the liver and intestines, respectively. The incorporation of CYP3A-dependent metabolism of clarithromycin enabled prediction of its nonlinear pharmacokinetics. The predicted 2.6-fold change in intravenous midazolam area under the plasma concentration-time curve (AUC) after 500 mg of clarithromycin orally twice daily was consistent with clinical observations. Although the mean predicted 5.3-fold change in the AUC of oral midazolam was lower than mean observed values, it was within the range of observations. Intestinal CYP3A activity was less sensitive to changes in K(I), k(inact), and CYP3A half-life than hepatic CYP3A. This semiphysiologically based pharmacokinetic model incorporating CYP3A inactivation in the intestine and liver accurately predicts the nonlinear pharmacokinetics of clarithromycin and the DDI observed between clarithromycin and midazolam. Furthermore, this model framework can be applied to other mechanism-based inhibitors.
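The K(I) and k(inact) parameters above enter the standard hyperbolic expression for mechanism-based enzyme inactivation; a minimal sketch, using the hepatic values reported in the abstract (this is the textbook rate expression, not the full semiphysiological model):

```python
# Standard mechanism-based inactivation rate: k_obs = k_inact * [I] / (K_I + [I]).
# This is the textbook expression in which K_I and k_inact are defined; the
# full semiphysiological PK model in the study is far more elaborate.

def inactivation_rate(k_inact_per_h, k_i_um, inhibitor_um):
    """First-order CYP3A inactivation rate constant (per hour)."""
    return k_inact_per_h * inhibitor_um / (k_i_um + inhibitor_um)

# Hepatic values from the abstract: K_I = 5.3 uM, k_inact = 0.4 per hour.
# At an inhibitor concentration equal to K_I, the rate is half-maximal.
k_obs = inactivation_rate(0.4, 5.3, 5.3)  # 0.2 per hour
```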
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Due to the increasing importance of reducing severe vehicle accidents on roads (especially highways), the automatic identification of road surface conditions, and the provision of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
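The two-step estimate-then-calibrate idea can be sketched as follows. The similarity function (absolute signal distance), the numeric state encoding, and the window size are all illustrative assumptions, not the paper's actual design.

```python
from collections import deque

# Hedged sketch of the approach above: (1) estimate the current road state from
# the most similar previously observed instance, then (2) smooth the estimate
# with a moving average over recent estimates. Similarity function, state
# encoding, and window size are illustrative assumptions.

def nearest_state(signal, history):
    """history: list of (signal_value, state) pairs, state encoded as a float
    (e.g., 0.0 = dry, 1.0 = wet, 2.0 = icy)."""
    return min(history, key=lambda h: abs(h[0] - signal))[1]

def calibrated_states(signals, history, window=3):
    recent = deque(maxlen=window)
    out = []
    for s in signals:
        recent.append(nearest_state(s, history))
        out.append(sum(recent) / len(recent))  # moving-average calibration
    return out

history = [(0.1, 0.0), (0.5, 1.0), (0.9, 2.0)]
states = calibrated_states([0.12, 0.48, 0.52, 0.88], history)
```

The moving average damps one-off retrieval errors at the cost of a short lag when the true surface state changes, which matches the robustness goal stated in the abstract.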
Response Monitoring and Adjustment: Differential Relations with Psychopathic Traits
Bresin, Konrad; Finy, M. Sima; Sprague, Jenessa; Verona, Edelyn
2014-01-01
Studies on the relation between psychopathy and cognitive functioning often show mixed results, partially because different factors of psychopathy have not been considered fully. Based on previous research, we predicted divergent results based on a two-factor model of psychopathy (interpersonal-affective traits and impulsive-antisocial traits). Specifically, we predicted that the unique variance of interpersonal-affective traits would be related to increased monitoring (i.e., error-related negativity) and adjusting to errors (i.e., post-error slowing), whereas impulsive-antisocial traits would be related to reductions in these processes. Three studies using a diverse selection of assessment tools, samples, and methods are presented to identify response monitoring correlates of the two main factors of psychopathy. In Studies 1 (undergraduates), 2 (adolescents), and 3 (offenders), interpersonal-affective traits were related to increased adjustment following errors and, in Study 3, to enhanced monitoring of errors. Impulsive-antisocial traits were not consistently related to error adjustment across the studies, although these traits were related to a deficient monitoring of errors in Study 3. The results may help explain previous mixed findings and advance implications for etiological models of psychopathy. PMID:24933282
Hierarchical lattice models of hydrogen-bond networks in water
NASA Astrophysics Data System (ADS)
Dandekar, Rahul; Hassanali, Ali A.
2018-06-01
We develop a graph-based model of the hydrogen-bond network in water, with a view toward quantitatively modeling the molecular-level correlational structure of the network. The networks formed are studied by constructing the model on two infinite-dimensional lattices. Our models are built bottom up, based on microscopic information coming from atomistic simulations, and we show that the predictions of the model are consistent with known results from ab initio simulations of liquid water. We show that simple entropic models can predict the correlations and clustering of local-coordination defects around tetrahedral waters observed in the atomistic simulations. We also find that orientational correlations between bonds are longer ranged than density correlations, determine the directional correlations within closed loops, and show that the patterns of water wires within these structures are also consistent with previous atomistic simulations. Our models show the existence of density and compressibility anomalies, as seen in the real liquid, and the phase diagram of these models is consistent with the singularity-free scenario previously proposed by Sastry and coworkers [Phys. Rev. E 53, 6144 (1996), 10.1103/PhysRevE.53.6144].
Hernández González, Jorge Enrique; Hernández Alvarez, Lilian; Pascutti, Pedro Geraldo; Valiente, Pedro A
2017-09-01
Falcipain-2 (FP-2) is a major hemoglobinase of Plasmodium falciparum, considered an important drug target for the development of antimalarials. A previous study reported a novel series of 20 reversible peptide-based inhibitors of FP-2. However, the lack of tridimensional structures of the complexes hinders further optimization strategies to enhance the inhibitory activity of the compounds. Here we report the prediction of the binding modes of the aforementioned inhibitors to FP-2. A computational approach combining previous knowledge on the determinants of binding to the enzyme, docking, and postdocking refinement steps, is employed. The latter steps comprise molecular dynamics simulations and free energy calculations. Remarkably, this approach leads to the identification of near-native ligand conformations when applied to a validation set of protein-ligand structures. Overall, we proposed substrate-like binding modes of the studied compounds fulfilling the structural requirements for FP-2 binding and yielding free energy values that correlated well with the experimental data. Proteins 2017; 85:1666-1683. © 2017 Wiley Periodicals, Inc.
An Open-Access Modeled Passenger Flow Matrix for the Global Air Network in 2010
Huang, Zhuojie; Wu, Xiao; Garcia, Andres J.; Fik, Timothy J.; Tatem, Andrew J.
2013-01-01
The expanding global air network provides rapid and wide-reaching connections accelerating both domestic and international travel. To understand human movement patterns on the network and their socioeconomic, environmental and epidemiological implications, information on passenger flow is required. However, comprehensive data on global passenger flow remain difficult and expensive to obtain, prompting researchers to rely on scheduled flight seat capacity data or simple models of flow. This study describes the construction of an open-access modeled passenger flow matrix for all airports with a host city-population of more than 100,000 and within two transfers of air travel from various publicly available air travel datasets. Data on network characteristics, city population, and local area GDP amongst others are utilized as covariates in a spatial interaction framework to predict the air transportation flows between airports. Training datasets based on information from various transportation organizations in the United States, Canada and the European Union were assembled. A log-linear model controlling the random effects on origin, destination and the airport hierarchy was then built to predict passenger flows on the network, and compared to the results produced using previously published models. Validation analyses showed that the model presented here produced improved predictive power and accuracy compared to previously published models, yielding the highest successful prediction rate at the global scale. Based on this model, passenger flows between 1,491 airports on 644,406 unique routes were estimated in the prediction dataset. The airport node characteristics and estimated passenger flows are freely available as part of the Vector-Borne Disease Airline Importation Risk (VBD-Air) project at: www.vbd-air.com/data. PMID:23691194
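The core of the spatial interaction framework described above is a log-linear regression of flows on origin and destination covariates. A minimal sketch, assuming a simple gravity-style specification with city populations and route distance as the only covariates; the published model additionally uses network characteristics, GDP, and random effects for origin, destination, and airport hierarchy, all omitted here.

```python
import numpy as np

def fit_log_linear(pop_o, pop_d, dist, flow):
    """Fit log(T_ij) ~ b0 + b1*log(pop_i) + b2*log(pop_j) + b3*log(d_ij)
    by ordinary least squares on the log-transformed flows."""
    X = np.column_stack([np.ones_like(dist),
                         np.log(pop_o), np.log(pop_d), np.log(dist)])
    beta, *_ = np.linalg.lstsq(X, np.log(flow), rcond=None)
    return beta

def predict_flow(beta, pop_o, pop_d, dist):
    """Predict passenger flows on new routes from fitted coefficients."""
    X = np.column_stack([np.ones_like(dist),
                         np.log(pop_o), np.log(pop_d), np.log(dist)])
    return np.exp(X @ beta)
```

A negative fitted coefficient on log-distance recovers the usual gravity-model distance decay.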
Polygenic risk score analysis of pathologically confirmed Alzheimer disease.
Escott-Price, Valentina; Myers, Amanda J; Huentelman, Matt; Hardy, John
2017-08-01
Previous estimates of the utility of polygenic risk score analysis for the prediction of Alzheimer disease have given area under the curve (AUC) estimates of <80%. However, these have been based on the genetic analysis of clinical case-control series. Here, we apply the same analytic approaches to a pathological case-control series and show a predictive AUC of 84%. We suggest that this analysis has clinical utility and that there is limited room for further improvement using genetic data. Ann Neurol 2017;82:311-314. © 2017 American Neurological Association.
Remembering the snake in the grass: Threat enhances recognition but not source memory.
Meyer, Miriam Magdalena; Bell, Raoul; Buchner, Axel
2015-12-01
Research on the influence of emotion on source memory has yielded inconsistent findings. The object-based framework (Mather, 2007) predicts that negatively arousing stimuli attract attention, resulting in enhanced within-object binding, and, thereby, enhanced source memory for intrinsic context features of emotional stimuli. To test this prediction, we presented pictures of threatening and harmless animals, the color of which had been experimentally manipulated. In a memory test, old-new recognition for the animals and source memory for their color was assessed. In all 3 experiments, old-new recognition was better for the more threatening material, which supports previous reports of an emotional memory enhancement. This recognition advantage was due to the emotional properties of the stimulus material, and not specific to snake stimuli. However, inconsistent with the prediction of the object-based framework, intrinsic source memory was not affected by emotion. (c) 2015 APA, all rights reserved.
Force Modelling in Orthogonal Cutting Considering Flank Wear Effect
NASA Astrophysics Data System (ADS)
Rathod, Kanti Bhikhubhai; Lalwani, Devdas I.
2017-05-01
In the present work, an attempt has been made to provide a predictive cutting force model for orthogonal cutting by combining two different force models: a force model for a perfectly sharp tool, extended to account for the effect of edge radius, and a force model for a worn tool. The first model is based on Oxley's predictive machining theory for orthogonal cutting; as Oxley's model assumes a perfectly sharp tool, the effect of the cutting edge radius (hone radius) is added and an improved model is presented. The second model, for a worn tool (flank wear), is the one proposed by Waldorf. Further, the developed combined force model is also used to predict flank wear width using an inverse approach. The performance of the combined total force model is compared with previously published results for AISI 1045 and AISI 4142 materials, showing reasonably good agreement.
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) was carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. Experimental results for bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used to model the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating that the solid-medium weight-variation method is well suited to modeling the formation of a volatile product in solid-state fermentation. In addition, the logistic model gave better predictions.
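The two kinetic growth models named in the abstract have standard closed forms; a minimal sketch, with generic parameter names not taken from the paper:

```python
import math

def exponential_growth(t, x0, mu):
    """Exponential kinetic model X(t) = X0 * exp(mu * t), valid only
    during the unrestricted-growth phase."""
    return x0 * math.exp(mu * t)

def logistic_growth(t, x0, xmax, mu):
    """Logistic kinetic model: growth starts near-exponential with
    specific growth rate mu and saturates at carrying capacity xmax."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * math.exp(-mu * t))
```

Early in the fermentation the two curves nearly coincide, which is consistent with the abstract's observation that both models fit well during the exponential growth phase while the logistic model performs better overall.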
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L
2017-11-01
Recently published data support the use of a web-based risk calculator ( www.anastomoticleak.com ) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program ® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
Nnoaham, Kelechi E.; Hummelshoj, Lone; Kennedy, Stephen H.; Jenkinson, Crispin; Zondervan, Krina T.
2012-01-01
Objective: To generate and validate symptom-based models to predict endometriosis among symptomatic women prior to undergoing their first laparoscopy. Design: Prospective, observational, two-phase study, in which women completed a 25-item questionnaire prior to surgery. Setting: Nineteen hospitals in 13 countries. Patient(s): Symptomatic women (n = 1,396) scheduled for laparoscopy without a previous surgical diagnosis of endometriosis. Intervention(s): None. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted by symptoms and patient characteristics from optimal models developed using multiple logistic regression analyses in one data set (phase I), and independently validated in a second data set (phase II) by receiver operating characteristic (ROC) curve analysis. Result(s): Three hundred sixty (46.7%) women in phase I and 364 (58.2%) in phase II were diagnosed with endometriosis at laparoscopy. Menstrual dyschezia (pain on opening bowels) and a history of benign ovarian cysts most strongly predicted both any and stage III and IV endometriosis in both phases. Prediction of any-stage endometriosis, although improved by ultrasound scan evidence of cyst/nodules, was relatively poor (area under the curve [AUC] = 68.3). Stage III and IV disease was predicted with good accuracy (AUC = 84.9, sensitivity of 82.3% and specificity 75.8% at an optimal cut-off of 0.24). Conclusion(s): Our symptom-based models predict any-stage endometriosis relatively poorly and stage III and IV disease with good accuracy. Predictive tools based on such models could help to prioritize women for surgical investigation in clinical practice and thus contribute to reducing time to diagnosis. We invite other researchers to validate the key models in additional populations. PMID:22657249
The 2016 outbreak on Jupiter's North Temperate Belt and jet from ground-based and Juno imaging
NASA Astrophysics Data System (ADS)
Rogers, J. H.; Orton, G. S.; Eichstädt, G.; Vedovato, M.; Caplinger, M.; Momary, T. W.; Hansen, C. J.
2017-09-01
A new outbreak of convective plumes on the peak of Jupiter's fastest jet, which had been predicted the previous year, began in autumn 2016. It was observed just after solar conjunction by the NASA Infrared Telescope Facility, by JunoCam, and by amateur astronomers. It unfolded in essentially the same way as previous such outbreaks, leading to revival of the North Temperate Belt with a notably red component. The maturation of this belt was monitored at high resolution by JunoCam.
NASA Astrophysics Data System (ADS)
Zou, Luyao; Widicus Weaver, Susanna L.
2016-06-01
Three new weak bands of the Ar-H2O vibration-rotation-tunneling spectrum have been measured in the millimeter wavelength range. These bands were predicted from combination differences based on previously measured bands in the submillimeter region. Two previously reported submillimeter bands were also remeasured with higher frequency resolution. These new measurements allow us to obtain accurate information on the Coriolis interaction between the 101 and 110 states. Here we report these results and the associated improved molecular constants.
Cao, Pengxing
2017-01-01
Models of within-host influenza viral dynamics have contributed to an improved understanding of viral dynamics and antiviral effects over the past decade. Existing models can be classified into two broad types based on the mechanism of viral control: models utilising target cell depletion to limit the progress of infection and models which rely on timely activation of innate and adaptive immune responses to control the infection. In this paper, we compare how two exemplar models based on these different mechanisms behave and investigate how the mechanistic difference affects the assessment and prediction of antiviral treatment. We find that the assumed mechanism for viral control strongly influences the predicted outcomes of treatment. Furthermore, we observe that for the target cell-limited model the assumed drug efficacy strongly influences the predicted treatment outcomes. The area under the viral load curve is identified as the most reliable predictor of drug efficacy, and is robust to model selection. Moreover, with support from previous clinical studies, we suggest that the target cell-limited model is more suitable for modelling in vitro assays or infection in some immunocompromised/immunosuppressed patients while the immune response model is preferred for predicting the infection/antiviral effect in immunocompetent animals/patients. PMID:28933757
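A target-cell-limited model of the kind compared in this paper is conventionally written as three coupled ODEs for target cells T, infected cells I, and free virus V. The sketch below uses a common formulation in which antiviral efficacy eps scales viral production, integrated by forward Euler; parameter values and the exact placement of the drug-effect term vary between studies, so treat this as illustrative rather than the paper's exact model.

```python
def target_cell_limited(T0, I0, V0, beta, delta, p, c, eps=0.0, dt=0.01, days=10.0):
    """Target-cell-limited within-host model:
        dT/dt = -beta*T*V
        dI/dt =  beta*T*V - delta*I
        dV/dt = (1 - eps)*p*I - c*V
    with eps in [0, 1] the antiviral efficacy. Returns the viral-load
    time series and its area under the curve (AUC), the quantity the
    paper identifies as the most robust predictor of drug efficacy."""
    T, I, V = T0, I0, V0
    auc, series = 0.0, []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = (1.0 - eps) * p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
        V = max(V, 0.0)
        series.append(V)
        auc += V * dt
    return series, auc
```

Increasing eps lowers the viral-load curve and hence the AUC, which is the behavior the paper exploits when using AUC as an efficacy readout.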
Mwangi, Benson; Wu, Mon-Ju; Bauer, Isabelle E; Modi, Haina; Zeni, Cristian P; Zunta-Soares, Giovana B; Hasan, Khader M; Soares, Jair C
2015-11-30
Previous studies have reported abnormalities of white-matter diffusivity in pediatric bipolar disorder. However, it has not been established whether these abnormalities are able to distinguish individual subjects with pediatric bipolar disorder from healthy controls with a high specificity and sensitivity. Diffusion-weighted imaging scans were acquired from 16 youths diagnosed with DSM-IV bipolar disorder and 16 demographically matched healthy controls. Regional white matter tissue microstructural measurements such as fractional anisotropy, axial diffusivity and radial diffusivity were computed using an atlas-based approach. These measurements were used to 'train' a support vector machine (SVM) algorithm to predict new or 'unseen' subjects' diagnostic labels. The SVM algorithm predicted individual subjects with specificity=87.5%, sensitivity=68.75%, accuracy=78.12%, positive predictive value=84.62%, negative predictive value=73.68%, area under receiver operating characteristic curve (AUROC)=0.7812 and chi-square p-value=0.0012. A pattern of reduced regional white matter fractional anisotropy was observed in pediatric bipolar disorder patients. These results suggest that atlas-based diffusion weighted imaging measurements can distinguish individual pediatric bipolar disorder patients from healthy controls. Notably, from a clinical perspective these findings will contribute to the pathophysiological understanding of pediatric bipolar disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
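The performance figures reported above are mutually consistent with a 2x2 confusion matrix of TP = 11, FN = 5, TN = 14, FP = 2 over the 16 patients and 16 controls (an inference from the reported percentages, not stated explicitly in the abstract). A small helper showing how each metric is derived from such a matrix:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }
```

With the inferred counts this reproduces the abstract's 68.75% sensitivity, 87.5% specificity, and 78.12% accuracy.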
Xie, Dan; Li, Ao; Wang, Minghui; Fan, Zhewen; Feng, Huanqing
2005-01-01
Subcellular location of a protein is one of the key functional characters as proteins must be localized correctly at the subcellular level to have normal biological function. In this paper, a novel method named LOCSVMPSI has been introduced, which is based on the support vector machine (SVM) and the position-specific scoring matrix generated from profiles of PSI-BLAST. With a jackknife test on the RH2427 data set, LOCSVMPSI achieved a high overall prediction accuracy of 90.2%, which is higher than the prediction results by SubLoc and ESLpred on this data set. In addition, prediction performance of LOCSVMPSI was evaluated with 5-fold cross validation test on the PK7579 data set and the prediction results were consistently better than the previous method based on several SVMs using composition of both amino acids and amino acid pairs. Further test on the SWISSPROT new-unique data set showed that LOCSVMPSI also performed better than some widely used prediction methods, such as PSORTII, TargetP and LOCnet. All these results indicate that LOCSVMPSI is a powerful tool for the prediction of eukaryotic protein subcellular localization. An online web server (current version is 1.3) based on this method has been developed and is freely available to both academic and commercial users, and can be accessed online. PMID:15980436
Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.
2015-01-01
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
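The single-integral form described above can be written as P(slip) = integral of f_req(x) * F_avail(x) dx, i.e. the probability that the available friction falls below the required friction. A sketch of the trapezoidal evaluation that accepts arbitrary distributions for the two frictions, in line with the paper's point that normality cannot be assumed; the function names and the normal distributions used for checking are illustrative only.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def prob_slip(f_req, cdf_avail, lo, hi, n=2000):
    """Single-integral slip probability,
        P(slip) = integral over [lo, hi] of f_req(x) * F_avail(x) dx,
    evaluated by the trapezoidal rule. f_req is the density of the
    required friction and cdf_avail the CDF of the available friction;
    any pair of distributions may be supplied."""
    h = (hi - lo) / n
    total = 0.5 * (f_req(lo) * cdf_avail(lo) + f_req(hi) * cdf_avail(hi))
    total += sum(f_req(lo + i * h) * cdf_avail(lo + i * h) for i in range(1, n))
    return h * total
```

Because the integrand takes the two distributions as plain callables, swapping in skewed or empirical distributions requires no change to the integration code.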
Feng, Wei; Zheng, Jing; Dong, Yao; Li, Xueshu; Lehmler, Hans-Joachim; Pessah, Isaac N.
2017-01-01
Nondioxin-like polychlorinated biphenyls (NDL PCBs) activate ryanodine-sensitive Ca2+ channels (RyRs) and this activation has been associated with neurotoxicity in exposed animals. RyR-active congeners follow a distinct structure–activity relationship and a quantitative structure–activity relationship (QSAR) predicts that a large number of PCBs likely activate the receptor, which requires validation. Additionally, previous structural based conclusions have been established using receptor ligand binding assays but the impact of varying PCB structures on ion channel gating behavior is not understood. We used [3H]Ryanodine ([3H]Ry) binding to assess the RyR-activity of 14 previously untested PCB congeners evaluating the predictability of the QSAR. Congeners determined to display widely varying potency were then assayed with single channel voltage clamp analysis to assess direct influences on channel gating kinetics. The RyR-activity of individual PCBs assessed in in vitro assays followed the general pattern predicted by the QSAR but binding and lipid bilayer experiments demonstrated higher potency than predicted. Of the 49 congeners tested to date, tetra-ortho PCB 202 was found to be the most potent RyR-active congener increasing channel open probability at 200 pM. Shifting meta-substitutions to the para-position resulted in a > 100-fold reduction in potency as seen with PCB 197. Non-ortho PCB 11 was found to lack activity at the receptor supporting a minimum mono-ortho substitution for PCB RyR activity. These findings expand and support previous SAR assessments; where out of the 49 congeners tested to date 42 activate the receptor demonstrating that the RyR is a sensitive and common target of PCBs. PMID:27655348
Simkovic, Felix; Thomas, Jens M H; Keegan, Ronan M; Winn, Martyn D; Mayans, Olga; Rigden, Daniel J
2016-07-01
For many protein families, the deluge of new sequence information together with new statistical protocols now allow the accurate prediction of contacting residues from sequence information alone. This offers the possibility of more accurate ab initio (non-homology-based) structure prediction. Such models can be used in structure solution by molecular replacement (MR) where the target fold is novel or is only distantly related to known structures. Here, AMPLE, an MR pipeline that assembles search-model ensembles from ab initio structure predictions ('decoys'), is employed to assess the value of contact-assisted ab initio models to the crystallographer. It is demonstrated that evolutionary covariance-derived residue-residue contact predictions improve the quality of ab initio models and, consequently, the success rate of MR using search models derived from them. For targets containing β-structure, decoy quality and MR performance were further improved by the use of a β-strand contact-filtering protocol. Such contact-guided decoys achieved 14 structure solutions from 21 attempted protein targets, compared with nine for simple Rosetta decoys. Previously encountered limitations were superseded in two key respects. Firstly, much larger targets of up to 221 residues in length were solved, which is far larger than the previously benchmarked threshold of 120 residues. Secondly, contact-guided decoys significantly improved success with β-sheet-rich proteins. Overall, the improved performance of contact-guided decoys suggests that MR is now applicable to a significantly wider range of protein targets than were previously tractable, and points to a direct benefit to structural biology from the recent remarkable advances in sequencing.
Dang, T D T; Vermeulen, A; Mertens, L; Geeraerd, A H; Van Impe, J F; Devlieghere, F
2011-01-31
In a previous study on Zygosaccharomyces bailii, three growth/no growth models have been developed, predicting growth probability of the yeast at different conditions typical for acidified foods (Dang, T.D.T., Mertens, L., Vermeulen, A., Geeraerd, A.H., Van Impe, J.F., Debevere, J., Devlieghere, F., 2010. Modeling the growth/no growth boundary of Z. bailii in acidic conditions: A contribution to the alternative method to preserve foods without using chemical preservatives. International Journal of Food Microbiology 137, 1-12). In these broth-based models, the variables were pH, water activity and acetic acid, with acetic acid concentration expressed in volume % on the total culture medium (i.e., broth). To continue the previous study, validation experiments were performed for 15 selected combinations of intrinsic factors to assess the performance of the model at 22°C (60days) in a real food product (ketchup). Although the majority of experimental results were consistent, some remarkable deviations between prediction and validation were observed, e.g., Z. bailii growth occurred in conditions where almost no growth had been predicted. A thorough investigation revealed that the difference between two ways of expressing acetic acid concentration (i.e., on broth basis and on water basis) is rather significant, particularly for media containing high amounts of dry matter. Consequently, the use of broth-based concentrations in the models was not appropriate. Three models with acetic acid concentration expressed on water basis were established and it was observed that predictions by these models well matched the validation results; therefore a "systematic error" in broth-based models was recognized. In practice, quantities of antimicrobial agents are often calculated based on the water content of food products. 
Hence, to assure reliable predictions and facilitate the application of models (developed from lab media with high dry matter contents), it is important to express antimicrobial agents' concentrations on a common basis: the water content. A review of other published growth/no growth models in the literature confirms this finding: the stress factors' concentrations in those models are likewise expressed on a broth basis. Copyright © 2010 Elsevier B.V. All rights reserved.
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-07
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
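The train-then-marginalise procedure described in this abstract lends itself to a compact numerical sketch. The following is a minimal illustration under invented assumptions (a single signed-distance-like feature, a synthetic dose model, and arbitrary bin counts); it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training: estimate the joint density of (feature, dose).
# Hypothetical training voxels: feature = signed distance to the target
# boundary, dose = delivered dose; in practice these come from prior plans.
feat_train = rng.uniform(-2.0, 2.0, 5000)
dose_train = np.clip(1.0 / (1.0 + np.exp(feat_train))
                     + rng.normal(0, 0.05, 5000), 0, 1)

f_edges = np.linspace(-2, 2, 21)   # feature bins
d_edges = np.linspace(0, 1, 21)    # dose bins
joint, _, _ = np.histogram2d(feat_train, dose_train, bins=[f_edges, d_edges])

# Conditional P(dose | feature): normalise each feature-bin row.
row_sums = joint.sum(axis=1, keepdims=True)
cond = np.divide(joint, np.where(row_sums == 0, 1, row_sums))

# Prediction: marginalise over the new patient's feature distribution.
feat_new = rng.uniform(-2.0, 2.0, 2000)   # the new patient's voxel features
p_feat, _ = np.histogram(feat_new, bins=f_edges)
p_feat = p_feat / p_feat.sum()

p_dose = p_feat @ cond   # marginal dose distribution for the new patient

# Cumulative DVH: fraction of volume receiving at least each dose level.
dvh = 1.0 - np.cumsum(p_dose) + p_dose
```

Binned histograms stand in here for the density estimators; any joint density estimate (e.g., kernel-based) slots into the same marginalisation step.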
An Ensemble Approach for Drug Side Effect Prediction
Jahid, Md Jamiul; Ruan, Jianhua
2014-01-01
In silico prediction of drug side-effects in the early stages of drug development is becoming increasingly popular, as it reduces both the time and the cost of drug design. In this article we propose an ensemble approach to predict the side-effects of drug molecules based on their chemical structure. Our idea originates from the observation that similar drugs have similar side-effects. Based on this observation, we design an ensemble approach that combines the results from different classification models, where each model is generated by a different set of similar drugs. We applied our approach to 1385 side-effects in the SIDER database for 888 drugs. Results show that our approach outperformed previously published approaches and standard classifiers. Furthermore, we applied our method to a number of uncharacterized drug molecules in the DrugBank database and predicted their side-effect profiles for future use. Results from various sources confirm that our method is able to predict the side-effects of uncharacterized drugs and, more importantly, to predict rare side-effects, which are often ignored by other approaches. The method described in this article can be useful for predicting side-effects at an early stage of drug design, reducing experimental cost and time. PMID:25327524
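The "similar drugs have similar side-effects" heuristic at the heart of this approach can be sketched as a similarity-weighted vote over a toy dataset (drug names, fingerprints, and side-effect labels below are invented; the paper's actual method trains one classifier per group of similar drugs):

```python
# Hypothetical drug "fingerprints" as sets of structural feature IDs, and
# known side-effect labels; real data would come from e.g. SIDER/DrugBank.
train = {
    "drugA": ({1, 2, 3, 4}, {"nausea", "headache"}),
    "drugB": ({1, 2, 3, 9}, {"nausea"}),
    "drugC": ({7, 8, 9, 10}, {"rash"}),
    "drugD": ({6, 7, 8, 10}, {"rash", "dizziness"}),
}

def tanimoto(a, b):
    """Jaccard/Tanimoto similarity between two fingerprint sets."""
    return len(a & b) / len(a | b)

def predict_side_effects(query_fp, k=2, threshold=0.5):
    """Score side-effects by similarity-weighted votes of the k most
    structurally similar training drugs."""
    ranked = sorted(train.values(),
                    key=lambda fs: tanimoto(query_fp, fs[0]),
                    reverse=True)[:k]
    total = sum(tanimoto(query_fp, fp) for fp, _ in ranked) or 1.0
    scores = {}
    for fp, effects in ranked:
        w = tanimoto(query_fp, fp) / total
        for e in effects:
            scores[e] = scores.get(e, 0.0) + w
    return {e for e, s in scores.items() if s >= threshold}

print(predict_side_effects({1, 2, 3, 5}))  # query close to drugA/drugB
```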
NASA Technical Reports Server (NTRS)
Bleck, Rainer; Bao, Jian-Wen; Benjamin, Stanley G.; Brown, John M.; Fiorino, Michael; Henderson, Thomas B.; Lee, Jin-Luen; MacDonald, Alexander E.; Madden, Paul; Middlecoff, Jacques;
2015-01-01
A hydrostatic global weather prediction model based on an icosahedral horizontal grid and a hybrid terrain following/ isentropic vertical coordinate is described. The model is an extension to three spatial dimensions of a previously developed, icosahedral, shallow-water model featuring user-selectable horizontal resolution and employing indirect addressing techniques. The vertical grid is adaptive to maximize the portion of the atmosphere mapped into the isentropic coordinate subdomain. The model, best described as a stacked shallow-water model, is being tested extensively on real-time medium-range forecasts to ready it for possible inclusion in operational multimodel ensembles for medium-range to seasonal prediction.
NASA Technical Reports Server (NTRS)
1996-01-01
Because of their superior high-temperature properties, gas generator turbine airfoils made of single-crystal, nickel-base superalloys are fast becoming standard equipment on today's advanced, high-performance aerospace engines. The increased temperature capabilities of these airfoils have allowed for a significant increase in the operating temperatures of turbine sections, resulting in superior propulsion performance and greater efficiencies. However, the previously developed methodologies for life-prediction models are based on experience with polycrystalline alloys and may not be applicable to single-crystal alloys under certain operating conditions. One of the main areas where behavior differences between single-crystal and polycrystalline alloys are readily apparent is subcritical fatigue crack growth (FCG). The NASA Lewis Research Center's work in this area enables accurate prediction of the subcritical fatigue crack growth behavior in single-crystal, nickel-based superalloys at elevated temperatures.
NASA Astrophysics Data System (ADS)
Ansari, Hamid Reza
2014-09-01
In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because carbonate formations are strongly heterogeneous, estimating their rock properties is more challenging than for sandstone. For this purpose, seismic colored inversion (SCI) and a new approach to committee machines are used in order to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume. Acoustic impedance is an important attribute that is obtained by the SCI method in this study. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems, including: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions into a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved on the results of the neural networks, support vector machine and neuro-fuzzy system.
Postoperative Refraction in the Second Eye Having Cataract Surgery
Leffler, Christopher T.; Wilkes, Martin; Reeves, Juliana; Mahmood, Muneera A.
2011-01-01
Introduction. Previous cataract surgery studies assumed that first-eye predicted and observed postoperative refractions are equally important for predicting second-eye postoperative refraction. Methods. In a retrospective analysis of 173 patients having bilateral sequential phacoemulsification, multivariable linear regression was used to predict the second-eye postoperative refraction based on refractions predicted by the SRK-T formula for both eyes, the first-eye postoperative refraction, and the difference in IOL selected between eyes. Results. The first-eye observed postoperative refraction was an independent predictor of the second-eye postoperative refraction (P < 0.001) and was weighted more heavily than the first-eye predicted refraction. Compared with the SRK-T formula, this model reduced the root-mean-squared (RMS) error of the predicted refraction by 11.3%. Conclusions. The first-eye postoperative refraction is an independent predictor of the second-eye postoperative refraction. The first-eye predicted refraction is less important. These findings may be due to interocular symmetry. PMID:24533181
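The regression idea — predicting the second-eye outcome from both eyes' formula predictions plus the first-eye observed refraction — can be sketched on synthetic data (all values below are invented; note that a fitted model containing the formula prediction as a regressor can only match or reduce the in-sample RMS error):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical refractions (dioptres): formula predictions for each eye,
# the first-eye observed value, and a second-eye outcome built with some
# interocular symmetry (half of the first-eye prediction error carries over).
pred2 = rng.normal(-0.3, 0.5, n)                 # second-eye formula prediction
pred1 = pred2 + rng.normal(0, 0.2, n)            # first-eye formula prediction
obs1 = pred1 + rng.normal(0.1, 0.3, n)           # first-eye observed refraction
obs2 = pred2 + 0.5 * (obs1 - pred1) + rng.normal(0, 0.2, n)

# Multivariable linear regression: intercept + pred2 + pred1 + obs1.
X = np.column_stack([np.ones(n), pred2, pred1, obs1])
beta, *_ = np.linalg.lstsq(X, obs2, rcond=None)

rmse_formula = np.sqrt(np.mean((obs2 - pred2) ** 2))   # formula alone
rmse_model = np.sqrt(np.mean((obs2 - X @ beta) ** 2))  # adjusted model
print(round(rmse_formula, 3), round(rmse_model, 3))
```

With this data-generating model the coefficient on the first-eye observed refraction comes out clearly non-zero, mirroring the paper's finding that it is an independent predictor.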
Using a combined computational-experimental approach to predict antibody-specific B cell epitopes.
Sela-Culang, Inbal; Benhnia, Mohammed Rafii-El-Idrissi; Matho, Michael H; Kaever, Thomas; Maybeno, Matt; Schlossman, Andrew; Nimrod, Guy; Li, Sheng; Xiang, Yan; Zajonc, Dirk; Crotty, Shane; Ofran, Yanay; Peters, Bjoern
2014-04-08
Antibody epitope mapping is crucial for understanding B cell-mediated immunity and required for characterizing therapeutic antibodies. In contrast to T cell epitope mapping, no computational tools are in widespread use for prediction of B cell epitopes. Here, we show that, utilizing the sequence of an antibody, it is possible to identify discontinuous epitopes on its cognate antigen. The predictions are based on residue-pairing preferences and other interface characteristics. We combined these antibody-specific predictions with results of cross-blocking experiments that identify groups of antibodies with overlapping epitopes to improve the predictions. We validate the high performance of this approach by mapping the epitopes of a set of antibodies against the previously uncharacterized D8 antigen, using complementary techniques to reduce method-specific biases (X-ray crystallography, peptide ELISA, deuterium exchange, and site-directed mutagenesis). These results suggest that antibody-specific computational predictions and simple cross-blocking experiments allow for accurate prediction of residues in conformational B cell epitopes. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahmed, Nafees; Anwar, Sirajudheen; Thet Htar, Thet
2017-06-01
The Plasmodium falciparum Lactate Dehydrogenase enzyme (PfLDH) catalyzes inter-conversion of pyruvate to lactate during glycolysis producing the energy required for parasitic growth. The PfLDH has been studied as a potential molecular target for development of anti-malarial agents. In an attempt to find the potent inhibitor of PfLDH, we have used Discovery studio to perform molecular docking in the active binding pocket of PfLDH by CDOCKER, followed by three-dimensional quantitative structure-activity relationship (3D-QSAR) studies of tricyclic guanidine batzelladine compounds, which were previously synthesized in our laboratory. Docking studies showed that there is a very strong correlation between in silico and in vitro results. Based on docking results, a highly predictive 3D-QSAR model was developed with q2 of 0.516. The model has predicted r2 of 0.91 showing that predicted IC50 values are in good agreement with experimental IC50 values. The results obtained from this study revealed the developed model can be used to design new anti-malarial compounds based on tricyclic guanidine derivatives and to predict activities of new inhibitors.
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of the true input correlation models in stochastic modeling after they are deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Protein function prediction using neighbor relativity in protein-protein interaction network.
Moosavi, Sobhan; Rahgozar, Masoud; Rahimi, Amir
2013-04-01
There is a large gap between the number of discovered proteins and the number of functionally annotated ones. Due to the high cost of determining protein function by wet-lab research, function prediction has become a major task for computational biology and bioinformatics. Some researchers utilize protein interaction information to predict functions for unannotated proteins. In this paper, we propose a novel approach called "Neighbor Relativity Coefficient" (NRC), based on interaction network topology, which estimates the functional similarity between two proteins. NRC is calculated for each pair of proteins based on their graph-based features, including distance, common neighbors and the number of paths between them. In order to ascribe functions to an unannotated protein, NRC estimates a weight for each neighbor to transfer its annotations to the unknown protein. Finally, the unknown protein is annotated with the top-scoring transferred functions. We also investigate the effect of using different coefficients for various types of functions. The proposed method has been evaluated on Saccharomyces cerevisiae and Homo sapiens interaction networks. The performance analysis demonstrates that NRC yields better results in comparison with previous protein function prediction approaches that utilize the interaction network. Copyright © 2012 Elsevier Ltd. All rights reserved.
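The core idea — weighting each neighbor by graph-topological similarity and transferring its annotations — can be sketched on a toy network. The NRC score in the paper combines distance, common neighbors, and path counts; this sketch keeps only a common-neighbor term, and all protein names and annotations are invented:

```python
# Toy interaction network as an adjacency dict; "U" is unannotated.
graph = {
    "P1": {"P2", "P3", "U"},
    "P2": {"P1", "P3", "U"},
    "P3": {"P1", "P2"},
    "P4": {"U"},
    "U":  {"P1", "P2", "P4"},
}
annotations = {"P1": {"kinase"}, "P2": {"kinase"}, "P4": {"transport"}}

def neighbour_weight(u, v):
    """Weight from shared neighbourhood (Jaccard of neighbour sets) --
    a stand-in for the full NRC score."""
    nu, nv = graph[u], graph[v]
    return len(nu & nv) / len(nu | nv)

def predict_functions(u, top=1):
    """Transfer each neighbour's annotations, weighted by topology,
    and return the top-scoring functions."""
    scores = {}
    for v in graph[u]:
        w = neighbour_weight(u, v)
        for func in annotations.get(v, ()):
            scores[func] = scores.get(func, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:top]

print(predict_functions("U"))
```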
Tanner, Nichole T; Porter, Alexander; Gould, Michael K; Li, Xiao-Jun; Vachani, Anil; Silvestri, Gerard A
2017-08-01
The annual incidence of pulmonary nodules is estimated at 1.57 million. Guidelines recommend an initial assessment of the nodule's probability of malignancy (pCA). A previous study found that despite this recommendation, physicians did not follow guidelines. Physician assessments and two previously validated risk-model assessments of pretest probability of cancer were evaluated and compared, based on final diagnosis, in 337 patients with pulmonary nodules. Physician-assessed pCA was categorized into low, intermediate, and high risk, and the next test ordered was evaluated. The prevalence of malignancy was 47% (n = 158) at 1 year. Physician-assessed pCA performed better than the nodule prediction calculators (area under the curve, 0.85 vs 0.75 [P < .001] and 0.78 [P = .0001]). Physicians did not follow indicated guidelines when selecting the next test in 61% of cases (n = 205). Despite recommendations for serial CT imaging in those with low pCA, 52% (n = 13) were managed more aggressively with PET imaging or biopsy; 12% (n = 3) underwent biopsy procedures for benign disease. Alternatively, in the high-risk category, the majority (n = 103 [75%]) were managed more conservatively. Stratified by diagnosis, 92% (n = 22) with benign disease underwent more conservative management with CT imaging (20%), PET scanning (15%), or biopsy (8%), although three had surgery (8%). Physician assessment as a means for predicting malignancy in pulmonary nodules is more accurate than previously validated nodule prediction calculators. Despite the accuracy of clinical intuition, physicians did not follow guideline-based recommendations when selecting the next diagnostic test. To provide optimal patient care, focus on guideline refinement, implementation, and dissemination is needed. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Adelstein, Bernard D.; Yeom, Kiwon
2013-01-01
The Misalignment Effect Function (MEF) describes the decrement in manual performance associated with a rotation between operators' visual display frame of reference and that of their manual control. It has now been empirically determined for rotation axes oblique to canonical body axes and is compared with the MEF previously measured for rotations about canonical axes. A targeting rule, called the Secant Rule, based on these earlier measurements is derived from a hypothetical process and shown to describe some of the data from three previous experiments. It explains the motion trajectories determined for rotations less than 65° in purely kinematic terms, without the need to appeal to a mental rotation process. Further analysis of this rule in three dimensions, applied to oblique rotation axes, leads to the somewhat surprising expectation that the difficulty posed by rotational misalignment should increase as the required movement gets shorter. This prediction is confirmed. The geometry underlying this rule also suggests analytic extensions for predicting more generally the difficulty of making movements in arbitrary directions subject to arbitrary misalignments.
FIT: statistical modeling tool for transcriptome dynamics under fluctuating field conditions
Iwayama, Koji; Aisaka, Yuri; Kutsuna, Natsumaro
2017-01-01
Abstract Motivation: Considerable attention has been given to the quantification of environmental effects on organisms. In natural conditions, environmental factors are continuously changing in a complex manner. To reveal the effects of such environmental variations on organisms, transcriptome data in field environments have been collected and analyzed. Nagano et al. proposed a model that describes the relationship between transcriptomic variation and environmental conditions and demonstrated the capability to predict transcriptome variation in rice plants. However, the computational cost of parameter optimization has prevented its wide application. Results: We propose a new statistical model and efficient parameter optimization based on the previous study. We developed and released FIT, an R package that offers functions for parameter optimization and transcriptome prediction. The proposed method achieves comparable or better prediction performance within a shorter computational time than the previous method. The package will facilitate the study of the environmental effects on transcriptomic variation in field conditions. Availability and Implementation: Freely available from CRAN (https://cran.r-project.org/web/packages/FIT/). Contact: anagano@agr.ryukoku.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online PMID:28158396
Internet-based system for simulation-based medical planning for cardiovascular disease.
Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A
2003-06-01
Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.
Strong ion calculator--a practical bedside application of modern quantitative acid-base physiology.
Lloyd, P
2004-12-01
To review acid-base balance by considering the physical effects of ions in solution, and to describe the use of a calculator to derive the strong ion difference, Atot, and the strong ion gap. A review of articles reporting on the use of the strong ion difference and Atot in the interpretation of acid-base balance. Tremendous progress has been made in the last decade in our understanding of acid-base physiology. We now have a quantitative understanding of the mechanisms underlying the acidity of an aqueous solution. We can now predict the acidity given information about the concentration of the various ion-forming species within it. We can predict changes in acid-base status caused by disturbance of these factors, and finally, we can detect unmeasured anions with greater sensitivity than was previously possible with the anion gap, using either arterial or venous blood sampling. Acid-base interpretation has ceased to be an intuitive and arcane art. Much of it is now an exact computation that can be automated and incorporated into an online hospital laboratory information system. All diseases and all therapies can affect a patient's acid-base status only through the final common pathway of one or more of the three independent factors. With Constable's equations we can now accurately predict the acidity of plasma. When there is a discrepancy between the observed and predicted acidity, we can deduce the net concentration of unmeasured ions required to account for the difference.
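A minimal version of such a calculator can be sketched as follows, using the common apparent-SID definition and Figge-style approximations for the weak-acid (Atot) anion charge; the constants and example concentrations are illustrative, not taken from the article:

```python
def apparent_sid(na, k, ca, mg, cl, lactate):
    """Apparent strong ion difference (all concentrations in mEq/L):
    measured strong cations minus measured strong anions."""
    return (na + k + ca + mg) - (cl + lactate)

def effective_sid(ph, hco3, albumin_g_l, phosphate_mmol_l):
    """Effective SID = bicarbonate plus the weak-acid anion charge (the
    Atot contribution), using Figge-style approximations for the charge
    carried by albumin and phosphate at a given pH."""
    a_minus = (albumin_g_l * (0.123 * ph - 0.631)
               + phosphate_mmol_l * (0.309 * ph - 0.469))
    return hco3 + a_minus

def strong_ion_gap(sid_a, sid_e):
    """SIG = apparent - effective SID; a clearly positive gap suggests
    unmeasured anions."""
    return sid_a - sid_e

# Illustrative plasma values for a normal adult.
sid_a = apparent_sid(na=140, k=4, ca=2.5, mg=1.0, cl=105, lactate=1.0)
sid_e = effective_sid(ph=7.40, hco3=24, albumin_g_l=40, phosphate_mmol_l=1.2)
print(round(sid_a, 1), round(sid_e, 1), round(strong_ion_gap(sid_a, sid_e), 1))
```

Such a calculation is exactly the kind of computation the author suggests automating within a hospital laboratory information system.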
Varga, Peter; Grünwald, Leonard; Windolf, Markus
2018-02-22
Fixation of osteoporotic proximal humerus fractures has remained challenging, but may be improved by careful pre-operative planning. The aim of this study was to investigate how well the failure of locking plate fixation of osteoporotic proximal humerus fractures can be predicted by bone density measures assessed with currently available clinical imaging (realistic case) and with a higher resolution and quality modality (theoretical best case). Various density measures were correlated with the experimentally assessed number of cycles to construct failure of plated unstable low-density proximal humerus fractures (N = 18). The influence of the density evaluation technique was investigated by comparing local (peri-implant) versus global evaluation regions; HR-pQCT-based versus clinical QCT-based image data; ipsilateral versus contralateral side; and bone mineral content (BMC) versus bone mineral density (BMD). All investigated density measures were significantly correlated with the experimental cycles to failure. The best performing clinically feasible parameter was the QCT-based BMC of the contralateral articular cap region, providing significantly better correlation (R² = 0.53) compared to a previously proposed clinical density measure (R² = 0.30). BMC had consistently, but not significantly, stronger correlations with failure than BMD. The overall best results were obtained with the ipsilateral HR-pQCT-based local BMC (R² = 0.74), which may be used for implant optimization. Strong correlations were found between the corresponding density measures of the two CT image sources, as well as between the two sides. Future studies should investigate whether BMC of the contralateral articular cap region could provide improved prediction of clinical fixation failure compared to previously proposed measures. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.
NASA Technical Reports Server (NTRS)
Nesbitt, James A.
2001-01-01
A finite-difference computer program (COSIM) has been written which models the one-dimensional, diffusional transport associated with high-temperature oxidation and interdiffusion of overlay-coated substrates. The program predicts concentration profiles for up to three elements in the coating and substrate after various oxidation exposures. Surface recession due to solute loss is also predicted. Ternary cross terms and concentration-dependent diffusion coefficients are taken into account. The program also incorporates a previously-developed oxide growth and spalling model to simulate either isothermal or cyclic oxidation exposures. In addition to predicting concentration profiles after various oxidation exposures, the program can also be used to predict coating life based on a concentration dependent failure criterion (e.g., surface solute content drops to 2%). The computer code is written in FORTRAN and employs numerous subroutines to make the program flexible and easily modifiable to other coating oxidation problems.
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
MoFvAb: Modeling the Fv region of antibodies
Bujotzek, Alexander; Fuchs, Angelika; Qu, Changtao; Benz, Jörg; Klostermann, Stefan; Antes, Iris; Georges, Guy
2015-01-01
Knowledge of the 3-dimensional structure of the antigen-binding region of antibodies enables numerous useful applications regarding the design and development of antibody-based drugs. We present a knowledge-based antibody structure prediction methodology that incorporates concepts that have arisen from an applied antibody engineering environment. The protocol exploits the rich and continuously growing supply of experimentally derived antibody structures available to predict CDR loop conformations and the packing of heavy and light chain quickly and without user intervention. The homology models are refined by a novel antibody-specific approach to adapt and rearrange sidechains based on their chemical environment. The method achieves very competitive all-atom root mean square deviation values in the order of 1.5 Å on different evaluation datasets consisting of both known and previously unpublished antibody crystal structures. PMID:26176812
Recognition of predictors for mid-long term runoff prediction based on lasso
NASA Astrophysics Data System (ADS)
Xie, S.; Huang, Y.
2017-12-01
Reliable and accurate mid- to long-term runoff prediction is of great importance in the integrated management of reservoirs, and many methods have been proposed to model runoff time series. Almost all of these models use a forecast lead time (LT) of 1 month, with previous runoff at different time lags as predictors. However, runoff prediction with longer lead times, which would be more useful, is less common in current research, because the connection between previous and current runoff weakens as LT increases. Therefore, 74 atmospheric circulation factors (ACFs) together with pre-runoff are used as candidate predictors for mid- to long-term runoff prediction of the Longyangxia reservoir in this study. Because pre-runoff and the 74 ACFs at different time lags yield a large number of candidate factors, most of which are uninformative, lasso (least absolute shrinkage and selection operator) is used to select predictors. The results demonstrate that the 74 ACFs are beneficial for runoff prediction in both the validation and test sets when LT is greater than 6. Six factors other than pre-runoff, most of them with large time lags, are frequently selected as predictors. To verify the effect of the 74 ACFs, 74 stochastic time series generated from the normalized ACFs were used as model input; these stochastic series proved useless, which confirms the contribution of the 74 ACFs to mid- to long-term runoff prediction.
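Lasso's role here — shrinking most candidate-predictor coefficients exactly to zero so that only informative factors survive — can be sketched with a small cyclic coordinate-descent implementation on synthetic data (the data-generating model and the penalty value are invented for illustration):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent:
    minimises (1/2n)*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # rescales non-unit-norm columns
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # residual with column j removed
            rho = X[:, j] @ r / n
            # Soft-thresholding drives weak coefficients exactly to zero.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))                    # 10 candidate predictors
true_b = np.zeros(p)
true_b[[0, 3]] = [2.0, -1.5]                   # only two are informative
y = X @ true_b + rng.normal(0, 0.5, n)

b_hat = lasso_cd(X, y, lam=0.2)
selected = np.flatnonzero(np.abs(b_hat) > 1e-6)
print(selected)
```

In the study's setting the columns of X would be the lagged pre-runoff and ACF series, and the non-zero coefficients identify the recognized predictors.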
NASA Astrophysics Data System (ADS)
Al-Ghraibah, Amani
Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on Earth, including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advanced warning of solar flare occurrence is an area of increasing concern, and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work. We find a true positive rate (TPR) of 0.8, true negative rate (TNR) of 0.7, and true skill score (TSS) of 0.49. This dissertation proposes three topics. The first topic is an extension of our previous work [111], where we consider a feature selection method to determine an appropriate feature subset, with cross-validated classification based on a histogram analysis of selected features. Classification using the top five features resulting from this analysis yields better classification accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare: we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class.
When we additionally consider non-flaring regions, we find an increased average error of approximately three-quarters of a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a TPR of 0.69 and a TNR of 0.86 for flare prediction, consistent with our previous studies of flare prediction using the same magnetic complexity features. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This conjecture is supported by our larger error rates, of some 40 hours, in the time-to-flare regression problem. The magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem. We also study the prediction of flare size and time-to-flare using two temporal features, namely the Δ- and Δ-Δ-features; the same average size and time-to-flare regression errors are found when these temporal features are used. In the third topic, we study the temporal evolution of active region magnetic fields using Hidden Markov Models (HMMs), one of the most efficient temporal analysis methods in the literature. We extract the same 38 features describing the complexity of the photospheric magnetic field and convert them into a sequence of symbols using a k-nearest-neighbor search. Before prediction we study several parameters, such as the length of the training window Wtrain, which denotes the number of history images used to train the flare and non-flare HMMs, and the number of hidden states Q. In the training phase, the model parameters of the HMM for each category are optimized so as to best describe the training symbol sequences.
In the testing phase, we use the best flare and non-flare models to predict/classify active regions as flaring or non-flaring using a sliding-window method. The best prediction result is found when the history training window is 15 images (i.e., Wtrain = 15) and the length of the sliding testing window is less than or equal to Wtrain; this gives a TPR of 0.79, consistent with previous flare prediction work, a TNR of 0.87, and a TSS of 0.66, both higher than our previous flare prediction work. We find that the number of hidden states that best describes the temporal evolution of solar active regions is five, while similar metrics are obtained with different numbers of states.
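The skill metrics quoted throughout this abstract are simple functions of the confusion matrix; a minimal sketch, with toy counts chosen to reproduce the HMM result, is:

```python
def skill_scores(tp, fn, tn, fp):
    """Return (TPR, TNR, TSS) from confusion-matrix counts."""
    tpr = tp / (tp + fn)   # sensitivity: flaring regions correctly predicted
    tnr = tn / (tn + fp)   # specificity: quiet regions correctly predicted
    tss = tpr + tnr - 1.0  # true skill statistic, insensitive to class imbalance
    return tpr, tnr, tss

# Toy counts consistent with the HMM result (TPR 0.79, TNR 0.87, TSS 0.66).
tpr, tnr, tss = skill_scores(tp=79, fn=21, tn=87, fp=13)
print(round(tpr, 2), round(tnr, 2), round(tss, 2))
```

Because TSS = TPR + TNR − 1, it rewards a classifier only when it beats chance on both classes, which is why it is preferred over raw accuracy for the heavily unbalanced flare datasets described above.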
Pekkala, Timo; Hall, Anette; Lötjönen, Jyrki; Mattila, Jussi; Soininen, Hilkka; Ngandu, Tiia; Laatikainen, Tiina; Kivipelto, Miia; Solomon, Alina
2016-01-01
Background and objective: This study aimed to develop a late-life dementia prediction model using a novel validated supervised machine learning method, the Disease State Index (DSI), in the Finnish population-based CAIDE study. Methods: The CAIDE study was based on previous population-based midlife surveys. CAIDE participants were re-examined twice in late-life, and the first late-life re-examination was used as baseline for the present study. The main study population included 709 cognitively normal subjects at first re-examination who returned to the second re-examination up to 10 years later (incident dementia n = 39). An extended population (n = 1009, incident dementia n = 151) included non-participants/non-survivors (national registers data). DSI was used to develop a dementia index based on first re-examination assessments. Performance in predicting dementia was assessed as area under the ROC curve (AUC). Results: AUCs for DSI were 0.79 and 0.75 for the main and extended populations, respectively. Included predictors were cognition, vascular factors, age, subjective memory complaints, and APOE genotype. Conclusion: The supervised machine learning method performed well in identifying comprehensive profiles for predicting dementia development up to 10 years later. DSI could thus be useful for identifying individuals who are most at risk and may benefit from dementia prevention interventions. PMID:27802228
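The AUC evaluation used for the DSI can be illustrated with scikit-learn. The score distributions below are synthetic stand-ins (not CAIDE data); only the group sizes mirror the abstract's main population of 709 subjects with 39 incident cases:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical risk-index scores: higher on average for the incident-dementia group.
scores_healthy = rng.normal(loc=0.3, scale=0.15, size=670)
scores_dementia = rng.normal(loc=0.6, scale=0.15, size=39)

y_true = np.concatenate([np.zeros(670), np.ones(39)])
y_score = np.concatenate([scores_healthy, scores_dementia])

# AUC summarizes how well the index ranks cases above non-cases,
# independent of any single decision threshold.
auc = roc_auc_score(y_true, y_score)
print(round(auc, 2))
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the reported 0.75 to 0.79 indicates moderately good discrimination despite the strong class imbalance.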
Motion-based prediction is sufficient to solve the aperture problem
Perrinet, Laurent U; Masson, Guillaume S
2012-01-01
In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away while coherent information diffuses progressively to the global scale. Most previous models included ad-hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have proved that motion-based predictive coding, as it is implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights in the role of prediction underlying a large class of sensory computations. PMID:22734489
Application of Support Vector Machine to Forex Monitoring
NASA Astrophysics Data System (ADS)
Kamruzzaman, Joarder; Sarker, Ruhul A.
Previous studies have demonstrated superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from the historical data using six simple technical indicators and presents a comparison with an ANN based model trained by scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results of six different currencies against Australian dollar reveal superior performance of SVM model using simple linear kernel over ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
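A minimal sketch of the SVM forecasting setup, assuming the linear kernel the paper reports as best. The toy random-walk series and lagged-window inputs below stand in for the real exchange rates and the six technical indicators, and the metrics shown are two of the usual kinds (closeness of prediction and correctness of directional change):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Toy exchange-rate series; real inputs would be six technical indicators.
rate = np.cumsum(rng.normal(scale=0.01, size=300)) + 1.0
window = 5
X = np.array([rate[i:i + window] for i in range(len(rate) - window)])
y = rate[window:]

split = 250
model = SVR(kernel="linear", C=10.0, epsilon=0.001).fit(X[:split], y[:split])
pred = model.predict(X[split:])

# Closeness of prediction (MAE) and fraction of correct directional calls.
mae = np.mean(np.abs(pred - y[split:]))
direction_hits = np.mean(
    np.sign(pred - X[split:, -1]) == np.sign(y[split:] - X[split:, -1])
)
print(mae, direction_hits)
```

The comparison in the paper runs this kind of model and an ANN trained with scaled conjugate gradient over the same inputs, then scores both on five such metrics.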
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, A. J.; Wei, Y. G.
2006-07-24
Fivefold deformation twins were recently observed experimentally in nanocrystalline face-centered-cubic metals and alloys. However, they had not been predicted by previous molecular dynamics (MD) simulations, and the reason was thought to be the uniaxial tension considered in those simulations. In the present investigation, by introducing pre-twins in the grain regions, the authors use MD simulations to predict fivefold deformation twins in the grains of a nanocrystalline cell undergoing uniaxial tension. Their simulation results show that series of Shockley partial dislocations emitted from grain boundaries provide a sequential twinning mechanism that produces the fivefold deformation twins.
Improved Bond Equations for Fiber-Reinforced Polymer Bars in Concrete.
Pour, Sadaf Moallemi; Alam, M Shahria; Milani, Abbas S
2016-08-30
This paper explores a set of new equations to predict the bond strength between fiber-reinforced polymer (FRP) rebar and concrete. The proposed equations are based on a comprehensive statistical analysis of existing experimental results in the literature. The parameters most influential on the bond behavior of FRP-reinforced concrete were first identified by applying a factorial analysis to part of the available database. The database, which contains 250 pullout tests, was then divided into four groups based on concrete compressive strength and rebar surface. Afterward, nonlinear regression analysis was performed for each group to determine the bond equations. The results show that the proposed equations predict bond strengths more accurately than other previously reported models.
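Nonlinear regression of a bond-strength equation against pullout-test data can be sketched with SciPy. The functional form, the variables, and the synthetic data below are illustrative assumptions, not the paper's fitted equations:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Hypothetical bond-strength form: tau = a * sqrt(fc) * (cover/diameter)**b.
def bond_model(X, a, b):
    fc, c_over_d = X
    return a * np.sqrt(fc) * c_over_d ** b

# Synthetic "pullout test" data standing in for one of the four study groups.
fc = rng.uniform(25, 60, size=80)          # concrete compressive strength, MPa
c_over_d = rng.uniform(1.5, 4.0, size=80)  # cover-to-diameter ratio
tau = bond_model((fc, c_over_d), 0.8, 0.5) + rng.normal(scale=0.2, size=80)

# Nonlinear least squares recovers the equation's coefficients from the tests.
params, _ = curve_fit(bond_model, (fc, c_over_d), tau, p0=(1.0, 1.0))
print(params)
```

Fitting each of the four groups separately, as the paper does, lets the coefficients absorb the different bond mechanics of each concrete-strength/rebar-surface combination.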
Acoustic environmental accuracy requirements for response determination
NASA Technical Reports Server (NTRS)
Pettitt, M. R.
1983-01-01
A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.
Rational design of new electrolyte materials for electrochemical double layer capacitors
NASA Astrophysics Data System (ADS)
Schütter, Christoph; Husch, Tamara; Viswanathan, Venkatasubramanian; Passerini, Stefano; Balducci, Andrea; Korth, Martin
2016-09-01
The development of new electrolytes is a centerpiece of many strategies to improve electrochemical double layer capacitor (EDLC) devices. We present here a computational screening-based rational design approach to find new electrolyte materials. As an example application, the known chemical space of almost 70 million compounds is investigated in search of electrochemically more stable solvents. Cyano esters are identified as an especially promising new compound class. Theoretical predictions are validated with subsequent experimental studies on a selected case. These studies show that, based on theoretical predictions alone, a previously untested but very well performing compound class was identified. We thus find that our rational design strategy is indeed able to successfully identify completely new materials with substantially improved properties.
Structured prediction models for RNN based sequence labeling in clinical text.
Jagannatha, Abhyuday N; Yu, Hong
2016-11-01
Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models with recurrent neural networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.
Chowdhury, Shomeek; Zhang, Jian; Kurgan, Lukasz
2018-05-28
Deciphering a complete landscape of protein-RNA interactions in the human proteome remains an elusive challenge. We computationally elucidate RNA binding proteins (RBPs) using an approach that complements previous efforts. We employ two modern complementary sequence-based methods that provide accurate predictions from the structured and the intrinsically disordered sequences, even in the absence of sequence similarity to the known RBPs. We generate and analyze putative RNA binding residues on the whole-proteome scale. Using a conservative setting that ensures a low, 5% false positive rate, we identify 1511 putative RBPs that include 281 known RBPs and 166 RBPs that were previously predicted. We empirically demonstrate that these overlaps are statistically significant. We also validate the putative RBPs based on two major hallmarks of their RNA binding residues: high levels of evolutionary conservation and enrichment in charged amino acids. Moreover, we show that the novel RBPs are significantly under-annotated functionally, which coincides with the fact that they were not yet found to interact with RNAs. We provide two examples of our novel putative RBPs for which there is recent evidence of their interactions with RNAs. The dataset of novel putative RBPs and RNA binding residues for future hypothesis generation is provided in the Supporting Information. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
NASA Astrophysics Data System (ADS)
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
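The revised Monod rate law described above couples a kinetic saturation term with a thermodynamic factor that shuts the rate off near equilibrium. This sketch uses one common functional form and illustrative constants, not the paper's calibrated model for Methanosarcina barkeri:

```python
import math

def revised_monod(k_max, S, K_S, dG_rxn, dG_min=-20.0, chi=2, T=298.15):
    """Respiration rate = Monod kinetic term * thermodynamic factor F_T.

    dG_rxn: free energy of the catabolic reaction (kJ/mol, negative = favorable);
    dG_min: minimum energy the cell must conserve. The form of F_T and all
    constants here are illustrative assumptions, not the paper's values.
    """
    R = 8.314e-3  # gas constant, kJ/(mol K)
    kinetic = S / (K_S + S)                                   # Monod saturation
    F_T = 1.0 - math.exp((dG_rxn - dG_min) / (chi * R * T))   # thermodynamic drive
    return k_max * kinetic * max(0.0, F_T)

# Far from equilibrium the factor is ~1 and Monod kinetics dominate;
# near the energetic threshold the rate vanishes even at high substrate.
fast = revised_monod(k_max=1.0, S=5.0, K_S=1.0, dG_rxn=-80.0)
slow = revised_monod(k_max=1.0, S=5.0, K_S=1.0, dG_rxn=-21.0)
print(fast, slow)
```

It is this thermodynamic cutoff, absent from plain enzyme kinetics, that the abstract credits for constraining acetate uptake well enough for FBA to reproduce the experiments.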
Life history and spatial traits predict extinction risk due to climate change
NASA Astrophysics Data System (ADS)
Pearson, Richard G.; Stanton, Jessica C.; Shoemaker, Kevin T.; Aiello-Lammens, Matthew E.; Ersts, Peter J.; Horning, Ned; Fordham, Damien A.; Raxworthy, Christopher J.; Ryu, Hae Yeong; McNees, Jason; Akçakaya, H. Reşit
2014-03-01
There is an urgent need to develop effective vulnerability assessments for evaluating the conservation status of species in a changing climate. Several new assessment approaches have been proposed for evaluating the vulnerability of species to climate change based on the expectation that established assessments such as the IUCN Red List need revising or superseding in light of the threat that climate change brings. However, although previous studies have identified ecological and life history attributes that characterize declining species or those listed as threatened, no study so far has undertaken a quantitative analysis of the attributes that cause species to be at high risk of extinction specifically due to climate change. We developed a simulation approach based on generic life history types to show here that extinction risk due to climate change can be predicted using a mixture of spatial and demographic variables that can be measured in the present day without the need for complex forecasting models. Most of the variables we found to be important for predicting extinction risk, including occupied area and population size, are already used in species conservation assessments, indicating that present systems may be better able to identify species vulnerable to climate change than previously thought. Therefore, although climate change brings many new conservation challenges, we find that it may not be fundamentally different from other threats in terms of assessing extinction risks.
NASA Astrophysics Data System (ADS)
Xu, Shiluo; Niu, Ruiqing
2018-02-01
Every year, landslides pose huge threats to thousands of people in China, especially in the Three Gorges area. It is thus necessary to establish an early warning system to help prevent property damage and save people's lives. Most of the landslide displacement prediction models that have been proposed are static, but landslides are dynamic systems. In this paper, the total accumulative displacement of the Baijiabao landslide is divided into trend and periodic components using empirical mode decomposition. The trend component is predicted using an S-curve estimation, and the periodic component is predicted using a long short-term memory neural network (LSTM). LSTM is a dynamic model that can remember historical information and apply it to the current output. Six triggering factors are chosen to predict the periodic term using the Pearson cross-correlation coefficient and mutual information: the cumulative precipitation during the previous month, the cumulative precipitation over a two-month period, the reservoir level during the current month, the change in reservoir level during the previous month, the cumulative increment of the reservoir level during the current month, and the cumulative displacement during the previous month. For one-step-ahead prediction, LSTM yields a root mean squared error (RMSE) of 6.112 mm, while the support vector machine for regression (SVR) and the back-propagation neural network (BP) yield 10.686 mm and 8.237 mm, respectively, and the Elman network yields 6.579 mm. For multi-step-ahead prediction, LSTM obtains an RMSE of 8.648 mm, while SVR, BP, and the Elman network obtain 13.418 mm, 13.014 mm, and 13.370 mm. These results indicate that, to some extent, the dynamic model (LSTM) achieves more accurate results than the static models (SVR and BP).
LSTM even displays better performance than the Elman network, which is also a dynamic method.
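The decompose-then-predict pipeline above splits the displacement series into trend and periodic parts before modeling each separately. This sketch substitutes a centred moving average for the EMD trend extraction (purely for illustration) and shows the RMSE metric used to compare the models; all series are synthetic:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean squared error, the comparison metric used above."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Toy displacement series: a slow trend plus a seasonal (periodic) part.
t = np.arange(60)
trend = 0.5 * t ** 1.2
periodic = 5.0 * np.sin(2 * np.pi * t / 12)
displacement = trend + periodic

# A 12-sample centred moving average stands in for EMD trend extraction;
# the residual is the periodic component that would be handed to the LSTM.
kernel = np.ones(12) / 12
est_trend = np.convolve(displacement, kernel, mode="same")
est_periodic = displacement - est_trend

print(rmse(est_trend[6:-6], trend[6:-6]))
```

Predicting each component with a model suited to it (S-curve for the smooth trend, a sequence model for the periodic residual) is what lets the combined forecast beat a single static regressor.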
Echigoya, Yusuke; Mouly, Vincent; Garcia, Luis; Yokota, Toshifumi; Duddy, William
2015-01-01
The use of antisense 'splice-switching' oligonucleotides to induce exon skipping represents a potential therapeutic approach to various human genetic diseases. It has achieved greatest maturity in exon skipping of the dystrophin transcript in Duchenne muscular dystrophy (DMD), for which several clinical trials are completed or ongoing, and a large body of data exists describing tested oligonucleotides and their efficacy. The rational design of an exon skipping oligonucleotide involves the choice of an antisense sequence, usually between 15 and 32 nucleotides, targeting the exon that is to be skipped. Although parameters describing the target site can be computationally estimated and several have been identified to correlate with efficacy, methods to predict efficacy are limited. Here, an in silico pre-screening approach is proposed, based on predictive statistical modelling. Previous DMD data were compiled and, for each oligonucleotide, some 60 descriptors were considered. Statistical modelling approaches were applied to derive algorithms that predict exon skipping for a given target site. We confirmed (1) the binding energetics of the oligonucleotide to the RNA, and (2) the distance in bases of the target site from the splice acceptor site, as the two most predictive parameters, and we included these and several other parameters (while discounting many) in an in silico screening process, based on their capacity to predict high or low efficacy in phosphorodiamidate morpholino oligomers (PMOs; 89% correctly predicted) and/or 2′-O-methyl RNA oligonucleotides (76% correctly predicted). Predictions correlated strongly with in vitro testing for sixteen de novo PMO sequences targeting various positions on DMD exons 44 (R2 0.89) and 53 (R2 0.89), one of which represents a potential novel candidate for clinical trials.
We provide these algorithms together with a computational tool that facilitates screening to predict exon skipping efficacy at each position of a target exon. PMID:25816009
Oduru, Sreedhar; Campbell, Janee L; Karri, SriTulasi; Hendry, William J; Khan, Shafiq A; Williams, Simon C
2003-01-01
Background Complete genome annotation will likely be achieved through a combination of computer-based analysis of available genome sequences combined with direct experimental characterization of expressed regions of individual genomes. We have utilized a comparative genomics approach involving the sequencing of randomly selected hamster testis cDNAs to begin to identify genes not previously annotated on the human, mouse, rat and Fugu (pufferfish) genomes. Results 735 distinct sequences were analyzed for their relatedness to known sequences in public databases. Eight of these sequences were derived from previously unidentified genes and expression of these genes in testis was confirmed by Northern blotting. The genomic locations of each sequence were mapped in human, mouse, rat and pufferfish, where applicable, and the structure of their cognate genes was derived using computer-based predictions, genomic comparisons and analysis of uncharacterized cDNA sequences from human and macaque. Conclusion The use of a comparative genomics approach resulted in the identification of eight cDNAs that correspond to previously uncharacterized genes in the human genome. The proteins encoded by these genes included a new member of the kinesin superfamily, a SET/MYND-domain protein, and six proteins for which no specific function could be predicted. Each gene was expressed primarily in testis, suggesting that they may play roles in the development and/or function of testicular cells. PMID:12783626
Davie-Martin, Cleo L; Hageman, Kimberly J; Chin, Yu-Ping; Rougé, Valentin; Fujita, Yuki
2015-09-01
Soil-air partition coefficient (Ksoil-air) values are often employed to investigate the fate of organic contaminants in soils; however, these values have not been measured for many compounds of interest, including semivolatile current-use pesticides. Moreover, predictive equations for estimating Ksoil-air values for pesticides (other than the organochlorine pesticides) have not been robustly developed, due to a lack of measured data. In this work, a solid-phase fugacity meter was used to measure the Ksoil-air values of 22 semivolatile current- and historic-use pesticides and their degradation products. Ksoil-air values were determined for two soils (semiarid and volcanic) under a range of environmentally relevant temperature (10-30 °C) and relative humidity (30-100%) conditions, such that 943 Ksoil-air measurements were made. Measured values were used to derive a predictive equation for pesticide Ksoil-air values based on temperature, relative humidity, soil organic carbon content, and pesticide-specific octanol-air partition coefficients. Pesticide volatilization losses from soil, calculated with the newly derived Ksoil-air predictive equation and a previously described pesticide volatilization model, were compared to previous results and showed that the choice of Ksoil-air predictive equation mainly affected the more-volatile pesticides and that the way in which relative humidity was accounted for was the most critical difference.
Kong, Ru; Li, Jingwei; Orban, Csaba; Sabuncu, Mert R; Liu, Hesheng; Schaefer, Alexander; Sun, Nanbo; Zuo, Xi-Nian; Holmes, Avram J; Eickhoff, Simon B; Yeo, B T Thomas
2018-06-06
Resting-state functional magnetic resonance imaging (rs-fMRI) offers the opportunity to delineate individual-specific brain networks. A major question is whether individual-specific network topography (i.e., location and spatial arrangement) is behaviorally relevant. Here, we propose a multi-session hierarchical Bayesian model (MS-HBM) for estimating individual-specific cortical networks and investigate whether individual-specific network topography can predict human behavior. The multiple layers of the MS-HBM explicitly differentiate intra-subject (within-subject) from inter-subject (between-subject) network variability. By ignoring intra-subject variability, previous network mappings might confuse intra-subject variability for inter-subject differences. Compared with other approaches, MS-HBM parcellations generalized better to new rs-fMRI and task-fMRI data from the same subjects. More specifically, MS-HBM parcellations estimated from a single rs-fMRI session (10 min) showed comparable generalizability as parcellations estimated by 2 state-of-the-art methods using 5 sessions (50 min). We also showed that behavioral phenotypes across cognition, personality, and emotion could be predicted by individual-specific network topography with modest accuracy, comparable to previous reports predicting phenotypes based on connectivity strength. Network topography estimated by MS-HBM was more effective for behavioral prediction than network size, as well as network topography estimated by other parcellation approaches. Thus, similar to connectivity strength, individual-specific network topography might also serve as a fingerprint of human behavior.
Freeth, Tony
2014-01-01
The ancient Greek astronomical calculating machine, known as the Antikythera Mechanism, predicted eclipses, based on the 223-lunar month Saros cycle. Eclipses are indicated on a four-turn spiral Saros Dial by glyphs, which describe type and time of eclipse and include alphabetical index letters, referring to solar eclipse inscriptions. These include Index Letter Groups, describing shared eclipse characteristics. The grouping and ordering of the index letters, the organization of the inscriptions and the eclipse times have previously been unsolved. A new reading and interpretation of data from the back plate of the Antikythera Mechanism, including the glyphs, the index letters and the eclipse inscriptions, has resulted in substantial changes to previously published work. Based on these new readings, two arithmetical models are presented here that explain the complete eclipse prediction scheme. The first model solves the glyph distribution, the grouping and anomalous ordering of the index letters and the structure of the inscriptions. It also implies the existence of lost lunar eclipse inscriptions. The second model closely matches the glyph times and explains the four-turn spiral of the Saros Dial. Together, these models imply a surprisingly early epoch for the Antikythera Mechanism. The ancient Greeks built a machine that can predict, for many years ahead, not only eclipses but also a remarkable array of their characteristics, such as directions of obscuration, magnitude, colour, angular diameter of the Moon, relationship with the Moon's node and eclipse time. It was not entirely accurate, but it was an astonishing achievement for its era.
The Role of Prediction In Perception: Evidence From Interrupted Visual Search
Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro
2014-01-01
Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440
Early-warning signals of critical transition: Effect of extrinsic noise
NASA Astrophysics Data System (ADS)
Qin, Shanshan; Tang, Chao
2018-03-01
Complex dynamical systems often have tipping points and exhibit catastrophic regime shift. Despite the notorious difficulty of predicting such transitions, accumulating studies have suggested the existence of generic early-warning signals (EWSs) preceding upcoming transitions. However, previous theories and models were based on the effect of the intrinsic noise (IN) when a system is approaching a critical point, and did not consider the pervasive environmental fluctuations or the extrinsic noise (EN). Here, we extend previous theory to investigate how the interplay of EN and IN affects EWSs. Stochastic simulations of model systems subject to both IN and EN have verified our theory and demonstrated that EN can dramatically alter and diminish the EWS. This effect is stronger with increasing amplitude and correlation time scale of the EN. In the presence of EN, the EWS can fail to predict or even give a false alarm of critical transitions.
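The generic early-warning signals referred to here are typically rising variance and rising lag-1 autocorrelation as the system's recovery rate decays toward a tipping point. A sketch with intrinsic noise only (no extrinsic noise), using a simple linearized model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linearized model of a system drifting toward a tipping point: the recovery
# rate k decays toward zero, so perturbations relax ever more slowly
# ("critical slowing down"). Only intrinsic noise drives the dynamics here.
n, dt, sigma = 4000, 0.5, 0.1
k = np.linspace(1.0, 0.05, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - dt * k[i] * x[i - 1] + sigma * rng.normal()

def ews(window):
    """Two classic early-warning indicators: variance and lag-1 autocorrelation."""
    return float(np.var(window)), float(np.corrcoef(window[:-1], window[1:])[0, 1])

var_early, ac1_early = ews(x[500:1500])
var_late, ac1_late = ews(x[2500:3500])
print(var_early < var_late, ac1_early < ac1_late)
```

The paper's point is that adding extrinsic noise on top of this intrinsic-noise picture can blur or even reverse these indicator trends, especially when the extrinsic fluctuations are large or long-correlated.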
Translation of Genotype to Phenotype by a Hierarchy of Cell Subsystems.
Yu, Michael Ku; Kramer, Michael; Dutkowski, Janusz; Srivas, Rohith; Licon, Katherine; Kreisberg, Jason; Ng, Cherie T; Krogan, Nevan; Sharan, Roded; Ideker, Trey
2016-02-24
Accurately translating genotype to phenotype requires accounting for the functional impact of genetic variation at many biological scales. Here we present a strategy for genotype-phenotype reasoning based on existing knowledge of cellular subsystems. These subsystems and their hierarchical organization are defined by the Gene Ontology or a complementary ontology inferred directly from previously published datasets. Guided by the ontology's hierarchical structure, we organize genotype data into an "ontotype," that is, a hierarchy of perturbations representing the effects of genetic variation at multiple cellular scales. The ontotype is then interpreted using logical rules generated by machine learning to predict phenotype. This approach substantially outperforms previous, non-hierarchical methods for translating yeast genotype to cell growth phenotype, and it accurately predicts the growth outcomes of two new screens of 2,503 double gene knockouts impacting DNA repair or nuclear lumen. Ontotypes also generalize to larger knockout combinations, setting the stage for interpreting the complex genetics of disease.
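The "ontotype" construction can be sketched minimally: a genotype (a set of knocked-out genes) is propagated up the subsystem hierarchy by counting, for each subsystem, how many of its member genes are perturbed; the resulting multi-scale feature vector is what a downstream learner would map to phenotype. The three-subsystem ontology below is hypothetical, not taken from the Gene Ontology or the paper.

```python
# hypothetical mini-ontology (illustrative only): subsystem -> member genes
ONTOLOGY = {
    "DNA repair": {"RAD51", "RAD52"},
    "nuclear lumen": {"NUP1", "RAD51"},
    "cell": {"RAD51", "RAD52", "NUP1", "ACT1"},
}

def ontotype(knockouts):
    """Propagate a genotype (set of knocked-out genes) onto the hierarchy:
    each subsystem's feature is the number of its member genes perturbed."""
    return {name: len(genes & knockouts) for name, genes in ONTOLOGY.items()}
```

For a double knockout of RAD51 and RAD52, the perturbation registers at every level containing those genes, giving the hierarchical representation the paper's logical rules operate on.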
IMPACT OF NEW GAMOW–TELLER STRENGTHS ON EXPLOSIVE TYPE IA SUPERNOVA NUCLEOSYNTHESIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Kanji; Famiano, Michael A.; Kajino, Toshitaka
2016-12-20
Recent experimental results have confirmed a possible reduction in the Gamow–Teller (GT+) strengths of pf-shell nuclei. These proton-rich nuclei are of relevance in the deflagration and explosive burning phases of SNe Ia. While prior GT strengths result in nucleosynthesis predictions with a lower-than-expected electron fraction, a reduction in the GT+ strength can result in a slightly increased electron fraction compared to previous shell model predictions, though the enhancement is not as large as previous enhancements in going from rates computed by Fuller, Fowler, and Newman based on an independent particle model. A shell model parametrization has been developed that more closely matches experimental GT strengths. The resultant electron-capture rates are used in nucleosynthesis calculations for carbon deflagration and explosion phases of SNe Ia, and the final mass fractions are compared to those obtained using more commonly used rates.
NASA Technical Reports Server (NTRS)
Kaufman, A.; Laflen, J. H.; Lindholm, U. S.
1985-01-01
Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.
Impact of New Gamow-Teller Strengths on Explosive Type Ia Supernova Nucleosynthesis
NASA Astrophysics Data System (ADS)
Mori, Kanji; Famiano, Michael A.; Kajino, Toshitaka; Suzuki, Toshio; Hidaka, Jun; Honma, Michio; Iwamoto, Koichi; Nomoto, Ken'ichi; Otsuka, Takaharu
2016-12-01
Recent experimental results have confirmed a possible reduction in the Gamow-Teller (GT+) strengths of pf-shell nuclei. These proton-rich nuclei are of relevance in the deflagration and explosive burning phases of SNe Ia. While prior GT strengths result in nucleosynthesis predictions with a lower-than-expected electron fraction, a reduction in the GT+ strength can result in a slightly increased electron fraction compared to previous shell model predictions, though the enhancement is not as large as previous enhancements in going from rates computed by Fuller, Fowler, and Newman based on an independent particle model. A shell model parametrization has been developed that more closely matches experimental GT strengths. The resultant electron-capture rates are used in nucleosynthesis calculations for carbon deflagration and explosion phases of SNe Ia, and the final mass fractions are compared to those obtained using more commonly used rates.
NASA Astrophysics Data System (ADS)
Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.
2018-03-01
In the development of information systems and software for predicting dynamic series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking into account the nonlinearities of a series. In this paper, we propose a modified algorithm for predicting dynamic series, which includes a method for training neural networks and an approach to describing and presenting the input data, based on prediction with the multilayer perceptron method. To construct the neural network, the input data are the values of the series at its extremum points and the corresponding time values, formed using the sliding-window method. The proposed algorithm can act as an independent approach to predicting dynamic series, or serve as one part of a forecasting system. The efficiency of predicting the evolution of a dynamic series for a short-term one-step and a long-term multi-step forecast is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The result of this modification is a reduction of the iterative error that accumulates when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative prediction.
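The two key ingredients of this abstract, sliding-window input construction and iterative multi-step forecasting in which predictions are fed back as inputs, can be sketched as follows. For brevity an ordinary least-squares predictor stands in for the multilayer perceptron; the window construction and the feedback loop (the source of the iterative error the authors minimize) are the same.

```python
import numpy as np

def sliding_windows(series, width):
    """Build (inputs, targets) pairs: each window of `width` past values
    predicts the next value of the series."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

def fit_linear(X, y):
    """Least-squares predictor: a simple stand-in for the multilayer perceptron."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def iterate_forecast(w, window, n_steps):
    """Multi-step forecast: feed each prediction back into the input window.
    This feedback is where iterative error can accumulate."""
    window = list(window)
    out = []
    width = len(w) - 1
    for _ in range(n_steps):
        x = np.append(window[-width:], 1.0)
        yhat = float(x @ w)
        out.append(yhat)
        window.append(yhat)
    return out
```

On a noiseless sinusoid the linear window model forecasts the continuation essentially exactly, because a sinusoid satisfies a fixed linear recurrence; with noisy real data, the fed-back prediction errors compound over the forecast horizon.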
DATA ASSIMILATION APPROACH FOR FORECAST OF SOLAR ACTIVITY CYCLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitiashvili, Irina N., E-mail: irina.n.kitiashvili@nasa.gov
Numerous attempts to predict future solar cycles are mostly based on empirical relations derived from observations of previous cycles, and they yield a wide range of predicted strengths and durations of the cycles. Results obtained with current dynamo models also deviate strongly from each other, thus raising questions about criteria to quantify the reliability of such predictions. The primary difficulties in modeling future solar activity are shortcomings of both the dynamo models and observations that do not allow us to determine the current and past states of the global solar magnetic structure and its dynamics. Data assimilation is a relatively new approach to develop physics-based predictions and estimate their uncertainties in situations where the physical properties of a system are not well-known. This paper presents an application of the ensemble Kalman filter method for modeling and prediction of solar cycles through use of a low-order nonlinear dynamo model that includes the essential physics and can describe general properties of the sunspot cycles. Despite the simplicity of this model, the data assimilation approach provides reasonable estimates for the strengths of future solar cycles. In particular, the prediction of Cycle 24 calculated and published in 2008 is so far holding up quite well. In this paper, I will present my first attempt to predict Cycle 25 using the data assimilation approach, and discuss the uncertainties of that prediction.
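A minimal sketch of the stochastic ensemble Kalman filter analysis step used in this kind of data assimilation, applied here to a toy linear oscillator rather than the paper's dynamo model; all variable names and parameter values are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_noise_var, H, rng):
    """One stochastic EnKF analysis step.
    ensemble: (n_members, n_state) forecast ensemble
    obs: observed value(s); H: observation operator matrix."""
    X = ensemble
    A = X - X.mean(axis=0)                      # state anomalies
    HX = X @ H.T
    HA = HX - HX.mean(axis=0)                   # observation-space anomalies
    n = len(X)
    P_xy = A.T @ HA / (n - 1)                   # state-obs covariance
    P_yy = HA.T @ HA / (n - 1) + obs_noise_var * np.eye(H.shape[0])
    K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_noise_var), size=(n, H.shape[0]))
    return X + (perturbed - HX) @ K.T
```

Cycling forecast (apply the model dynamics to each member) and analysis (the update above) steps pulls the ensemble mean toward the true trajectory even when only one state component is observed, which is the mechanism that lets a low-order model be constrained by sunspot data.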
NASA Astrophysics Data System (ADS)
Cherumadanakadan Thelliyil, S.; Ravindran, A. M.; Giannakis, D.; Majda, A.
2016-12-01
An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISO) is introduced using the recently developed Nonlinear Laplacian Spectral Analysis (NLSA) algorithm. Previous studies have demonstrated the proficiency of NLSA in capturing low-frequency variability and intermittency of a time series. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions is extracted from the unfiltered daily GPCP rainfall data over the south Asian monsoon region. Two modes representing the full life cycle of the complex northeastward-propagating boreal summer MISO are identified from this hierarchy of LB eigenfunctions. These two MISO modes have a number of advantages over the conventionally used Extended Empirical Orthogonal Function (EEOF) MISO modes, including higher memory and better predictability, higher fractional variance over the western Pacific, Western Ghats and adjoining Arabian Sea regions, and a more realistic representation of the regional heat sources associated with the MISO. The skill of NLSA-based MISO indices in real-time prediction of MISO is demonstrated using hindcasts of CFSv2 extended-range prediction runs. It is shown that these indices yield a higher prediction skill than the other conventional indices, supporting the use of NLSA in real-time prediction of MISO. Real-time monitoring and prediction of MISO find application in the agriculture, construction and hydro-electric power sectors, and are hence an important component of monsoon prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, Ackeem; Herrera, David; Hijal, Tarek
We describe a method for predicting waiting times in radiation oncology. Machine learning is a powerful predictive modelling tool that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The patient waiting experience remains one of the most vexing challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick and in pain, to worry about when they will receive the care they need. In radiation oncology, patients typically experience three types of waiting: (1) waiting at home for their treatment plan to be prepared, (2) waiting in the waiting room for daily radiotherapy, and (3) waiting in the waiting room to see a physician in consultation or follow-up. These waiting periods are difficult for staff to predict and only rough estimates are typically provided, based on personal experience. In the present era of electronic health records, waiting times need not be so uncertain. At our centre, we have incorporated the electronic treatment records of all previously-treated patients into our machine learning model. We found that the Random Forest Regression model provides the best predictions for daily radiotherapy treatment waiting times (type 2). Using this model, we achieved a median residual (actual minus predicted value) of 0.25 minutes and a standard deviation residual of 6.5 minutes. The main features that generated the best-fit model (from most to least significant) are: allocated time, median past duration, fraction number, and the number of treatment fields.
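A toy version of this evaluation, on synthetic data and with an ordinary least-squares baseline standing in for the Random Forest Regression model: fit actual treatment duration from allocated slot time, then summarize the residuals (actual minus predicted) by their median and standard deviation, exactly the two statistics the abstract reports. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical training data: allocated slot length vs. actual duration (minutes)
allocated = rng.choice([12.0, 15.0, 20.0, 30.0], size=500)
actual = 0.9 * allocated + rng.normal(0.0, 3.0, size=500)

def fit_baseline(x, y):
    """Simple least-squares baseline (slope + intercept), a stand-in
    for the Random Forest regression described in the abstract."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

coef = fit_baseline(allocated, actual)
pred = coef[0] * allocated + coef[1]
residuals = actual - pred
```

In practice a tree ensemble would capture the nonlinear effects of features like fraction number and median past duration, but the residual-summary evaluation is identical.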
Kerckhoffs, Jules; Hoek, Gerard; Vlaanderen, Jelle; van Nunen, Erik; Messier, Kyle; Brunekreef, Bert; Gulliver, John; Vermeulen, Roel
2017-11-01
Land-use regression (LUR) models for ultrafine particles (UFP) and Black Carbon (BC) in urban areas have been developed using short-term stationary monitoring or mobile platforms in order to capture the high variability of these pollutants. However, little is known about the comparability of predictions of mobile and short-term stationary models and especially the validity of these models for assessing residential exposures and the robustness of model predictions developed in different campaigns. We used an electric car to collect mobile measurements (n = 5236 unique road segments) and short-term stationary measurements (3 × 30min, n = 240) of UFP and BC in three Dutch cities (Amsterdam, Utrecht, Maastricht) in 2014-2015. Predictions of LUR models based on mobile measurements were compared to (i) measured concentrations at the short-term stationary sites, (ii) LUR model predictions based on short-term stationary measurements at 1500 random addresses in the three cities, (iii) externally obtained home outdoor measurements (3 × 24h samples; n = 42) and (iv) predictions of a LUR model developed based upon a 2013 mobile campaign in two cities (Amsterdam, Rotterdam). Despite the poor model R² of 15%, the ability of mobile UFP models to predict measurements with longer averaging time increased substantially from 36% for short-term stationary measurements to 57% for home outdoor measurements. In contrast, the mobile BC model only predicted 14% of the variation in the short-term stationary sites and also 14% of the home outdoor sites. Models based upon mobile and short-term stationary monitoring provided fairly highly correlated predictions of UFP concentrations at 1500 randomly selected addresses in the three Dutch cities (R² = 0.64). We found higher UFP predictions (of about 30%) based on mobile models as opposed to short-term model predictions and home outdoor measurements, with no clear geospatial patterns.
The mobile model for UFP was stable over different settings, as it predicted concentration levels at the 1500 random addresses that were highly correlated with the predictions of a previously developed LUR model with a different spatial extent and from a different year (R² = 0.80). In conclusion, mobile monitoring provided robust LUR models for UFP, valid to use in epidemiological studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed...Cache Miss Minimization Technology To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended our...it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
Early Prediction of Student Profiles Based on Performance and Gaming Preferences
ERIC Educational Resources Information Center
Barata, Gabriel; Gama, Sandra; Jorge, Joaquim; Gonçalves, Daniel
2016-01-01
State of the art research shows that gamified learning can be used to engage students and help them perform better. However, most studies use a one-size-fits-all approach to gamification, where individual differences and needs are ignored. In a previous study, we identified four types of students attending a gamified college course, characterized…
Using the Theory of Planned Behavior and Cheating Justifications to Predict Academic Misconduct
ERIC Educational Resources Information Center
Stone, Thomas H.; Jawahar, I. M.; Kisamore, Jennifer L.
2009-01-01
Purpose: The purpose of this paper is to show that academic misconduct appears to be on the rise; some research has linked academic misconduct to unethical workplace behaviors. Unlike previous empirically-driven research, this theory-based study seeks to examine the usefulness of a modification of Ajzen's theory of planned behavior to predict…
Effects of School Mobility on Adolescent Social Ties and Academic Adjustment
ERIC Educational Resources Information Center
Langenkamp, Amy G.
2016-01-01
Why are transfer students at an increased risk for disengagement and dropout? Previous research suggests that the loss of school-based social relationships play a role. Data from the National Longitudinal Study of Adolescent Health (Add Health) are used to analyze what predicts student transfer and what effect this has on students' social…
Strength validation and fire endurance of glued-laminated timber beams
E. L. Schaffer; C. M. Marx; D. A. Bender; F. E. Woeste
A previous paper presented a reliability-based model to predict the strength of glued-laminated timber beams at both room temperature and during fire exposure. This Monte Carlo simulation procedure generates strength and fire endurance (time-to-failure, TTF) data for glued-laminated beams that allow assessment of mean strength and TTF as well as their variability....
Discrete return lidar-based prediction of leaf area index in two conifer forests
Jennifer L. R. Jensen; Karen S. Humes; Lee A. Vierling; Andrew T. Hudak
2008-01-01
Leaf area index (LAI) is a key forest structural characteristic that serves as a primary control for exchanges of mass and energy within a vegetated ecosystem. Most previous attempts to estimate LAI from remotely sensed data have relied on empirical relationships between field-measured observations and various spectral vegetation indices (SVIs) derived from optical...
USDA-ARS?s Scientific Manuscript database
Forecasting peak standing crop (PSC) for the coming grazing season can help ranchers make appropriate stocking decisions to reduce enterprise risks. Previously developed PSC predictors were based on short-term experimental data (<15 yr) and limited stocking rates (SR) without including the effect of...
From Research to Practice: Basic Mathematics Skills and Success in Introductory Statistics
ERIC Educational Resources Information Center
Lunsford, M. Leigh; Poplin, Phillip
2011-01-01
Based on previous research of Johnson and Kuennen (2006), we conducted a study to determine factors that would possibly predict student success in an introductory statistics course. Our results were similar to Johnson and Kuennen in that we found students' basic mathematical skills, as measured on a test created by Johnson and Kuennen, were a…
ERIC Educational Resources Information Center
Spataro, Pietro; Rossi-Arnaud, Clelia; Longobardi, Emiddia
2018-01-01
Previous research has consistently demonstrated that false-belief (FB) understanding correlates with and predicts metalinguistic ability in preschoolers. Surprisingly, however, there is scant evidence on the question of whether this relation persists at later ages. The present cross-sectional study sought to fill this gap by examining the…
Ke, Alice Ban; Nallani, Srikanth C; Zhao, Ping; Rostami-Hodjegan, Amin; Isoherranen, Nina; Unadkat, Jashvant D
2013-04-01
Conducting pharmacokinetic (PK) studies in pregnant women is challenging. Therefore, we asked if a physiologically based pharmacokinetic (PBPK) model could be used to evaluate different dosing regimens for pregnant women. We refined and verified our previously published pregnancy PBPK model by incorporating cytochrome P450 CYP1A2 suppression (based on caffeine PK) and CYP2D6 induction (based on metoprolol PK) into the model. This model accounts for gestational age-dependent changes in maternal physiology and hepatic CYP3A activity. For verification, the disposition of CYP1A2-metabolized drug theophylline (THEO) and CYP2D6-metabolized drugs paroxetine (PAR), dextromethorphan (DEX), and clonidine (CLO) during pregnancy was predicted. Our PBPK model successfully predicted THEO disposition during the third trimester (T3). Predicted mean postpartum to third trimester (PP:T3) ratios of THEO area under the curve (AUC), maximum plasma concentration, and minimum plasma concentration were 0.76, 0.95, and 0.66 versus observed values 0.75, 0.89, and 0.72, respectively. The predicted mean PAR steady-state plasma concentration (Css) ratio (PP:T3) was 7.1 versus the observed value 3.7. Predicted mean DEX urinary ratio (UR) (PP:T3) was 2.9 versus the observed value 1.9. Predicted mean CLO AUC ratio (PP:T3) was 2.2 versus the observed value 1.7. Sensitivity analysis suggested that a 100% induction of CYP2D6 during T3 was required to recover the observed PP:T3 ratios of PAR Css, DEX UR, and CLO AUC. Based on these data, it is prudent to conclude that the magnitude of hepatic CYP2D6 induction during T3 ranges from 100 to 200%. Our PBPK model can predict the disposition of CYP1A2, 2D6, and 3A drugs during pregnancy.
Ke, Alice Ban; Nallani, Srikanth C.; Zhao, Ping; Rostami-Hodjegan, Amin; Isoherranen, Nina
2013-01-01
Conducting pharmacokinetic (PK) studies in pregnant women is challenging. Therefore, we asked if a physiologically based pharmacokinetic (PBPK) model could be used to evaluate different dosing regimens for pregnant women. We refined and verified our previously published pregnancy PBPK model by incorporating cytochrome P450 CYP1A2 suppression (based on caffeine PK) and CYP2D6 induction (based on metoprolol PK) into the model. This model accounts for gestational age–dependent changes in maternal physiology and hepatic CYP3A activity. For verification, the disposition of CYP1A2–metabolized drug theophylline (THEO) and CYP2D6–metabolized drugs paroxetine (PAR), dextromethorphan (DEX), and clonidine (CLO) during pregnancy was predicted. Our PBPK model successfully predicted THEO disposition during the third trimester (T3). Predicted mean postpartum to third trimester (PP:T3) ratios of THEO area under the curve (AUC), maximum plasma concentration, and minimum plasma concentration were 0.76, 0.95, and 0.66 versus observed values 0.75, 0.89, and 0.72, respectively. The predicted mean PAR steady-state plasma concentration (Css) ratio (PP:T3) was 7.1 versus the observed value 3.7. Predicted mean DEX urinary ratio (UR) (PP:T3) was 2.9 versus the observed value 1.9. Predicted mean CLO AUC ratio (PP:T3) was 2.2 versus the observed value 1.7. Sensitivity analysis suggested that a 100% induction of CYP2D6 during T3 was required to recover the observed PP:T3 ratios of PAR Css, DEX UR, and CLO AUC. Based on these data, it is prudent to conclude that the magnitude of hepatic CYP2D6 induction during T3 ranges from 100 to 200%. Our PBPK model can predict the disposition of CYP1A2, 2D6, and 3A drugs during pregnancy. PMID:23355638
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals on the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree based BLUP methods, one with a shallow one generation pedigree and the other with a deep ten generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationships to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree based BLUP methods were also accurate for closely related animals, however when the pedigree based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values, for animals that had no pedigree relationship with animals in the reference data set, allowed substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However a baseline accuracy that is driven by the reference data set size and the overall population effective population size enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree based BLUP methods.
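The core gBLUP computation described above can be sketched compactly: build a VanRaden-style genomic relationship matrix from marker genotypes, then predict candidates' genetic values from the reference population's phenotypes via those genomic relationships. This is a generic illustration on simulated data, not the study's code; the shrinkage parameter `lam` plays the role of the residual-to-genetic variance ratio.

```python
import numpy as np

def grm(M, p):
    """VanRaden genomic relationship matrix from genotypes M coded 0/1/2
    and allele frequencies p."""
    Z = M - 2 * p                                # center by allele frequency
    return Z @ Z.T / (2 * np.sum(p * (1 - p)))

def gblup(G, y_train, train_idx, cand_idx, lam=1.0):
    """Predict candidates' genetic values from reference phenotypes:
    g_hat = G_ct (G_tt + lam*I)^-1 (y - mean)."""
    Gtt = G[np.ix_(train_idx, train_idx)]
    Gct = G[np.ix_(cand_idx, train_idx)]
    rhs = np.linalg.solve(Gtt + lam * np.eye(len(train_idx)), y_train - y_train.mean())
    return Gct @ rhs
```

Because G is computed from markers rather than pedigree, candidates with no recorded pedigree link to the reference set still receive non-trivial predictions, which is the point the abstract makes about gBLUP versus pedigree-based BLUP.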
Are We Predicting the Actual or Apparent Distribution of Temperate Marine Fishes?
Monk, Jacquomo; Ierodiaconou, Daniel; Harvey, Euan; Rattray, Alex; Versace, Vincent L.
2012-01-01
Planning for resilience is the focus of many marine conservation programs and initiatives. These efforts aim to inform conservation strategies for marine regions to ensure they have inbuilt capacity to retain biological diversity and ecological function in the face of global environmental change – particularly changes in climate and resource exploitation. In the absence of direct biological and ecological information for many marine species, scientists are increasingly using spatially-explicit, predictive-modeling approaches. Through improved access to multibeam sonar and underwater video technology, these models provide spatial predictions of the most suitable regions for an organism at resolutions previously not possible. However, sensible-looking, well-performing models can provide very different predictions of distribution depending on which occurrence dataset is used. To examine this, we construct species distribution models for nine temperate marine sedentary fishes for a 25.7 km² study region off the coast of southeastern Australia. We use generalized linear model (GLM), generalized additive model (GAM) and maximum entropy (MAXENT) approaches to build models based on co-located occurrence datasets derived from two underwater video methods (i.e. baited and towed video) and fine-scale multibeam sonar based seafloor habitat variables. Overall, this study found that the choice of modeling approach did not considerably influence the prediction of distributions based on the same occurrence dataset. However, greater dissimilarity between model predictions was observed across the nine fish taxa when the two occurrence datasets were compared (relative to models based on the same dataset). Based on these results it is difficult to draw any general trends with regard to which video method provides more reliable occurrence datasets. Nonetheless, we suggest that model predictions reflect the species' apparent distribution (i.e.
a combination of species distribution and the probability of detecting it). Consequently, we also encourage researchers and marine managers to carefully interpret model predictions. PMID:22536325
Sensor-derived physical activity parameters can predict future falls in people with dementia
Schwenk, Michael; Hauer, Klaus; Zieschang, Tania; Englert, Stefan; Mohler, Jane; Najafi, Bijan
2014-01-01
Background There is a need for simple clinical tools that can objectively assess fall risk in people with dementia. Wearable sensors seem to have potential for fall prediction, however, there has been limited work performed in this important area. Objective To explore the validity of sensor-derived physical activity (PA) parameters for predicting future falls in people with dementia. To compare sensor-based fall risk assessment with conventional fall risk measures. Methods A cohort study of people with confirmed dementia discharged from a geriatric rehabilitation ward. PA was quantified using 24-hour motion-sensor monitoring at the beginning of the study. PA parameters (percentage of walking, standing, sitting, lying; duration of single walking, standing, and sitting bouts) were extracted using specific algorithms. Conventional assessment included performance-based tests (Timed-up-and-go test, Performance-Oriented-Mobility-Assessment, 5-chair stand) and questionnaires (cognition, ADL-status, fear of falling, depression, previous faller). Outcome measures were fallers (at least one fall in the 3-month follow-up period) versus non-fallers. Results Seventy-seven people were included in the study (age 81.8 ± 6.3; community dwelling 88%, institutionalized 12%). Surprisingly, fallers and non-fallers did not differ on any conventional assessment (p= 0.069–0.991), except for ‘previous faller’ (p= 0.006). Interestingly, several PA parameters discriminated between groups. The ‘walking bouts average duration’, ‘longest walking bout duration’ and ‘walking bouts duration variability’ were lower in fallers, compared to non-fallers (p= 0.008–0.027). The ‘standing bouts average duration’ was higher in fallers (p= 0.050). Two variables, ‘walking bouts average duration’ [odds ratio (OR) 0.79, p= 0.012] and ‘previous faller’ [OR 4.44, p= 0.007] were identified as independent predictors for falls. 
The OR for a ‘walking bouts average duration’ of less than 15 seconds for predicting fallers was 6.30 (p= 0.020). Combining ‘walking bouts average duration’ and ‘previous faller’ improved fall prediction [OR 7.71, p< 0.001, sensitivity/specificity 72%/76%]. Discussion Results demonstrate that sensor-derived PA parameters are independent predictors of fall risk and may have higher diagnostic accuracy in persons with dementia compared to conventional fall risk measures. Our findings highlight the potential of telemonitoring technology for estimating fall risk. Results should be confirmed in a larger study and by measuring PA over a longer time period. PMID:25171300
Sensor-derived physical activity parameters can predict future falls in people with dementia.
Schwenk, Michael; Hauer, Klaus; Zieschang, Tania; Englert, Stefan; Mohler, Jane; Najafi, Bijan
2014-01-01
There is a need for simple clinical tools that can objectively assess the fall risk in people with dementia. Wearable sensors seem to have the potential for fall prediction; however, there has been limited work performed in this important area. To explore the validity of sensor-derived physical activity (PA) parameters for predicting future falls in people with dementia. To compare sensor-based fall risk assessment with conventional fall risk measures. This was a cohort study of people with confirmed dementia discharged from a geriatric rehabilitation ward. PA was quantified using 24-hour motion-sensor monitoring at the beginning of the study. PA parameters (percentage of walking, standing, sitting, and lying; duration of single walking, standing, and sitting bouts) were extracted using specific algorithms. Conventional assessment included performance-based tests (Timed Up and Go Test, Performance-Oriented Mobility Assessment, 5-chair stand) and questionnaires (cognition, ADL status, fear of falling, depression, previous faller). Outcome measures were fallers (at least one fall in the 3-month follow-up period) versus non-fallers. 77 people were included in the study (age 81.8 ± 6.3; community-dwelling 88%, institutionalized 12%). Surprisingly, fallers and non-fallers did not differ on any conventional assessment (p = 0.069-0.991), except for 'previous faller' (p = 0.006). Interestingly, several PA parameters discriminated between the groups. The 'walking bout average duration', 'longest walking bout duration' and 'walking bout duration variability' were lower in fallers, compared to non-fallers (p = 0.008-0.027). The 'standing bout average duration' was higher in fallers (p = 0.050). Two variables, 'walking bout average duration' [odds ratio (OR) 0.79, p = 0.012] and 'previous faller' (OR 4.44, p = 0.007) were identified as independent predictors for falls. The OR for a 'walking bout average duration' <15 s for predicting fallers was 6.30 (p = 0.020). 
Combining 'walking bout average duration' and 'previous faller' improved fall prediction (OR 7.71, p < 0.001, sensitivity/specificity 72%/76%). Results demonstrate that sensor-derived PA parameters are independent predictors of the fall risk and may have higher diagnostic accuracy in persons with dementia compared to conventional fall risk measures. Our findings highlight the potential of telemonitoring technology for estimating the fall risk. Results should be confirmed in a larger study and by measuring PA over a longer period of time. © 2014 S. Karger AG, Basel.
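Combining a continuous sensor parameter with a binary history variable, as in the 'walking bout average duration' plus 'previous faller' model above, amounts to a two-predictor logistic regression, whose exponentiated coefficients are the odds ratios the study reports. A from-scratch sketch on simulated data (all numbers hypothetical, not the study's data):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression (intercept + predictors),
    mirroring the combination of a continuous sensor feature and a
    binary 'previous faller' indicator."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted fall probability
        w += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient step
    return w
```

With data simulated so that shorter walking bouts and a previous fall both raise fall probability, the fitted coefficients recover the expected directions: negative for bout duration (OR below 1) and positive for previous faller (OR above 1).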
Tan, Bruce K; Lu, Guanning; Kwasny, Mary J; Hsueh, Wayne D; Shintani-Smith, Stephanie; Conley, David B; Chandra, Rakesh K; Kern, Robert C; Leung, Randy
2013-11-01
Current symptom criteria poorly predict a diagnosis of chronic rhinosinusitis (CRS), resulting in excessive treatment of patients with presumed CRS. The objective of this study was to analyze the positive predictive value of individual symptoms, or symptoms in combination, in patients with CRS symptoms and to examine the costs of the subsequent diagnostic algorithm using a decision tree-based cost analysis. We analyzed previously collected patient-reported symptoms from a cross-sectional study of patients who had received a computed tomography (CT) scan of their sinuses at a tertiary care otolaryngology clinic for evaluation of CRS symptoms to calculate the positive predictive value of individual symptoms. Classification and regression tree (CART) analysis then optimized combinations of symptoms and thresholds to identify CRS patients. The calculated positive predictive values were applied to a previously developed decision tree that compared an upfront CT (uCT) algorithm against an empiric medical therapy (EMT) algorithm, with further analysis that considered the availability of point-of-care (POC) imaging. The positive predictive value of individual symptoms ranged from 0.21 for patients reporting forehead pain to 0.69 for patients reporting hyposmia. The CART model constructed a dichotomous model based on forehead pain, maxillary pain, hyposmia, nasal discharge, and facial pain (C-statistic 0.83). If POC CT were available, median costs ($64-$415) favored using the upfront CT for all individual symptoms. If POC CT was unavailable, median costs favored uCT for most symptoms except intercanthal pain (-$15), hyposmia (-$100), and discolored nasal discharge (-$24), although these symptoms became equivocal on cost sensitivity analysis. The three-tiered CART model could subcategorize patients into tiers where uCT was always favorable (median costs: $332-$504) and others for which EMT was always favorable (median costs -$121 to -$275). 
The uCT algorithm was always more costly if the nasal endoscopy was positive. Among patients with classic CRS symptoms, the frequency of individual symptoms varied the likelihood of a CRS diagnosis marginally. Only hyposmia, the absence of facial pain, and discolored discharge sufficiently increased the likelihood of diagnosis to potentially make EMT less costly. The development of an evidence-based, multisymptom-based risk stratification model could substantially affect the management costs of the subsequent diagnostic algorithm. © 2013 ARS-AAOA, LLC.
Berg, Rigmor C; Grimes, Richard
2011-09-01
A great deal of research effort has been expended in an effort to identify the variables that most influence men who have sex with men's (MSM) unsafe sexual behaviors. While a set of predictor variables has emerged, these predict the unsafe behaviors of MSM in some locations but not in others, suggesting the need to investigate the predictive ability of these variables among MSM in previously understudied populations. Therefore, this study examined the ability of previously identified factors to predict unsafe sexual behaviors among MSM in Houston, Texas. Data were collected through a short self-report survey completed by MSM attending the Houston pride festival. The multiethnic participants (N = 109) represented a range of age, educational, and income backgrounds. Fifty-seven percent of the survey respondents had been drunk and/or high in sexual contexts, 19 percent evidenced alcohol dependency, 26 percent reported finding sex partners online, and sex with serodiscordant or unknown-serostatus partners was common. Compared to men who did not report unprotected anal intercourse (UAI) in the preceding two months, MSM who engaged in UAI were younger and more likely to use alcohol in sexual contexts, meet men online for offline sex, and perceive lower safer sex norms in their community. Although these results were statistically significant, the strength of the relationships was too small to have any practical value. The lack of useful explanatory power underscores the importance of accelerated HIV research that identifies the unique, local factors associated with unsafe sex in other previously understudied populations.
Phillips, Brett T; Fourman, Mitchell S; Rivara, Andrew; Dagum, Alexander B; Huston, Tara L; Ganz, Jason C; Bui, Duc T; Khan, Sami U
2014-01-01
Several devices exist today to assist the intraoperative determination of skin flap perfusion. Laser-Assisted Indocyanine Green Dye Angiography (LAICGA) has been shown to accurately predict mastectomy skin flap necrosis using quantitative perfusion values. The laser properties of the latest LAICGA device (SPY Elite) differ significantly from those of its predecessor system (SPY 2001), preventing direct translation of previously published data. The purpose of this study was to establish a mathematical relationship between the perfusion values of these 2 devices. Breast reconstruction patients were prospectively enrolled into a clinical trial where skin flap evaluation and excision were based on quantitative SPY Q values previously established in the literature. Initial study patients underwent mastectomy skin flap evaluation using both SPY systems simultaneously. Absolute perfusion unit (APU) values at identical locations on the breast were then compared graphically. 210 data points were identified on the same patients (n = 4) using both SPY systems. A linear relationship (y = 2.9883x + 12.726) was identified with a high level of correlation (R(2) = 0.744). Previously published values using the SPY 2001 (APU 3.7) corresponded to a value of 23.8 APU on the SPY Elite. In addition, postoperative necrosis in these patients correlated to regions of skin identified with the SPY Elite with APU less than 23.8. Intraoperative comparison of LAICGA systems has provided direct correlation of perfusion values predictive of necrosis that were previously established in the literature. An APU value of 3.7 from the SPY 2001 correlates to a SPY Elite APU value of 23.8.
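The reported linear fit can be applied directly to translate published SPY 2001 thresholds onto the SPY Elite scale. A minimal sketch using the coefficients from the study (the function name is ours):

```python
def spy2001_to_spy_elite(apu_2001):
    """Convert a SPY 2001 absolute perfusion unit (APU) reading to the
    SPY Elite scale using the study's linear fit y = 2.9883x + 12.726
    (R^2 = 0.744)."""
    return 2.9883 * apu_2001 + 12.726
```

Applying it to the published SPY 2001 necrosis threshold of 3.7 APU reproduces the 23.8 APU SPY Elite value reported above.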
Watanabe, S M; Goodman, M F
1982-01-01
Enzyme kinetic measurements are presented showing that Km rather than maximum velocity (Vmax) discrimination governs the frequency of forming 2-aminopurine·cytosine base mispairs by DNA polymerase alpha. An in vitro system is used in which incorporation of dTMP or dCMP occurs opposite a template 2-aminopurine, and values for Km and Vmax are obtained. Results from a previous study in which dTTP and dCTP were competing simultaneously for insertion opposite 2-aminopurine indicated that dTMP is inserted 22 times more frequently than dCMP. We now report that the ratio of Km values, Km(dCTP)/Km(dTTP) = 25 +/- 6, which agrees quantitatively with the dTMP/dCMP incorporation ratio obtained previously. We also report that Vmax for dCMP insertion is indistinguishable from Vmax for dTMP insertion. These Km and Vmax data are consistent with predictions from a model, the Km discrimination model, in which replication fidelity is determined by free energy differences between matched and mismatched base pairs. Central to this model is the prediction that the ratio of Km values for insertion of correct and incorrect nucleotides specifies the insertion fidelity, and the maximum velocities of insertion are the same for both nucleotides. PMID:6959128
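Under the Km discrimination model, the predicted correct/incorrect insertion ratio is the ratio of Michaelis-Menten specificity constants (Vmax/Km); with equal Vmax values, as reported here, it reduces to the ratio of Km values. A small sketch of that arithmetic (the numbers in the usage note are from the abstract; the function itself is illustrative):

```python
def insertion_ratio(km_correct, km_incorrect, vmax_correct=1.0, vmax_incorrect=1.0):
    """Predicted correct/incorrect insertion ratio from specificity
    constants (Vmax/Km). With equal Vmax values, as the study reports,
    this reduces to Km(incorrect)/Km(correct)."""
    return (vmax_correct / km_correct) / (vmax_incorrect / km_incorrect)
```

With Km(dCTP)/Km(dTTP) = 25 and equal Vmax values, the model predicts dTMP insertion roughly 25 times more frequently than dCMP, consistent with the measured incorporation ratio of 22.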
Parra-Ruiz, Jorge; Ramos, V; Dueñas, C; Coronado-Álvarez, N M; Cabo-Magadán, R; Portillo-Tuñón, V; Vinuesa, D; Muñoz-Medina, L; Hernández-Quero, J
2015-10-01
Tuberculous meningitis (TBM) is one of the most serious and difficult to diagnose manifestations of TB. An ADA value >9.5 IU/L has great sensitivity and specificity. However, all available studies have been conducted in areas of high endemicity, so we sought to determine the accuracy of ADA in a low-endemicity area. This retrospective study included 190 patients (105 men) in whom CSF ADA had been tested for any indication. Patients were classified as probable/certain TBM or non-TBM based on clinical and Thwaites' criteria. The optimal ADA cutoff was established by ROC curves, and a predictive algorithm based on ADA and other CSF biochemical parameters was generated. Eleven patients were classified as probable/certain TBM. In a low-endemicity area, the best ADA cutoff was 11.5 IU/L, with 91% sensitivity and 77.7% specificity. We also developed a predictive algorithm based on the combination of ADA (>11.5 IU/L), glucose (<65 mg/dL) and leukocytes (≥13.5 cell/mm(3)) with increased accuracy (Se: 91%, Sp: 88%). The optimal ADA cutoff value in areas of low TB endemicity is higher than previously reported. Our algorithm is more accurate than ADA activity alone, with better sensitivity and specificity than previously reported algorithms.
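The three cut-offs above can be sketched as a simple classification rule. Requiring all three criteria together is an assumption; the abstract gives the cut-offs but not the exact logic by which they were combined.

```python
def probable_tbm(ada_iu_l, glucose_mg_dl, leukocytes_per_mm3):
    """Flag probable tuberculous meningitis from CSF values using the
    cut-offs reported for a low-endemicity setting: ADA > 11.5 IU/L,
    glucose < 65 mg/dL, leukocytes >= 13.5 cells/mm^3. Combining them
    with a logical AND is our assumption, not the paper's stated rule."""
    return (ada_iu_l > 11.5
            and glucose_mg_dl < 65
            and leukocytes_per_mm3 >= 13.5)
```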
Value-Based Standards Guide Sexism Inferences for Self and Others.
Mitamura, Chelsea; Erickson, Lynnsey; Devine, Patricia G
2017-09-01
People often disagree about what constitutes sexism, and these disagreements can be both socially and legally consequential. It is unclear, however, why or how people come to different conclusions about whether something or someone is sexist. Previous research on judgments about sexism has focused on the perceiver's gender and attitudes, but neither of these variables identifies comparative standards that people use to determine whether any given behavior (or person) is sexist. Extending Devine and colleagues' values framework (Devine, Monteith, Zuwerink, & Elliot, 1991; Plant & Devine, 1998), we argue that, when evaluating others' behavior, perceivers rely on the morally-prescriptive values that guide their own behavior toward women. In a series of 3 studies we demonstrate that (1) people's personal standards for sexism in their own and others' behavior are each related to their values regarding sexism, (2) these values predict how much behavioral evidence people need to infer sexism, and (3) people with stringent, but not lenient, value-based standards get angry and try to regulate a sexist perpetrator's behavior to reduce sexism. Furthermore, these personal values are related to all outcomes in the present work above and beyond other person characteristics previously used to predict sexism inferences. We discuss the implications of differing value-based standards for explaining and reconciling disputes over what constitutes sexist behavior.
Mason, Amy; Foster, Dona; Bradley, Phelim; Golubchik, Tanya; Doumith, Michel; Gordon, N Claire; Pichon, Bruno; Iqbal, Zamin; Staves, Peter; Crook, Derrick; Walker, A Sarah; Kearns, Angela; Peto, Tim
2018-06-20
Background: In principle, whole genome sequencing (WGS) can predict phenotypic resistance directly from genotype, replacing laboratory-based tests. However, the contribution of different bioinformatics methods to genotype-phenotype discrepancies has not been systematically explored to date. Methods: We compared three WGS-based bioinformatics methods (Genefinder (read-based), Mykrobe (de Bruijn graph-based) and Typewriter (BLAST-based)) for predicting presence/absence of 83 different resistance determinants and virulence genes, and overall antimicrobial susceptibility, in 1379 Staphylococcus aureus isolates previously characterised by standard laboratory methods (disc diffusion, broth and/or agar dilution and PCR). Results: 99.5% (113830/114457) of individual resistance-determinant/virulence gene predictions were identical between all three methods, with only 627 (0.5%) discordant predictions, demonstrating high overall agreement (Fleiss' kappa = 0.98, p < 0.0001). Discrepancies, when identified, were in only one of the three methods for all genes except the cassette recombinase, ccrC(b). Genotypic antimicrobial susceptibility prediction matched laboratory phenotype in 98.3% (14224/14464) of cases (2720 (18.8%) resistant, 11504 (79.5%) susceptible). There was greater disagreement between the laboratory phenotypes and the combined genotypic predictions (97 (0.7%) phenotypically-susceptible but all bioinformatic methods reported resistance; 89 (0.6%) phenotypically-resistant but all bioinformatic methods reported susceptible) than within the three bioinformatics methods (54 (0.4%) cases, 16 phenotypically-resistant, 38 phenotypically-susceptible). However, in 36/54 (67%), the consensus genotype matched the laboratory phenotype. Conclusions: In this study, the choice between these three specific bioinformatic methods to identify resistance determinants or other genes in S. aureus did not prove critical, with all demonstrating high concordance with each other and with phenotypic/molecular methods. However, each has some limitations and therefore consensus methods provide some assurance. Copyright © 2018 American Society for Microbiology.
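The consensus idea in the conclusions can be sketched as a simple majority vote over the per-method presence/absence calls. The exact consensus rule the study used is not specified in the abstract, so this is an illustrative assumption:

```python
def consensus_call(calls):
    """Majority-vote consensus across bioinformatics methods, each
    reporting True (determinant present) or False (absent). With three
    methods, at least two must agree for a positive consensus."""
    return sum(calls) > len(calls) / 2
```

For the 54 within-method disagreements reported above, such a consensus matched the laboratory phenotype in 36 cases (67%).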
External validation of urinary PCA3-based nomograms to individually predict prostate biopsy outcome.
Auprich, Marco; Haese, Alexander; Walz, Jochen; Pummer, Karl; de la Taille, Alexandre; Graefen, Markus; de Reijke, Theo; Fisch, Margit; Kil, Paul; Gontero, Paolo; Irani, Jacques; Chun, Felix K-H
2010-11-01
Prior to safely adopting risk stratification tools, their performance must be tested in an external patient cohort. To assess accuracy and generalizability of previously reported, internally validated, prebiopsy prostate cancer antigen 3 (PCA3) gene-based nomograms when applied to a large, external, European cohort of men at risk of prostate cancer (PCa). Biopsy data, including urinary PCA3 score, were available for 621 men at risk of PCa who were participating in a European multi-institutional study. All patients underwent a ≥10-core prostate biopsy. Biopsy indication was based on suspicious digital rectal examination, persistently elevated prostate-specific antigen level (2.5-10 ng/ml) and/or suspicious histology (atypical small acinar proliferation of the prostate, ≥2 cores affected by high-grade prostatic intraepithelial neoplasia in first set of biopsies). PCA3 scores were assessed using the Progensa assay (Gen-Probe Inc, San Diego, CA, USA). According to the previously reported nomograms, different PCA3 score codings were used. The probability of a positive biopsy was calculated using previously published logistic regression coefficients. Predicted outcomes were compared to the actual biopsy results. Accuracy was calculated using the area under the curve as a measure of discrimination; calibration was explored graphically. Biopsy-confirmed PCa was detected in 255 (41.1%) men. Median PCA3 score of biopsy-negative versus biopsy-positive men was 20 versus 48 in the total cohort, 17 versus 47 at initial biopsy, and 37 versus 53 at repeat biopsy (all p≤0.002). External validation of all four previously reported PCA3-based nomograms demonstrated equally high accuracy (0.73-0.75) and excellent calibration. The main limitations of the study reside in its early detection setting, referral scenario, and participation of only tertiary-care centers. 
In accordance with the original publication, previously developed PCA3-based nomograms achieved high accuracy and sufficient calibration. These novel nomograms represent robust tools and are thus generalizable to European men at risk of harboring PCa. Consequently, in presence of a PCA3 score, these nomograms may be safely used to assist clinicians when prostate biopsy is contemplated. Copyright © 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
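The probability calculation described above, applying published logistic regression coefficients to a patient's covariates, has the standard logistic form. A minimal sketch; any coefficient values used with it are hypothetical placeholders, not the nomograms' published coefficients.

```python
import math

def biopsy_probability(intercept, coefficients, covariates):
    """Logistic-regression probability of a positive biopsy:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Real use requires the published nomogram coefficients and the
    PCA3 score coding specified by each nomogram."""
    z = intercept + sum(b * x for b, x in zip(coefficients, covariates))
    return 1.0 / (1.0 + math.exp(-z))
```

With intercept and coefficients at zero the predicted probability is 0.5; positive coefficients on elevated covariates push it upward, as in the nomograms.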
Real-time monitoring of high-gravity corn mash fermentation using in situ raman spectroscopy.
Gray, Steven R; Peretti, Steven W; Lamb, H Henry
2013-06-01
In situ Raman spectroscopy was employed for real-time monitoring of simultaneous saccharification and fermentation (SSF) of corn mash by an industrial strain of Saccharomyces cerevisiae. An accurate univariate calibration model for ethanol was developed based on the very strong 883 cm(-1) C-C stretching band. Multivariate partial least squares (PLS) calibration models for total starch, dextrins, maltotriose, maltose, glucose, and ethanol were developed using data from eight batch fermentations and validated using predictions for a separate batch. The starch, ethanol, and dextrins models showed significant prediction improvement when the calibration data were divided into separate high- and low-concentration sets. Collinearity between the ethanol and starch models was avoided by excluding regions containing strong ethanol peaks from the starch model and, conversely, excluding regions containing strong saccharide peaks from the ethanol model. The two-set calibration models for starch (R(2) = 0.998, percent error = 2.5%) and ethanol (R(2) = 0.999, percent error = 2.1%) provide more accurate predictions than any previously published spectroscopic models. Glucose, maltose, and maltotriose are modeled to accuracy comparable to previous work on less complex fermentation processes. Our results demonstrate that Raman spectroscopy is capable of real time in situ monitoring of a complex industrial biomass fermentation. To our knowledge, this is the first PLS-based chemometric modeling of corn mash fermentation under typical industrial conditions, and the first Raman-based monitoring of a fermentation process with glucose, oligosaccharides and polysaccharides present. Copyright © 2013 Wiley Periodicals, Inc.
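A univariate calibration like the one built on the 883 cm(-1) band is, at its core, an ordinary least-squares line from band intensity to ethanol concentration. A self-contained sketch with illustrative data; the study's actual model was trained on fermentation spectra, not these numbers:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept, e.g.
    ethanol concentration (y) vs. 883 cm^-1 Raman band intensity (x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```

The multivariate PLS models for starch and the saccharides would require full spectra and a chemometrics library; this sketch covers only the univariate ethanol case.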
Greene, Daniel J; Elshafei, Ahmed; Nyame, Yaw A; Kara, Onder; Malkoc, Ercan; Gao, Tianming; Jones, J Stephen
2016-08-01
The aim of this study was to externally validate a previously developed PCA3-based nomogram for the prediction of prostate cancer (PCa) and high-grade (intermediate and/or high-grade) prostate cancer (HGPCa) at the time of initial prostate biopsy. A retrospective review was performed on a cohort of 336 men from a large urban academic medical center. All men had serum PSA <20 ng/ml and underwent initial transrectal ultrasound-guided prostate biopsy with at least 10 cores sampled for suspicious exam and/or elevated PSA. Covariates were collected for the nomogram and included age, ethnicity, family history (FH) of PCa, PSA at diagnosis, PCA3, total prostate volume (TPV), and abnormal finding on digital rectal exam (DRE). These variables were used to test the accuracy (concordance index) and calibration of a previously published PCA3 nomogram. Biopsy confirmed PCa and HGPCa in 51.0% and 30.4% of validation patients, respectively. This differed from the original cohort in that it had significantly more PCa and HGPCa (51% vs. 44%, P = 0.019; and 30.4% vs. 19.1%, P < 0.001). Despite the differences in PCa detection, the concordance index was 75% and 77% for overall PCa and HGPCa, respectively. Calibration for overall PCa was good. This represents the first external validation of a PCA3-based prostate cancer predictive nomogram in a North American population. Prostate 76:1019-1023, 2016. © 2016 Wiley Periodicals, Inc.
Tangborn, Wendell V.; Rasmussen, Lowell A.
1976-01-01
On the basis of a linear relationship between winter (October-April) precipitation and annual runoff from a drainage basin (Rasmussen and Tangborn, 1976) a physically reasonable model for predicting summer (May-September) streamflow from drainages in the North Cascades region was developed. This hydrometeorological prediction method relates streamflow for a season beginning on the day of prediction to the storage (including snow, ice, soil moisture, and groundwater) on that day. The spring storage is inferred from an input-output relationship based on the principle of conservation of mass: spring storage equals winter precipitation on the basin less winter runoff from the basin and less winter evapotranspiration, which is presumed to be small. The method of prediction is based on data only from the years previous to the one for which the prediction is made, and the system is revised each year as data for the previous year become available. To improve the basin storage estimate made in late winter or early spring, a short-season runoff prediction is made. The errors resulting from this short-term prediction are used to revise the storage estimate and improve the later prediction. This considerably improves the accuracy of the later prediction, especially for periods early in the summer runoff season. The optimum length for the test period appears to be generally less than a month for east side basins and between 1 and 2 months for those on the west side of the Cascade Range. The time distribution of the total summer runoff can be predicted when this test season is used so that on May 1 monthly streamflow for the May-September season can be predicted. It was found that summer precipitation and the time of minimum storage are two error sources that were amenable to analysis. For streamflow predictions in seasons beginning in early spring the deviation of the subsequent summer precipitation from a long-period average will contribute up to 53% of the prediction error. 
This contribution decreases to nearly zero during the summer and then rises slightly for late summer predictions. The smaller than expected effect of summer precipitation is thought to be due to the compensating effects of increased evaporative losses and increased infiltration when precipitation is greater than normal during the summer months. The error caused by the beginning winter month (assumed to be October in this study) not coinciding with the time of minimum storage was examined; it appears that October may be the best average beginning winter month for most drainages, but a more detailed study is needed. The optimum beginning of the winter season appears to vary from August to October when individual years are examined. These results demonstrate that standard precipitation and runoff measurements in the North Cascades region are adequate for constructing a predictive hydrologic model. This model can be used to make streamflow predictions that compare favorably with current multiple regression methods based on mountain snow surveys. This method has the added advantage of predicting the space and time distributions of storage and summer runoff.
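The mass-balance step at the core of the model can be written directly from the text: spring storage equals winter precipitation minus winter runoff minus winter evapotranspiration (presumed small). A minimal sketch, with a hypothetical linear summer-runoff relation on top; the paper's actual regression form is not given in this summary.

```python
def spring_storage(winter_precip, winter_runoff, winter_et=0.0):
    """Conservation-of-mass storage estimate:
    storage = winter precipitation - winter runoff - winter ET
    (ET presumed small in the paper). Units must be consistent,
    e.g. mm of water over the basin."""
    return winter_precip - winter_runoff - winter_et

def predict_summer_runoff(storage, a, b):
    """Hypothetical linear relation, calibrated (as in the paper)
    only on years previous to the one being predicted."""
    return a + b * storage
```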
Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J
2015-09-01
There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations appropriate for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9 ± 1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD) and high-intensity interval exercise (HI), as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak based on %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8 ± 1.0 yrs; 6 female, 20.0 ± 1.3 yrs) and volleyball drills (12 female; 20.8 ± 1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax x 1.008 - 17.17) or MOD (%VO2peak = %HRmax x 1.2 - 32) equations. These 2 simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
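The two recommended equations can be applied directly; here they are as functions (names are ours), taking percent of maximal heart rate and returning percent of VO2peak:

```python
def pct_vo2peak_end(pct_hrmax):
    """END equation from the study: %VO2peak = %HRmax * 1.008 - 17.17."""
    return pct_hrmax * 1.008 - 17.17

def pct_vo2peak_mod(pct_hrmax):
    """MOD equation from the study: %VO2peak = %HRmax * 1.2 - 32."""
    return pct_hrmax * 1.2 - 32
```

At 100% of HRmax the END equation predicts about 83.6% of VO2peak and the MOD equation 88%; converting %VO2peak to energy expenditure additionally requires the athlete's measured VO2peak.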
HGIMDA: Heterogeneous graph inference for miRNA-disease association prediction.
Chen, Xing; Yan, Chenggang Clarence; Zhang, Xu; You, Zhu-Hong; Huang, Yu-An; Yan, Gui-Ying
2016-10-04
Recently, microRNAs (miRNAs) have drawn more and more attention because accumulating experimental studies have indicated that miRNAs could play critical roles in multiple biological processes as well as in the development and progression of human complex diseases. Using the huge number of known heterogeneous biological datasets to predict potential associations between miRNAs and diseases is an important topic in the fields of biology, medicine, and bioinformatics. In this study, considering the limitations of previous computational methods, we developed the computational model of Heterogeneous Graph Inference for MiRNA-Disease Association prediction (HGIMDA) to uncover potential miRNA-disease associations by integrating miRNA functional similarity, disease semantic similarity, Gaussian interaction profile kernel similarity, and experimentally verified miRNA-disease associations into a heterogeneous graph. HGIMDA obtained AUCs of 0.8781 and 0.8077 based on global and local leave-one-out cross validation, respectively. Furthermore, HGIMDA was applied to three important human cancers for performance evaluation. As a result, 90% (Colon Neoplasms), 88% (Esophageal Neoplasms) and 88% (Kidney Neoplasms) of the top 50 predicted miRNAs are confirmed by recent experimental reports. Furthermore, HGIMDA can be effectively applied to new diseases and new miRNAs without any known associations, which overcomes an important limitation of many previous computational models.
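Heterogeneous graph inference of this kind is commonly implemented as an iterative propagation over the similarity matrices. The update below is a generic sketch in that spirit, not HGIMDA's published recursion: the alpha weight, iteration count, and the assumption that the similarity matrices are suitably normalised are all ours.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def graph_inference(sim_mirna, sim_disease, known, alpha=0.4, iters=20):
    """Iterative propagation on a heterogeneous graph:
    A <- alpha * S_m @ A @ S_d + (1 - alpha) * A0,
    where S_m/S_d are miRNA and disease similarity matrices and A0 the
    known association matrix. A generic sketch, not the paper's exact
    update; S_m and S_d are assumed normalised so the iteration converges."""
    a = [row[:] for row in known]
    for _ in range(iters):
        prop = matmul(matmul(sim_mirna, a), sim_disease)
        a = [[alpha * prop[i][j] + (1 - alpha) * known[i][j]
              for j in range(len(known[0]))] for i in range(len(known))]
    return a
```

Because the update mixes propagated scores back toward the known associations, it can score candidate miRNA-disease pairs, and also new diseases or miRNAs that have similarity links even without known associations.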
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic
2018-07-01
Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high-temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than that of the original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, the example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated-spectrum FSK method formulations. Further, the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.
Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E; Kayser, Manfred
2017-02-27
Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing.
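The quantification step described above, computing areal proportions of pigmentation classes across the iris, reduces to class-frequency counting once each pixel has been labelled. A minimal sketch; the upstream pixel classifier and the class names are assumptions, not the paper's implementation:

```python
def areal_proportions(pixel_labels):
    """Fraction of iris pixels assigned to each pigmentation class
    (e.g. 'pheomelanin', 'eumelanin', 'none'). Labelling the pixels
    is assumed to be done upstream by an image classifier."""
    total = len(pixel_labels)
    counts = {}
    for lab in pixel_labels:
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: c / total for lab, c in counts.items()}
```

The resulting proportions are the quantitative phenotypes used in place of manually defined eye-colour categories for association and prediction.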
Leung, Louis
2014-01-01
This study used longitudinal panel survey data collected from 417 adolescents at 2 points in time 1 year apart. It examined relationships between Internet risks changes in Time 2 and social media gratifications-sought, Internet addiction symptoms, and social media use all measured at Time 1. By controlling for age, gender, education, and criterion variable scores in Internet addiction at Time 1, entertainment and instant messaging use at Time 1 significantly predicted increased Internet addiction measured at Time 2. The study also controlled for demographics and scores of criterion variables in Internet risks: targeted for harassment, privacy exposed, and pornographic or violent content consumed in Time 1. Gratifications-sought (including status-gaining, expressing opinions, and identity experimentation), Internet addiction symptoms (including withdrawal and negative life consequences), and social media use (in particular, blogs, and Facebook) significantly predicted Internet risk changes in Time 2. These findings suggest that, with their predictive power, these predictors at Time 1 could be used to identify those adolescents who are likely to develop Internet addiction symptoms and the likelihood of experiencing Internet risks based on their previous gratifications-sought, previous addiction symptoms, and their habits of social media use at Time 1. PMID:25750792
The Interrelationship between Promoter Strength, Gene Expression, and Growth Rate
Klesmith, Justin R.; Detwiler, Emily E.; Tomek, Kyle J.; Whitehead, Timothy A.
2014-01-01
In exponentially growing bacteria, expression of heterologous protein impedes cellular growth rates. A quantitative understanding of the relationship between expression and growth rate will advance our ability to forward-engineer bacteria, important for metabolic engineering and synthetic biology applications. A recent study described a scaling model based on optimal allocation of ribosomes for protein translation. This model quantitatively predicts a linear relationship between microbial growth rate and heterologous protein expression with no free parameters. With the aim of validating this model, we have rigorously quantified the fitness cost of gene expression by using a library of synthetic constitutive promoters to drive expression of two separate proteins (eGFP and amiE) in E. coli in different strains and growth media. In all cases, we demonstrate that the fitness cost is consistent with the previous findings. We expand upon the previous theory by introducing a simple promoter activity model to quantitatively predict how basal promoter strength relates to growth rate and protein expression. We then estimate the amount of protein expression needed to support high flux through a heterologous metabolic pathway and predict the sizable fitness cost associated with enzyme production. This work has broad implications across the applied biological sciences because it allows prediction of the interplay between promoter strength, protein expression, and the resulting cost to microbial growth rates. PMID:25286161
Le Roux, Xavier; Bouskill, Nicholas J.; Niboyet, Audrey; ...
2016-05-17
Soil microbial diversity is huge: a few grams of soil contain more bacterial taxa than there are bird species on Earth. This high diversity often makes predicting the responses of soil bacteria to environmental change intractable and restricts our capacity to predict the responses of soil functions to global change. Here, using a long-term field experiment in a California grassland, we studied the main and interactive effects of three global change factors (increased atmospheric CO2 concentration, precipitation and nitrogen addition, and all their factorial combinations, based on global change scenarios for central California) on the potential activity, abundance and dominant taxa of soil nitrite-oxidizing bacteria (NOB). Using a trait-based model, we then tested whether categorizing NOB into a few functional groups unified by physiological traits enables understanding and predicting how soil NOB respond to global environmental change. Contrasting responses to global change treatments were observed between three main NOB functional types. In particular, putatively mixotrophic Nitrobacter, rare under most treatments, became dominant under the 'High CO2 + Nitrogen + Precipitation' treatment. The mechanistic trait-based model, which simulated ecological niches of NOB types consistent with previous ecophysiological reports, helped predict the observed effects of global change on NOB and elucidate the underlying biotic and abiotic controls. Our results are a starting point for representing the overwhelming diversity of soil bacteria by a few functional types that can be incorporated into models of terrestrial ecosystems and biogeochemical processes.
NASA Astrophysics Data System (ADS)
Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung
2017-09-01
Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. 26 sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
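The first-hitting-time distribution that underpins this family of prognostic methods has a closed form in the linear-drift case. The sketch below, a simplified illustration rather than the paper's multi-hidden-state model, evaluates the inverse Gaussian density of the time at which a drifted Brownian motion X(t) = mu*t + sigma*W(t) first crosses a failure threshold D; all parameter values are hypothetical.

```python
import math

def fht_pdf(t, D, mu, sigma):
    """Inverse Gaussian density of the first hitting time of threshold D
    by a linearly drifted Brownian motion X(t) = mu*t + sigma*W(t).
    The nonlinear-drift models discussed above generalize this via
    time-space transformation; this is the closed-form linear case."""
    return (D / (sigma * math.sqrt(2.0 * math.pi * t ** 3))
            * math.exp(-(D - mu * t) ** 2 / (2.0 * sigma ** 2 * t)))

# Hypothetical degradation parameters: threshold D, drift mu, diffusion sigma.
D, mu, sigma = 1.0, 0.2, 0.1
mean_fht = D / mu   # mean of the inverse Gaussian first hitting time
```

In a remaining-useful-life setting, the remaining life at the current time is the first hitting time of the remaining degradation margin, so the same density applies with D replaced by the distance to the failure threshold.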
Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code
NASA Astrophysics Data System (ADS)
Shprits, Yuri; Kellerman, Adam
2016-07-01
We discuss how data assimilation can be used for the reconstruction of long-term evolution and benchmarking of physics-based codes, and how it can improve nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data-assimilative VERB allows us to blend together data from GOES, RBSP A, and RBSP B. (1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blend them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. (2) The model predictions strongly depend on the initial conditions set up for the model; the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions have no gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and the 3D VERB code, is presented and discussed.
NASA Astrophysics Data System (ADS)
Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Subbotina, I. E.; Shichkin, A. V.; Sergeeva, M. V.; Lvova, O. A.
2017-06-01
The work deals with the application of neural network residual kriging (NNRK) to the spatial prediction of an abnormally distributed soil pollutant (Cr). It is known that combining geostatistical interpolation approaches (kriging) with neural networks leads to significantly better prediction accuracy and productivity. Generalized regression neural networks and multilayer perceptrons are classes of neural networks widely used for continuous function mapping. Each network has its own pros and cons; however, both demonstrated fast training and good mapping capabilities. In this work, we examined and compared two combined techniques: generalized regression neural network residual kriging (GRNNRK) and multilayer perceptron residual kriging (MLPRK). The case study is based on real data sets on surface contamination by chromium at a particular location in the subarctic city of Novy Urengoy, Russia, obtained during previously conducted screening. The proposed models have been built, implemented, and validated using the ArcGIS and MATLAB environments. The network structures were chosen during a computer simulation based on minimization of the RMSE. MLPRK showed the best predictive accuracy compared to the geostatistical approach (kriging) and even to GRNNRK.
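The "trend plus interpolated residuals" structure of residual kriging can be sketched compactly. In the toy example below, a fitted plane stands in for the neural network trend model, and simple inverse-distance weighting stands in for the kriging of residuals; the data are synthetic and the whole example is illustrative, not the paper's GRNNRK/MLPRK implementation.

```python
import numpy as np

def idw(coords, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation: a simple stand-in here
    for the kriging step of residual kriging (illustrative only)."""
    d = np.linalg.norm(coords - query, axis=1)
    if d.min() < eps:
        return float(values[int(d.argmin())])
    w = 1.0 / d ** power
    return float(w @ values / w.sum())

def residual_interpolate(coords, z, query):
    """Two-stage 'trend + residual' scheme as in NNRK: a smooth global
    model captures the trend (a plane here, where NNRK uses a neural
    network), and the residuals are interpolated and added back."""
    X = np.column_stack([np.ones(len(coords)), coords])   # planar trend
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    trend = float(np.concatenate([[1.0], query]) @ beta)
    resid = z - X @ beta
    return trend, trend + idw(coords, resid, query)

# Synthetic contamination surface: plane plus a local "hot spot" (toy data).
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
z = (2.0 + 0.5 * coords[:, 0] - 0.3 * coords[:, 1]
     + 4.0 * np.exp(-((coords - 5.0) ** 2).sum(axis=1)))
trend_only, estimate = residual_interpolate(coords, z, np.array([5.0, 5.0]))
```

At the hot-spot centre the true value is 7.0; the residual correction pulls the estimate toward it from the trend-only prediction, which is the point of the two-stage scheme.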
Predicting neutron damage using TEM with in situ ion irradiation and computer modeling
NASA Astrophysics Data System (ADS)
Kirk, Marquis A.; Li, Meimei; Xu, Donghua; Wirth, Brian D.
2018-01-01
We have constructed a computer model of irradiation defect production closely coordinated with TEM and in situ ion irradiation of Molybdenum at 80 °C over a range of dose, dose rate and foil thickness. We have reexamined our previous ion irradiation data to assign appropriate error and uncertainty based on more recent work. The spatially dependent cascade cluster dynamics model is updated with recent Molecular Dynamics results for cascades in Mo. After a careful assignment of both ion and neutron irradiation dose values in dpa, TEM data are compared for both ion and neutron irradiated Mo from the same source material. Using the computer model of defect formation and evolution based on the in situ ion irradiation of thin foils, the defect microstructure, consisting of densities and sizes of dislocation loops, is predicted for neutron irradiation of bulk material at 80 °C and compared with experiment. Reasonable agreement between model prediction and experimental data demonstrates a promising direction in understanding and predicting neutron damage using a closely coordinated program of in situ ion irradiation experiment and computer simulation.
Prediction of clinical behaviour and treatment for cancers.
Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola
2003-01-01
Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.
Base drag prediction on missile configurations
NASA Technical Reports Server (NTRS)
Moore, F. G.; Hymer, T.; Wilcox, F.
1993-01-01
New wind tunnel data have been taken, and a new empirical model has been developed for predicting base drag on missile configurations. The new wind tunnel data were taken at NASA-Langley in the Unitary Wind Tunnel at Mach numbers from 2.0 to 4.5, angles of attack to 16 deg, fin control deflections up to 20 deg, fin thickness/chord of 0.05 to 0.15, and fin locations from 'flush with the base' to two chord-lengths upstream of the base. The empirical model uses these data along with previous wind tunnel data, estimating base drag as a function of all these variables as well as boat-tail and power-on/power-off effects. The new model yields improved accuracy, compared to wind tunnel data. The new model also is more robust due to inclusion of additional variables. On the other hand, additional wind tunnel data are needed to validate or modify the current empirical model in areas where data are not available.
Phan, Andy; Mailey, Katherine; Saeki, Jessica; Gu, Xiaobo; Schroeder, Susan J
2017-05-01
Accurate thermodynamic parameters improve RNA structure predictions and thus accelerate understanding of RNA function and the identification of RNA drug binding sites. Many viral RNA structures, such as internal ribosome entry sites, have internal loops and bulges that are potential drug target sites. Current models used to predict internal loops are biased toward small, symmetric purine loops, and thus poorly predict asymmetric, pyrimidine-rich loops with >6 nucleotides (nt) that occur frequently in viral RNA. This article presents new thermodynamic data for 40 pyrimidine loops, many of which can form UU or protonated CC base pairs. Uracil and protonated cytosine base pairs stabilize asymmetric internal loops. Accurate prediction rules are presented that account for all thermodynamic measurements of RNA asymmetric internal loops. New loop initiation terms for loops with >6 nt are presented that do not follow previous assumptions that increasing asymmetry destabilizes loops. Since the last 2004 update, 126 new loops with asymmetry or sizes greater than 2 × 2 have been measured. These new measurements significantly deepen and diversify the thermodynamic database for RNA. These results will help better predict internal loops that are larger, pyrimidine-rich, and occur within viral structures such as internal ribosome entry sites. © 2017 Phan et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Mahmood, Khalid; Jung, Chol-Hee; Philip, Gayle; Georgeson, Peter; Chung, Jessica; Pope, Bernard J; Park, Daniel J
2017-05-16
Genetic variant effect prediction algorithms are used extensively in clinical genomics and research to determine the likely consequences of amino acid substitutions on protein function. It is vital that we better understand their accuracies and limitations because published performance metrics are confounded by serious problems of circularity and error propagation. Here, we derive three independent, functionally determined human mutation datasets, UniFun, BRCA1-DMS and TP53-TA, and employ them, alongside previously described datasets, to assess the pre-eminent variant effect prediction tools. Apparent accuracies of variant effect prediction tools were influenced significantly by the benchmarking dataset. Benchmarking with the assay-determined datasets UniFun and BRCA1-DMS yielded areas under the receiver operating characteristic curves in the modest ranges of 0.52 to 0.63 and 0.54 to 0.75, respectively, considerably lower than observed for other, potentially more conflicted datasets. These results raise concerns about how such algorithms should be employed, particularly in a clinical setting. Contemporary variant effect prediction tools are unlikely to be as accurate at the general prediction of functional impacts on proteins as previously reported. Use of functional assay-based datasets that avoid prior dependencies promises to be valuable for the ongoing development and accurate benchmarking of such tools.
Remembered or Forgotten?—An EEG-Based Computational Prediction Approach
Sun, Xuyun; Qian, Cunle; Chen, Zhongqin; Wu, Zhaohui; Luo, Benyan; Pan, Gang
2016-01-01
Prediction of memory performance (remembered or forgotten) has various potential applications not only for knowledge learning but also for disease diagnosis. Recently, subsequent memory effects (SMEs)—the statistical differences in electroencephalography (EEG) signals before or during learning between subsequently remembered and forgotten events—have been found. This finding indicates that EEG signals convey the information relevant to memory performance. In this paper, based on SMEs we propose a computational approach to predict memory performance of an event from EEG signals. We devise a convolutional neural network for EEG, called ConvEEGNN, to predict subsequently remembered and forgotten events from EEG recorded during memory process. With the ConvEEGNN, prediction of memory performance can be achieved by integrating two main stages: feature extraction and classification. To verify the proposed approach, we employ an auditory memory task to collect EEG signals from scalp electrodes. For ConvEEGNN, the average prediction accuracy was 72.07% by using EEG data from pre-stimulus and during-stimulus periods, outperforming other approaches. It was observed that signals from pre-stimulus period and those from during-stimulus period had comparable contributions to memory performance. Furthermore, the connection weights of ConvEEGNN network can reveal prominent channels, which are consistent with the distribution of SME studied previously. PMID:27973531
NASA Astrophysics Data System (ADS)
Amora Jofipasi, Chesilia; Miftahuddin; Hizir
2018-05-01
Weather is a phenomenon that occurs in a given area and indicates a change in natural activity. Weather can be predicted using data from previous periods. The purpose of this study is to obtain the best ETS model for predicting the weather in Aceh Besar. The ETS model is a univariate time series forecasting method whose use focuses on trend and seasonal components. The data used are air temperature, dew point, sea level pressure, station pressure, visibility, wind speed, and sea surface temperature from January 2006 to December 2016. Based on the smallest AIC, AICc, and BIC values, ETS(M,N,A) was selected to predict air temperature and sea surface temperature; ETS(A,N,A) to predict dew point, sea level pressure, and station pressure; ETS(A,A,N) to predict visibility; and ETS(A,N,N) to predict wind speed.
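The ETS(A,N,A) form mentioned above (additive errors, no trend, additive seasonality) reduces to a short smoothing recursion over a level and a set of seasonal indices. The sketch below uses illustrative smoothing parameters and synthetic data, not the fitted Aceh Besar models; a full treatment would use a library implementation such as statsmodels' ETSModel.

```python
import numpy as np

def ets_ana(y, alpha=0.3, gamma=0.1, m=12):
    """Minimal ETS(A,N,A) recursion: additive errors, no trend,
    additive seasonality with period m (illustrative sketch only)."""
    level = float(np.mean(y[:m]))        # initial level
    season = list(y[:m] - level)         # initial seasonal indices
    for t in range(m, len(y)):
        s = season[t % m]
        err = y[t] - (level + s)         # one-step additive error
        level += alpha * err             # update level
        season[t % m] = s + gamma * err  # update seasonal index
    return level + season[len(y) % m]    # one-step-ahead forecast

# Toy monthly series with a purely additive seasonal cycle (synthetic data).
pattern = np.array([5, -5, 3, -3, 2, -2, 1, -1, 4, -4, 0, 0], dtype=float)
y = 10.0 + np.tile(pattern, 4)           # four "years" of monthly values
forecast = ets_ana(y)                    # next value: level + next seasonal index
```

The multiplicative-error ETS(M,N,A) variant differs only in defining the error relative to the fitted value rather than as a difference.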
Training Data Requirement for a Neural Network to Predict Aerodynamic Coefficients
NASA Technical Reports Server (NTRS)
Korsmeyer, David (Technical Monitor); Rajkumar, T.; Bardina, Jorge
2003-01-01
Basic aerodynamic coefficients are modeled as functions of angle of attack, speed brake deflection angle, Mach number, and side slip angle. Most of the aerodynamic parameters can be well fitted using polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients. We encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which types of transfer functions should be used between the input-hidden layer and hidden-output layer. In this paper, a comparative study of the efficiency of neural network prediction based on different transfer functions and training dataset sizes is presented. The results of the neural network prediction reflect the sensitivity of the architecture, transfer functions, and training dataset size.
Sorich, Michael J; McKinnon, Ross A; Miners, John O; Winkler, David A; Smith, Paul A
2004-10-07
This study aimed to evaluate in silico models based on quantum chemical (QC) descriptors derived using the electronegativity equalization method (EEM) and to assess the use of QC properties to predict chemical metabolism by human UDP-glucuronosyltransferase (UGT) isoforms. Various EEM-derived QC molecular descriptors were calculated for known UGT substrates and nonsubstrates. Classification models were developed using support vector machine and partial least squares discriminant analysis. In general, the most predictive models were generated with the support vector machine. Combining QC and 2D descriptors (from previous work) using a consensus approach resulted in a statistically significant improvement in predictivity (to 84%) over both the QC and 2D models and the other methods of combining the descriptors. EEM-derived QC descriptors were shown to be both highly predictive and computationally efficient. It is likely that EEM-derived QC properties will be generally useful for predicting ADMET and physicochemical properties during drug discovery.
Anwar, Mohammad Y; Lewnard, Joseph A; Parikh, Sunil; Pitzer, Virginia E
2016-11-22
Malaria remains endemic in Afghanistan. National control and prevention strategies would be greatly enhanced through a better ability to forecast future trends in disease incidence. It is, therefore, of interest to develop a predictive tool for malaria patterns based on the current passive and affordable surveillance system in this resource-limited region. This study employs data from Ministry of Public Health monthly reports from January 2005 to September 2015. Malaria incidence in Afghanistan was forecasted using autoregressive integrated moving average (ARIMA) models in order to build a predictive tool for malaria surveillance. Environmental and climate data were incorporated to assess whether they improve predictive power of models. Two models were identified, each appropriate for different time horizons. For near-term forecasts, malaria incidence can be predicted based on the number of cases in the four previous months and 12 months prior (Model 1); for longer-term prediction, malaria incidence can be predicted using the rates 1 and 12 months prior (Model 2). Next, climate and environmental variables were incorporated to assess whether the predictive power of proposed models could be improved. Enhanced vegetation index was found to have increased the predictive accuracy of longer-term forecasts. Results indicate ARIMA models can be applied to forecast malaria patterns in Afghanistan, complementing current surveillance systems. The models provide a means to better understand malaria dynamics in a resource-limited context with minimal data input, yielding forecasts that can be used for public health planning at the national level.
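The lag structure of "Model 1" above, current incidence regressed on the four previous months and the value 12 months prior, can be sketched as a lagged least-squares fit. The code below uses synthetic seasonal data and plain least squares, so it illustrates only the lag structure, not the paper's fitted ARIMA model.

```python
import numpy as np

def fit_lagged_model(y, lags):
    """Least-squares fit of y[t] on y[t-l] for l in lags, a simplified
    stand-in for an ARIMA model with autoregressive terms at those lags."""
    lags = sorted(lags)
    p = max(lags)
    X = np.column_stack([y[p - l:len(y) - l] for l in lags])
    X = np.column_stack([np.ones(len(X)), X])        # intercept column
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def predict_next(y, coef, lags):
    """One-step-ahead forecast from the fitted lag coefficients."""
    x = np.concatenate([[1.0], [y[-l] for l in sorted(lags)]])
    return float(x @ coef)

# Synthetic monthly incidence with an annual cycle (illustrative data only).
rng = np.random.default_rng(0)
t = np.arange(240)
y = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

lags = [1, 2, 3, 4, 12]        # "Model 1": four recent months + 12 months prior
coef = fit_lagged_model(y, lags)
forecast = predict_next(y, coef, lags)
```

The lag-12 regressor is what lets the model track the seasonal cycle; dropping it leaves only the short-memory "Model 2"-style structure.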
Bulashevska, Alla; Eils, Roland
2006-06-14
The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information alone, further research is needed to improve prediction accuracy. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six varied datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and, as empirical results indicate, competitive with previously reported approaches in terms of prediction accuracy.
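The building block of such a classifier, a per-class first-order Markov chain over the sequence scored by log-likelihood, can be sketched as follows. This is a toy two-letter illustration of the idea, not the HensBC hierarchical ensemble; all sequences are synthetic.

```python
import math
from collections import defaultdict

def train_markov(seqs, alphabet):
    """First-order Markov chain over symbols with add-one smoothing,
    the per-class generative model inside a Bayesian sequence classifier."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    probs = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)   # smoothing mass
        probs[a] = {b: (counts[a][b] + 1) / total for b in alphabet}
    return probs

def log_likelihood(seq, probs):
    """Log-probability of the transition sequence under one class model."""
    return sum(math.log(probs[a][b]) for a, b in zip(seq, seq[1:]))

# Toy two-class example with a 2-letter alphabet (synthetic sequences).
alphabet = "AB"
class0 = ["AAAABAAAABAAAA", "AAABAAAAABAAA"]   # A-rich runs
class1 = ["ABABABABABABAB", "BABABABABABAB"]   # alternating pattern
m0, m1 = train_markov(class0, alphabet), train_markov(class1, alphabet)
query = "AAAABAAAAA"
pred = 0 if log_likelihood(query, m0) > log_likelihood(query, m1) else 1
```

For proteins the alphabet would be the 20 amino acids, with one chain trained per subcellular location and equal or estimated class priors added to the scores.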
Edge-region grouping in figure-ground organization and depth perception.
Palmer, Stephen E; Brooks, Joseph L
2008-12-01
Edge-region grouping (ERG) is proposed as a unifying and previously unrecognized class of relational information that influences figure-ground organization and perceived depth across an edge. ERG occurs when the edge between two regions is differentially grouped with one region based on classic principles of similarity grouping. The ERG hypothesis predicts that the grouped side will tend to be perceived as the closer, figural region. Six experiments are reported that test the predictions of the ERG hypothesis for 6 similarity-based factors: common fate, blur similarity, color similarity, orientation similarity, proximity, and flicker synchrony. All 6 factors produce the predicted effects, although to different degrees. In a 7th experiment, the strengths of these figural/depth effects were found to correlate highly with the strength of explicit grouping ratings of the same visual displays. The relations of ERG to prior results in the literature are discussed, and possible reasons for ERG-based figural/depth effects are considered. We argue that grouping processes mediate at least some of the effects we report here, although ecological explanations are also likely to be relevant in the majority of cases.
Perceived Masculinity Predicts U.S. Supreme Court Outcomes.
Chen, Daniel; Halberstam, Yosh; Yu, Alan C L
2016-01-01
Previous studies suggest a significant role of language in the court room, yet none has identified a definitive correlation between vocal characteristics and court outcomes. This paper demonstrates that voice-based snap judgments based solely on the introductory sentence of lawyers arguing in front of the Supreme Court of the United States predict outcomes in the Court. In this study, participants rated the opening statement of male advocates arguing before the Supreme Court between 1998 and 2012 in terms of masculinity, attractiveness, confidence, intelligence, trustworthiness, and aggressiveness. We found significant correlation between vocal characteristics and court outcomes and the correlation is specific to perceived masculinity even when judgment of masculinity is based only on less than three seconds of exposure to a lawyer's speech sample. Specifically, male advocates are more likely to win when they are perceived as less masculine. No other personality dimension predicts court outcomes. While this study does not aim to establish any causal connections, our findings suggest that vocal characteristics may be relevant in even as solemn a setting as the Supreme Court of the United States.
Unsupervised User Similarity Mining in GSM Sensor Networks
Shad, Shafqat Ali; Chen, Enhong
2013-01-01
Mobility data have attracted researchers in recent years because of their rich context and spatiotemporal nature; this information can be used for potential applications such as early warning systems, route prediction, traffic management, advertisement, social networking, and community finding. All of these applications rest on mobility profile building and user trend analysis, where mobility profiles are built through significant-place extraction, prediction of users' actual movements, and context awareness. However, significant-place extraction and movement prediction for mobility profile building are nontrivial tasks. In this paper, we present a user similarity mining methodology that builds user mobility profiles from the semantic tagging information provided by users and basic GSM network architecture properties, using an unsupervised clustering approach. Although the mobility information is in low-level raw form, our proposed methodology converts it into high-level, meaningful information by using cell-ID location information, rather than previously used location-capturing methods such as GPS, infrared, and Wi-Fi, for profile mining and user similarity mining. PMID:23576905
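As an illustrative sketch of the similarity-mining idea over cell-ID traces, the snippet below represents each user by a normalized visit-frequency profile over GSM cell IDs and compares users with cosine similarity. The cell IDs, user names, and function names are hypothetical stand-ins, not the paper's actual pipeline.

```python
from collections import Counter
import math

def cell_profile(visits):
    """Normalized visit-frequency profile over cell IDs (a crude stand-in
    for a mobility profile built from cell-ID attachment logs)."""
    counts = Counter(visits)
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse cell-visit profiles."""
    dot = sum(p[c] * q.get(c, 0.0) for c in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Hypothetical users described by the sequence of cells they attached to.
alice = cell_profile(["c1", "c1", "c2", "c3", "c1"])
bob   = cell_profile(["c1", "c2", "c1", "c1", "c4"])
carol = cell_profile(["c9", "c8", "c9"])

print(cosine_similarity(alice, bob))    # high: overlapping mobility
print(cosine_similarity(alice, carol))  # 0.0: disjoint cell sets
```

Profiles like these could then feed any off-the-shelf clustering step (e.g. agglomerative clustering on the pairwise similarity matrix) to group similar users without supervision.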
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seaman, T.J.; Doleman, W.H.
1988-09-30
Three locations on White Sands Missile Range, New Mexico, are under consideration as alternatives for the proposed Ground-Based Free-Electron Laser Technology Integration Experiment (GBFEL-TIE). The study, conducted jointly by Prewitt and Associates, Inc., and the Office of Contract Archeology, was designed to provide input into the GBFEL-TIE Draft Environmental Impact Statement concerning the potential impact of the proposed project on cultural resources in each of the alternatives. The input consists of a series of predictions based on data gathered from two sources: (1) a cultural resource sample survey (15%) of two alternatives conducted as part of this study, and (2) a previous survey of the third alternative. A predictive model was developed and applied using these data to estimate the potential impact of the GBFEL-TIE facility on the cultural resources within each alternative. The predictions indicate that the NASA alternative is, by far, the least favorable location for the facility, followed by the Orogrande and Stallion alternatives.
Edge-Region Grouping in Figure-Ground Organization and Depth Perception
Palmer, Stephen E.; Brooks, Joseph L.
2008-01-01
Edge-region grouping (ERG) is proposed as a unifying and previously unrecognized class of relational information that influences figure-ground organization and perceived depth across an edge. ERG occurs when the edge between two regions is differentially grouped with one region based on classic principles of similarity grouping. The ERG hypothesis predicts that the grouped side will tend to be perceived as the closer, figural region. Six experiments are reported that test the predictions of the ERG hypothesis for six similarity-based factors: common fate, blur similarity, color similarity, orientation similarity, proximity, and flicker synchrony. All six factors produce the predicted effects, although to different degrees. In the seventh experiment, the strengths of these figural/depth effects were found to correlate highly with the strength of explicit grouping ratings of the same visual displays. The relations of ERG to prior results in the literature are discussed, and possible reasons for ERG-based figural/depth effects are considered. We argue that grouping processes mediate at least some of the effects we report here, although ecological explanations are also likely to be relevant in the majority of cases. PMID:19045980
NASA Astrophysics Data System (ADS)
Drakopoulos, John; Stavrakakis, George N.
A VAN prediction was announced on January 6, 1991, through the French newspaper “Le Monde” and on January 8-10, 1991, through Greek newspapers and TV stations. We evaluate this prediction on the basis of an undated letter sent by Prof. Varotsos to the Greek Minister of Public Works, and by considering previous VAN publications as well as recent seismological data for the candidate regions. We conclude that what was observed at the ASS station (northern Greece) on December 31, 1990, was not SES activity but another disturbance or noise.
A framework for qualitative reasoning about solid objects
NASA Technical Reports Server (NTRS)
Davis, E.
1987-01-01
Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.
Nguimdo, Romain Modeste; Lacot, Eric; Jacquin, Olivier; Hugon, Olivier; Van der Sande, Guy; Guillet de Chatellus, Hugues
2017-02-01
Reservoir computing (RC) systems are computational tools for information processing that can be fully implemented in optics. Here, we experimentally and numerically show that an optically pumped laser subject to delayed optical feedback can yield results similar to those obtained for electrically pumped lasers. Unlike in previous implementations, the input data are injected at time intervals much larger than the feedback delay. These data are directly coupled to the feedback light beam. Our results illustrate possible new avenues for RC implementations for prediction tasks.
Compaction of North Sea chalk by pore failure and pressure solution in a producing reservoir
NASA Astrophysics Data System (ADS)
Keszthelyi, Daniel; Dysthe, Dag; Jamtveit, Bjorn
2016-02-01
The Ekofisk field, in the Norwegian North Sea, is an example of a compacting chalk reservoir with considerable seafloor subsidence due to petroleum production. Previously, a number of models were created to predict the compaction using different phenomenological approaches. Here we present a different approach: a new creep model based on microscopic mechanisms, with no fitting parameters, to predict strain rate at core scale and at reservoir scale. The model reproduces creep experiments and the magnitude of the observed subsidence, making it the first microstructural model that can explain the Ekofisk compaction.
Comparison of correlated correlations.
Cohen, A
1989-12-01
We consider a problem where k highly correlated variables are available, each a candidate for predicting a dependent variable. Only one of the k variables can be chosen as a predictor, and the question is whether there are significant differences in the quality of the predictors. We review several tests derived previously and propose a method based on the bootstrap. The motivating medical problem was to predict 24-hour proteinuria by the protein-creatinine ratio measured at either 08:00, 12:00 or 16:00. The tests which we discuss are illustrated by this example and compared using a small Monte Carlo study.
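The bootstrap idea for comparing correlated predictors can be sketched generically: resample whole cases (which preserves the correlation between the candidate predictors) and form a percentile confidence interval for the difference of the two correlations with the outcome. This is a minimal sketch of the general approach, not the paper's exact test; all data and function names are hypothetical.

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs)
           * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

def bootstrap_corr_diff(y, x1, x2, n_boot=2000, seed=1):
    """95% percentile bootstrap CI for r(y, x1) - r(y, x2).
    Whole cases are resampled together, so the dependence between
    the two candidate predictors is preserved in each replicate."""
    rng = random.Random(seed)
    n = len(y)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [y[i] for i in idx]
        diffs.append(pearson(yb, [x1[i] for i in idx])
                     - pearson(yb, [x2[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Simulated example: x1 is a much less noisy proxy for y than x2.
rng = random.Random(0)
y  = [rng.gauss(0, 1) for _ in range(100)]
x1 = [v + rng.gauss(0, 0.3) for v in y]   # strong predictor
x2 = [v + rng.gauss(0, 2.0) for v in y]   # weak predictor
lo, hi = bootstrap_corr_diff(y, x1, x2)
print(lo > 0)  # CI excludes zero: x1 is the significantly better predictor
```

If the interval covers zero, the data do not distinguish the predictors' quality, and either candidate could reasonably be chosen on practical grounds.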
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without the need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by (1) basing the model on moles rather than mass, (2) simplifying the thermal conductivity model, and (3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.