Sample records for function prediction tool

  1. Variant effect prediction tools assessed using independent, functional assay-based datasets: implications for discovery and diagnostics.

    PubMed

    Mahmood, Khalid; Jung, Chol-Hee; Philip, Gayle; Georgeson, Peter; Chung, Jessica; Pope, Bernard J; Park, Daniel J

    2017-05-16

    Genetic variant effect prediction algorithms are used extensively in clinical genomics and research to determine the likely consequences of amino acid substitutions on protein function. It is vital that we better understand their accuracies and limitations because published performance metrics are confounded by serious problems of circularity and error propagation. Here, we derive three independent, functionally determined human mutation datasets, UniFun, BRCA1-DMS and TP53-TA, and employ them, alongside previously described datasets, to assess the pre-eminent variant effect prediction tools. Apparent accuracies of variant effect prediction tools were influenced significantly by the benchmarking dataset. Benchmarking with the assay-determined datasets UniFun and BRCA1-DMS yielded areas under the receiver operating characteristic curves in the modest ranges of 0.52 to 0.63 and 0.54 to 0.75, respectively, considerably lower than observed for other, potentially more conflicted datasets. These results raise concerns about how such algorithms should be employed, particularly in a clinical setting. Contemporary variant effect prediction tools are unlikely to be as accurate at the general prediction of functional impacts on proteins as reported prior. Use of functional assay-based datasets that avoid prior dependencies promises to be valuable for the ongoing development and accurate benchmarking of such tools.
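
    The benchmarking step described above amounts to scoring each tool's predictions against assay-derived labels and comparing areas under the ROC curve. A minimal sketch of that comparison is shown below; the labels and scores are made-up placeholders, not data from the study.

```python
# Sketch: benchmark a variant effect predictor against assay-derived labels.
# The labels and scores below are illustrative placeholders, not the paper's data.
from sklearn.metrics import roc_auc_score

# 1 = functionally damaging in the assay, 0 = tolerated
assay_labels = [1, 0, 1, 1, 0, 0, 1, 0]
# Higher score = predicted more damaging by the tool under evaluation
tool_scores = [0.91, 0.35, 0.62, 0.80, 0.41, 0.55, 0.47, 0.20]

auc = roc_auc_score(assay_labels, tool_scores)
print(f"Area under the ROC curve: {auc:.2f}")
```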

  2. An integrative approach to ortholog prediction for disease-focused and other functional studies.

    PubMed

    Hu, Yanhui; Flockhart, Ian; Vinayagam, Arunachalam; Bergwitz, Clemens; Berger, Bonnie; Perrimon, Norbert; Mohr, Stephanie E

    2011-08-31

    Mapping of orthologous genes among species serves an important role in functional genomics by allowing researchers to develop hypotheses about gene function in one species based on what is known about the functions of orthologs in other species. Several tools for predicting orthologous gene relationships are available. However, these tools can give different results and identification of predicted orthologs is not always straightforward. We report a simple but effective tool, the Drosophila RNAi Screening Center Integrative Ortholog Prediction Tool (DIOPT; http://www.flyrnai.org/diopt), for rapid identification of orthologs. DIOPT integrates existing approaches, facilitating rapid identification of orthologs among human, mouse, zebrafish, C. elegans, Drosophila, and S. cerevisiae. As compared to individual tools, DIOPT shows increased sensitivity with only a modest decrease in specificity. Moreover, the flexibility built into the DIOPT graphical user interface allows researchers with different goals to appropriately 'cast a wide net' or limit results to highest confidence predictions. DIOPT also displays protein and domain alignments, including percent amino acid identity, for predicted ortholog pairs. This helps users identify the most appropriate matches among multiple possible orthologs. To facilitate using model organisms for functional analysis of human disease-associated genes, we used DIOPT to predict high-confidence orthologs of disease genes in Online Mendelian Inheritance in Man (OMIM) and genes in genome-wide association study (GWAS) data sets. The results are accessible through the DIOPT diseases and traits query tool (DIOPT-DIST; http://www.flyrnai.org/diopt-dist). DIOPT and DIOPT-DIST are useful resources for researchers working with model organisms, especially those who are interested in exploiting model organisms such as Drosophila to study the functions of human disease genes.

  3. An Evaluation of the Predictive Validity of Confidence Ratings in Identifying Functional Behavioral Assessment Hypothesis Statements

    ERIC Educational Resources Information Center

    Borgmeier, Chris; Horner, Robert H.

    2006-01-01

    Faced with limited resources, schools require tools that increase the accuracy and efficiency of functional behavioral assessment. Yarbrough and Carr (2000) provided evidence that informant confidence ratings of the likelihood of problem behavior in specific situations offered a promising tool for predicting the accuracy of function-based…

  4. IMHOTEP—a composite score integrating popular tools for predicting the functional consequences of non-synonymous sequence variants

    PubMed Central

    Knecht, Carolin; Mort, Matthew; Junge, Olaf; Cooper, David N.; Krawczak, Michael

    2017-01-01

    Abstract The in silico prediction of the functional consequences of mutations is an important goal of human pathogenetics. However, bioinformatic tools that classify mutations according to their functionality employ different algorithms so that predictions may vary markedly between tools. We therefore integrated nine popular prediction tools (PolyPhen-2, SNPs&GO, MutPred, SIFT, MutationTaster2, Mutation Assessor and FATHMM as well as conservation-based Grantham Score and PhyloP) into a single predictor. The optimal combination of these tools was selected by means of a wide range of statistical modeling techniques, drawing upon 10 029 disease-causing single nucleotide variants (SNVs) from Human Gene Mutation Database and 10 002 putatively ‘benign’ non-synonymous SNVs from UCSC. Predictive performance was found to be markedly improved by model-based integration, whilst maximum predictive capability was obtained with either random forest, decision tree or logistic regression analysis. A combination of PolyPhen-2, SNPs&GO, MutPred, MutationTaster2 and FATHMM was found to perform as well as all tools combined. Comparison of our approach with other integrative approaches such as Condel, CoVEC, CAROL, CADD, MetaSVM and MetaLR using an independent validation dataset, revealed the superiority of our newly proposed integrative approach. An online implementation of this approach, IMHOTEP (‘Integrating Molecular Heuristics and Other Tools for Effect Prediction’), is provided at http://www.uni-kiel.de/medinfo/cgi-bin/predictor/. PMID:28180317
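
    The integration idea above (feeding the scores of several individual tools into a single statistical model) can be illustrated with a small sketch. This is not the IMHOTEP implementation; the feature matrix and labels are synthetic placeholders standing in for per-variant tool scores and known pathogenicity.

```python
# Sketch: model-based integration of several per-variant tool scores into one
# composite predictor. NOT the IMHOTEP code; data and features are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Columns stand in for scores from individual tools (e.g. PolyPhen-2, SIFT, ...)
X = rng.random((n, 5))
# Synthetic labels: 1 = disease-causing, 0 = benign, loosely tied to the scores
y = (X @ np.array([2.0, 1.5, 1.0, 0.5, 0.5]) + rng.normal(0, 0.5, n) > 2.75).astype(int)

model = LogisticRegression(max_iter=1000)
print("Cross-validated AUC:",
      cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```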

  5. A traveling salesman approach for predicting protein functions.

    PubMed

    Johnson, Olin; Liu, Jing

    2006-10-12

    Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems.
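
    As a rough illustration of the idea of ordering proteins along a travelling-salesman tour of an interaction-derived distance matrix and reading clusters off the tour, the sketch below uses a greedy nearest-neighbour heuristic on placeholder data; it does not reproduce the authors' optimization procedure.

```python
# Sketch: cluster proteins by ordering them along a heuristic travelling-salesman
# tour of an interaction-derived distance matrix. Illustrative only; the paper's
# exact global optimization procedure is not reproduced here.
import numpy as np

def nearest_neighbor_tour(dist):
    """Greedy nearest-neighbour approximation to a TSP tour."""
    n = len(dist)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

rng = np.random.default_rng(1)
# Placeholder: distance = 1 - interaction confidence between protein pairs
dist = rng.random((8, 8))
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, 0.0)

tour = nearest_neighbor_tour(dist)
# Split the tour into contiguous segments (clusters) of 4 proteins each
clusters = [tour[i:i + 4] for i in range(0, len(tour), 4)]
print("tour:", tour, "clusters:", clusters)
```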

  6. A traveling salesman approach for predicting protein functions

    PubMed Central

    Johnson, Olin; Liu, Jing

    2006-01-01

    Background Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Results Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Conclusion Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems. PMID:17147783

  7. Tool use in left brain damage and Alzheimer's disease: What about function and manipulation knowledge?

    PubMed

    Jarry, Christophe; Osiurak, François; Besnard, Jérémy; Baumard, Josselin; Lesourd, Mathieu; Croisile, Bernard; Etcharry-Bouyx, Frédérique; Chauviré, Valérie; Le Gall, Didier

    2016-03-01

    Tool use disorders are usually associated with difficulties in retrieving function and manipulation knowledge. Here, we investigate tool use (Real Tool Use, RTU), function (Functional Association, FA) and manipulation knowledge (Gesture Recognition, GR) in 17 left-brain-damaged (LBD) patients and 14 patients with Alzheimer's disease (AD). The LBD group exhibited the predicted deficit on RTU but not on FA and GR, while AD patients showed deficits on GR and FA with preserved tool use skills. These findings question the role played by function and manipulation knowledge in actual tool use. © 2016 The British Psychological Society.

  8. Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) Version 3.0 User Guide

    EPA Science Inventory

    User Guide to describe the complete functionality of the Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) Version 3.0 online tool. The US Environmental Protection Agency Sequence Alignment to Predict Across Species Susceptibility tool (SeqAPASS; https://seqa...

  9. FSPP: A Tool for Genome-Wide Prediction of smORF-Encoded Peptides and Their Functions

    PubMed Central

    Li, Hui; Xiao, Li; Zhang, Lili; Wu, Jiarui; Wei, Bin; Sun, Ninghui; Zhao, Yi

    2018-01-01

    smORFs are small open reading frames of less than 100 codons. Recent low-throughput experiments showed that many smORF-encoded peptides (SEPs) play crucial roles in processes such as regulation of transcription or translation, transport through membranes and antimicrobial activity. To gather more functional SEPs, genome-wide prediction tools are needed to guide low-throughput experiments. In this study, we put forward a functional smORF-encoded peptide predictor (FSPP) intended to predict authentic SEPs and their functions in a high-throughput manner. FSPP used the overlap of detected SEPs from Ribo-seq and mass spectrometry as target objects. With the expression data on transcription and translation levels, FSPP built two co-expression networks. Combining co-location relations, FSPP constructed a compound network and then annotated SEPs with the functions of adjacent nodes. Tested on 38 sequenced samples of 5 human cell lines, FSPP successfully predicted 856 out of 960 annotated proteins. Interestingly, FSPP also highlighted 568 functional SEPs from these samples. After comparison, the roles predicted by FSPP were consistent with known functions. These results suggest that FSPP is a reliable tool for the identification of functional small peptides. FSPP source code can be acquired at https://www.bioinfo.org/FSPP. PMID:29675032

  10. Plasticity Tool for Predicting Shear Nonlinearity of Unidirectional Laminates Under Multiaxial Loading

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Bomarito, Geoffrey F.

    2016-01-01

    This study implements a plasticity tool to predict the nonlinear shear behavior of unidirectional composite laminates under multiaxial loadings, with an intent to further develop the tool for use in composite progressive damage analysis. The steps for developing the plasticity tool include establishing a general quadratic yield function, deriving the incremental elasto-plastic stress-strain relations using the yield function with associated flow rule, and integrating the elasto-plastic stress-strain relations with a modified Euler method and a substepping scheme. Micromechanics analyses are performed to obtain normal and shear stress-strain curves that are used in determining the plasticity parameters of the yield function. By analyzing a micromechanics model, a virtual testing approach is used to replace costly experimental tests for obtaining stress-strain responses of composites under various loadings. The predicted elastic moduli and Poisson's ratios are in good agreement with experimental data. The substepping scheme for integrating the elasto-plastic stress-strain relations is suitable for working with displacement-based finite element codes. An illustration problem is solved to show that the plasticity tool can predict the nonlinear shear behavior for a unidirectional laminate subjected to multiaxial loadings.
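
    The core ingredient described above, a general quadratic yield function evaluated on the in-plane stresses, can be sketched as follows. The coefficients and stress states are hypothetical placeholders, not the calibrated parameters of the NASA tool.

```python
# Sketch: evaluate a general quadratic yield function of the in-plane stresses
# (s11, s22, s12) and report whether the stress state is elastic or yielding.
# Coefficient values are hypothetical, not the tool's calibrated parameters.

def quadratic_yield(s11, s22, s12, a11, a22, a12, a66):
    """f <= 1: elastic;  f > 1: yielding (plastic flow)."""
    return a11 * s11**2 + a22 * s22**2 + 2.0 * a12 * s11 * s22 + a66 * s12**2

a = dict(a11=1.0e-4, a22=4.0e-4, a12=-0.5e-4, a66=9.0e-4)   # hypothetical coefficients
for s11, s22, s12 in [(30.0, 5.0, 10.0), (80.0, 20.0, 40.0)]:   # stresses in MPa
    f = quadratic_yield(s11, s22, s12, **a)
    print(f"stress=({s11},{s22},{s12})  f={f:.2f}  ->",
          "plastic" if f > 1.0 else "elastic")
```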

  11. Improved, ACMG-Compliant, in silico prediction of pathogenicity for missense substitutions encoded by TP53 variants.

    PubMed

    Fortuno, Cristina; James, Paul A; Young, Erin L; Feng, Bing; Olivier, Magali; Pesaran, Tina; Tavtigian, Sean V; Spurdle, Amanda B

    2018-05-18

    Clinical interpretation of germline missense variants represents a major challenge, including those in the TP53 Li-Fraumeni syndrome gene. Bioinformatic prediction is a key part of variant classification strategies. We aimed to optimize the performance of the Align-GVGD tool used for p53 missense variant prediction, and compare its performance to other bioinformatic tools (SIFT, PolyPhen-2) and ensemble methods (REVEL, BayesDel). Reference sets of assumed pathogenic and assumed benign variants were defined using functional and/or clinical data. Area under the curve and Matthews correlation coefficient (MCC) values were used as objective functions to select an optimized protein multi-sequence alignment with best performance for Align-GVGD. MCC comparison of tools using binary categories showed optimized Align-GVGD (C15 cut-off) combined with BayesDel (0.16 cut-off), or with REVEL (0.5 cut-off), to have the best overall performance. Further, a semi-quantitative approach using multiple tiers of bioinformatic prediction, validated using an independent set of non-functional and functional variants, supported use of Align-GVGD and BayesDel prediction for different strength of evidence levels in ACMG/AMP rules. We provide rationale for bioinformatic tool selection for TP53 variant classification, and have also computed relevant bioinformatic predictions for every possible p53 missense variant to facilitate their use by the scientific and medical community. This article is protected by copyright. All rights reserved.
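
    The threshold-selection idea above (using MCC as an objective function for picking a tool cut-off) can be sketched in a few lines. The reference labels, scores and candidate cut-offs are illustrative placeholders.

```python
# Sketch: use the Matthews correlation coefficient to choose a score cut-off for
# a pathogenicity predictor. Labels, scores and cut-offs are placeholders.
from sklearn.metrics import matthews_corrcoef

labels = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]       # 1 = pathogenic reference variant
scores = [0.9, 0.7, 0.4, 0.2, 0.8, 0.5, 0.6, 0.1, 0.95, 0.3]

best = max(
    ((cut, matthews_corrcoef(labels, [int(s >= cut) for s in scores]))
     for cut in (0.3, 0.4, 0.5, 0.6, 0.7)),
    key=lambda pair: pair[1],
)
print("best cut-off and MCC:", best)
```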

  12. GeneSCF: a real-time based functional enrichment tool with support for multiple organisms.

    PubMed

    Subhash, Santhilal; Kanduri, Chandrasekhar

    2016-09-13

    High-throughput technologies such as ChIP-sequencing, RNA-sequencing, DNA sequencing and quantitative metabolomics generate a huge volume of data. Researchers often rely on functional enrichment tools to interpret the biological significance of the affected genes from these high-throughput studies. However, currently available functional enrichment tools need to be updated frequently to adapt to new entries from the functional database repositories. Hence there is a need for a simplified tool that can perform functional enrichment analysis using updated information directly from source databases such as KEGG, Reactome or Gene Ontology. In this study, we focused on designing a command-line tool called GeneSCF (Gene Set Clustering based on Functional annotations) that can predict the functionally relevant biological information for a set of genes in a real-time updated manner. It is designed to handle information from more than 4000 organisms from freely available prominent functional databases like KEGG, Reactome and Gene Ontology. We successfully employed our tool on two published datasets to predict the biologically relevant functional information. The core features of this tool were tested on Linux machines without the need for installation of additional dependencies. GeneSCF is more reliable compared to other enrichment tools because of its ability to use reference functional databases in real time to perform enrichment analysis. It is an easy-to-integrate tool with other pipelines available for downstream analysis of high-throughput data. More importantly, GeneSCF can run multiple gene lists simultaneously on different organisms, thereby saving time for the users. Since the tool is designed to be ready-to-use, there is no need for any complex compilation and installation procedures.
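
    The statistic at the heart of most functional enrichment tools of this kind is a hypergeometric over-representation test. The sketch below shows that generic test, not GeneSCF's own code; all counts are illustrative placeholders.

```python
# Sketch: the hypergeometric test underlying most gene set enrichment tools.
# Generic statistic, not GeneSCF's implementation; numbers are placeholders.
from scipy.stats import hypergeom

N = 20000   # genes in the background (e.g. the genome)
K = 150     # background genes annotated to the pathway of interest
n = 400     # genes in the user's list
k = 12      # list genes annotated to the pathway

# P(X >= k): probability of seeing at least k pathway genes by chance
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3e}")
```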

  13. eShadow: A tool for comparing closely related sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovcharenko, Ivan; Boffelli, Dario; Loots, Gabriela G.

    2004-01-15

    Primate sequence comparisons are difficult to interpret due to the high degree of sequence similarity shared between such closely related species. Recently, a novel method, phylogenetic shadowing, has been pioneered for predicting functional elements in the human genome through the analysis of multiple primate sequence alignments. We have expanded this theoretical approach to create a computational tool, eShadow, for the identification of elements under selective pressure in multiple sequence alignments of closely related genomes, such as in comparisons of human to primate or mouse to rat DNA. This tool integrates two different statistical methods and allows for the dynamic visualization of the resulting conservation profile. eShadow also includes a versatile optimization module capable of training the underlying Hidden Markov Model to differentially predict functional sequences. This module grants the tool high flexibility in the analysis of multiple sequence alignments and in comparing sequences with different divergence rates. Here, we describe the eShadow comparative tool and its potential uses for analyzing both multiple nucleotide and protein alignments to predict putative functional elements. The eShadow tool is publicly available at http://eshadow.dcode.org/

  14. ATLAS trigger operations: Upgrades to "Xmon" rate prediction system

    NASA Astrophysics Data System (ADS)

    Myers, Ava; Aukerman, Andrew; Hong, Tae Min; Atlas Collaboration

    2017-01-01

    We present "Xmon," a tool to monitor trigger rates in the Control Room of the ATLAS Experiment. We discuss Xmon's recent (1) updates, (2) upgrades, and (3) operations. (1) Xmon was updated to modify the tool written for the three-level trigger architecture in Run-1 (2009-2012) to adapt to the new two-level system for Run-2 (2015-current). The tool takes as input the beam luminosity to make a rate prediction, which is compared with incoming rates to detect anomalies that occur both globally throughout a run and locally within a run. Global offsets are more commonly caught by the predictions based upon past runs, where offline processing allows for function adjustments and fit quality through outlier rejection. (2) Xmon was upgraded to detect local offsets using on-the-fly predictions, which uses a sliding window of in-run rates to make predictions. (3) Xmon operations examples are given. Future work involves further automation of the steps to provide the predictive functions and for alerting shifters.

  15. High-fidelity modeling and impact footprint prediction for vehicle breakup analysis

    NASA Astrophysics Data System (ADS)

    Ling, Lisa

    For decades, vehicle breakup analysis had been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative to assess possible environmental impacts, obtain launch approval, and for launch contingency planning. In order to accurately perform a vehicle breakup analysis, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify the additional dynamics modeling and capabilities for the analysis tool with the objectives to (1) have the capability to predict impact point and footprint, (2) increase the fidelity in the prediction of vehicle breakup, and (3) reduce the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of impact footprint. The functions to increase the fidelity in the prediction of vehicle breakup included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrary-shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.
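
    The impact-point capability described above rests on 3-degrees-of-freedom trajectory propagation. A minimal point-mass sketch with exponential-atmosphere drag is given below; every vehicle and entry parameter is an assumed placeholder, and the dissertation's breakup and footprint logic is not reproduced.

```python
# Sketch: point-mass (3-DOF translational) trajectory propagation with simple
# exponential-atmosphere drag, of the kind an impact-footprint analysis builds on.
# All vehicle and entry parameters are illustrative placeholders.
import numpy as np

g = 9.81                          # gravitational acceleration, m/s^2
rho0, H = 1.225, 7200.0           # sea-level density (kg/m^3), scale height (m)
m, Cd, A = 50.0, 1.0, 0.3         # mass (kg), drag coefficient, reference area (m^2)

# State: x (downrange, m), z (altitude, m), vx, vz (m/s)
state = np.array([0.0, 30000.0, 1500.0, -500.0])
dt = 0.05
while state[1] > 0.0:
    x, z, vx, vz = state
    rho = rho0 * np.exp(-z / H)
    v = np.hypot(vx, vz)
    k = 0.5 * rho * v * Cd * A / m          # drag acceleration per unit velocity component
    ax, az = -k * vx, -g - k * vz
    state = state + dt * np.array([vx, vz, ax, az])

print(f"predicted impact at downrange x = {state[0] / 1000:.1f} km")
```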

  16. WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning

    PubMed Central

    Sutphin, George L.; Mahoney, J. Matthew; Sheppard, Keith; Walton, David O.; Korstanje, Ron

    2016-01-01

    The rapid advancement of technology in genomics and targeted genetic manipulation has made comparative biology an increasingly prominent strategy to model human disease processes. Predicting orthology relationships between species is a vital component of comparative biology. Dozens of strategies for predicting orthologs have been developed using combinations of gene and protein sequence, phylogenetic history, and functional interaction with progressively increasing accuracy. A relatively new class of orthology prediction strategies combines aspects of multiple methods into meta-tools, resulting in improved prediction performance. Here we present WORMHOLE, a novel ortholog prediction meta-tool that applies machine learning to integrate 17 distinct ortholog prediction algorithms to identify novel least diverged orthologs (LDOs) between 6 eukaryotic species—humans, mice, zebrafish, fruit flies, nematodes, and budding yeast. Machine learning allows WORMHOLE to intelligently incorporate predictions from a wide-spectrum of strategies in order to form aggregate predictions of LDOs with high confidence. In this study we demonstrate the performance of WORMHOLE across each combination of query and target species. We show that WORMHOLE is particularly adept at improving LDO prediction performance between distantly related species, expanding the pool of LDOs while maintaining low evolutionary distance and a high level of functional relatedness between genes in LDO pairs. We present extensive validation, including cross-validated prediction of PANTHER LDOs and evaluation of evolutionary divergence and functional similarity, and discuss future applications of machine learning in ortholog prediction. A WORMHOLE web tool has been developed and is available at http://wormhole.jax.org/. PMID:27812085

  17. WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning.

    PubMed

    Sutphin, George L; Mahoney, J Matthew; Sheppard, Keith; Walton, David O; Korstanje, Ron

    2016-11-01

    The rapid advancement of technology in genomics and targeted genetic manipulation has made comparative biology an increasingly prominent strategy to model human disease processes. Predicting orthology relationships between species is a vital component of comparative biology. Dozens of strategies for predicting orthologs have been developed using combinations of gene and protein sequence, phylogenetic history, and functional interaction with progressively increasing accuracy. A relatively new class of orthology prediction strategies combines aspects of multiple methods into meta-tools, resulting in improved prediction performance. Here we present WORMHOLE, a novel ortholog prediction meta-tool that applies machine learning to integrate 17 distinct ortholog prediction algorithms to identify novel least diverged orthologs (LDOs) between 6 eukaryotic species-humans, mice, zebrafish, fruit flies, nematodes, and budding yeast. Machine learning allows WORMHOLE to intelligently incorporate predictions from a wide-spectrum of strategies in order to form aggregate predictions of LDOs with high confidence. In this study we demonstrate the performance of WORMHOLE across each combination of query and target species. We show that WORMHOLE is particularly adept at improving LDO prediction performance between distantly related species, expanding the pool of LDOs while maintaining low evolutionary distance and a high level of functional relatedness between genes in LDO pairs. We present extensive validation, including cross-validated prediction of PANTHER LDOs and evaluation of evolutionary divergence and functional similarity, and discuss future applications of machine learning in ortholog prediction. A WORMHOLE web tool has been developed and is available at http://wormhole.jax.org/.

  18. Integration of biological data by kernels on graph nodes allows prediction of new genes involved in mitotic chromosome condensation

    PubMed Central

    Hériché, Jean-Karim; Lees, Jon G.; Morilla, Ian; Walter, Thomas; Petrova, Boryana; Roberti, M. Julia; Hossain, M. Julius; Adler, Priit; Fernández, José M.; Krallinger, Martin; Haering, Christian H.; Vilo, Jaak; Valencia, Alfonso; Ranea, Juan A.; Orengo, Christine; Ellenberg, Jan

    2014-01-01

    The advent of genome-wide RNA interference (RNAi)–based screens puts us in the position to identify genes for all functions human cells carry out. However, for many functions, assay complexity and cost make genome-scale knockdown experiments impossible. Methods to predict genes required for cell functions are therefore needed to focus RNAi screens from the whole genome on the most likely candidates. Although different bioinformatics tools for gene function prediction exist, they lack experimental validation and are therefore rarely used by experimentalists. To address this, we developed an effective computational gene selection strategy that represents public data about genes as graphs and then analyzes these graphs using kernels on graph nodes to predict functional relationships. To demonstrate its performance, we predicted human genes required for a poorly understood cellular function—mitotic chromosome condensation—and experimentally validated the top 100 candidates with a focused RNAi screen by automated microscopy. Quantitative analysis of the images demonstrated that the candidates were indeed strongly enriched in condensation genes, including the discovery of several new factors. By combining bioinformatics prediction with experimental validation, our study shows that kernels on graph nodes are powerful tools to integrate public biological data and predict genes involved in cellular functions of interest. PMID:24943848
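
    A common way to realize "kernels on graph nodes" is the regularized Laplacian kernel, used below to rank candidate genes by kernel similarity to a seed set of genes with the known function. The toy network, seeds and kernel choice are assumptions for illustration, not the authors' full data-integration pipeline.

```python
# Sketch: a kernel on graph nodes (regularized Laplacian) used to rank candidate
# genes by similarity to known "seed" genes. Toy network and seeds are placeholders.
import numpy as np

# Adjacency matrix of a small, undirected gene network (placeholder)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian
beta = 0.5                                      # diffusion/regularization strength
K = np.linalg.inv(np.eye(len(A)) + beta * L)    # regularized Laplacian kernel

seeds = [0, 1]                                  # genes already known to have the function
scores = K[:, seeds].sum(axis=1)                # kernel similarity to the seed set
ranking = np.argsort(-scores)                   # seeds rank first, then closest candidates
print("candidate ranking (best first):", ranking.tolist())
```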

  19. Phagonaute: A web-based interface for phage synteny browsing and protein function prediction.

    PubMed

    Delattre, Hadrien; Souiai, Oussema; Fagoonee, Khema; Guerois, Raphaël; Petit, Marie-Agnès

    2016-09-01

    Distant homology search tools are of great help to predict viral protein functions. However, due to the lack of profile databases dedicated to viruses, they can lack sensitivity. We constructed HMM profiles for more than 80,000 proteins from both phages and archaeal viruses, and performed all pairwise comparisons with the HHsearch program. The whole resulting database can be explored through a user-friendly "Phagonaute" interface to help predict functions. Results are displayed together with their genetic context, to strengthen inferences based on remote homology. Beyond function prediction, this tool permits detection of co-occurrences, often indicative of proteins completing a task together, and observation of conserved patterns across large evolutionary distances. As a test, Herpes simplex virus I was added to Phagonaute, and 25% of its proteome matched to bacterial or archaeal viral protein counterparts. Phagonaute should therefore help virologists in their quest for protein functions and evolutionary relationships. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A computational tool to predict the evolutionarily conserved protein-protein interaction hot-spot residues from the structure of the unbound protein.

    PubMed

    Agrawal, Neeraj J; Helk, Bernhard; Trout, Bernhardt L

    2014-01-21

    Identifying hot-spot residues - residues that are critical to protein-protein binding - can help to elucidate a protein's function and assist in designing therapeutic molecules to target those residues. We present a novel computational tool, termed spatial-interaction-map (SIM), to predict the hot-spot residues of an evolutionarily conserved protein-protein interaction from the structure of an unbound protein alone. SIM can predict the protein hot-spot residues with an accuracy of 36-57%. Thus, the SIM tool can be used to predict the yet unknown hot-spot residues for many proteins for which the structure of the protein-protein complexes are not available, thereby providing a clue to their functions and an opportunity to design therapeutic molecules to target these proteins. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  1. Changes in event-related potential functional networks predict traumatic brain injury in piglets.

    PubMed

    Atlan, Lorre S; Lan, Ingrid S; Smith, Colin; Margulies, Susan S

    2018-06-01

    Traumatic brain injury is a leading cause of cognitive and behavioral deficits in children in the US each year. None of the current diagnostic tools, such as quantitative cognitive and balance tests, have been validated to identify mild traumatic brain injury in infants, adults and animals. In this preliminary study, we report a novel, quantitative tool that has the potential to quickly and reliably diagnose traumatic brain injury and which can track the state of the brain during recovery across multiple ages and species. Using 32 scalp electrodes, we recorded involuntary auditory event-related potentials from 22 awake four-week-old piglets one day before and one, four, and seven days after two different injury types (diffuse and focal) or sham. From these recordings, we generated event-related potential functional networks and assessed whether the patterns of the observed changes in these networks could distinguish brain-injured piglets from non-injured. Piglet brains exhibited significant changes after injury, as evaluated by five network metrics. The injury prediction algorithm developed from our analysis of the changes in the event-related potentials functional networks ultimately produced a tool with 82% predictive accuracy. This novel approach is the first application of auditory event-related potential functional networks to the prediction of traumatic brain injury. The resulting tool is a robust, objective and predictive method that offers promise for detecting mild traumatic brain injury, in particular because collecting event-related potentials data is noninvasive and inexpensive. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Computational Prediction of the Global Functional Genomic Landscape: Applications, Methods and Challenges

    PubMed Central

    Zhou, Weiqiang; Sherwood, Ben; Ji, Hongkai

    2017-01-01

    Technological advances have led to an explosive growth of high-throughput functional genomic data. Exploiting the correlation among different data types, it is possible to predict one functional genomic data type from other data types. Prediction tools are valuable in understanding the relationship among different functional genomic signals. They also provide a cost-efficient solution to inferring the unknown functional genomic profiles when experimental data are unavailable due to resource or technological constraints. The predicted data may be used for generating hypotheses, prioritizing targets, interpreting disease variants, facilitating data integration, quality control, and many other purposes. This article reviews various applications of prediction methods in functional genomics, discusses analytical challenges, and highlights some common and effective strategies used to develop prediction methods for functional genomic data. PMID:28076869

  3. Identifying gnostic predictors of the vaccine response.

    PubMed

    Haining, W Nicholas; Pulendran, Bali

    2012-06-01

    Molecular predictors of the response to vaccination could transform vaccine development. They would allow larger numbers of vaccine candidates to be rapidly screened, shortening the development time for new vaccines. Gene-expression based predictors of vaccine response have shown early promise. However, a limitation of gene-expression based predictors is that they often fail to reveal the mechanistic basis of their ability to classify response. Linking predictive signatures to the function of their component genes would advance basic understanding of vaccine immunity and also improve the robustness of vaccine prediction. New analytic tools now allow more biological meaning to be extracted from predictive signatures. Functional genomic approaches to perturb gene expression in mammalian cells permit the function of predictive genes to be surveyed in highly parallel experiments. The challenge for vaccinologists is therefore to use these tools to embed mechanistic insights into predictors of vaccine response. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Application of backpropagation artificial neural network prediction model for the PAH bioremediation of polluted soil.

    PubMed

    Olawoyin, Richard

    2016-10-01

    The backpropagation (BP) artificial neural network (ANN) is a renowned and widely used mathematical tool for time-series predictions and approximations that can also model non-linear functions. ANNs are vital tools in the prediction of toxicant levels, such as polycyclic aromatic hydrocarbons (PAH) potentially derived from anthropogenic activities in the microenvironment. In the present work, BP ANN was used as a prediction tool to study the potential toxicity of PAH carcinogens (PAHcarc) in soils. Soil samples (16 × 4 = 64) were collected from locations in South-southern Nigeria. The concentration of PAHcarc in the roots of laboratory-cultivated white melilot (Melilotus alba) grown on treated soils was predicted using ANN model training. Results indicated the Levenberg-Marquardt back-propagation training algorithm converged in 2.5E+04 epochs at an average RMSE value of 1.06E-06. The averaged R² comparison between the measured and predicted outputs was 0.9994. It may be deduced that analytical processes involving environmental risk assessment, as used in this study, can successfully provide prompt prediction and source identification of major soil toxicants. Copyright © 2016 Elsevier Ltd. All rights reserved.
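
    A scaled-down sketch of the same workflow (train a small feed-forward network, then report RMSE and R² between measured and predicted outputs) is given below. scikit-learn does not provide Levenberg-Marquardt training, so the 'lbfgs' solver is used as a stand-in, and the data are synthetic placeholders rather than soil or PAH measurements.

```python
# Sketch: small feed-forward ANN regression with RMSE and R^2 reporting.
# 'lbfgs' replaces Levenberg-Marquardt (not available in scikit-learn);
# the data are synthetic placeholders, not soil/PAH measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.random((64, 4))                        # e.g. 64 samples, 4 soil descriptors
y = X @ np.array([1.0, 2.0, 0.5, 1.5]) + rng.normal(0, 0.05, 64)

model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X, y)
pred = model.predict(X)
print("RMSE:", mean_squared_error(y, pred) ** 0.5, " R^2:", r2_score(y, pred))
```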

  5. Ensemble gene function prediction database reveals genes important for complex I formation in Arabidopsis thaliana.

    PubMed

    Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek

    2018-03-01

    Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  6. Systematic prediction of gene function in Arabidopsis thaliana using a probabilistic functional gene network

    PubMed Central

    Hwang, Sohyun; Rhee, Seung Y; Marcotte, Edward M; Lee, Insuk

    2012-01-01

    AraNet is a functional gene network for the reference plant Arabidopsis and has been constructed in order to identify new genes associated with plant traits. It is highly predictive for diverse biological pathways and can be used to prioritize genes for functional screens. Moreover, AraNet provides a web-based tool with which plant biologists can efficiently discover novel functions of Arabidopsis genes (http://www.functionalnet.org/aranet/). This protocol explains how to conduct network-based prediction of gene functions using AraNet and how to interpret the prediction results. Functional discovery in plant biology is facilitated by combining candidate prioritization by AraNet with focused experimental tests. PMID:21886106

  7. Protein Sequence Annotation Tool (PSAT): A centralized web-based meta-server for high-throughput sequence annotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, Elo; Huang, Amy; Cadag, Eithon

    In this study, we introduce the Protein Sequence Annotation Tool (PSAT), a web-based, sequence annotation meta-server for performing integrated, high-throughput, genome-wide sequence analyses. Our goals in building PSAT were to (1) create an extensible platform for integration of multiple sequence-based bioinformatics tools, (2) enable functional annotations and enzyme predictions over large input protein fasta data sets, and (3) provide a web interface for convenient execution of the tools. In this paper, we demonstrate the utility of PSAT by annotating the predicted peptide gene products of Herbaspirillum sp. strain RV1423, importing the results of PSAT into EC2KEGG, and using the resulting functional comparisons to identify a putative catabolic pathway, thereby distinguishing RV1423 from a well annotated Herbaspirillum species. This analysis demonstrates that high-throughput enzyme predictions, provided by PSAT processing, can be used to identify metabolic potential in an otherwise poorly annotated genome. Lastly, PSAT is a meta-server that combines the results from several sequence-based annotation and function prediction codes, and is available at http://psat.llnl.gov/psat/. PSAT stands apart from other sequence-based genome annotation systems in providing a high-throughput platform for rapid de novo enzyme predictions and sequence annotations over large input protein sequence data sets in FASTA. PSAT is most appropriately applied in annotation of large protein FASTA sets that may or may not be associated with a single genome.

  8. Protein Sequence Annotation Tool (PSAT): A centralized web-based meta-server for high-throughput sequence annotations

    DOE PAGES

    Leung, Elo; Huang, Amy; Cadag, Eithon; ...

    2016-01-20

    In this study, we introduce the Protein Sequence Annotation Tool (PSAT), a web-based, sequence annotation meta-server for performing integrated, high-throughput, genome-wide sequence analyses. Our goals in building PSAT were to (1) create an extensible platform for integration of multiple sequence-based bioinformatics tools, (2) enable functional annotations and enzyme predictions over large input protein fasta data sets, and (3) provide a web interface for convenient execution of the tools. In this paper, we demonstrate the utility of PSAT by annotating the predicted peptide gene products of Herbaspirillum sp. strain RV1423, importing the results of PSAT into EC2KEGG, and using the resulting functional comparisons to identify a putative catabolic pathway, thereby distinguishing RV1423 from a well annotated Herbaspirillum species. This analysis demonstrates that high-throughput enzyme predictions, provided by PSAT processing, can be used to identify metabolic potential in an otherwise poorly annotated genome. Lastly, PSAT is a meta-server that combines the results from several sequence-based annotation and function prediction codes, and is available at http://psat.llnl.gov/psat/. PSAT stands apart from other sequence-based genome annotation systems in providing a high-throughput platform for rapid de novo enzyme predictions and sequence annotations over large input protein sequence data sets in FASTA. PSAT is most appropriately applied in annotation of large protein FASTA sets that may or may not be associated with a single genome.

  9. Common features of microRNA target prediction tools

    PubMed Central

    Peterson, Sarah M.; Thompson, Jeffrey A.; Ufkin, Melanie L.; Sathyanarayana, Pradeep; Liaw, Lucy; Congdon, Clare Bates

    2014-01-01

    The human genome encodes for over 1800 microRNAs (miRNAs), which are short non-coding RNA molecules that function to regulate gene expression post-transcriptionally. Due to the potential for one miRNA to target multiple gene transcripts, miRNAs are recognized as a major mechanism to regulate gene expression and mRNA translation. Computational prediction of miRNA targets is a critical initial step in identifying miRNA:mRNA target interactions for experimental validation. The available tools for miRNA target prediction encompass a range of different computational approaches, from the modeling of physical interactions to the incorporation of machine learning. This review provides an overview of the major computational approaches to miRNA target prediction. Our discussion highlights three tools for their ease of use, reliance on relatively updated versions of miRBase, and range of capabilities, and these are DIANA-microT-CDS, miRanda-mirSVR, and TargetScan. In comparison across all miRNA target prediction tools, four main aspects of the miRNA:mRNA target interaction emerge as common features on which most target prediction is based: seed match, conservation, free energy, and site accessibility. This review explains these features and identifies how they are incorporated into currently available target prediction tools. MiRNA target prediction is a dynamic field with increasing attention on development of new analysis tools. This review attempts to provide a comprehensive assessment of these tools in a manner that is accessible across disciplines. Understanding the basis of these prediction methodologies will aid in user selection of the appropriate tools and interpretation of the tool output. PMID:24600468

  10. Common features of microRNA target prediction tools.

    PubMed

    Peterson, Sarah M; Thompson, Jeffrey A; Ufkin, Melanie L; Sathyanarayana, Pradeep; Liaw, Lucy; Congdon, Clare Bates

    2014-01-01

    The human genome encodes for over 1800 microRNAs (miRNAs), which are short non-coding RNA molecules that function to regulate gene expression post-transcriptionally. Due to the potential for one miRNA to target multiple gene transcripts, miRNAs are recognized as a major mechanism to regulate gene expression and mRNA translation. Computational prediction of miRNA targets is a critical initial step in identifying miRNA:mRNA target interactions for experimental validation. The available tools for miRNA target prediction encompass a range of different computational approaches, from the modeling of physical interactions to the incorporation of machine learning. This review provides an overview of the major computational approaches to miRNA target prediction. Our discussion highlights three tools for their ease of use, reliance on relatively updated versions of miRBase, and range of capabilities, and these are DIANA-microT-CDS, miRanda-mirSVR, and TargetScan. In comparison across all miRNA target prediction tools, four main aspects of the miRNA:mRNA target interaction emerge as common features on which most target prediction is based: seed match, conservation, free energy, and site accessibility. This review explains these features and identifies how they are incorporated into currently available target prediction tools. MiRNA target prediction is a dynamic field with increasing attention on development of new analysis tools. This review attempts to provide a comprehensive assessment of these tools in a manner that is accessible across disciplines. Understanding the basis of these prediction methodologies will aid in user selection of the appropriate tools and interpretation of the tool output.
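
    Of the four common features listed above, the seed match is the simplest to illustrate: a canonical site is a short stretch of the 3'UTR perfectly complementary to miRNA nucleotides 2-8. The sketch below scans for such sites; both sequences are placeholders, and the other features (conservation, free energy, site accessibility) are not modeled.

```python
# Sketch: scan a 3'UTR for canonical 7mer seed matches to a miRNA (perfect
# Watson-Crick complementarity to miRNA nucleotides 2-8). Sequences are placeholders.
COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_site(mirna):
    """Return the 7mer target site complementary to miRNA positions 2-8 (5'->3')."""
    seed = mirna[1:8]                              # positions 2-8 of the miRNA
    return seed.translate(COMPLEMENT)[::-1]        # reverse complement (as RNA)

mirna = "UAGCUUAUCAGACUGAUGUUGA"                   # placeholder miRNA sequence
utr = "ACGUAGCUAGCAAGCUAUAAGCUACGAUCAAGCUAUCGGAU"  # placeholder 3'UTR (RNA)

site = seed_site(mirna)
positions = [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]
print("seed site:", site, " match positions:", positions)
```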

  11. PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations

    PubMed Central

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement are hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicities, inconsistencies and mutations previously used in the training of evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance while at the same time returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
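
    The consensus idea can be illustrated with a simple majority vote over per-variant tool calls, as sketched below. PredictSNP itself combines the best-performing tools using their benchmarked confidence rather than a plain vote; the calls here are placeholders.

```python
# Sketch: a consensus call over several per-variant tool predictions via simple
# majority voting. Not PredictSNP's actual weighting; calls are placeholders.
from collections import Counter

# 'D' = deleterious, 'N' = neutral, one call per tool and per variant
tool_calls = {
    "variant_1": ["D", "D", "N", "D", "D", "N"],
    "variant_2": ["N", "N", "N", "D", "N", "N"],
}

for variant, calls in tool_calls.items():
    label, votes = Counter(calls).most_common(1)[0]
    print(f"{variant}: consensus={label}  agreement={votes}/{len(calls)}")
```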

  12. Comparison of two bioinformatics tools used to characterize the microbial diversity and predictive functional attributes of microbial mats from Lake Obersee, Antarctica.

    PubMed

    Koo, Hyunmin; Hakim, Joseph A; Morrow, Casey D; Eipers, Peter G; Davila, Alfonso; Andersen, Dale T; Bej, Asim K

    2017-09-01

    In this study, using NextGen sequencing of the collective 16S rRNA genes obtained from two sets of samples collected from Lake Obersee, Antarctica, we compared and contrasted two bioinformatics tools, PICRUSt and Tax4Fun. We then developed an R script to assess the taxonomic and predictive functional profiles of the microbial communities within the samples. Taxa such as Pseudoxanthomonas, Planctomycetaceae, Cyanobacteria Subsection III, Nitrosomonadaceae, Leptothrix, and Rhodobacter were exclusively identified by Tax4Fun that uses SILVA database; whereas PICRUSt that uses Greengenes database uniquely identified Pirellulaceae, Gemmatimonadetes A1-B1, Pseudanabaena, Salinibacterium and Sinobacteraceae. Predictive functional profiling of the microbial communities using Tax4Fun and PICRUSt separately revealed common metabolic capabilities, while also showing specific functional IDs not shared between the two approaches. Combining these functional predictions using a customized R script revealed a more inclusive metabolic profile, such as hydrolases, oxidoreductases, transferases; enzymes involved in carbohydrate and amino acid metabolisms; and membrane transport proteins known for nutrient uptake from the surrounding environment. Our results present the first molecular-phylogenetic characterization and predictive functional profiles of the microbial mat communities in Lake Obersee, while demonstrating the efficacy of combining both the taxonomic assignment information and functional IDs using the R script created in this study for a more streamlined evaluation of predictive functional profiles of microbial communities. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Evaluation of recently validated non-invasive formula using basic lung functions as new screening tool for pulmonary hypertension in idiopathic pulmonary fibrosis patients

    PubMed Central

    Ghanem, Maha K.; Makhlouf, Hoda A.; Agmy, Gamal R.; Imam, Hisham M. K.; Fouad, Doaa A.

    2009-01-01

    BACKGROUND: A prediction formula for mean pulmonary artery pressure (MPAP) using standard lung function measurement has been recently validated to screen for pulmonary hypertension (PH) in idiopathic pulmonary fibrosis (IPF) patients. OBJECTIVE: To test the usefulness of this formula as a new non-invasive screening tool for PH in IPF patients. Also, to study its correlation with patients' clinical data, pulmonary function tests, arterial blood gases (ABGs) and other commonly used screening methods for PH including electrocardiogram (ECG), chest X-ray (CXR), trans-thoracic echocardiography (TTE) and computerized tomography pulmonary angiography (CTPA). MATERIALS AND METHODS: Cross-sectional study of 37 IPF patients from a tertiary hospital. The accuracy of MPAP estimation was assessed by examining the correlation between the predicted MPAP using the formula and PH diagnosed by other screening tools and patients' clinical signs of PH. RESULTS: There was no statistically significant difference in the prediction of PH using a cut-off point of 21 or 25 mm Hg (P = 0.24). The formula-predicted MPAP greater than 25 mm Hg strongly correlated in the expected direction with O2 saturation (r = −0.95, P < 0.000), partial arterial O2 tension (r = −0.71, P < 0.000), right ventricular systolic pressure measured by TTE (r = 0.6, P < 0.000) and hilar width on CXR (r = 0.31, P = 0.03). Chest symptoms, ECG and CTPA signs of PH poorly correlated with the same formula (P > 0.05). CONCLUSIONS: The prediction formula for MPAP using standard lung function measurements is a simple non-invasive tool that can be used, like TTE, to screen for PH in IPF patients and select those who need right heart catheterization. PMID:19881164

  14. Geriatric Assessment and Tools for Predicting Treatment Toxicity in Older Adults With Cancer.

    PubMed

    Li, Daneng; Soto-Perez-de-Celis, Enrique; Hurria, Arti

    Cancer is a disease of older adults, and the majority of new cancer cases and deaths occur in people 65 years or older. However, fewer data are available regarding the risks and benefits of cancer treatment in older adults, and commonly used assessments in oncology fail to adequately evaluate factors that affect treatment efficacy and outcomes in the older patients. The geriatric assessment is a multidisciplinary evaluation that provides detailed information about a patient's functional status, comorbidities, psychological state, social support, nutritional status, and cognitive function. Among older patients with cancer, geriatric assessment has been shown to identify patients at risk of poorer overall survival, and geriatric assessment-based tools are significantly more effective in predicting chemotherapy toxicity than other currently utilized measures. In this review, we summarize the components of the geriatric assessment and provide information about existing tools used to predict treatment toxicity in older patients with cancer.

  15. ClubSub-P: Cluster-Based Subcellular Localization Prediction for Gram-Negative Bacteria and Archaea

    PubMed Central

    Paramasivam, Nagarajan; Linke, Dirk

    2011-01-01

    The subcellular localization (SCL) of proteins provides important clues to their function in a cell. In our efforts to predict useful vaccine targets against Gram-negative bacteria, we noticed that misannotated start codons frequently lead to wrongly assigned SCLs. This and other problems in SCL prediction, such as the relatively high false-positive and false-negative rates of some tools, can be avoided by applying multiple prediction tools to groups of homologous proteins. Here we present ClubSub-P, an online database that combines existing SCL prediction tools into a consensus pipeline from more than 600 proteomes of fully sequenced microorganisms. On top of the consensus prediction at the level of single sequences, the tool uses clusters of homologous proteins from Gram-negative bacteria and from Archaea to eliminate false-positive and false-negative predictions. ClubSub-P can assign the SCL of proteins from Gram-negative bacteria and Archaea with high precision. The database is searchable, and can easily be expanded using either new bacterial genomes or new prediction tools as they become available. This will further improve the performance of the SCL prediction, as well as the detection of misannotated start codons and other annotation errors. ClubSub-P is available online at http://toolkit.tuebingen.mpg.de/clubsubp/ PMID:22073040

  16. Benchmarking of density functionals for a soft but accurate prediction and assignment of ¹H and ¹³C NMR chemical shifts in organic and biological molecules.

    PubMed

    Benassi, Enrico

    2017-01-15

    A number of programs and tools that simulate ¹H and ¹³C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to have a sufficiently high accuracy, these rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models, on the basis of the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in solvated phase at the density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets. © 2016 Wiley Periodicals, Inc.
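
    The calibration approach mentioned above boils down to a linear regression of experimental chemical shifts against computed isotropic shieldings, whose fit is then used to predict shifts for new nuclei. The sketch below shows that step with placeholder numbers, not benchmark data from the paper.

```python
# Sketch: the linear-calibration step used in such semi-empirical NMR protocols.
# Computed isotropic shieldings (sigma) are regressed against experimental shifts
# (delta), and the fit is used to predict shifts for new nuclei. All values are
# illustrative placeholders, not benchmark data from the paper.
import numpy as np

sigma_calc = np.array([190.1, 170.3, 140.8, 120.5, 60.2])    # computed shieldings (ppm)
delta_exp  = np.array([  5.1,  25.0,  55.3,  75.8, 135.9])   # experimental shifts (ppm)

slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)      # delta = slope * sigma + intercept
print(f"slope = {slope:.3f}, intercept = {intercept:.1f}")

sigma_new = np.array([150.0, 100.0])                         # shieldings of new nuclei
delta_pred = slope * sigma_new + intercept
print("predicted shifts (ppm):", np.round(delta_pred, 1))
```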

  17. Can time-dependent density functional theory predict intersystem crossing in organic chromophores? A case study on benzo(bis)-X-diazole based donor-acceptor-donor type molecules.

    PubMed

    Tam, Teck Lip Dexter; Lin, Ting Ting; Chua, Ming Hui

    2017-06-21

    Here we utilized new diagnostic tools in time-dependent density functional theory to explain the trend of intersystem crossing in benzo(bis)-X-diazole based donor-acceptor-donor type molecules. These molecules display a wide range of fluorescence quantum yields and triplet yields, making them excellent candidates for testing the validity of these diagnostic tools. We believe that these tools are cost-effective and can be applied to structurally similar organic chromophores to predict/explain the trends of intersystem crossing, and thus fluorescence quantum yields and triplet yields, without the use of complex and expensive multireference configuration interaction or multireference perturbation theory methods.

  18. StructRNAfinder: an automated pipeline and web server for RNA families prediction.

    PubMed

    Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius

    2018-02-17

    The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to perform this analysis fully, researchers must use multiple tools, which requires constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family according to the Rfam database but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder for use in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder lies in its large-scale processing and integration of the data produced by each tool and database employed along the workflow; the many files generated are consolidated into user-friendly reports, useful for downstream analyses and data exploration.

  19. Identifying gnostic predictors of the vaccine response

    PubMed Central

    Haining, W. Nicholas; Pulendran, Bali

    2012-01-01

    Molecular predictors of the response to vaccination could transform vaccine development. They would allow larger numbers of vaccine candidates to be rapidly screened, shortening the development time for new vaccines. Gene-expression based predictors of vaccine response have shown early promise. However, a limitation of gene-expression based predictors is that they often fail to reveal the mechanistic basis for their ability to classify response. Linking predictive signatures to the function of their component genes would advance basic understanding of vaccine immunity and also improve the robustness of outcome classification. New analytic tools now allow more biological meaning to be extracted from predictive signatures. Functional genomic approaches to perturb gene expression in mammalian cells permit the function of predictive genes to be surveyed in highly parallel experiments. The challenge for vaccinologists is therefore to use these tools to embed mechanistic insights into predictors of vaccine response. PMID:22633886

  20. Thermal modelling of cooling tool cutting when milling by electrical analogy

    NASA Astrophysics Data System (ADS)

    Benabid, F.; Arrouf, M.; Assas, M.; Benmoussa, H.

    2010-06-01

    Temperature measurements (by some devices) are taken immediately after shut-down and may be corrected for the temperature drop that occurs in the interval between shut-down and measurement. This paper presents a new procedure for thermal modelling of the cutting tool just after machining, when the tool is out of the chip, in order to extrapolate the cutting temperature from the temperature measured when the tool is at standstill. A fin approximation, which enhances heat loss (by conduction and convection) to the air stream, is used. In the modelling we introduce an equivalent thermal network to estimate the cutting temperature as a function of specific energy. On the other hand, a locally modified lumped-element conduction equation, with initial and boundary conditions, is used to predict the temperature gradient with time while the tool is being cooled. These predictions provide a detailed view of the global heat transfer coefficient as a function of cutting speed, because the heat loss for the tool in an air stream is an order of magnitude larger than in a normal environment. Finally, we deduce the cutting temperature by an inverse method.
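
    As a rough illustration of the back-extrapolation idea described above (not the authors' exact thermal-network formulation), a lumped-capacitance (Newton cooling) approximation lets the temperature at shut-down be recovered from a measurement taken a short time later. All material properties and numbers below are assumed placeholders.

      import numpy as np

      # Lumped-capacitance cooling: T(t) = T_air + (T_cut - T_air) * exp(-t / tau),
      # with tau = rho * V * c / (h * A). Placeholder properties for a small tool tip.
      rho, V, c = 7800.0, 1.0e-7, 460.0   # density (kg/m^3), volume (m^3), specific heat (J/kg.K)
      h, A = 120.0, 5.0e-5                # convection coefficient (W/m^2.K), exposed area (m^2)
      T_air = 25.0                        # ambient temperature (deg C)

      tau = rho * V * c / (h * A)

      def cutting_temperature(T_measured, t_delay):
          """Back-extrapolate the shut-down temperature from a delayed measurement."""
          return T_air + (T_measured - T_air) * np.exp(t_delay / tau)

      # Example: 240 deg C measured 2 s after shut-down.
      print(round(cutting_temperature(240.0, 2.0), 1))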

  1. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.

    PubMed

    Fong, Stephen S

    2014-08-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.

  2. Interactome INSIDER: a structural interactome browser for genomic studies.

    PubMed

    Meyer, Michael J; Beltrán, Juan Felipe; Liang, Siqi; Fragoza, Robert; Rumack, Aaron; Liang, Jin; Wei, Xiaomu; Yu, Haiyuan

    2018-01-01

    We present Interactome INSIDER, a tool to link genomic variant information with structural protein-protein interactomes. Underlying this tool is the application of machine learning to predict protein interaction interfaces for 185,957 protein interactions with previously unresolved interfaces in human and seven model organisms, including the entire experimentally determined human binary interactome. Predicted interfaces exhibit functional properties similar to those of known interfaces, including enrichment for disease mutations and recurrent cancer mutations. Through 2,164 de novo mutagenesis experiments, we show that mutations of predicted and known interface residues disrupt interactions at a similar rate and much more frequently than mutations outside of predicted interfaces. To spur functional genomic studies, Interactome INSIDER (http://interactomeinsider.yulab.org) enables users to identify whether variants or disease mutations are enriched in known and predicted interaction interfaces at various resolutions. Users may explore known population variants, disease mutations, and somatic cancer mutations, or they may upload their own set of mutations for this purpose.
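
    The enrichment question posed at the end of this abstract reduces to a 2x2 contingency test on mutation counts inside versus outside interface residues. The sketch below is a generic illustration of such a test, not Interactome INSIDER's actual implementation, and the counts are made up.

      from scipy.stats import fisher_exact

      # Mutations falling inside / outside predicted interface residues,
      # for a disease set versus a background set (placeholder counts).
      disease_in, disease_out = 42, 158
      background_in, background_out = 310, 2690

      odds_ratio, p_value = fisher_exact(
          [[disease_in, disease_out],
           [background_in, background_out]],
          alternative="greater",  # one-sided test for enrichment in interfaces
      )
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")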

  3. Combining Machine Learning Systems and Multiple Docking Simulation Packages to Improve Docking Prediction Reliability for Network Pharmacology

    PubMed Central

    Hsin, Kun-Yi; Ghosh, Samik; Kitano, Hiroaki

    2013-01-01

    Increased availability of bioinformatics resources is creating opportunities for the application of network pharmacology to predict drug effects and toxicity resulting from multi-target interactions. Here we present a high-precision computational prediction approach that combines two elaborately built machine learning systems and multiple molecular docking tools to assess binding potentials of a test compound against proteins involved in a complex molecular network. One of the two machine learning systems is a re-scoring function to evaluate binding modes generated by docking tools. The second is a binding mode selection function to identify the most predictive binding mode. Results from a series of benchmark validations and a case study show that this approach surpasses the prediction reliability of other techniques and that it also identifies either primary or off-targets of kinase inhibitors. Integrating this approach with molecular network maps makes it possible to address drug safety issues by comprehensively investigating network-dependent effects of a drug or drug candidate. PMID:24391846
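
    The two machine-learning components described above (a re-scoring function for docked binding modes and a selection function for the most predictive mode) can be sketched generically as a classifier that scores per-mode feature vectors and picks the highest-probability pose. The model choice, features, and data below are assumptions for illustration only, not the authors' systems.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      # Hypothetical training data: one feature vector per docked binding mode
      # (e.g., docking score, contact counts, pocket descriptors) and a binary
      # label marking modes that reproduced a known binding geometry.
      X_train = rng.normal(size=(200, 6))
      y_train = rng.integers(0, 2, size=200)

      rescorer = RandomForestClassifier(n_estimators=200, random_state=0)
      rescorer.fit(X_train, y_train)

      def select_best_mode(mode_features):
          """Re-score all binding modes of one compound and return the index of
          the mode with the highest predicted binding probability."""
          probs = rescorer.predict_proba(np.asarray(mode_features))[:, 1]
          return int(np.argmax(probs)), probs

      best_idx, scores = select_best_mode(rng.normal(size=(5, 6)))
      print(best_idx, scores.round(3))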

  4. Power hand tool kinetics associated with upper limb injuries in an automobile assembly plant.

    PubMed

    Ku, Chia-Hua; Radwin, Robert G; Karsh, Ben-Tzion

    2007-06-01

    This study investigated the relationship between pneumatic nutrunner handle reactions, workstation characteristics, and prevalence of upper limb injuries in an automobile assembly plant. Tool properties (geometry, inertial properties, and motor characteristics), fastener properties, orientation relative to the fastener, and the position of the tool operator (horizontal and vertical distances) were measured for 69 workstations using 15 different pneumatic nutrunners. Handle reaction response was predicted using a deterministic mechanical model of the human operator and tool that was previously developed in our laboratory, specific to the measured tool, workstation, and job factors. Handle force was a function of target torque, tool geometry and inertial properties, motor speed, work orientation, and joint hardness. The study found that tool target torque was not well correlated with predicted handle reaction force (r=0.495) or displacement (r=0.285). The individual tool, tool shape, and threaded fastener joint hardness all affected predicted forces and displacements (p<0.05). The average peak handle force and displacement for right-angle tools were twice as great as pistol grip tools. Soft-threaded fastener joints had the greatest average handle forces and displacements. Upper limb injury cases were identified using plant OSHA 200 log and personnel records. Predicted handle forces for jobs where injuries were reported were significantly greater than those jobs free of injuries (p<0.05), whereas target torque and predicted handle displacement did not show statistically significant differences. The study concluded that quantification of handle reaction force, rather than target torque alone, is necessary for identifying stressful power hand tool operations and for controlling exposure to forces in manufacturing jobs involving power nutrunners. Therefore, a combination of tool, work station, and task requirements should be considered.

  5. HoloVir: A Workflow for Investigating the Diversity and Function of Viruses in Invertebrate Holobionts

    PubMed Central

    Laffy, Patrick W.; Wood-Charlson, Elisha M.; Turaev, Dmitrij; Weynberg, Karen D.; Botté, Emmanuelle S.; van Oppen, Madeleine J. H.; Webster, Nicole S.; Rattei, Thomas

    2016-01-01

    Abundant bioinformatics resources are available for the study of complex microbial metagenomes; however, their utility in viral metagenomics is limited. HoloVir is a robust and flexible data analysis pipeline that provides an optimized and validated workflow for taxonomic and functional characterization of viral metagenomes derived from invertebrate holobionts. Simulated viral metagenomes comprising varying levels of viral diversity and abundance were used to determine the optimal assembly and gene prediction strategy, and multiple sequence assembly methods and gene prediction tools were tested in order to optimize our analysis workflow. HoloVir performs pairwise comparisons of single read and predicted gene datasets against the viral RefSeq database to assign taxonomy, and additional comparison to phage-specific and cellular markers is undertaken to support the taxonomic assignments and identify potential cellular contamination. Broad functional classification of the predicted genes is provided by assignment of COG microbial functional category classifications using EggNOG, and higher-resolution functional analysis is achieved by searching for enrichment of specific Swiss-Prot keywords within the viral metagenome. Application of HoloVir to viral metagenomes from the coral Pocillopora damicornis and the sponge Rhopaloeides odorabile demonstrated that HoloVir provides a valuable tool to characterize holobiont viral communities across species, environments, or experiments. PMID:27375564

  6. Human microRNA target analysis and gene ontology clustering by GOmir, a novel stand-alone application

    PubMed Central

    Roubelakis, Maria G; Zotos, Pantelis; Papachristoudis, Georgios; Michalopoulos, Ioannis; Pappa, Kalliopi I; Anagnou, Nicholas P; Kossida, Sophia

    2009-01-01

    Background: microRNAs (miRNAs) are single-stranded RNA molecules of about 20–23 nucleotides in length found in a wide variety of organisms. miRNAs regulate gene expression by interacting with target mRNAs at specific sites in order to induce cleavage of the message or inhibit translation. Predicting or verifying mRNA targets of specific miRNAs is a difficult process of great importance. Results: GOmir is a novel stand-alone application consisting of two separate tools: JTarget and TAGGO. JTarget integrates miRNA target prediction and functional analysis by combining the predicted target genes from the TargetScan, miRanda, RNAhybrid and PicTar computational tools, as well as the experimentally supported targets from TarBase, and provides a full gene description and functional analysis for each target gene. The TAGGO application, on the other hand, is designed to automatically group gene ontology annotations, taking advantage of the Gene Ontology (GO), in order to extract the main attributes of sets of proteins. GOmir represents a new tool incorporating two separate Java applications integrated into one stand-alone Java application. Conclusion: GOmir (by using up to five different databases) introduces miRNA predicted targets accompanied by (a) a full gene description, (b) functional analysis and (c) detailed gene ontology clustering. Additionally, a reverse search initiated by a potential target can also be conducted. GOmir can be freely downloaded from BRFAA. PMID:19534746

  7. Human microRNA target analysis and gene ontology clustering by GOmir, a novel stand-alone application.

    PubMed

    Roubelakis, Maria G; Zotos, Pantelis; Papachristoudis, Georgios; Michalopoulos, Ioannis; Pappa, Kalliopi I; Anagnou, Nicholas P; Kossida, Sophia

    2009-06-16

    microRNAs (miRNAs) are single-stranded RNA molecules of about 20-23 nucleotides in length found in a wide variety of organisms. miRNAs regulate gene expression by interacting with target mRNAs at specific sites in order to induce cleavage of the message or inhibit translation. Predicting or verifying mRNA targets of specific miRNAs is a difficult process of great importance. GOmir is a novel stand-alone application consisting of two separate tools: JTarget and TAGGO. JTarget integrates miRNA target prediction and functional analysis by combining the predicted target genes from the TargetScan, miRanda, RNAhybrid and PicTar computational tools, as well as the experimentally supported targets from TarBase, and provides a full gene description and functional analysis for each target gene. The TAGGO application, on the other hand, is designed to automatically group gene ontology annotations, taking advantage of the Gene Ontology (GO), in order to extract the main attributes of sets of proteins. GOmir represents a new tool incorporating two separate Java applications integrated into one stand-alone Java application. GOmir (by using up to five different databases) introduces miRNA predicted targets accompanied by (a) a full gene description, (b) functional analysis and (c) detailed gene ontology clustering. Additionally, a reverse search initiated by a potential target can also be conducted. GOmir can be freely downloaded from BRFAA.

  8. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used, and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1-2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for a full MC simulation.
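
    The central operation described above (convolving element-specific filter functions with the depth dose profile to obtain a prompt-gamma or positron-emitter depth profile) can be sketched generically. The dose curve, filter shapes, and elemental fractions below are invented placeholders rather than the published filters.

      import numpy as np

      depth = np.linspace(0.0, 20.0, 401)                    # depth in cm
      dose = np.exp(-0.5 * ((depth - 15.0) / 1.2) ** 2)      # toy Bragg-peak-like dose profile

      # One filter function per chemical element of the target (toy shapes).
      filters = {
          "C": 0.8 * np.exp(-0.5 * ((depth - 1.0) / 0.8) ** 2),
          "O": 0.6 * np.exp(-0.5 * ((depth - 1.5) / 1.0) ** 2),
      }
      # Elemental weight fractions of the irradiated material (placeholder values).
      fractions = {"C": 0.2, "O": 0.7}

      profile = np.zeros_like(depth)
      for element, f in filters.items():
          # Weight each element's convolved contribution by its fraction.
          profile += fractions[element] * np.convolve(dose, f, mode="same")

      print(round(float(profile.max()), 3))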

  9. Biological interpretation of genome-wide association studies using predicted gene functions.

    PubMed

    Pers, Tune H; Karjalainen, Juha M; Chan, Yingleong; Westra, Harm-Jan; Wood, Andrew R; Yang, Jian; Lui, Julian C; Vedantam, Sailaja; Gustafsson, Stefan; Esko, Tonu; Frayling, Tim; Speliotes, Elizabeth K; Boehnke, Michael; Raychaudhuri, Soumya; Fehrmann, Rudolf S N; Hirschhorn, Joel N; Franke, Lude

    2015-01-19

    The main challenge for gaining biological insights from genetic associations is identifying which genes and pathways explain the associations. Here we present DEPICT, an integrative tool that employs predicted gene functions to systematically prioritize the most likely causal genes at associated loci, highlight enriched pathways and identify tissues/cell types where genes from associated loci are highly expressed. DEPICT is not limited to genes with established functions and prioritizes relevant gene sets for many phenotypes.

  10. Individual differences in children's innovative problem-solving are not predicted by divergent thinking or executive functions

    PubMed Central

    2016-01-01

    Recent studies of children's tool innovation have revealed that there is variation in children's success in middle childhood. In two individual differences studies, we sought to identify personal characteristics that might predict success on an innovation task. In Study 1, we found that although measures of divergent thinking were related to each other, they did not predict innovation success. In Study 2, we measured executive functioning, including inhibition, working memory, attentional flexibility and ill-structured problem-solving. None of these measures predicted innovation; rather, innovation was predicted by children's performance on a receptive vocabulary scale that may function as a proxy for general intelligence. We did not find evidence that children's innovation was predicted by specific personal characteristics. PMID:26926280

  11. On the abundance of extreme voids II: a survey of void mass functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chongchitnan, Siri; Hunt, Matthew, E-mail: s.chongchitnan@hull.ac.uk, E-mail: m.d.hunt@2012.hull.ac.uk

    2017-03-01

    The abundance of cosmic voids can be described by an analogue of halo mass functions for galaxy clusters. In this work, we explore a number of void mass functions: from those based on excursion-set theory to new mass functions obtained by modifying halo mass functions. We show how different void mass functions vary in their predictions for the largest void expected in an observational volume, and compare those predictions to observational data. Our extreme-value formalism is shown to be a new practical tool for testing void theories against simulation and observation.

  12. A Deep Space Orbit Determination Software: Overview and Event Prediction Capability

    NASA Astrophysics Data System (ADS)

    Kim, Youngkwang; Park, Sang-Young; Lee, Eunji; Kim, Minsik

    2017-06-01

    This paper presents an overview of deep space orbit determination software (DSODS), as well as validation and verification results on its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functionality requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD) modules. This paper explains the highest-level data flows between modules in event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces the OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses and ground station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 milliseconds when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.

  13. Predicting the Functional Impact of KCNQ1 Variants of Unknown Significance.

    PubMed

    Li, Bian; Mendenhall, Jeffrey L; Kroncke, Brett M; Taylor, Keenan C; Huang, Hui; Smith, Derek K; Vanoye, Carlos G; Blume, Jeffrey D; George, Alfred L; Sanders, Charles R; Meiler, Jens

    2017-10-01

    An emerging standard-of-care for long-QT syndrome uses clinical genetic testing to identify genetic variants of the KCNQ1 potassium channel. However, interpreting results from genetic testing is confounded by the presence of variants of unknown significance for which there is inadequate evidence of pathogenicity. In this study, we curated from the literature a high-quality set of 107 functionally characterized KCNQ1 variants. Based on this data set, we completed a detailed quantitative analysis of the sequence conservation patterns of subdomains of KCNQ1 and the distribution of pathogenic variants therein. We found that conserved subdomains generally are critical for channel function and are enriched with dysfunctional variants. Using this experimentally validated data set, we trained a neural network, designated Q1VarPred, specifically for predicting the functional impact of KCNQ1 variants of unknown significance. The estimated predictive performance of Q1VarPred in terms of the Matthews correlation coefficient and area under the receiver operating characteristic curve was 0.581 and 0.884, respectively, superior to the performance of 8 previous methods tested in parallel. Q1VarPred is publicly available as a web server at http://meilerlab.org/q1varpred. Although a plethora of tools are available for making pathogenicity predictions on a genome-wide scale, previous tools fail to perform in a robust manner when applied to KCNQ1. The contrasting and favorable results for Q1VarPred suggest a promising approach, where a machine-learning algorithm is tailored to a specific protein target and trained with a functionally validated data set to calibrate informatics tools. © 2017 American Heart Association, Inc.
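
    For reference, the two performance metrics quoted above can be computed from a set of predictions as in the short sketch below; the labels and scores are made-up placeholders, not data from the study.

      import numpy as np
      from sklearn.metrics import matthews_corrcoef, roc_auc_score

      # Placeholder ground truth (1 = dysfunctional variant) and predictor scores.
      y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
      y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3, 0.55, 0.35])

      y_pred = (y_score >= 0.5).astype(int)   # threshold the scores for MCC
      print("MCC:", round(matthews_corrcoef(y_true, y_pred), 3))
      print("AUC:", round(roc_auc_score(y_true, y_score), 3))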

  14. Predicting space telerobotic operator training performance from human spatial ability assessment

    NASA Astrophysics Data System (ADS)

    Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan

    2013-11-01

    Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so long and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool to predict robotics ability. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification vs. misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are reliable enough only to be used to customize regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
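
    The threshold trade-off discussed above (how correct classification and misclassification rates change with the criterion level applied to a logistic model) can be illustrated with simulated data; the scores and outcomes below are placeholders, not astronaut data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      # Simulated spatial-ability scores and pass/fail robotics evaluation outcomes.
      scores = rng.normal(50, 10, size=100).reshape(-1, 1)
      passed = (scores.ravel() + rng.normal(0, 8, size=100) > 50).astype(int)

      model = LogisticRegression().fit(scores, passed)
      probs = model.predict_proba(scores)[:, 1]

      for threshold in (0.3, 0.5, 0.7):
          pred = (probs >= threshold).astype(int)
          correct = (pred == passed).mean()
          missed = ((pred == 0) & (passed == 1)).mean()
          print(f"threshold {threshold}: correct {correct:.2f}, missed qualifiers {missed:.2f}")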

  15. Can we predict functional decline in hospitalized older people admitted through the emergency department? Reanalysis of a predictive tool ten years after its conception.

    PubMed

    De Brauwer, Isabelle; Cornette, Pascale; Boland, Benoît; Verschuren, Franck; D'Hoore, William

    2017-05-12

    In the Emergency Department (ED), early and rapid identification of older people at risk of adverse outcomes, who could best benefit from complex geriatric intervention, would avoid wasting time, especially in terms of prevention of adverse outcomes, and ensure optimal orientation of vulnerable patients. We wanted to test the predictive ability of a screening tool assessing risk of functional decline (FD), named SHERPA, 10 years after its conception, and to assess the added value of other clinical or biological factors associated with FD. We conducted a prospective cohort study of older patients (n = 305, ≥ 75 years) admitted through the emergency department for at least 48 h in non-geriatric wards (mean age 82.5 ± 4.9 years, 55% women). SHERPA variables (i.e. age, pre-admission instrumental Activity of Daily Living (ADL) status, falls within a year, self-rated health and 21-point MMSE) were collected within 48 h of admission, along with socio-demographic, medical and biological data. Functional status was followed at 3 months by phone. FD was defined as a decrease at 3 months of at least one point in the pre-admission basic ADL score. Predictive ability of SHERPA was assessed using the c-statistic, predictive values and likelihood ratios. Measures of discrimination improvement were Net Reclassification Improvement and Integrated Discrimination Improvement. One hundred and five patients (34%) developed 3-month FD. Predictive ability of SHERPA decreased dramatically over 10 years (c = 0.73 vs. 0.64). Only two of its constitutive variables, i.e. falls and instrumental ADL, were significant in logistic regression analysis for functional decline, while the 21-point MMSE was kept in the model for clinical relevance. Demographic, comorbidity or laboratory data available upon admission did not improve the SHERPA predictive yield. Prediction of FD with SHERPA is difficult, but predictive factors, i.e. falls, pre-existing functional limitation and cognitive impairment, stay consistent across time and with the literature. As the accuracy of SHERPA and other existing screening tools for FD is moderate, using these predictors as flags instead of composite scales can be a way to screen for high-risk patients.
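
    One of the discrimination-improvement measures named above, the Net Reclassification Improvement, can be written down compactly for a single risk cutoff. The sketch below uses a simplified category-based form and invented risk estimates; it is not the study's analysis.

      import numpy as np

      def net_reclassification_improvement(risk_old, risk_new, event, cutoff=0.5):
          """Category-based NRI with one risk cutoff (simplified form)."""
          risk_old, risk_new, event = map(np.asarray, (risk_old, risk_new, event))
          up = (risk_new >= cutoff) & (risk_old < cutoff)
          down = (risk_new < cutoff) & (risk_old >= cutoff)
          ev, nev = event == 1, event == 0
          nri_events = up[ev].mean() - down[ev].mean()
          nri_nonevents = down[nev].mean() - up[nev].mean()
          return nri_events + nri_nonevents

      # Placeholder predicted risks from the original model and an extended model.
      event = np.array([1, 1, 0, 0, 1, 0, 0, 1])
      risk_old = np.array([0.4, 0.6, 0.3, 0.55, 0.45, 0.2, 0.5, 0.7])
      risk_new = np.array([0.6, 0.7, 0.2, 0.45, 0.55, 0.1, 0.4, 0.8])
      print(round(net_reclassification_improvement(risk_old, risk_new, event), 3))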

  16. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  17. Biological interpretation of genome-wide association studies using predicted gene functions

    PubMed Central

    Pers, Tune H.; Karjalainen, Juha M.; Chan, Yingleong; Westra, Harm-Jan; Wood, Andrew R.; Yang, Jian; Lui, Julian C.; Vedantam, Sailaja; Gustafsson, Stefan; Esko, Tonu; Frayling, Tim; Speliotes, Elizabeth K.; Boehnke, Michael; Raychaudhuri, Soumya; Fehrmann, Rudolf S.N.; Hirschhorn, Joel N.; Franke, Lude

    2015-01-01

    The main challenge for gaining biological insights from genetic associations is identifying which genes and pathways explain the associations. Here we present DEPICT, an integrative tool that employs predicted gene functions to systematically prioritize the most likely causal genes at associated loci, highlight enriched pathways and identify tissues/cell types where genes from associated loci are highly expressed. DEPICT is not limited to genes with established functions and prioritizes relevant gene sets for many phenotypes. PMID:25597830

  18. The Development of PIPA: An Integrated and Automated Pipeline for Genome-Wide Protein Function Annotation

    DTIC Science & Technology

    2008-01-25

    limitations and plans for improvement: Perhaps one of PIPA's main limitations is that all of its currently integrated resources to predict protein function... are planning on expanding PIPA's function prediction capabilities by incorporating comparative analysis approaches, e.g., phylogenetic tree analysis... tools and services.

  19. cisMEP: an integrated repository of genomic epigenetic profiles and cis-regulatory modules in Drosophila

    PubMed Central

    2014-01-01

    Background: Cis-regulatory modules (CRMs), the DNA sequences required for regulating gene expression, play a central role in biological research on transcriptional regulation in metazoan species. Nowadays, the systematic understanding of CRMs still relies mainly on computational methods due to the time-consuming and small-scale nature of experimental methods, but the accuracy and reliability of different CRM prediction tools are still unclear. Without comparative cross-analysis of the results and combinatorial consideration of extra experimental information, there is no easy way to assess the confidence of the predicted CRMs. This limits the genome-wide understanding of CRMs. Description: It is known that transcription factor binding and epigenetic profiles tend to determine the functions of CRMs in gene transcriptional regulation. Thus, integration of genome-wide epigenetic profiles with systematically predicted CRMs can greatly help researchers evaluate and decipher the prediction confidence and possible transcriptional regulatory functions of these potential CRMs. However, these data are still fragmentary in the literature. Here we performed computational genome-wide screening for potential CRMs using different prediction tools and constructed the pioneer database cisMEP (cis-regulatory module epigenetic profile database) to integrate these computationally identified CRMs with genomic epigenetic profile data. cisMEP collects literature-curated TFBS location data and nine genres of epigenetic data for assessing the confidence of these potential CRMs and deciphering their possible functionality. Conclusions: cisMEP aims to provide a user-friendly interface for researchers to assess the confidence of different potential CRMs and to understand the functions of CRMs through experimentally identified epigenetic profiles. The deposited potential CRMs and experimental epigenetic profiles for confidence assessment provide experimentally testable hypotheses for the molecular mechanisms of metazoan gene regulation. We believe that the information deposited in cisMEP will greatly facilitate the comparative use of different CRM prediction tools and will help biologists to study the modular regulatory mechanisms between different TFs and their target genes. PMID:25521507

  20. A modified fall risk assessment tool that is specific to physical function predicts falls in community-dwelling elderly people.

    PubMed

    Hirase, Tatsuya; Inokuchi, Shigeru; Matsusaka, Nobuou; Nakahara, Kazumi; Okita, Minoru

    2014-01-01

    Developing a practical fall risk assessment tool to predict the occurrence of falls in the primary care setting is important because investigators have reported deterioration of physical function associated with falls. Researchers have used many performance tests to predict the occurrence of falls. These performance tests predict falls and also assess physical function and determine exercise interventions. However, the need for such specialists as physical therapists to accurately conduct these tests limits their use in the primary care setting. Questionnaires for fall prediction offer an easy way to identify high-risk fallers without requiring specialists. Using an existing fall assessment questionnaire, this study aimed to identify items specific to physical function and determine whether those items were able to predict falls and estimate physical function of high-risk fallers. The analysis consisted of both retrospective and prospective studies and used 2 different samples (retrospective, n = 1871; prospective, n = 292). The retrospective study and 3-month prospective study comprised community-dwelling individuals aged 65 years or older and older adults using community day centers. The number of falls, risk factors for falls (15 risk factors on the questionnaire), and physical function determined by chair standing test (CST) and Timed Up and Go Test (TUGT) were assessed. The retrospective study selected fall risk factors related to physical function. The prospective study investigated whether the number of selected risk factors could predict falls. The predictive power was determined using the area under the receiver operating characteristic curve. Seven of the 15 risk factors were related to physical function. The area under the receiver operating characteristic curve for the sum of the selected risk factors of previous falls plus the other risk factors was 0.82 (P = .00). The best cutoff point was 4 risk factors, with sensitivity and specificity of 84% and 68%, respectively. The mean values for the CST and TUGT at the best cutoff point were 12.9 and 12.5 seconds, respectively. In the retrospective study, the values for the CST and TUGT corresponding to the best cutoff point from the prospective study were 13.2 and 11.4 seconds, respectively. This study confirms that a screening tool comprising 7 fall risk factors can be used to predict falls. The values for the CST and TUGT corresponding to the best cutoff point for the selected 7 risk factors determined in our prospective study were similar to the cutoff points for the CST and TUGT in previous studies for fall prediction. We propose that the sum of the selected risk factors of previous falls plus the other risk factors may be identified as the estimated value for physical function. These findings may contribute to earlier identification of high-risk fallers and intervention for fall prevention.

  1. GAP Final Technical Report 12-14-04

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrew J. Bordner, PhD, Senior Research Scientist

    2004-12-14

    The Genomics Annotation Platform (GAP) was designed to develop new tools for high-throughput functional annotation and characterization of protein sequences and structures resulting from genomics and structural proteomics, and to support the benchmarking and application of those tools. Furthermore, this platform integrated genomic-scale sequence and structural analysis and prediction tools with the advanced structure prediction and bioinformatics environment of ICM. The development of GAP was primarily oriented towards the annotation of new biomolecular structures using both structural and sequence data. Even though the amount of protein X-ray crystal data is growing exponentially, the volume of sequence data is growing even more rapidly. This trend was exploited by leveraging the wealth of sequence data to provide functional annotation for protein structures. The additional information provided by GAP is expected to assist the majority of the commercial users of ICM, who are involved in drug discovery, in identifying promising drug targets as well as in devising strategies for the rational design of therapeutics directed at the protein of interest. GAP also provided valuable tools for biochemistry education and structural genomics centers. In addition, GAP incorporates many novel prediction and analysis methods not available in other molecular modeling packages. This development led to signing the first Molsoft agreement in the structural genomics annotation area, with the University of Oxford Structural Genomics Center. This commercial agreement validated the Molsoft efforts under the GAP project and provided the basis for further development of the large-scale functional annotation platform.

  2. From Structure to Function: A Comprehensive Compendium of Tools to Unveil Protein Domains and Understand Their Role in Cytokinesis.

    PubMed

    Rincon, Sergio A; Paoletti, Anne

    2016-01-01

    Unveiling the function of a novel protein is a challenging task that requires careful experimental design. Yeast cytokinesis is a conserved process that involves modular structural and regulatory proteins. For such proteins, an important step is to identify their domains and structural organization. Here we briefly discuss a collection of methods commonly used for sequence alignment and prediction of protein structure that represent powerful tools for the identification of homologous domains and the design of structure-function approaches to test experimentally the function of multi-domain proteins such as those implicated in yeast cytokinesis.

  3. Fuzzy regression modeling for tool performance prediction and degradation detection.

    PubMed

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study - namely to predict the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior as compared with conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.
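
    A highly simplified sketch of the cluster-then-regress idea behind the FRM is given below, with k-means standing in for the SOM and plain linear regressions standing in for the fuzzy rule consequents; the sensor features and remaining-useful-life targets are simulated placeholders.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      # Placeholder per-pass sensor features and remaining-useful-life targets.
      X = rng.normal(size=(300, 4))
      y = 100 - 5 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 2, size=300)

      # Partition the feature space (k-means stands in for the SOM here) and fit
      # one local regression model per cluster.
      clusterer = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      models = [LinearRegression().fit(X[clusterer.labels_ == k], y[clusterer.labels_ == k])
                for k in range(4)]

      def predict_rul(x):
          """Route a new observation to its cluster's local regression model."""
          k = int(clusterer.predict(x.reshape(1, -1))[0])
          return float(models[k].predict(x.reshape(1, -1))[0])

      print(round(predict_rul(rng.normal(size=4)), 2))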

  4. DomSign: a top-down annotation pipeline to enlarge enzyme space in the protein universe.

    PubMed

    Wang, Tianmin; Mori, Hiroshi; Zhang, Chong; Kurokawa, Ken; Xing, Xin-Hui; Yamada, Takuji

    2015-03-21

    Computational predictions of catalytic function are vital for in-depth understanding of enzymes. Because several novel approaches performing better than the common BLAST tool are rarely applied in research, we hypothesized that there is a large gap between the number of known annotated enzymes and the actual number in the protein universe, which significantly limits our ability to extract additional biologically relevant functional information from the available sequencing data. To reliably expand the enzyme space, we developed DomSign, a highly accurate domain signature-based enzyme functional prediction tool to assign Enzyme Commission (EC) digits. DomSign is a top-down prediction engine that yields results comparable, or superior, to those from many benchmark EC number prediction tools, including BLASTP, when a homolog with an identity >30% is not available in the database. Performance tests showed that DomSign is a highly reliable enzyme EC number annotation tool. After multiple tests, the accuracy is thought to be greater than 90%. Thus, DomSign can be applied to large-scale datasets, with the goal of expanding the enzyme space with high fidelity. Using DomSign, we successfully increased the percentage of EC-tagged enzymes from 12% to 30% in UniProt-TrEMBL. In the Kyoto Encyclopedia of Genes and Genomes bacterial database, the percentage of EC-tagged enzymes for each bacterial genome could be increased from 26.0% to 33.2% on average. Metagenomic mining was also efficient, as exemplified by the application of DomSign to the Human Microbiome Project dataset, recovering nearly one million new EC-labeled enzymes. Our results offer preliminary confirmation of the existence of the hypothesized huge number of "hidden enzymes" in the protein universe, the identification of which could substantially further our understanding of the metabolisms of diverse organisms and also facilitate bioengineering by providing a richer enzyme resource. Furthermore, our results highlight the necessity of using more advanced computational tools than BLAST in protein database annotations to extract additional biologically relevant functional information from the available biological sequences.
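
    The general idea of domain-signature-based EC assignment (not DomSign's actual algorithm) is to map a protein's domain architecture to the EC number most often carried by annotated proteins sharing that architecture, subject to an agreement threshold. The signatures, EC numbers, and threshold below are toy placeholders.

      from collections import Counter

      # Toy reference: domain-architecture signature -> EC numbers of annotated members.
      reference = {
          ("PF00106", "PF08659"): ["1.1.1.100", "1.1.1.100", "1.1.1.35"],
          ("PF00155",): ["2.6.1.1", "2.6.1.1", "2.6.1.5"],
      }

      def predict_ec(signature, min_fraction=0.6):
          """Return the majority EC number among proteins sharing the signature,
          provided it reaches the agreement threshold; otherwise return None."""
          members = reference.get(tuple(signature))
          if not members:
              return None
          ec, count = Counter(members).most_common(1)[0]
          return ec if count / len(members) >= min_fraction else None

      print(predict_ec(["PF00106", "PF08659"]))   # majority EC of the toy cluster
      print(predict_ec(["PF00155"]))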

  5. Exploring Human Diseases and Biological Mechanisms by Protein Structure Prediction and Modeling.

    PubMed

    Wang, Juexin; Luttrell, Joseph; Zhang, Ning; Khan, Saad; Shi, NianQing; Wang, Michael X; Kang, Jing-Qiong; Wang, Zheng; Xu, Dong

    2016-01-01

    Protein structure prediction and modeling provide a tool for understanding protein functions by computationally constructing protein structures from amino acid sequences and analyzing them. With help from protein structure prediction tools and web servers, users can obtain three-dimensional protein structure models and gain knowledge of the proteins' functions. In this chapter, we provide several examples of such studies. For example, structure modeling methods were used to investigate the relation between mutation-caused protein misfolding and human diseases, including epilepsy and leukemia. Protein structure prediction and modeling were also applied to nucleotide-gated channels and their interaction interfaces to investigate their roles in brain and heart cells. In molecular mechanism studies of plants, the rice salinity tolerance mechanism was studied via structure modeling of crucial proteins identified by systems biology analysis, and trait-associated protein-protein interactions were modeled, shedding some light on the roles of mutations in soybean oil/protein content. In the age of precision medicine, we believe protein structure prediction and modeling will play increasingly important roles in investigating the biomedical mechanisms of diseases and in drug design.

  6. Guidelines for reporting and using prediction tools for genetic variation analysis.

    PubMed

    Vihinen, Mauno

    2013-02-01

    Computational prediction methods are widely used for the analysis of human genome sequence variants and their effects on gene/protein function, splice site aberration, pathogenicity, and disease risk. New methods are frequently developed. We believe that guidelines are essential for those writing articles about new prediction methods, as well as for those applying these tools in their research, so that the necessary details are reported. This will enable readers to gain the full picture of technical information, performance, and interpretation of results, and to facilitate comparisons of related methods. Here, we provide instructions on how to describe new methods, report datasets, and assess the performance of predictive tools. We also discuss what details of predictor implementation are essential for authors to understand. Similarly, these guidelines for the use of predictors provide instructions on what needs to be delineated in the text, as well as how researchers can avoid unwarranted conclusions. They are applicable to most prediction methods currently utilized. By applying these guidelines, authors will help reviewers, editors, and readers to more fully comprehend prediction methods and their use. © 2012 Wiley Periodicals, Inc.

  7. Mini-Nutritional Assessment, Malnutrition Universal Screening Tool, and Nutrition Risk Screening Tool for the Nutritional Evaluation of Older Nursing Home Residents.

    PubMed

    Donini, Lorenzo M; Poggiogalle, Eleonora; Molfino, Alessio; Rosano, Aldo; Lenzi, Andrea; Rossi Fanelli, Filippo; Muscaritoli, Maurizio

    2016-10-01

    Malnutrition plays a major role in clinical and functional impairment in older adults. The use of validated, user-friendly and rapid screening tools for malnutrition in the elderly may improve the diagnosis and, possibly, the prognosis. The aim of this study was to assess the agreement between Mini-Nutritional Assessment (MNA), considered as a reference tool, MNA short form (MNA-SF), Malnutrition Universal Screening Tool (MUST), and Nutrition Risk Screening (NRS-2002) in elderly institutionalized participants. Participants were enrolled among nursing home residents and underwent a multidimensional evaluation. Predictive value and survival analysis were performed to compare the nutritional classifications obtained from the different tools. A total of 246 participants (164 women, age: 82.3 ± 9 years, and 82 men, age: 76.5 ± 11 years) were enrolled. Based on MNA, 22.6% of females and 17% of males were classified as malnourished; 56.7% of women and 61% of men were at risk of malnutrition. Agreement between MNA and MUST or NRS-2002 was classified as "fair" (k = 0.270 and 0.291, respectively; P < .001), whereas the agreement between MNA and MNA-SF was classified as "moderate" (k = 0.588; P < .001). Because of the high percentage of false negative participants, MUST and NRS-2002 presented a low overall predictive value compared with MNA and MNA-SF. Clinical parameters were significantly different in false negative participants with MUST or NRS-2002 from true negative and true positive individuals using the reference tool. For all screening tools, there was a significant association between malnutrition and mortality. MNA showed the best predictive value for survival among well-nourished participants. Functional, psychological, and cognitive parameters, not considered in MUST and NRS-2002 tools, are probably more important risk factors for malnutrition than acute illness in geriatric long-term care inpatient settings and may account for the low predictive value of these tests. MNA-SF seems to combine the predictive capacity of the full version of the MNA with a sufficiently short time of administration. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  8. A statistical framework to predict functional non-coding regions in the human genome through integrated analysis of annotation data.

    PubMed

    Lu, Qiongshi; Hu, Yiming; Sun, Jiehuan; Cheng, Yuwei; Cheung, Kei-Hoi; Zhao, Hongyu

    2015-05-27

    Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome either through computational predictions, such as genomic conservation, or high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning using 22 computational and experimental annotations thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. The ability of predicting functional regions as well as its generalizable statistical framework makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu.

  9. Accident occurrence and functional health patterns: a pilot study of relationships in a graduate population.

    PubMed

    Sheerin, Fintan K; Curtis, Elizabeth; de Vries, Jan

    2012-06-01

    This pilot study sought to examine the relationship between functional health patterns and accident proneness. A quantitative-descriptive design was employed assessing accident proneness by collecting data on the occurrence of accidents among a sample of university graduates, and examining this in relation to biographical data and information collated using the Functional Health Pattern Assessment Screening Tool (FHPAST). Data were analyzed using descriptive and inferential statistics. One FHPAST factor predicted more frequent sports accidents. Age was also shown to be a significant predictor but in a counterintuitive way, with greater age predicting less accident proneness. The FHPAST may have a role to play in accident prediction. Functional health pattern assessment may be useful for predicting accidents. © 2012, The Authors. International Journal of Nursing Knowledge © 2012, NANDA International.

  10. Predicting RNA 3D structure using a coarse-grain helix-centered model

    PubMed Central

    Kerpedjiev, Peter; Höner zu Siederdissen, Christian; Hofacker, Ivo L.

    2015-01-01

    A 3D model of RNA structure can provide information about its function and regulation that is not possible with just the sequence or secondary structure. Current models suffer from low accuracy and long running times and either neglect or presume knowledge of the long-range interactions which stabilize the tertiary structure. Our coarse-grained, helix-based, tertiary structure model operates with only a few degrees of freedom compared with all-atom models while preserving the ability to sample tertiary structures given a secondary structure. It strikes a balance between the precision of an all-atom tertiary structure model and the simplicity and effectiveness of a secondary structure representation. It provides a simplified tool for exploring global arrangements of helices and loops within RNA structures. We provide an example of a novel energy function relying only on the positions of stems and loops. We show that coupling our model to this energy function produces predictions as good as or better than the current state-of-the-art tools. We propose that, given the wide range of conformational space that needs to be explored, a coarse-grain approach can explore more conformations in fewer iterations than an all-atom model coupled to a fine-grain energy function. Finally, we emphasize the overarching theme of providing an ensemble of predicted structures, something which our tool excels at, rather than providing a handful of the lowest energy structures. PMID:25904133

  11. Data Mining and Knowledge Management in Higher Education -Potential Applications.

    ERIC Educational Resources Information Center

    Luan, Jing

    This paper introduces a new decision support tool, data mining, in the context of knowledge management. The most striking features of data mining techniques are clustering and prediction. The clustering aspect of data mining offers comprehensive characteristics analysis of students, while the predicting function estimates the likelihood for a…

  12. The evolving role of physiotherapists in pre-employment screening for workplace injury prevention: are functional capacity evaluations the answer?

    PubMed

    Legge, Jennifer

    2013-10-01

    Musculoskeletal injuries account for the largest proportion of workplace injuries. In an attempt to predict, and subsequently manage, the risk of sprains and strains in the workplace, employers are turning to pre-employment screening. Functional capacity evaluations (FCEs) are increasing in popularity as a tool for pre-employment screening despite limited published evidence for their validity in healthy working populations. This narrative review will present an overview of the state of the evidence for pre-employment functional testing, propose a framework for decision-making to determine the suitability of assessment tools, and discuss the role and potential ethical challenges for physiotherapists conducting pre-employment functional testing. Much of the evidence surrounding the validity of functional testing is in the context of the injured worker and prediction of return to work. In healthy populations, FCE components, such as aerobic fitness and manual handling activities, have demonstrated predictability of workplace injury in a small number of studies. This predictability improves when workers' performance is compared with the job demands. This job-specific approach is also required to meet anti-discrimination requirements. There are a number of practical limitations to functional testing, although these are not limited to the pre-employment domain. Physiotherapists need to have a clear understanding of the legal requirements and potential ethical challenges that they may face when conducting pre-employment functional assessments (PEFAs). Further research is needed into the efficacy of pre-employment testing for workplace injury prevention. Physiotherapists and PEFAs are just one part of a holistic approach to workplace injury prevention.

  13. The Application of Function Points to Predict Source Lines of Code for Software Development

    DTIC Science & Technology

    1992-09-01

    there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singularly. Using all the available

  14. Protein function prediction--the power of multiplicity.

    PubMed

    Rentzsch, Robert; Orengo, Christine A

    2009-04-01

    Advances in experimental and computational methods have quietly ushered in a new era in protein function annotation. This 'age of multiplicity' is marked by the notion that only the use of multiple tools, multiple lines of evidence and consideration of the multiple aspects of function can give us the broad picture that 21st century biology will need to link and alter micro- and macroscopic phenotypes. It might also help us to undo past mistakes by removing errors from our databases and prevent us from producing more. On the downside, multiplicity is often confusing. We therefore systematically review methods and resources for automated protein function prediction, looking at individual (biochemical) and contextual (network) functions, respectively.

  15. Predicted Arabidopsis Interactome Resource and Gene Set Linkage Analysis: A Transcriptomic Analysis Resource.

    PubMed

    Yao, Heng; Wang, Xiaoxuan; Chen, Pengcheng; Hai, Ling; Jin, Kang; Yao, Lixia; Mao, Chuanzao; Chen, Xin

    2018-05-01

    An advanced functional understanding of omics data is important for elucidating the design logic of physiological processes in plants and effectively controlling desired traits in plants. We present the latest versions of the Predicted Arabidopsis Interactome Resource (PAIR) and of the gene set linkage analysis (GSLA) tool, which enable the interpretation of an observed transcriptomic change (differentially expressed genes [DEGs]) in Arabidopsis (Arabidopsis thaliana) with respect to its functional impact for biological processes. PAIR version 5.0 integrates functional association data between genes in multiple forms and infers 335,301 putative functional interactions. GSLA relies on this high-confidence inferred functional association network to expand our perception of the functional impacts of an observed transcriptomic change. GSLA then interprets the biological significance of the observed DEGs using established biological concepts (annotation terms), describing not only the DEGs themselves but also their potential functional impacts. This unique analytical capability can help researchers gain deeper insights into their experimental results and highlight prospective directions for further investigation. We demonstrate the utility of GSLA with two case studies in which GSLA uncovered how molecular events may have caused physiological changes through their collective functional influence on biological processes. Furthermore, we showed that typical annotation-enrichment tools were unable to produce similar insights to PAIR/GSLA. The PAIR version 5.0-inferred interactome and GSLA Web tool both can be accessed at http://public.synergylab.cn/pair/. © 2018 American Society of Plant Biologists. All Rights Reserved.

  16. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Foudriat, E. C.

    1991-01-01

    A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effect of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. Although the probability of deadlock is quite small, its effects on performance could be significant.

  17. Assessment of Lightning Transients on a De-Iced Rotor Blade with Predictive Tools and Coaxial Return Measurements

    NASA Astrophysics Data System (ADS)

    Guillet, S.; Gosmain, A.; Ducoux, W.; Ponçon, M.; Fontaine, G.; Desseix, P.; Perraud, P.

    2012-05-01

    The increasing use of composite materials in aircraft primary structures has raised new issues for flight safety in lightning conditions. The consequences of this technological shift, which has occurred alongside an expansion of electrically powered critical functions, are addressed by aircraft manufacturers by enhancing their means of assessing lightning transients. On the one hand, simulation tools, given an accurate description of the aircraft design, are today valuable assessment tools, in both predictive and operational terms. On the other hand, in-house test means allow confirmation and consolidation of design office hardening solutions. The combined use of predictive simulation tools and in-house test means offers efficient and reliable support for aircraft developments at all stages of their life cycle. The present paper presents results from the PREFACE research project that illustrate this strategy on the de-icing system of the NH90 composite main rotor blade.

  18. BrEPS 2.0: Optimization of sequence pattern prediction for enzyme annotation.

    PubMed

    Dudek, Christian-Alexander; Dannheim, Henning; Schomburg, Dietmar

    2017-01-01

    The prediction of gene functions is crucial for a large number of different life science areas. Faster high-throughput sequencing techniques generate more and larger datasets. Manual annotation by classical wet-lab experiments is not feasible for these large amounts of data. We showed earlier that the automatic sequence pattern-based BrEPS protocol, based on manually curated sequences, can be used for the prediction of enzymatic functions of genes. The growing sequence databases provide the opportunity for more reliable patterns, but are also a challenge for the implementation of automatic protocols. We reimplemented and optimized the BrEPS pattern generation to be applicable to larger datasets on an acceptable timescale. The primary improvement of the new BrEPS protocol is the enhanced data selection step. Manually curated annotations from Swiss-Prot are used as a reliable source for function prediction of enzymes observed at the protein level. The pool of sequences is extended by highly similar sequences from TrEMBL and Swiss-Prot. This allows us to restrict the selection of Swiss-Prot entries without losing the diversity of sequences needed to generate significant patterns. Additionally, a supporting pattern type was introduced by extending the patterns at semi-conserved positions with highly similar amino acids. Extended patterns have increased complexity, raising the chance of matching more sequences without losing the essential structural information of the pattern. To enhance the usability of the database, we introduced enzyme function prediction based on consensus EC numbers and IUBMB enzyme nomenclature. BrEPS is part of the Braunschweig Enzyme Database (BRENDA) and is available on a completely redesigned website and as a download. The database can be downloaded and used with the BrEPScmd command line tool for large-scale sequence analysis. The BrEPS website and downloads for the database creation tool, command line tool and database are freely accessible at http://breps.tu-bs.de.
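
    To make the pattern-generation idea concrete, the sketch below derives a sequence pattern from aligned sequences, keeping strictly conserved residues, expanding semi-conserved positions into bracketed residue sets, and turning variable positions into wildcards. The thresholds and the treatment of "highly similar" residues are assumptions for illustration, not the actual BrEPS implementation.

```python
from collections import Counter

# Illustrative pattern generation from an alignment: conserved columns become a
# single residue, semi-conserved columns an extended residue set, variable
# columns a wildcard. Thresholds are invented for the sketch.

def column_pattern(column, conserved=1.0, semi_conserved=0.6):
    counts = Counter(column)
    residue, freq = counts.most_common(1)[0]
    if freq / len(column) >= conserved:
        return residue                                  # strictly conserved position
    if freq / len(column) >= semi_conserved:
        return "[" + "".join(sorted(counts)) + "]"      # semi-conserved: allow observed residues
    return "."                                          # variable position -> wildcard

def alignment_to_pattern(aligned_seqs):
    return "".join(column_pattern(col) for col in zip(*aligned_seqs))

print(alignment_to_pattern(["GDSLAG", "GDSLSG", "GDTLAG"]))  # -> GD[ST]L[AS]G
```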

  19. BrEPS 2.0: Optimization of sequence pattern prediction for enzyme annotation

    PubMed Central

    Schomburg, Dietmar

    2017-01-01

    The prediction of gene functions is crucial for a large number of different life science areas. Faster high-throughput sequencing techniques generate more and larger datasets. Manual annotation by classical wet-lab experiments is not feasible for these large amounts of data. We showed earlier that the automatic sequence pattern-based BrEPS protocol, based on manually curated sequences, can be used for the prediction of enzymatic functions of genes. The growing sequence databases provide the opportunity for more reliable patterns, but are also a challenge for the implementation of automatic protocols. We reimplemented and optimized the BrEPS pattern generation to be applicable to larger datasets on an acceptable timescale. The primary improvement of the new BrEPS protocol is the enhanced data selection step. Manually curated annotations from Swiss-Prot are used as a reliable source for function prediction of enzymes observed at the protein level. The pool of sequences is extended by highly similar sequences from TrEMBL and Swiss-Prot. This allows us to restrict the selection of Swiss-Prot entries without losing the diversity of sequences needed to generate significant patterns. Additionally, a supporting pattern type was introduced by extending the patterns at semi-conserved positions with highly similar amino acids. Extended patterns have increased complexity, raising the chance of matching more sequences without losing the essential structural information of the pattern. To enhance the usability of the database, we introduced enzyme function prediction based on consensus EC numbers and IUBMB enzyme nomenclature. BrEPS is part of the Braunschweig Enzyme Database (BRENDA) and is available on a completely redesigned website and as a download. The database can be downloaded and used with the BrEPScmd command line tool for large-scale sequence analysis. The BrEPS website and downloads for the database creation tool, command line tool and database are freely accessible at http://breps.tu-bs.de. PMID:28750104

  20. The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.

    2008-01-01

    Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150

  1. Tool Condition Monitoring and Remaining Useful Life Prognostic Based on a Wireless Sensor in Dry Milling Operations.

    PubMed

    Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong

    2016-05-31

    Tool breakage causes loss of surface finish and dimensional accuracy for the machined part, or possible damage to the workpiece or machine. Tool Condition Monitoring (TCM) is therefore vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced with a wireless triaxial accelerometer. The vibrations in the three orthogonal directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are extracted in the time, frequency and time-frequency domains. The key features are selected based on Pearson's Correlation Coefficient (PCC). A Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with a Back Propagation Neural Network (BPNN) and a Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL.
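
    The feature-selection step described above can be illustrated with a small sketch that ranks candidate vibration-signal features by their absolute Pearson correlation with measured tool wear. Feature names, the threshold, and the simulated data are assumptions for the example; the paper's NFN prediction stage is not reproduced here.

```python
import numpy as np

# Illustrative PCC-based feature selection: keep features whose absolute
# Pearson correlation with measured tool wear exceeds a threshold.

def select_features(features, wear, names, threshold=0.8):
    """features: (n_samples, n_features) array; wear: (n_samples,) array."""
    selected = []
    for j, name in enumerate(names):
        r = np.corrcoef(features[:, j], wear)[0, 1]
        if abs(r) >= threshold:
            selected.append((name, r))
    return sorted(selected, key=lambda t: -abs(t[1]))

rng = np.random.default_rng(0)
wear = np.linspace(0.0, 0.3, 50)                        # simulated flank wear (mm)
rms_x = 2.0 * wear + 0.01 * rng.standard_normal(50)     # strongly wear-related feature
kurtosis_y = rng.standard_normal(50)                    # unrelated feature
X = np.column_stack([rms_x, kurtosis_y])
print(select_features(X, wear, ["rms_x", "kurtosis_y"]))
```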

  2. RELATIONSHIP BETWEEN PHYLOGENETIC DISTRIBUTION AND GENOMIC FEATURES IN NEUROSPORA CRASSA

    USDA-ARS?s Scientific Manuscript database

    In the post-genome era, insufficient functional annotation of predicted genes greatly restricts the potential of mining genome data. We demonstrate that an evolutionary approach, which is independent of functional annotation, has great potential as a tool for genome analysis. We chose the genome o...

  3. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    PubMed

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool that predicts the secondary structures of RNAs has limited accuracy. On the other hand, a few tools can compensate for imperfect predictions by calculating and visualizing secondary structural information from RNA sequences. It is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By simply giving an RNA sequence to the web server, the user can obtain different types of solutions for the secondary structures, marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of the local bases, the energy changes caused by arbitrary base mutations, as well as measures for validating the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
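
    As a rough illustration of the marginal quantities such a server reports, the sketch below estimates base-pairing probabilities by averaging over a small set of sampled secondary structures in dot-bracket notation. Real tools compute these marginals exactly from the partition function; the sampled structures here are placeholders.

```python
import numpy as np

# Estimate base-pairing probabilities by averaging over sampled structures.
# This is a didactic stand-in for partition-function-based marginals.

def pairs_from_dotbracket(db):
    stack, pairs = [], []
    for i, ch in enumerate(db):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return pairs

def base_pair_probabilities(samples):
    n = len(samples[0])
    p = np.zeros((n, n))
    for db in samples:
        for i, j in pairs_from_dotbracket(db):
            p[i, j] += 1.0
    return p / len(samples)

samples = ["((....))", "((...).)", "(......)"]
print(base_pair_probabilities(samples)[0, 7])  # probability that bases 1 and 8 pair
```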

  4. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.

  5. ORCAN-a web-based meta-server for real-time detection and functional annotation of orthologs.

    PubMed

    Zielezinski, Andrzej; Dziubek, Michal; Sliski, Jan; Karlowski, Wojciech M

    2017-04-15

    ORCAN (ORtholog sCANner) is a web-based meta-server for one-click evolutionary and functional annotation of protein sequences. The server combines information from the most popular orthology-prediction resources, including four tools and four online databases. Functional annotation utilizes five additional comparisons between the query and identified homologs, including: sequence similarity, protein domain architectures, functional motifs, Gene Ontology term assignments and a list of associated articles. Furthermore, the server uses a plurality-based rating system to evaluate the orthology relationships and to rank the reference proteins by their evolutionary and functional relevance to the query. Using a dataset of ∼1 million true yeast orthologs as a sample reference set, we show that combining multiple orthology-prediction tools in ORCAN increases the sensitivity and precision by 1-2 percentage points. The service is available for free at http://www.combio.pl/orcan/ . wmk@amu.edu.pl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
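
    The plurality-based rating can be illustrated with a minimal sketch in which each orthology resource nominates candidate orthologs and candidates are ranked by the number of resources that agree on them. Resource names and gene identifiers are invented for the example.

```python
from collections import Counter

# Plurality-style ranking: count, for each candidate ortholog, how many
# independent resources nominated it, and rank by that vote count.

def rank_orthologs(predictions_by_resource):
    """predictions_by_resource: dict resource_name -> set of candidate gene IDs."""
    votes = Counter()
    for candidates in predictions_by_resource.values():
        votes.update(set(candidates))
    return votes.most_common()

predictions = {
    "resourceA": {"YDR099W"},
    "resourceB": {"YDR099W", "YER177W"},
    "resourceC": {"YDR099W"},
    "resourceD": {"YER177W"},
}
print(rank_orthologs(predictions))  # [('YDR099W', 3), ('YER177W', 2)]
```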

  6. A comparative study of clonal selection algorithm for effluent removal forecasting in septic sludge treatment plant.

    PubMed

    Chun, Ting Sie; Malek, M A; Ismail, Amelia Ritahani

    2015-01-01

    The development of effluent removal prediction is crucial in providing a planning tool for the future development and construction of a septic sludge treatment plant (SSTP), especially in developing countries. To investigate whether the required standards can be met, the effluent quality of an SSTP, namely biological oxygen demand, chemical oxygen demand and total suspended solids, was modelled using an artificial intelligence approach. In this paper, we adopt the clonal selection algorithm (CSA) to set up a prediction model, with a well-established method, the least-squares support vector machine (LS-SVM), as a baseline model. The test results of the case study showed that the CSA-based SSTP model predicted well and performed as satisfactorily as the LS-SVM model. The CSA approach also requires fewer control and training parameters for model simulation than the LS-SVM approach. The ability of the CSA approach to handle limited data samples, non-linear functions and multidimensional pattern recognition makes it a powerful tool for modelling the prediction of effluent removal in an SSTP.

  7. A predicted protein interactome identifies conserved global networks and disease resistance subnetworks in maize

    PubMed Central

    Musungu, Bryan; Bhatnagar, Deepak; Brown, Robert L.; Fakhoury, Ahmad M.; Geisler, Matt

    2015-01-01

    Interactomes are genome-wide roadmaps of protein-protein interactions. They have been produced for humans, yeast, the fruit fly, and Arabidopsis thaliana and have become invaluable tools for generating and testing hypotheses. A predicted interactome for Zea mays (PiZeaM) is presented here as an aid to the research community for this valuable crop species. PiZeaM was built using a proven method of interologs (interacting orthologs) that were identified using both one-to-one and many-to-many orthology between genomes of maize and reference species. Where both maize orthologs occurred for an experimentally determined interaction in the reference species, we predicted a likely interaction in maize. A total of 49,026 unique interactions for 6004 maize proteins were predicted. These interactions are enriched for processes that are evolutionarily conserved, but include many otherwise poorly annotated proteins in maize. The predicted maize interactions were further analyzed by comparing annotation of interacting proteins, including different layers of ontology. A map of pairwise gene co-expression was also generated and compared to predicted interactions. Two global subnetworks were constructed for highly conserved interactions. These subnetworks showed clear clustering of proteins by function. Another subnetwork was created for disease response using a bait and prey strategy to capture interacting partners for proteins that respond to other organisms. Closer examination of this subnetwork revealed the connectivity between biotic and abiotic hormone stress pathways. We believe PiZeaM will provide a useful tool for the prediction of protein function and analysis of pathways for Z. mays researchers and is presented in this paper as a reference tool for the exploration of protein interactions in maize. PMID:26089837
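
    The interolog principle described above can be sketched in a few lines: an interaction observed between two genes in a reference species is transferred to maize whenever both genes have maize orthologs. The gene identifiers below are placeholders, and the real pipeline additionally handles many-to-many orthology and confidence scoring.

```python
# Interolog transfer sketch: predict a maize interaction whenever both partners
# of a reference-species interaction map to maize orthologs.

def predict_interologs(reference_interactions, ortholog_map):
    """reference_interactions: iterable of (geneA, geneB) pairs in the reference species.
    ortholog_map: dict reference gene -> set of maize orthologs."""
    predicted = set()
    for a, b in reference_interactions:
        for ma in ortholog_map.get(a, ()):
            for mb in ortholog_map.get(b, ()):
                if ma != mb:
                    predicted.add(tuple(sorted((ma, mb))))
    return predicted

reference = [("AT1G01010", "AT5G67300")]                 # placeholder reference interaction
orthologs = {"AT1G01010": {"GRMZM2G000001"},
             "AT5G67300": {"GRMZM2G000002", "GRMZM2G000003"}}
print(predict_interologs(reference, orthologs))
```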

  8. 04-ERD-052-Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loots, G G; Ovcharenko, I; Collette, N

    2007-02-26

    Generating the sequence of the human genome represents a colossal achievement for science and mankind. The human genome project information holds great promise for curing disease and preventing bioterror threats, as well as for learning about human origins. Yet converting the sequence data into biologically meaningful information has not been immediately obvious, and we are still in the preliminary stages of understanding how the genome is organized, what the functional building blocks are, and how these sequences mediate complex biological processes. The overarching goal of this program was to develop novel methods and high-throughput strategies for determining the functions of "anonymous" human genes that are evolutionarily deeply conserved in other vertebrates. We coupled analytical tool development and computational predictions regarding gene function with novel high-throughput experimental strategies and tested biological predictions in the laboratory. The tools required for comparative genomic data-mining are fundamentally the same whether they are applied to scientific studies of related microbes or the search for functions of novel human genes. For this reason the tools, conceptual framework and the coupled informatics-experimental biology paradigm we developed in this LDRD have many potential scientific applications relevant to LLNL multidisciplinary research in bio-defense, bioengineering, bionanosciences and microbial and environmental genomics.

  9. Predicting Long-Term Global Outcome after Traumatic Brain Injury: Development of a Practical Prognostic Tool Using the Traumatic Brain Injury Model Systems National Database.

    PubMed

    Walker, William C; Stromberg, Katharine A; Marwitz, Jennifer H; Sima, Adam P; Agyemang, Amma A; Graham, Kristin M; Harrison-Felix, Cynthia; Hoffman, Jeanne M; Brown, Allen W; Kreutzer, Jeffrey S; Merchant, Randall

    2018-05-16

    For patients surviving serious traumatic brain injury (TBI), families and other stakeholders often desire information on long-term functional prognosis, but accurate and easy-to-use clinical tools are lacking. We aimed to build utilitarian decision trees from commonly collected clinical variables to predict Glasgow Outcome Scale (GOS) functional levels at 1, 2, and 5 years after moderate-to-severe closed TBI. Flexible classification tree statistical modeling was used on prospectively collected data from the TBI-Model Systems (TBIMS) inception cohort study. Enrollments occurred at 17 designated, or previously designated, TBIMS inpatient rehabilitation facilities. Analysis included all participants with nonpenetrating TBI injured between January 1997 and January 2017. Sample sizes were 10,125 (year-1), 8,821 (year-2), and 6,165 (year-5) after cross-sectional exclusions (death, vegetative state, insufficient post-injury time, and unavailable outcome). In our final models, post-traumatic amnesia (PTA) duration consistently dominated branching hierarchy and was the lone injury characteristic significantly contributing to GOS predictability. Lower-order variables that added predictability were age, pre-morbid education, productivity, and occupational category. Generally, patient outcomes improved with shorter PTA, younger age, greater pre-morbid productivity, and higher pre-morbid vocational or educational achievement. Across all prognostic groups, the best and worst good recovery rates were 65.7% and 10.9%, respectively, and the best and worst severe disability rates were 3.9% and 64.1%. Predictability in test data sets ranged from C-statistic of 0.691 (year-1; confidence interval [CI], 0.675, 0.711) to 0.731 (year-2; CI, 0.724, 0.738). In conclusion, we developed a clinically useful tool to provide prognostic information on long-term functional outcomes for adult survivors of moderate and severe closed TBI. Predictive accuracy for GOS level was demonstrated in an independent test sample. Length of PTA, a clinical marker of injury severity, was by far the most critical outcome determinant.
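
    As a hedged illustration of the flexible classification-tree approach, the sketch below fits a shallow decision tree to synthetic data using the kinds of predictors named in the abstract (PTA duration, age, education). The data, tree settings, and resulting splits are invented and do not reproduce the TBIMS analysis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative classification tree on synthetic data. Predictor names mirror
# the abstract (PTA duration, age, years of education); everything else is
# invented for the sketch.

rng = np.random.default_rng(1)
n = 500
pta_days = rng.gamma(shape=2.0, scale=10.0, size=n)      # post-traumatic amnesia duration
age = rng.uniform(18, 80, size=n)
education_years = rng.integers(8, 20, size=n)

# Synthetic outcome: good recovery is more likely with short PTA and younger age.
logit = 2.0 - 0.08 * pta_days - 0.02 * age + 0.05 * education_years
good_recovery = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([pta_days, age, education_years])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30).fit(X, good_recovery)
print(export_text(tree, feature_names=["pta_days", "age", "education_years"]))
```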

  10. MusiteDeep: a deep-learning framework for general and kinase-specific phosphorylation site prediction.

    PubMed

    Wang, Duolin; Zeng, Shuai; Xu, Chunhui; Qiu, Wangren; Liang, Yanchun; Joshi, Trupti; Xu, Dong

    2017-12-15

    Computational methods for phosphorylation site prediction play important roles in protein function studies and experimental design. Most existing methods are based on feature extraction, which may result in incomplete or biased features. Deep learning as the cutting-edge machine learning method has the ability to automatically discover complex representations of phosphorylation patterns from the raw sequences, and hence it provides a powerful tool for improvement of phosphorylation site prediction. We present MusiteDeep, the first deep-learning framework for predicting general and kinase-specific phosphorylation sites. MusiteDeep takes raw sequence data as input and uses convolutional neural networks with a novel two-dimensional attention mechanism. It achieves over a 50% relative improvement in the area under the precision-recall curve in general phosphorylation site prediction and obtains competitive results in kinase-specific prediction compared to other well-known tools on the benchmark data. MusiteDeep is provided as an open-source tool available at https://github.com/duolinwang/MusiteDeep. xudong@missouri.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
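
    A minimal sketch of the general idea, under assumed hyperparameters, is shown below: sequence windows centred on a candidate residue are one-hot encoded and fed to a small 1D convolutional network with a sigmoid output. It deliberately omits MusiteDeep's two-dimensional attention mechanism and multi-stage training.

```python
import numpy as np
import tensorflow as tf

# Simplified sequence-window CNN for phosphorylation-site prediction.
# Window size, filter counts, and the alphabet padding symbol are assumptions.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY-"      # 20 residues plus a padding symbol
WINDOW = 33                                 # residues centred on the candidate S/T/Y

def one_hot(window_seq):
    x = np.zeros((WINDOW, len(AMINO_ACIDS)), dtype=np.float32)
    for i, aa in enumerate(window_seq):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, len(AMINO_ACIDS))),
    tf.keras.layers.Conv1D(64, 9, activation="relu"),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the centre residue is phosphorylated
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

example = one_hot("A" * WINDOW)                       # dummy window just to show the shapes
print(model.predict(example[None, ...]).shape)        # -> (1, 1)
```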

  11. The evolving role of physiotherapists in pre-employment screening for workplace injury prevention: are functional capacity evaluations the answer?

    PubMed Central

    Legge, Jennifer

    2013-01-01

    Background Musculoskeletal injuries account for the largest proportion of workplace injuries. In an attempt to predict, and subsequently manage, the risk of sprains and strains in the workplace, employers are turning to pre-employment screening. Functional capacity evaluations (FCEs) are increasing in popularity as a tool for pre-employment screening despite limited published evidence for their validity in healthy working populations. Objectives This narrative review will present an overview of the state of the evidence for pre-employment functional testing, propose a framework for decision-making to determine the suitability of assessment tools, and discuss the role and potential ethical challenges for physiotherapists conducting pre-employment functional testing. Major Findings Much of the evidence surrounding the validity of functional testing is in the context of the injured worker and prediction of return to work. In healthy populations, FCE components, such as aerobic fitness and manual handling activities, have demonstrated predictability of workplace injury in a small number of studies. This predictability improves when workers' performance is compared with the job demands. This job-specific approach is also required to meet anti-discrimination requirements. There are a number of practical limitations to functional testing, although these are not limited to the pre-employment domain. Physiotherapists need to have a clear understanding of the legal requirements and potential ethical challenges that they may face when conducting pre-employment functional assessments (PEFAs). Conclusions Further research is needed into the efficacy of pre-employment testing for workplace injury prevention. Physiotherapists and PEFAs are just one part of a holistic approach to workplace injury prevention. PMID:24124346

  12. Prediction of plant lncRNA by ensemble machine learning classifiers.

    PubMed

    Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian

    2018-05-02

    In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances in discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class with applications largely focused on animal systems. Our approach uses a training set composed only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply computationally predicted hypothetical proteins. Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
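
    The stacking strategy can be sketched with scikit-learn: stochastic gradient boosting and random forest base classifiers are combined by a logistic-regression meta-learner. The synthetic features below stand in for transcript sequence features, and all hyperparameters are placeholders rather than the published settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stacking meta-learner sketch: two diverse base learners combined by a
# logistic-regression meta-model, evaluated with cross-validated ROC AUC.

X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(subsample=0.8, random_state=0)),  # stochastic gradient boosting
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print(cross_val_score(stack, X, y, cv=3, scoring="roc_auc").mean())
```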

  13. From Using Tools to Using Language in Infant Siblings of Children with Autism.

    PubMed

    Sparaci, Laura; Northrup, Jessie B; Capirci, Olga; Iverson, Jana M

    2018-02-10

    Forty-one high-risk infants (HR) with an older sibling with autism spectrum disorder (ASD) were observed longitudinally at 10, 12, 18 and 24 months of age during a tool use task in a play-like scenario. Changes in grasp types and functional actions produced with a spoon were assessed during elicited tool use. Outcome and vocabulary measures were available at 36 months, distinguishing: 11 HR-ASD, 15 HR-language delay and 15 HR-no delay. Fewer HR-ASD infants produced grasp types facilitating spoon use at 24 months and functional actions at 10 months than HR-no delay. Production of functional actions in HR infants at 10 months predicted word comprehension at 12 months and word production at 24 and 36 months.

  14. Virtual Interactomics of Proteins from Biochemical Standpoint

    PubMed Central

    Kubrycht, Jaroslav; Sigler, Karel; Souček, Pavel

    2012-01-01

    Virtual interactomics represents a rapidly developing scientific area on the boundary between bioinformatics and interactomics. Protein-related virtual interactomics comprises instrumental tools for the prediction, simulation, and networking of most interactions important for structural and individual reproduction, differentiation, recognition, signaling, regulation, and metabolic pathways of cells and organisms. Here, we describe the main areas of virtual protein interactomics, that is, structurally based comparative analysis and prediction of functionally important interacting sites, mimotope-assisted and combined epitope prediction, molecular (protein) docking studies, and investigation of protein interaction networks. Detailed information about some interesting methodological approaches and online accessible programs or databases is displayed in our tables. A considerable part of the text deals with searches for common conserved or functionally convergent protein regions and subgraphs of conserved interaction networks, new outstanding trends, and clinically interesting results. In agreement with the presented data and relationships, virtual interactomic tools improve our scientific knowledge, help us to formulate working hypotheses, and frequently also mediate important in silico simulations. PMID:22928109

  15. Isotope and Chemical Methods in Support of the U.S. Geological Survey Science Strategy, 2003-2008

    USGS Publications Warehouse

    Rye, R.O.; Johnson, C.A.; Landis, G.P.; Hofstra, A.H.; Emsbo, P.; Stricker, C.A.; Hunt, A.G.; Rusk, B.G.

    2008-01-01

    Principal functions of the Mineral Resources Program are providing information to decision-makers related to mineral deposits on federal lands and predicting the environmental consequences of the mining or natural weathering of those deposits. Performing these functions requires that predictions be made of the likelihood of undiscovered deposits. The predictions are based on geologic and geoenvironmental models that are constructed for the various types of mineral deposits from detailed descriptions of actual deposits and detailed understanding of the processes that formed them. Over the past three decades the understanding of ore-forming processes has benefitted greatly from the integration of laboratory-based geochemical tools with field observations and other data sources. Under the aegis of the Evolution of Ore Deposits and Technology Transfer Project (EODTTP), a five-year effort that terminated in 2008, the Mineral Resources Program provided state-of-the-art analytical capabilities to support applications of several related geochemical tools.

  16. Designer's unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.

    1992-01-01

    A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  17. Electron-Ion Dynamics with Time-Dependent Density Functional Theory: Towards Predictive Solar Cell Modeling: Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maitra, Neepa

    2016-07-14

    This project investigates the accuracy of currently-used functionals in time-dependent density functional theory, which is today routinely used to predict and design materials and computationally model processes in solar energy conversion. The rigorously-based electron-ion dynamics method developed here sheds light on traditional methods and overcomes challenges those methods have. The fundamental research undertaken here is important for building reliable and practical methods for materials discovery. The ultimate goal is to use these tools for the computational design of new materials for solar cell devices of high efficiency.

  18. Using the genome aggregation database, computational pathogenicity prediction tools, and patch clamp heterologous expression studies to demote previously published long QT syndrome type 1 mutations from pathogenic to benign.

    PubMed

    Clemens, Daniel J; Lentino, Anne R; Kapplinger, Jamie D; Ye, Dan; Zhou, Wei; Tester, David J; Ackerman, Michael J

    2018-04-01

    Mutations in the KCNQ1-encoded Kv7.1 potassium channel cause long QT syndrome (LQTS) type 1 (LQT1). It has been suggested that ∼10%-20% of rare LQTS case-derived variants in the literature may have been published erroneously as LQT1-causative mutations and may be "false positives." The purpose of this study was to determine which previously published KCNQ1 case variants are likely false positives. A list of all published, case-derived KCNQ1 missense variants (MVs) was compiled. The occurrence of each MV within the Genome Aggregation Database (gnomAD) was assessed. Eight in silico tools were used to predict each variant's pathogenicity. Case-derived variants that were either (1) too frequently found in gnomAD or (2) absent in gnomAD but predicted to be pathogenic by ≤2 tools were considered potential false positives. Three of these variants were characterized functionally using whole-cell patch clamp technique. Overall, there were 244 KCNQ1 case-derived MVs. Of these, 29 (12%) were seen in ≥10 individuals in gnomAD and are demotable. However, 157 of 244 MVs (64%) were absent in gnomAD. Of these, 7 (4%) were predicted to be pathogenic by ≤2 tools, 3 of which we characterized functionally. There was no significant difference in current density between heterozygous KCNQ1-F127L, -P477L, or -L619M variant-containing channels compared to KCNQ1-WT. This study offers preliminary evidence for the demotion of 32 (13%) previously published LQT1 MVs. Of these, 29 were demoted because of their frequent sighting in gnomAD. Additionally, in silico analysis and in vitro functional studies have facilitated the demotion of 3 ultra-rare MVs (F127L, P477L, L619M). Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
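
    The two demotion criteria stated above translate directly into a simple triage rule, sketched below with fabricated variant records (the gnomAD counts and in silico call counts are placeholders, not study data).

```python
# Triage of case-derived missense variants using the criteria described in the
# abstract: (1) observed in >= 10 gnomAD individuals, or (2) absent from gnomAD
# but called pathogenic by <= 2 of 8 in silico tools.

def triage(variant):
    """variant: dict with 'gnomad_count' (int) and 'pathogenic_calls' (0-8)."""
    if variant["gnomad_count"] >= 10:
        return "candidate for demotion (too common in gnomAD)"
    if variant["gnomad_count"] == 0 and variant["pathogenic_calls"] <= 2:
        return "candidate for demotion (absent but weak in silico support)"
    return "retain pending functional characterization"

examples = [
    {"name": "var1", "gnomad_count": 42, "pathogenic_calls": 5},
    {"name": "var2", "gnomad_count": 0, "pathogenic_calls": 1},
    {"name": "var3", "gnomad_count": 0, "pathogenic_calls": 7},
]
for v in examples:
    print(v["name"], "->", triage(v))
```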

  19. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed-loop compensator, and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed-loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initialization of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end-to-end operation of the motion base system (providing facilities for software-in-the-loop testing), and provide mechanical geometry and sensor data visualizations as well as function generator setup and evaluation.

  20. Implementation of channel-routing routines in the Water Erosion Prediction Project (WEPP) model

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliott; Shuhui Dun; Sergey Lapin; Fritz R. Fiedler; Dennis C. Flanagan

    2010-01-01

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous-simulation, watershed hydrology and erosion model. It is an important tool for water erosion simulation owing to its unique functionality in representing diverse landuse and management conditions. Its applicability is limited to relatively small watersheds since its current version does...

  1. Prediction of Acoustic Loads Generated by Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Perez, Linamaria; Allgood, Daniel C.

    2011-01-01

    NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels, they may cause damage both to humans and to structures surrounding the testing area. To prevent such damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were being used at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, which replicates the same prediction methods as the previous codes but eliminates the drawbacks of the individual codes. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.

  2. Enzyme Function Initiative-Enzyme Similarity Tool (EFI-EST): A web tool for generating protein sequence similarity networks.

    PubMed

    Gerlt, John A; Bouvier, Jason T; Davidson, Daniel B; Imker, Heidi J; Sadkhin, Boris; Slater, David R; Whalen, Katie L

    2015-08-01

    The Enzyme Function Initiative, an NIH/NIGMS-supported Large-Scale Collaborative Project (EFI; U54GM093342; http://enzymefunction.org/), is focused on devising and disseminating bioinformatics and computational tools as well as experimental strategies for the prediction and assignment of functions (in vitro activities and in vivo physiological/metabolic roles) to uncharacterized enzymes discovered in genome projects. Protein sequence similarity networks (SSNs) are visually powerful tools for analyzing sequence relationships in protein families (H.J. Atkinson, J.H. Morris, T.E. Ferrin, and P.C. Babbitt, PLoS One 2009, 4, e4345). However, the members of the biological/biomedical community have not had access to the capability to generate SSNs for their "favorite" protein families. In this article we announce the EFI-EST (Enzyme Function Initiative-Enzyme Similarity Tool) web tool (http://efi.igb.illinois.edu/efi-est/) that is available without cost for the automated generation of SSNs by the community. The tool can create SSNs for the "closest neighbors" of a user-supplied protein sequence from the UniProt database (Option A) or of members of any user-supplied Pfam and/or InterPro family (Option B). We provide an introduction to SSNs, a description of EFI-EST, and a demonstration of the use of EFI-EST to explore sequence-function space in the OMP decarboxylase superfamily (PF00215). This article is designed as a tutorial that will allow members of the community to use the EFI-EST web tool for exploring sequence/function space in protein families. Copyright © 2015 Elsevier B.V. All rights reserved.
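
    To illustrate what an SSN is at its simplest, the sketch below builds a network with networkx from precomputed pairwise similarity scores and a user-chosen threshold; connected components then correspond to putative clusters of related sequences. The scores are made-up placeholders; EFI-EST computes them server-side from all-vs-all sequence comparisons.

```python
import networkx as nx

# Minimal sequence similarity network: nodes are sequences, edges are drawn
# when the pairwise similarity score exceeds a chosen threshold.

pairwise_scores = {
    ("seqA", "seqB"): 250.0,
    ("seqA", "seqC"): 180.0,
    ("seqB", "seqC"): 240.0,
    ("seqC", "seqD"): 40.0,
}

def build_ssn(scores, threshold):
    g = nx.Graph()
    for (a, b), score in scores.items():
        g.add_node(a)
        g.add_node(b)
        if score >= threshold:
            g.add_edge(a, b, score=score)
    return g

ssn = build_ssn(pairwise_scores, threshold=100.0)
print(list(nx.connected_components(ssn)))  # clusters that emerge at this threshold
```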

  3. Predicting individual brain functional connectivity using a Bayesian hierarchical model.

    PubMed

    Dai, Tian; Guo, Ying

    2017-02-15

    Network-oriented analysis of functional magnetic resonance imaging (fMRI), especially resting-state fMRI, has revealed important association between abnormal connectivity and brain disorders such as schizophrenia, major depression and Alzheimer's disease. Imaging-based brain connectivity measures have become a useful tool for investigating the pathophysiology, progression and treatment response of psychiatric disorders and neurodegenerative diseases. Recent studies have started to explore the possibility of using functional neuroimaging to help predict disease progression and guide treatment selection for individual patients. These studies provide the impetus to develop statistical methodology that would help provide predictive information on disease progression-related or treatment-related changes in neural connectivity. To this end, we propose a prediction method based on Bayesian hierarchical model that uses individual's baseline fMRI scans, coupled with relevant subject characteristics, to predict the individual's future functional connectivity. A key advantage of the proposed method is that it can improve the accuracy of individualized prediction of connectivity by combining information from both group-level connectivity patterns that are common to subjects with similar characteristics as well as individual-level connectivity features that are particular to the specific subject. Furthermore, our method also offers statistical inference tools such as predictive intervals that help quantify the uncertainty or variability of the predicted outcomes. The proposed prediction method could be a useful approach to predict the changes in individual patient's brain connectivity with the progression of a disease. It can also be used to predict a patient's post-treatment brain connectivity after a specified treatment regimen. Another utility of the proposed method is that it can be applied to test-retest imaging data to develop a more reliable estimator for individual functional connectivity. We show there exists a nice connection between our proposed estimator and a recently developed shrinkage estimator of connectivity measures in the neuroimaging community. We develop an expectation-maximization (EM) algorithm for estimation of the proposed Bayesian hierarchical model. Simulations studies are performed to evaluate the accuracy of our proposed prediction methods. We illustrate the application of the methods with two data examples: the longitudinal resting-state fMRI from ADNI2 study and the test-retest fMRI data from Kirby21 study. In both the simulation studies and the fMRI data applications, we demonstrate that the proposed methods provide more accurate prediction and more reliable estimation of individual functional connectivity as compared with alternative methods. Copyright © 2017 Elsevier Inc. All rights reserved.
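
    The intuition of borrowing strength from the group can be illustrated with a simple shrinkage estimator: each individual connectivity value is pulled toward the group mean in proportion to how noisy the individual scan is relative to between-subject variability. This is a simplified stand-in for the full Bayesian hierarchical model, with invented variance components.

```python
import numpy as np

# Shrinkage sketch: blend an individual's noisy baseline connectivity with the
# group mean, weighting by the ratio of between-subject to total variability.

def shrink_connectivity(individual, group_mean, within_var, between_var):
    weight = between_var / (between_var + within_var)   # how much to trust the individual scan
    return weight * individual + (1.0 - weight) * group_mean

group_mean = np.array([[1.0, 0.30], [0.30, 1.0]])       # group-average correlation matrix
individual = np.array([[1.0, 0.65], [0.65, 1.0]])       # one subject's noisy baseline estimate
print(shrink_connectivity(individual, group_mean, within_var=0.04, between_var=0.02))
```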

  4. Software tool for data mining and its applications

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Ye, Chenzhou; Chen, Nianyi

    2002-03-01

    A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher, clustering, hyper-envelope, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine function modules: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, hyper-envelope, support vector machines, and visualization. The principles and knowledge representation of some of these data mining modules are described. The software tool is implemented in Visual C++ under Windows 2000. Non-monotonicity in data mining is handled by concept hierarchies and layered mining. The software tool has been satisfactorily applied to the prediction of regularities in the formation of ternary intermetallic compounds in alloy systems and to the diagnosis of brain glioma.

  5. Overcoming redundancies in bedside nursing assessments by validating a parsimonious meta-tool: findings from a methodological exercise study.

    PubMed

    Palese, Alvisa; Marini, Eva; Guarnier, Annamaria; Barelli, Paolo; Zambiasi, Paola; Allegrini, Elisabetta; Bazoli, Letizia; Casson, Paola; Marin, Meri; Padovan, Marisa; Picogna, Michele; Taddia, Patrizia; Chiari, Paolo; Salmaso, Daniele; Marognolli, Oliva; Canzan, Federica; Ambrosi, Elisa; Saiani, Luisa; Grassetti, Luca

    2016-10-01

    There is growing interest in validating tools aimed at supporting the clinical decision-making process and research. However, clinicians have reported increased bureaucratization of clinical practice and redundancies in the measures collected. Redundancies in clinical assessments negatively affect both patients and nurses. The aim was to validate a meta-tool measuring the risks/problems currently estimated by multiple tools used in daily practice. A secondary analysis of a database was performed, using cross-validation and longitudinal study designs. In total, 1464 patients admitted to 12 medical units in 2012 were assessed at admission with the Brass, Barthel, Conley and Braden tools. Pertinent outcomes, such as the occurrence of post-discharge need for resources and functional decline at discharge, as well as falls and pressure sores, were measured. Explorative factor analysis of each tool, inter-tool correlations and a conceptual evaluation of the redundant/similar items across tools were performed. The validation of the meta-tool was then performed through explorative factor analysis, confirmatory factor analysis and structural equation modelling to establish the ability of the meta-tool to predict the outcomes estimated by the original tools. High correlations between the tools emerged (r = 0.428 to 0.867), with common variance from 18.3% to 75.1%. Through conceptual evaluation and explorative factor analysis, the items were reduced from 42 to 20, and the three factors that emerged were confirmed by confirmatory factor analysis. According to the structural equation model results, two of the three emergent factors predicted the outcomes. Of the initial 42 items, the meta-tool retains 20 items capable of predicting the same outcomes as the original tools. © 2016 John Wiley & Sons, Ltd.

  6. The Protein Interactome of Mycobacteriophage Giles Predicts Functions for Unknown Proteins.

    PubMed

    Mehla, Jitender; Dedrick, Rebekah M; Caufield, J Harry; Siefring, Rachel; Mair, Megan; Johnson, Allison; Hatfull, Graham F; Uetz, Peter

    2015-08-01

    Mycobacteriophages are viruses that infect mycobacterial hosts and are prevalent in the environment. Nearly 700 mycobacteriophage genomes have been completely sequenced, revealing considerable diversity and genetic novelty. Here, we have determined the protein complement of mycobacteriophage Giles by mass spectrometry and mapped its genome-wide protein interactome to help elucidate the roles of its 77 predicted proteins, 50% of which have no known function. About 22,000 individual yeast two-hybrid (Y2H) tests with four different Y2H vectors, followed by filtering and retest screens, resulted in 324 reproducible protein-protein interactions, including 171 (136 nonredundant) high-confidence interactions. The complete set of high-confidence interactions among Giles proteins reveals new mechanistic details and predicts functions for unknown proteins. The Giles interactome is the first for any mycobacteriophage and one of just five known phage interactomes so far. Our results will help in understanding mycobacteriophage biology and aid in development of new genetic and therapeutic tools to understand Mycobacterium tuberculosis. Mycobacterium tuberculosis causes over 9 million new cases of tuberculosis each year. Mycobacteriophages, viruses of mycobacterial hosts, hold considerable potential to understand phage diversity, evolution, and mycobacterial biology, aiding in the development of therapeutic tools to control mycobacterial infections. The mycobacteriophage Giles protein-protein interaction network allows us to predict functions for unknown proteins and shed light on major biological processes in phage biology. For example, Giles gp76, a protein of unknown function, is found to associate with phage packaging and maturation. The functions of mycobacteriophage-derived proteins may suggest novel therapeutic approaches for tuberculosis. Our ORFeome clone set of Giles proteins and the interactome data will be useful resources for phage interactomics. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  7. Cognitive Demands of Lower Paleolithic Toolmaking

    PubMed Central

    Stout, Dietrich; Hecht, Erin; Khreisheh, Nada; Bradley, Bruce; Chaminade, Thierry

    2015-01-01

    Stone tools provide some of the most abundant, continuous, and high resolution evidence of behavioral change over human evolution, but their implications for cognitive evolution have remained unclear. We investigated the neurophysiological demands of stone toolmaking by training modern subjects in known Paleolithic methods (“Oldowan”, “Acheulean”) and collecting structural and functional brain imaging data as they made technical judgments (outcome prediction, strategic appropriateness) about planned actions on partially completed tools. Results show that this task affected neural activity and functional connectivity in dorsal prefrontal cortex, that effect magnitude correlated with the frequency of correct strategic judgments, and that the frequency of correct strategic judgments was predictive of success in Acheulean, but not Oldowan, toolmaking. This corroborates hypothesized cognitive control demands of Acheulean toolmaking, specifically including information monitoring and manipulation functions attributed to the "central executive" of working memory. More broadly, it develops empirical methods for assessing the differential cognitive demands of Paleolithic technologies, and expands the scope of evolutionary hypotheses that can be tested using the available archaeological record. PMID:25875283

  8. The Potential of Virtual Reality to Assess Functional Communication in Aphasia

    ERIC Educational Resources Information Center

    Garcia, Linda J.; Rebolledo, Mercedes; Metthe, Lynn; Lefebvre, Renee

    2007-01-01

    Speech-language pathologists (SLPs) who work with adults with cognitive-linguistic impairments, including aphasia, have long needed an assessment tool that predicts ability to function in the real world. In this article, it is argued that virtual reality (VR)-supported approaches can address this need. Using models of disability such as the…

  9. A Computational Study of the Energy Dissipation Through an Acrylic Target Impacted by Various Size FSP

    DTIC Science & Technology

    2009-06-01

    … data, and then returns an array that describes the line. This function, when compared to the LOGEST statistical function of Microsoft Excel, … threats continues to grow, the ability to predict materials performance using advanced modeling tools increases. The current paper has demonstrated …

  10. Force Project Technology Presentation to the NRCC

    DTIC Science & Technology

    2014-02-04

    Functional Bridge components, Smart Odometer, Adv Pretreatment, Smart Bridge, Multi-functional Gap Crossing, Fuel Automated Tracking System, Adv … comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s): … Validated Dynamic Modeling tool based on a parametric study using material models to reliably predict the textile mechanics of the hose.

  11. Automatic Tool for Local Assembly Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whole-community shotgun sequencing of total DNA (i.e., metagenomics) and total RNA (i.e., metatranscriptomics) has provided a wealth of information on microbial community structure, predicted functions, and metabolic networks, and can even reconstruct complete genomes directly. Here we present ATLAS (Automatic Tool for Local Assembly Structures), a comprehensive pipeline for the assembly, annotation, and genomic binning of metagenomic and metatranscriptomic data with an integrated framework for Multi-Omics. This will provide an open-source tool for the Multi-Omic community at large.

  12. Decoding genes with coexpression networks and metabolomics - 'majority report by precogs'.

    PubMed

    Saito, Kazuki; Hirai, Masami Y; Yonekura-Sakakibara, Keiko

    2008-01-01

    Following the sequencing of whole genomes of model plants, high-throughput decoding of gene function is a major challenge in modern plant biology. In view of remarkable technical advances in transcriptomics and metabolomics, integrated analysis of these 'omics' by data-mining informatics is an excellent tool for prediction and identification of gene function, particularly for genes involved in complicated metabolic pathways. The availability of Arabidopsis public transcriptome datasets containing data of >1000 microarrays reinforces the potential for prediction of gene function by transcriptome coexpression analysis. Here, we review the strategy of combining transcriptome and metabolome as a powerful technology for studying the functional genomics of model plants and also crop and medicinal plants.

  13. Risk Prediction Models for Acute Kidney Injury in Critically Ill Patients: Opus in Progressu.

    PubMed

    Neyra, Javier A; Leaf, David E

    2018-05-31

    Acute kidney injury (AKI) is a complex systemic syndrome associated with high morbidity and mortality. Among critically ill patients admitted to intensive care units (ICUs), the incidence of AKI is as high as 50% and is associated with dismal outcomes. Thus, the development and validation of clinical risk prediction tools that accurately identify patients at high risk for AKI in the ICU is of paramount importance. We provide a comprehensive review of 3 clinical risk prediction tools that have been developed for incident AKI occurring in the first few hours or days following admission to the ICU. We found substantial heterogeneity among the clinical variables that were examined and included as significant predictors of AKI in the final models. The area under the receiver operating characteristic curves was ∼0.8 for all 3 models, indicating satisfactory model performance, though positive predictive values ranged from only 23 to 38%. Hence, further research is needed to develop more accurate and reproducible clinical risk prediction tools. Strategies for improved assessment of AKI susceptibility in the ICU include the incorporation of dynamic (time-varying) clinical parameters, as well as biomarker, functional, imaging, and genomic data. © 2018 S. Karger AG, Basel.

  14. A Consensus Method for the Prediction of ‘Aggregation-Prone’ Peptides in Globular Proteins

    PubMed Central

    Tsolis, Antonios C.; Papandreou, Nikos C.; Iconomidou, Vassiliki A.; Hamodrakas, Stavros J.

    2013-01-01

    The purpose of this work was to construct a consensus prediction algorithm of ‘aggregation-prone’ peptides in globular proteins, combining existing tools. This allows comparison of the different algorithms and the production of more objective and accurate results. Eleven (11) individual methods are combined to produce AMYLPRED2, a publicly and freely available web tool for academic users (http://biophysics.biol.uoa.gr/AMYLPRED2), for the consensus prediction of amyloidogenic determinants/‘aggregation-prone’ peptides in proteins from sequence alone. The performance of AMYLPRED2 indicates that it functions better than individual aggregation-prediction algorithms, as perhaps expected. AMYLPRED2 is a useful tool for identifying amyloid-forming regions in proteins that are associated with several conformational diseases, called amyloidoses, such as Alzheimer's, Parkinson's, prion diseases and type II diabetes. It may also be useful for understanding the properties of protein folding and misfolding and for helping to control protein aggregation/solubility in biotechnology (recombinant proteins forming bacterial inclusion bodies) and biotherapeutics (monoclonal antibodies and biopharmaceutical proteins). PMID:23326595
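
    The consensus step described above can be illustrated with a minimal Python sketch, assuming per-residue hit flags from several individual predictors: residues flagged by at least a chosen number of tools are reported as 'aggregation-prone'. The mock tools, the vote threshold, and the data are hypothetical placeholders, not the actual AMYLPRED2 component methods or settings.

        # Minimal consensus-voting sketch (hypothetical inputs, not AMYLPRED2 itself).
        def consensus_regions(per_tool_hits, min_votes=2):
            """per_tool_hits: one boolean list per tool, one flag per residue.
            Returns residue indices flagged by at least min_votes tools."""
            n_residues = len(per_tool_hits[0])
            consensus = []
            for i in range(n_residues):
                votes = sum(tool[i] for tool in per_tool_hits)
                if votes >= min_votes:
                    consensus.append(i)
            return consensus

        # Three mock predictors over a 10-residue sequence, requiring 2 agreeing tools.
        tools = [
            [False, True, True, True, False, False, False, True, True, False],
            [False, False, True, True, True, False, False, False, True, False],
            [False, True, True, False, False, False, False, True, True, True],
        ]
        print(consensus_regions(tools, min_votes=2))  # -> [1, 2, 3, 7, 8]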

  15. Tool Condition Monitoring and Remaining Useful Life Prognostic Based on a Wireless Sensor in Dry Milling Operations

    PubMed Central

    Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong

    2016-01-01

    Tool breakage causes loss of surface finish and dimensional accuracy in the machined part, and possibly damage to the workpiece or machine. Tool Condition Monitoring (TCM) is therefore vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced with a wireless triaxial accelerometer. The vibrations in three directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are extracted in the time, frequency and time–frequency domains. The key features are selected based on Pearson’s Correlation Coefficient (PCC). A Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with a Back Propagation Neural Network (BPNN) and a Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL. PMID:27258277
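
    The feature-selection step can be sketched in Python as follows, assuming each extracted vibration feature is a per-pass series compared against measured tool wear; features whose absolute Pearson correlation exceeds a threshold are retained. The feature names, data, and the 0.8 cut-off are illustrative assumptions rather than the paper's actual values.

        # Pearson-correlation feature selection sketch (illustrative data and threshold).
        import numpy as np

        def select_features_by_pcc(features, wear, threshold=0.8):
            """features: dict of name -> 1-D array (one value per milling pass);
            wear: measured flank wear per pass. Keeps features with |r| >= threshold."""
            selected = {}
            for name, values in features.items():
                r = np.corrcoef(values, wear)[0, 1]
                if abs(r) >= threshold:
                    selected[name] = r
            return selected

        wear = np.array([0.05, 0.08, 0.12, 0.18, 0.25, 0.33])
        features = {
            "rms_x": np.array([0.9, 1.1, 1.5, 2.0, 2.6, 3.4]),       # grows with wear
            "kurtosis_y": np.array([3.1, 2.9, 3.2, 3.0, 3.1, 2.8]),  # roughly flat
        }
        print(select_features_by_pcc(features, wear))  # keeps only 'rms_x'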

  16. Quokka: a comprehensive tool for rapid and accurate prediction of kinase family-specific phosphorylation sites in the human proteome.

    PubMed

    Li, Fuyi; Li, Chen; Marquez-Lago, Tatiana T; Leier, André; Akutsu, Tatsuya; Purcell, Anthony W; Smith, A Ian; Lithgow, Trevor; Daly, Roger J; Song, Jiangning; Chou, Kuo-Chen

    2018-06-27

    Kinase-regulated phosphorylation is a ubiquitous type of post-translational modification (PTM) in both eukaryotic and prokaryotic cells. Phosphorylation plays fundamental roles in many signalling pathways and biological processes, such as protein degradation and protein-protein interactions. Experimental studies have revealed that signalling defects caused by aberrant phosphorylation are highly associated with a variety of human diseases, especially cancers. In light of this, a number of computational methods aiming to accurately predict protein kinase family-specific or kinase-specific phosphorylation sites have been established, thereby facilitating phosphoproteomic data analysis. In this work, we present Quokka, a novel bioinformatics tool that allows users to rapidly and accurately identify human kinase family-regulated phosphorylation sites. Quokka was developed by using a variety of sequence scoring functions combined with an optimized logistic regression algorithm. We evaluated Quokka based on well-prepared up-to-date benchmark and independent test datasets, curated from the Phospho.ELM and UniProt databases, respectively. The independent test demonstrates that Quokka improves the prediction performance compared with state-of-the-art computational tools for phosphorylation prediction. In summary, our tool provides users with high-quality predicted human phosphorylation sites for hypothesis generation and biological validation. The Quokka webserver and datasets are freely available at http://quokka.erc.monash.edu/. Supplementary data are available at Bioinformatics online.
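
    The general recipe of scoring sequence windows around a candidate site and feeding the scores to a logistic regression classifier can be sketched in Python as below. The window features, training windows, and labels are invented for illustration and do not reproduce Quokka's actual scoring functions, kinase-family models, or training data.

        # Toy window-scoring + logistic regression sketch (not Quokka's real features).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def window_scores(window):
            """Two toy features: fraction of S/T/Y and fraction of acidic residues."""
            sty = sum(window.count(a) for a in "STY") / len(window)
            acidic = sum(window.count(a) for a in "DE") / len(window)
            return [sty, acidic]

        windows = ["RRASVAGKD", "LLSDEEDGE", "GGGAVLLIP", "KKSSTPDEE"]
        labels = [1, 1, 0, 1]  # 1 = window centred on a known phosphosite (mock labels)

        X = np.array([window_scores(w) for w in windows])
        clf = LogisticRegression().fit(X, labels)
        print(clf.predict_proba(np.array([window_scores("AASPTDEQK")]))[:, 1])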

  17. Small hydropower spot prediction using SWAT and a diversion algorithm, case study: Upper Citarum Basin

    NASA Astrophysics Data System (ADS)

    Kardhana, Hadi; Arya, Doni Khaira; Hadihardaja, Iwan K.; Widyaningtyas; Riawan, Edi; Lubis, Atika

    2017-11-01

    Small-Scale Hydropower (SHP) has been an important source of electric power in Indonesia. Indonesia is a vast country consisting of more than 17,000 islands. It has a large freshwater resource, with roughly 3 m of rainfall and 2 m of runoff. Much of its topography is mountainous and remote but abundant in potential energy. Millions of people do not have sufficient access to electricity, and some live in remote places. Recently, SHP development has been encouraged to supply energy to these places. The development of global hydrology data provides an opportunity to predict the distribution of hydropower potential. In this paper, we demonstrate a run-of-river SHP spot prediction tool using SWAT and a river diversion algorithm. The Soil and Water Assessment Tool (SWAT), driven by a 10-year period of CFSR (Climate Forecast System Reanalysis) data, was used to predict the spatially distributed flow cumulative distribution function (CDF). A simple algorithm that maximizes the potential head at a location through a river diversion, representing the head race and penstock, was then applied. Firm flow and power of the SHP were estimated from the CDF and the algorithm. The tool was applied to the Upper Citarum River Basin, and three out of four existing hydropower locations were well predicted. The result implies that this tool can support the acceleration of SHP development at an early phase.
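
    The last two steps, estimating firm flow from the flow cumulative distribution function and converting a diverted head into power, can be illustrated with a short Python sketch. The 90% exceedance level, the turbine efficiency, the 25 m head, and the synthetic flow record are assumptions for illustration, not values taken from the study.

        # Firm flow from a flow-duration curve and run-of-river power (illustrative values).
        import numpy as np

        def firm_flow(daily_flows_m3s, exceedance=0.9):
            """Flow equalled or exceeded 'exceedance' fraction of the time (e.g. Q90)."""
            return np.quantile(daily_flows_m3s, 1.0 - exceedance)

        def hydropower_kw(flow_m3s, head_m, efficiency=0.8):
            """P = rho * g * Q * H * eta, returned in kW."""
            rho, g = 1000.0, 9.81
            return rho * g * flow_m3s * head_m * efficiency / 1000.0

        flows = np.random.default_rng(0).gamma(shape=2.0, scale=3.0, size=3650)  # mock 10-year record
        q_firm = firm_flow(flows)                                 # m^3/s available 90% of the time
        print(round(hydropower_kw(q_firm, head_m=25.0), 1))       # kW for an assumed 25 m diverted head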

  18. Know Your Enemy: Successful Bioinformatic Approaches to Predict Functional RNA Structures in Viral RNAs.

    PubMed

    Lim, Chun Shen; Brown, Chris M

    2017-01-01

    Structured RNA elements may control virus replication, transcription and translation, and their distinct features are being exploited by novel antiviral strategies. Viral RNA elements continue to be discovered using combinations of experimental and computational analyses. However, the wealth of sequence data, notably from deep viral RNA sequencing, viromes, and metagenomes, necessitates computational approaches being used as an essential discovery tool. In this review, we describe practical approaches being used to discover functional RNA elements in viral genomes. In addition to success stories in new and emerging viruses, these approaches have revealed some surprising new features of well-studied viruses e.g., human immunodeficiency virus, hepatitis C virus, influenza, and dengue viruses. Some notable discoveries were facilitated by new comparative analyses of diverse viral genome alignments. Importantly, comparative approaches for finding RNA elements embedded in coding and non-coding regions differ. With the exponential growth of computer power we have progressed from stem-loop prediction on single sequences to cutting edge 3D prediction, and from command line to user friendly web interfaces. Despite these advances, many powerful, user friendly prediction tools and resources are underutilized by the virology community.

  19. Know Your Enemy: Successful Bioinformatic Approaches to Predict Functional RNA Structures in Viral RNAs

    PubMed Central

    Lim, Chun Shen; Brown, Chris M.

    2018-01-01

    Structured RNA elements may control virus replication, transcription and translation, and their distinct features are being exploited by novel antiviral strategies. Viral RNA elements continue to be discovered using combinations of experimental and computational analyses. However, the wealth of sequence data, notably from deep viral RNA sequencing, viromes, and metagenomes, necessitates computational approaches being used as an essential discovery tool. In this review, we describe practical approaches being used to discover functional RNA elements in viral genomes. In addition to success stories in new and emerging viruses, these approaches have revealed some surprising new features of well-studied viruses e.g., human immunodeficiency virus, hepatitis C virus, influenza, and dengue viruses. Some notable discoveries were facilitated by new comparative analyses of diverse viral genome alignments. Importantly, comparative approaches for finding RNA elements embedded in coding and non-coding regions differ. With the exponential growth of computer power we have progressed from stem-loop prediction on single sequences to cutting edge 3D prediction, and from command line to user friendly web interfaces. Despite these advances, many powerful, user friendly prediction tools and resources are underutilized by the virology community. PMID:29354101

  20. PhytoCRISP-Ex: a web-based and stand-alone application to find specific target sequences for CRISPR/CAS editing.

    PubMed

    Rastogi, Achal; Murik, Omer; Bowler, Chris; Tirichine, Leila

    2016-07-01

    With the emerging interest in phytoplankton research, the need to establish genetic tools for the functional characterization of genes is indispensable. The CRISPR/Cas9 system is now well recognized as an efficient and accurate reverse genetic tool for genome editing. Several computational tools have been published that allow researchers to find candidate target sequences for the engineering of CRISPR vectors, while searching for possible off-targets of the predicted candidates. These tools provide built-in genome databases of common model organisms that are used for CRISPR target prediction. Although their predictions are highly sensitive, their design is inadequate for non-model genomes, most notably protists. This motivated us to design a new CRISPR target finding tool, PhytoCRISP-Ex. Our software offers CRISPR target predictions using an extended list of phytoplankton genomes and also delivers a user-friendly standalone application that can be used for any genome. The software attempts to integrate, for the first time, most available phytoplankton genome information and provides a web-based platform for Cas9 target prediction within them with high sensitivity. By offering a standalone version, PhytoCRISP-Ex remains independent of any particular organism, widening its applicability in high-throughput pipelines. PhytoCRISP-Ex outperforms all existing tools by computing the availability of restriction sites over the most probable Cas9 cleavage sites, which can be ideal for mutant screens. PhytoCRISP-Ex is a simple, fast and accurate web interface with 13 pre-indexed and regularly updated phytoplankton genomes. The software was also designed as a UNIX-based standalone application that allows the user to search for target sequences in the genomes of a variety of other species.
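
    The core of any Cas9 target finder is enumerating 20-nt protospacers followed by an NGG PAM; a minimal forward-strand sketch in Python is shown below. This is a generic illustration of that idea only; it does not include the reverse strand, the off-target search, or the restriction-site scoring that PhytoCRISP-Ex actually performs, and the demo sequence is made up.

        # Enumerate forward-strand Cas9 targets: 20-mer protospacer + NGG PAM (sketch only).
        import re

        def find_cas9_targets(sequence):
            """Yield (start, protospacer, pam) for every 20-mer followed by NGG."""
            seq = sequence.upper()
            for match in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
                yield match.start(), match.group(1), match.group(2)

        demo = "ATGCGTACGTTAGCCTAGGATCCGATCGTACGATCGTAGCAGGTT"  # made-up sequence
        for start, spacer, pam in find_cas9_targets(demo):
            print(start, spacer, pam)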

  1. BRCA1/2 missense mutations and the value of in-silico analyses.

    PubMed

    Sadowski, Carolin E; Kohlstedt, Daniela; Meisel, Cornelia; Keller, Katja; Becker, Kerstin; Mackenroth, Luisa; Rump, Andreas; Schröck, Evelin; Wimberger, Pauline; Kast, Karin

    2017-11-01

    The clinical implications of genetic variants in BRCA1/2 in healthy and affected individuals are considerable. Variant interpretation, however, is especially challenging for missense variants, the majority of which are classified as variants of unknown clinical significance (VUS). Computational (in-silico) predictive programs are easy to access, but represent only one tool out of a wide range of complementary approaches to classify VUS. With this single-center study, we aimed to evaluate the impact of in-silico analyses on a spectrum of different BRCA1/2 missense variants. We conducted mutation analysis of BRCA1/2 in 523 index patients with suspected hereditary breast and ovarian cancer (HBOC). Classification of the genetic variants was performed according to the German Consortium (GC)-HBOC database. Additionally, all missense variants were classified by the following three in-silico prediction tools: SIFT, Mutation Taster (MT2) and PolyPhen2 (PPH2). Overall, 201 different variants were ranked as pathogenic, neutral, or unknown, 68 of which were missense variants. The classification of missense variants by in-silico tools resulted in a higher proportion of pathogenic calls (25% vs. 13.2%) compared with the GC-HBOC classification. Altogether, more than fifty percent (38/68, 55.9%) of missense variants were ranked differently. The sensitivity of the in-silico tools for mutation prediction was 88.9% (PPH2), 100% (SIFT) and 100% (MT2). We found a relevant discrepancy in variant classification when using in-silico prediction tools, resulting in potential overestimation and/or underestimation of cancer risk. More reliable, notably gene-specific, prediction tools and functional tests are needed to improve clinical counseling. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  2. An analytical method for prediction of stability lobes diagram of milling of large-size thin-walled workpiece

    NASA Astrophysics Data System (ADS)

    Yao, Jiming; Lin, Bin; Guo, Yu

    2017-01-01

    Unlike the milling of common thin-walled workpieces, in the milling of large-size thin-walled workpieces chatter is also likely to occur in the axial direction along the spindle because of the low stiffness of the workpiece in that direction. An analytical method for predicting the stability lobes of milling of large-size thin-walled workpieces is presented in this paper. The method considers not only the frequency response function of the tool point but also the frequency response function of the workpiece.

  3. Parcellation of left parietal tool representations by functional connectivity

    PubMed Central

    Garcea, Frank E.; Z. Mahon, Bradford

    2014-01-01

    Manipulating a tool according to its function requires the integration of visual, conceptual, and motor information, a process subserved in part by left parietal cortex. How these different types of information are integrated and how their integration is reflected in neural responses in the parietal lobule remains an open question. Here, participants viewed images of tools and animals during functional magnetic resonance imaging (fMRI). K-means clustering over time series data was used to parcellate left parietal cortex into subregions based on functional connectivity to a whole brain network of regions involved in tool processing. One cluster, in the inferior parietal cortex, expressed privileged functional connectivity to the left ventral premotor cortex. A second cluster, in the vicinity of the anterior intraparietal sulcus, expressed privileged functional connectivity with the left medial fusiform gyrus. A third cluster in the superior parietal lobe expressed privileged functional connectivity with dorsal occipital cortex. Control analyses using Monte Carlo style permutation tests demonstrated that the clustering solutions were outside the range of what would be observed based on chance ‘lumpiness’ in random data, or mere anatomical proximity. Finally, hierarchical clustering analyses were used to formally relate the resulting parcellation scheme of left parietal tool representations to previous work that has parcellated the left parietal lobule on purely anatomical grounds. These findings demonstrate significant heterogeneity in the functional organization of manipulable object representations in left parietal cortex, and outline a framework that generates novel predictions about the causes of some forms of upper limb apraxia. PMID:24892224
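
    The parcellation approach described above can be sketched in Python: each voxel's time series is correlated with a set of seed time series from the tool-processing network, and voxels are clustered on that connectivity profile with k-means. The random data, the number of seeds, and k = 3 (matching the three clusters reported) are stand-in assumptions, not the study's actual preprocessing or cluster-selection procedure.

        # Connectivity-based k-means parcellation sketch (random stand-in data).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        voxel_ts = rng.standard_normal((500, 200))  # 500 parietal voxels x 200 time points
        seed_ts = rng.standard_normal((8, 200))     # 8 seed regions in the tool network

        # Connectivity profile: correlation of every voxel with every seed region.
        profiles = np.corrcoef(np.vstack([voxel_ts, seed_ts]))[:500, 500:]

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
        print(np.bincount(labels))                  # voxel count assigned to each subregion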

  4. G-LoSA for Prediction of Protein-Ligand Binding Sites and Structures.

    PubMed

    Lee, Hui Sun; Im, Wonpil

    2017-01-01

    Recent advances in high-throughput structure determination and computational protein structure prediction have significantly enriched the universe of protein structure. However, there is still a large gap between the number of available protein structures and the number of proteins whose function is annotated with high accuracy. Computational structure-based protein function prediction has emerged to reduce this knowledge gap. The identification of a ligand binding site and its structure is critical to the determination of a protein's molecular function. We present a computational methodology for predicting small-molecule ligand binding sites and ligand structures using G-LoSA, our protein local structure alignment and similarity measurement tool. All the computational procedures described here can be easily implemented using G-LoSA Toolkit, a package of standalone software programs and preprocessed PDB structure libraries. G-LoSA and G-LoSA Toolkit are freely available to academic users at http://compbio.lehigh.edu/GLoSA . We also illustrate a case study to show the potential of our template-based approach harnessing G-LoSA for protein function prediction.

  5. Evaluating Functional Annotations of Enzymes Using the Gene Ontology.

    PubMed

    Holliday, Gemma L; Davidson, Rebecca; Akiva, Eyal; Babbitt, Patricia C

    2017-01-01

    The Gene Ontology (GO) (Ashburner et al., Nat Genet 25(1):25-29, 2000) is a powerful tool in the informatics arsenal of methods for evaluating annotations in a protein dataset. It is critical to be able to identify the nearest well-annotated homologue of a protein of interest, to predict where misannotation has occurred, and to know how confident you can be in the annotations assigned to those proteins. In this chapter we explore what makes an enzyme unique and how we can use GO to infer aspects of protein function based on sequence similarity. These can range from the identification of misannotation or other errors in a predicted function to accurate function prediction for an enzyme of entirely unknown function. Although GO annotation applies to any gene product, we focus here on describing our approach for hierarchical classification of enzymes in the Structure-Function Linkage Database (SFLD) (Akiva et al., Nucleic Acids Res 42(Database issue):D521-530, 2014) as a guide for informed utilisation of annotation transfer based on GO terms.

  6. The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a Screening Tool.

    PubMed

    Bastiaens, Leo; Galus, James

    2018-03-01

    The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, and positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains were poor because of a high rate of false-positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms.
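
    The four screening parameters reported above follow directly from a 2x2 confusion matrix; a minimal Python sketch is given below. The counts are made up to mimic the reported pattern (many false positives, so sensitivity and negative predictive value stay high while positive predictive value collapses) and are not data from the study.

        # Screening-test arithmetic from a 2x2 table (hypothetical counts).
        def screening_metrics(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),  # true positives among those with the disorder
                "specificity": tn / (tn + fp),  # true negatives among those without it
                "ppv": tp / (tp + fp),          # positive predictive value
                "npv": tn / (tn + fn),          # negative predictive value
            }

        # Hypothetical 150-patient screen with many false positives.
        print(screening_metrics(tp=18, fp=60, fn=2, tn=70))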

  7. A scrutiny of tools used for assessment of hospital disaster preparedness in Iran.

    PubMed

    Heidaranlu, Esmail; Ebadi, Abbas; Ardalan, Ali; Khankeh, Hamidreza

    2015-01-01

    In emergencies and disasters, hospitals are among the first and most vital organizations involved. To determine the preparedness of a hospital to deal with a crisis, the health system requires tools compatible with the type of crisis. The present study aimed to evaluate the accuracy of tools used for the assessment of hospital preparedness for major emergencies and disasters in Iran. In this review study, all studies conducted on hospital preparedness to deal with disasters in Iran during the period 2000-2015 were examined. World Health Organization (WHO) criteria were used to assess the focus of studies for inclusion in this review. Of the 36 articles obtained, 28 articles that met the inclusion criteria were analyzed. In accordance with the WHO standards, the focus of the tools used was examined in three areas (structural, nonstructural, and functional). In the nonstructural area, the preparedness tools focused most on medical gases and least on office and storeroom furnishings and equipment. In the functional area, the greatest focus was on the operational plan, and the least on business continuity. Half of the tools in domestic studies considered structural safety as an indicator of hospital preparedness. The present study showed that the tools used contain only a few of the indicators approved by the WHO, especially in the functional area. Moreover, the lack of a standard indigenous tool was evident, especially in the functional area. Thus, to assess hospital disaster preparedness, the national health system requires new tools compatible with scientific tool design principles, to enable a more accurate prediction of hospital preparedness in disasters before they occur.

  8. Complete fold annotation of the human proteome using a novel structural feature space.

    PubMed

    Middleton, Sarah A; Illuminati, Joseph; Kim, Junhyong

    2017-04-13

    Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding and recognizing novel folds is difficult such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Sarah A.; Illuminati, Joseph; Kim, Junhyong

    Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding and recognizing novel folds is difficult such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Finally, our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families.

  10. Complete fold annotation of the human proteome using a novel structural feature space

    PubMed Central

    Middleton, Sarah A.; Illuminati, Joseph; Kim, Junhyong

    2017-01-01

    Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding and recognizing novel folds is difficult such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families. PMID:28406174

  11. Musite, a tool for global prediction of general and kinase-specific phosphorylation sites.

    PubMed

    Gao, Jianjiong; Thelen, Jay J; Dunker, A Keith; Xu, Dong

    2010-12-01

    Reversible protein phosphorylation is one of the most pervasive post-translational modifications, regulating diverse cellular processes in various organisms. High throughput experimental studies using mass spectrometry have identified many phosphorylation sites, primarily from eukaryotes. However, the vast majority of phosphorylation sites remain undiscovered, even in well studied systems. Because mass spectrometry-based experimental approaches for identifying phosphorylation events are costly, time-consuming, and biased toward abundant proteins and proteotypic peptides, in silico prediction of phosphorylation sites is potentially a useful alternative strategy for whole proteome annotation. Because of various limitations, current phosphorylation site prediction tools were not well designed for comprehensive assessment of proteomes. Here, we present a novel software tool, Musite, specifically designed for large scale predictions of both general and kinase-specific phosphorylation sites. We collected phosphoproteomics data in multiple organisms from several reliable sources and used them to train prediction models by a comprehensive machine-learning approach that integrates local sequence similarities to known phosphorylation sites, protein disorder scores, and amino acid frequencies. Application of Musite on several proteomes yielded tens of thousands of phosphorylation site predictions at a high stringency level. Cross-validation tests show that Musite achieves some improvement over existing tools in predicting general phosphorylation sites, and it is at least comparable with those for predicting kinase-specific phosphorylation sites. In Musite V1.0, we have trained general prediction models for six organisms and kinase-specific prediction models for 13 kinases or kinase families. Although the current pretrained models were not correlated with any particular cellular conditions, Musite provides a unique functionality for training customized prediction models (including condition-specific models) from users' own data. In addition, with its easily extensible open source application programming interface, Musite is aimed at being an open platform for community-based development of machine learning-based phosphorylation site prediction applications. Musite is available at http://musite.sourceforge.net/.

  12. ElemeNT: a computational tool for detecting core promoter elements.

    PubMed

    Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar

    2015-01-01

    Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression.

  13. A Focus on the Death Kinetics in Predictive Microbiology: Benefits and Limits of the Most Important Models and Some Tools Dealing with Their Application in Foods

    PubMed Central

    Bevilacqua, Antonio; Speranza, Barbara; Sinigaglia, Milena; Corbo, Maria Rosaria

    2015-01-01

    Predictive Microbiology (PM) deals with the mathematical modeling of microorganisms in foods for different applications (challenge tests, evaluation of microbiological shelf life, prediction of the microbiological hazards connected with foods, etc.). An interesting and important part of PM focuses on the use of primary functions to fit death-kinetics data for spoilage, pathogenic, and useful microorganisms following thermal or non-conventional treatments; these functions can also be used to model survivors throughout storage. The main topic of this review is the most important death models (negative Gompertz, log-linear, shoulder/tail, Weibull, Weibull+tail, re-parameterized Weibull, biphasic approach, etc.), with a focus on the benefits and limits of each model; in addition, the last section addresses the most important tools for using death kinetics and predictive microbiology in a user-friendly way. PMID:28231222
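
    Two of the primary death models named above can be written compactly in log10-survivor form: the log-linear model and the Weibull model in the Mafart parameterization, log10 N(t) = log10 N0 - (t/delta)^p. The Python sketch below uses illustrative parameter values only.

        # Log-linear and Weibull (Mafart) survivor curves (illustrative parameters).
        import numpy as np

        def log_linear(t, log_n0, d_value):
            """Classical first-order inactivation: one log10 reduction per D minutes."""
            return log_n0 - t / d_value

        def weibull(t, log_n0, delta, p):
            """delta = time for the first log10 reduction; p < 1 gives tailing, p > 1 shouldering."""
            return log_n0 - (t / delta) ** p

        t = np.linspace(0, 10, 6)                        # minutes of treatment
        print(log_linear(t, log_n0=7.0, d_value=2.0))    # straight line in log10 scale
        print(weibull(t, log_n0=7.0, delta=2.0, p=0.6))  # concave-up, tailing curve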

  14. The use of copula functions for predictive analysis of correlations between extreme storm tides

    NASA Astrophysics Data System (ADS)

    Domino, Krzysztof; Błachowicz, Tomasz; Ciupak, Maurycy

    2014-11-01

    In this paper we present a method for the quantitative description of weakly predictable extreme hydrological events in an inland sea. Correlations between variations at individual measuring points were investigated using combined statistical methods. As the main tool for this analysis we used a two-dimensional copula function sensitive to correlated extreme effects. Additionally, a newly proposed methodology based on Detrended Fluctuation Analysis (DFA) and Anomalous Diffusion (AD) was used to predict negative and positive auto-correlations and to guide the optimum choice of copula functions. As a practical example we analysed maximum storm-tide data recorded at five spatially separated locations on the Baltic Sea. For the analysis we used the Gumbel, Clayton, and Frank copula functions and introduced the reversed Clayton copula. The application of our research model is associated with modelling the risk of high storm tides and possible storm flooding.
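
    The Clayton copula and its reversed (survival) form, which shifts the dependence to the upper tail relevant for joint extreme storm tides, can be written in a few lines of Python. The theta value and the example quantiles below are illustrative, not parameters fitted to the Baltic Sea data.

        # Clayton and reversed (survival) Clayton copula CDFs (illustrative parameters).
        def clayton(u, v, theta):
            """Clayton copula: strong lower-tail dependence for theta > 0."""
            return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

        def reversed_clayton(u, v, theta):
            """Survival (180-degree rotated) Clayton: dependence moved to the upper tail."""
            return u + v - 1.0 + clayton(1.0 - u, 1.0 - v, theta)

        u, v, theta = 0.95, 0.9, 2.0
        print(clayton(u, v, theta), reversed_clayton(u, v, theta))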

  15. The wavelength dependence and an interpretation of the photometric parameters of Mars

    NASA Technical Reports Server (NTRS)

    Weaver, W. R.; Meador, W. E.

    1976-01-01

    The photometric function developed by Meador and Weaver has been used with photometric data from the bright desert areas of Mars to determine the wavelength dependence of the three photometric parameters of that function and to provide some predictions about the physical properties of the surface. Knowledge of the parameters permits the brightness of these areas of Mars to be determined for scattering geometry over the wavelength range of 0.45 to 0.70 micrometer. The changes in the photometric parameters with wavelength are shown to be consistent with qualitative theoretical predictions, and the predictions of surface properties are shown to be consistent with conditions that might exist in these regions of Mars. The photometric function is shown to have good potential as a diagnostic tool for the determination of surface properties, and the consistency of the behavior of the photometric parameters is shown to be good support for the validity of the photometric function.

  16. Generalized Partial Least Squares Approach for Nominal Multinomial Logit Regression Models with a Functional Covariate

    ERIC Educational Resources Information Center

    Albaqshi, Amani Mohammed H.

    2017-01-01

    Functional Data Analysis (FDA) has attracted substantial attention over the last two decades. Within FDA, classifying curves into two or more categories is consistently of interest to scientists, but multi-class prediction within FDA remains challenging because most classification tools have been limited to binary response applications. The functional…

  17. Bioinformatics tools in predictive ecology: applications to fisheries

    PubMed Central

    Tucker, Allan; Duplisea, Daniel

    2012-01-01

    There has been a huge effort in the advancement of analytical techniques for molecular biological data over the past decade. This has led to many novel algorithms that are specialized to deal with data associated with biological phenomena, such as gene expression and protein interactions. In contrast, ecological data analysis has remained focused to some degree on off-the-shelf statistical techniques though this is starting to change with the adoption of state-of-the-art methods, where few assumptions can be made about the data and a more explorative approach is required, for example, through the use of Bayesian networks. In this paper, some novel bioinformatics tools for microarray data are discussed along with their ‘crossover potential’ with an application to fisheries data. In particular, a focus is made on the development of models that identify functionally equivalent species in different fish communities with the aim of predicting functional collapse. PMID:22144390

  18. Bioinformatics tools in predictive ecology: applications to fisheries.

    PubMed

    Tucker, Allan; Duplisea, Daniel

    2012-01-19

    There has been a huge effort in the advancement of analytical techniques for molecular biological data over the past decade. This has led to many novel algorithms that are specialized to deal with data associated with biological phenomena, such as gene expression and protein interactions. In contrast, ecological data analysis has remained focused to some degree on off-the-shelf statistical techniques though this is starting to change with the adoption of state-of-the-art methods, where few assumptions can be made about the data and a more explorative approach is required, for example, through the use of Bayesian networks. In this paper, some novel bioinformatics tools for microarray data are discussed along with their 'crossover potential' with an application to fisheries data. In particular, a focus is made on the development of models that identify functionally equivalent species in different fish communities with the aim of predicting functional collapse.

  19. Metagenomic Functional Potential Predicts Degradation Rates of a Model Organophosphorus Xenobiotic in Pesticide Contaminated Soils

    PubMed Central

    Jeffries, Thomas C.; Rayu, Smriti; Nielsen, Uffe N.; Lai, Kaitao; Ijaz, Ali; Nazaries, Loic; Singh, Brajesh K.

    2018-01-01

    Chemical contamination of natural and agricultural habitats is an increasing global problem and a major threat to sustainability and human health. Organophosphorus (OP) compounds are one major class of contaminant and can undergo microbial degradation; however, no studies have applied system-wide ecogenomic tools to investigate OP degradation, or used metagenomics to understand the underlying mechanisms of biodegradation in situ and predict degradation potential. Thus, there is a lack of knowledge regarding the functional genes and genomic potential underpinning degradation and community responses to contamination. Here we address this knowledge gap by performing shotgun sequencing of community DNA from agricultural soils with a history of pesticide usage and profiling shifts in functional genes and microbial taxon abundance. Our results showed two distinct groups of soils defined by differing functional and taxonomic profiles. Degradation assays suggested that these groups corresponded to the organophosphorus degradation potential of the soils, with the fastest-degrading community being defined by increases in transport and nutrient cycling pathways and enzymes potentially involved in phosphorus metabolism. This was against a backdrop of taxonomic community shifts potentially related to contamination adaptation and reflecting the legacy of exposure. Overall, our results highlight the value of using holistic, system-wide metagenomic approaches as a tool to predict microbial degradation in the context of the ecology of contaminated habitats. PMID:29515526

  20. Designers' unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.

    1992-01-01

    The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  1. Well-characterized sequence features of eukaryote genomes and implications for ab initio gene prediction.

    PubMed

    Huang, Ying; Chen, Shi-Yi; Deng, Feilong

    2016-01-01

    In silico analysis of DNA sequences is an important area of computational biology in the post-genomic era. Over the past two decades, computational approaches for the ab initio prediction of gene structure from genome sequence alone have greatly facilitated our understanding of a variety of biological questions. Although the computational prediction of protein-coding genes is already well established, we still face challenges in robustly finding non-coding RNA genes, such as miRNAs and lncRNAs. Two main aspects of ab initio gene prediction are the computed values used to describe sequence features and the algorithm used to train the discriminant function; different combinations of these are employed in various bioinformatic tools. Herein, we briefly review these well-characterized sequence features in eukaryote genomes and their applications to ab initio gene prediction. The main purpose of this article is to provide an overview for beginners who aim to develop related bioinformatic tools.

  2. Evolutionary conservation analysis increases the colocalization of predicted exonic splicing enhancers in the BRCA1 gene with missense sequence changes and in-frame deletions, but not polymorphisms

    PubMed Central

    Pettigrew, Christopher; Wayte, Nicola; Lovelock, Paul K; Tavtigian, Sean V; Chenevix-Trench, Georgia; Spurdle, Amanda B; Brown, Melissa A

    2005-01-01

    Introduction Aberrant pre-mRNA splicing can be more detrimental to the function of a gene than changes in the length or nature of the encoded amino acid sequence. Although predicting the effects of changes in consensus 5' and 3' splice sites near intron:exon boundaries is relatively straightforward, predicting the possible effects of changes in exonic splicing enhancers (ESEs) remains a challenge. Methods As an initial step toward determining which ESEs predicted by the web-based tool ESEfinder in the breast cancer susceptibility gene BRCA1 are likely to be functional, we have determined their evolutionary conservation and compared their location with known BRCA1 sequence variants. Results Using the default settings of ESEfinder, we initially detected 669 potential ESEs in the coding region of the BRCA1 gene. Increasing the threshold score reduced the total number to 464, while taking into consideration the proximity to splice donor and acceptor sites reduced the number to 211. Approximately 11% of these ESEs (23/211) either are identical at the nucleotide level in human, primates, mouse, cow, dog and opossum Brca1 (conserved) or are detectable by ESEfinder in the same position in the Brca1 sequence (shared). The frequency of conserved and shared predicted ESEs between human and mouse is higher in BRCA1 exons (2.8 per 100 nucleotides) than in introns (0.6 per 100 nucleotides). Of conserved or shared putative ESEs, 61% (14/23) were predicted to be affected by sequence variants reported in the Breast Cancer Information Core database. Applying the filters described above increased the colocalization of predicted ESEs with missense changes, in-frame deletions and unclassified variants predicted to be deleterious to protein function, whereas they decreased the colocalization with known polymorphisms or unclassified variants predicted to be neutral. Conclusion In this report we show that evolutionary conservation analysis may be used to improve the specificity of an ESE prediction tool. This is the first report on the prediction of the frequency and distribution of ESEs in the BRCA1 gene, and it is the first reported attempt to predict which ESEs are most likely to be functional and therefore which sequence variants in ESEs are most likely to be pathogenic. PMID:16280041

  3. MysiRNA: improving siRNA efficacy prediction using a machine-learning model combining multi-tools and whole stacking energy (ΔG).

    PubMed

    Mysara, Mohamed; Elhefnawi, Mahmoud; Garibaldi, Jonathan M

    2012-06-01

    The investigation of small interfering RNA (siRNA) and its posttranscriptional gene regulation has become an extremely important research topic, both for fundamental reasons and for potential longer-term therapeutic benefits. Several factors affect the functionality of siRNA, including positional preferences, target accessibility and other thermodynamic features. State-of-the-art tools aim to optimize the selection of target siRNAs by identifying those that may have high experimental inhibition. Such tools implement artificial neural network models, such as Biopredsi and ThermoComposition21, and linear regression models, such as DSIR, i-Score and Scales, among others. However, all these models have limitations in performance. In this work, a new neural-network-trained siRNA scoring/efficacy prediction model was developed by combining two existing scoring algorithms (ThermoComposition21 and i-Score), together with the whole stacking energy (ΔG), in a multi-layer artificial neural network. These three parameters were chosen after a comparative combinatorial study of five well-known tools. Our model, 'MysiRNA', was trained on 2431 siRNA records and tested using three further datasets. MysiRNA was compared with 11 alternative scoring tools in an evaluation study of predicted and experimental siRNA efficiency, where it achieved the highest performance both in terms of correlation coefficient (R(2)=0.600) and receiver operating characteristic analysis (AUC=0.808), improving prediction accuracy by up to 18% with respect to the sensitivity and specificity of the best available tools. MysiRNA is a novel, freely accessible model capable of predicting siRNA inhibition efficiency with improved specificity and sensitivity. This multiclassifier approach could help improve the performance of prediction in several bioinformatics areas. The MysiRNA model, part of the MysiRNA-Designer package [1], is expected to play a key role in siRNA selection and evaluation. Copyright © 2012 Elsevier Inc. All rights reserved.
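
    The idea of stacking two existing efficacy scores together with the whole stacking energy (ΔG) into a small neural-network regressor can be sketched as below. The feature values, labels, and network size are invented for illustration; this is not the trained MysiRNA model or its datasets.

        # Stacking two scores plus delta G into a small MLP regressor (mock data).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Columns: [ThermoComposition21-like score, i-Score-like score, delta G (kcal/mol)]
        X = np.array([
            [0.72, 68.0, -34.1],
            [0.41, 52.5, -28.7],
            [0.88, 75.2, -36.9],
            [0.35, 49.0, -25.3],
        ])
        y = np.array([0.81, 0.55, 0.90, 0.42])  # observed inhibition fractions (mock values)

        model = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[0.60, 61.0, -31.0]]))  # predicted inhibition for a new siRNA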

  4. GPS-Lipid: a robust tool for the prediction of multiple lipid modification sites.

    PubMed

    Xie, Yubin; Zheng, Yueyuan; Li, Hongyu; Luo, Xiaotong; He, Zhihao; Cao, Shuo; Shi, Yi; Zhao, Qi; Xue, Yu; Zuo, Zhixiang; Ren, Jian

    2016-06-16

    As one of the most common post-translational modifications in eukaryotic cells, lipid modification is an important mechanism for the regulation of various aspects of protein function. Over the last decades, three classes of lipid modifications have been increasingly studied. The co-regulation of these different lipid modifications is beginning to be noticed. However, due to the lack of integrated bioinformatics resources, studies of co-regulatory mechanisms are still very limited. In this work, we developed a tool called GPS-Lipid for the prediction of four classes of lipid modifications by integrating the Particle Swarm Optimization with aging leader and challengers (ALC-PSO) algorithm. GPS-Lipid proved to be markedly superior to other similar tools. To facilitate research on lipid modification, we host a publicly available web server at http://lipid.biocuckoo.org with not only the implementation of GPS-Lipid, but also an integrative database and visualization tool. We performed a systematic analysis of the co-regulatory mechanism between different lipid modifications with GPS-Lipid. The results demonstrated that proximal dual-lipid modifications among palmitoylation, myristoylation and prenylation are a key mechanism for regulating various protein functions. In conclusion, GPS-Lipid is expected to serve as a useful resource for research on lipid modifications, especially on their co-regulation.

  5. Analytical Tools for Space Suit Design

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay

    2011-01-01

    As indicated by the implementation of multiple small project teams within the agency, NASA is adopting a lean approach to hardware development that emphasizes quick product realization and rapid response to shifting program and agency goals. Over the past two decades, space suit design has been evolutionary in approach, with emphasis on building prototypes and then testing with the largest practical range of subjects possible. The results of these efforts show continuous improvement but make scaled design and performance predictions almost impossible with limited budgets and little time. Thus, in an effort to start changing the way NASA approaches space suit design and analysis, the Advanced Space Suit group has initiated the development of an integrated design and analysis tool. It is a multi-year, if not decadal, development effort that, when fully implemented, is envisioned to generate analysis of any given space suit architecture or, conversely, predictions of ideal space suit architectures given specific mission parameters. The master tool will exchange information to and from a set of five sub-tool groups in order to generate the desired output. The basic functions of each sub-tool group, the initial relationships between the sub-tools, and a comparison to state-of-the-art software and tools are discussed.

  6. SUPER-FOCUS: A tool for agile functional analysis of shotgun metagenomic data

    DOE PAGES

    Silva, Genivaldo Gueiros Z.; Green, Kevin T.; Dutilh, Bas E.; ...

    2015-10-09

    Analyzing the functional profile of a microbial community from unannotated shotgun sequencing reads is one of the important goals in metagenomics. Functional profiling has valuable applications in biological research because it identifies the abundances of the functional genes of the organisms present in the original sample, answering the question of what they can do. Currently available tools do not scale well with increasing data volumes, which is important because both the number and lengths of the reads produced by sequencing platforms keep increasing. Here, we introduce SUPER-FOCUS, SUbsystems Profile by databasE Reduction using FOCUS, an agile homology-based approach using a reduced reference database to report the subsystems present in metagenomic datasets and profile their abundances. We tested SUPER-FOCUS with over 70 real metagenomes; the results show that it accurately predicts the subsystems present in the profiled microbial communities, and is up to 1,000 times faster than other tools.

  7. Sirius PSB: a generic system for analysis of biological sequences.

    PubMed

    Koh, Chuan Hock; Lin, Sharene; Jedd, Gregory; Wong, Limsoon

    2009-12-01

    Computational tools are essential components of modern biological research. For example, BLAST searches can be used to identify related proteins based on sequence homology; when a new genome is sequenced, prediction models can be used to annotate functional sites such as transcription start sites, translation initiation sites and polyadenylation sites, and to predict protein localization. Here we present Sirius Prediction Systems Builder (PSB), a new computational tool for sequence analysis, classification and searching. Sirius PSB has four main operations: (1) building a classifier, (2) deploying a classifier, (3) searching for proteins similar to query proteins, and (4) preliminary and post-prediction analysis. Sirius PSB supports all these operations via a simple and interactive graphical user interface. Besides being a convenient tool, Sirius PSB introduces two novelties in sequence analysis. First, a genetic algorithm is used to identify interesting features in the feature space. Second, instead of the conventional method of searching for similar proteins via sequence similarity, we introduce searching via feature similarity. To demonstrate the capabilities of Sirius PSB, we have built two prediction models - one for the recognition of Arabidopsis polyadenylation sites and another for the subcellular localization of proteins. Both systems are competitive with current state-of-the-art models when evaluated on public datasets. More notably, the time and effort required to build each model is greatly reduced with the assistance of Sirius PSB. Furthermore, we show that under certain conditions, when BLAST is unable to find related proteins, Sirius PSB can identify functionally related proteins based on their biophysical similarities. Sirius PSB and its related supplements are available at: http://compbio.ddns.comp.nus.edu.sg/~sirius.

  8. Modeling of edge effect in subaperture tool influence functions of computer controlled optical surfacing.

    PubMed

    Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min

    2016-12-20

    Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
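
    The structure of the model, an edge pressure written as a basic pressure distribution multiplied by a correcting function and then fed into Preston's removal law (dz/dt = k·p·v), can be sketched in Python as below. The specific functional forms, the linear correcting ramp, and all constants are assumed placeholders, not the fitted finite-element results from the paper.

        # Edge TIF sketch: removal = Preston constant * (basic pressure * correction) * velocity.
        import numpy as np

        def basic_pressure(r, pad_radius, p0=1.0):
            """Nominal pressure under the pad when fully supported (illustrative uniform form)."""
            return np.where(np.abs(r) <= pad_radius, p0, 0.0)

        def edge_correction(r, pad_radius, overhang_ratio):
            """Illustrative correcting function: pressure rises toward the workpiece edge
            as the pad overhang grows; a linear ramp stands in for the FEA-based fit."""
            return 1.0 + overhang_ratio * np.clip(r / pad_radius, 0.0, 1.0)

        def removal_rate(r, pad_radius, overhang_ratio, velocity, k_preston=1e-7):
            """Preston's law: dz/dt = k * p * v, with p = basic pressure * correction."""
            p = basic_pressure(r, pad_radius) * edge_correction(r, pad_radius, overhang_ratio)
            return k_preston * p * velocity

        r = np.linspace(0.0, 10.0, 5)  # mm from pad centre toward the workpiece edge
        print(removal_rate(r, pad_radius=10.0, overhang_ratio=0.3, velocity=100.0))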

  9. Confronting species distribution model predictions with species functional traits.

    PubMed

    Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M

    2016-02-01

    Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day(-1). Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.
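
    The correlation step reported above (a Pearson r with 95% confidence interval between predicted habitat suitability and observed growth rate) can be reproduced in outline as follows; the data values are invented for illustration.

```python
# Sketch of the correlation step: Pearson r between model-predicted habitat
# suitability and observed growth rates, with a Fisher-z 95% CI.
# The data values below are made up for illustration.
import numpy as np
from scipy import stats

suitability = np.array([0.2, 0.35, 0.5, 0.6, 0.7, 0.85, 0.9])
growth_g_per_day = np.array([0.5, 1.2, 3.0, 4.5, 6.1, 9.8, 12.4])

r, p = stats.pearsonr(suitability, growth_g_per_day)
z = np.arctanh(r)                          # Fisher z-transform
se = 1.0 / np.sqrt(len(suitability) - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f} (95% CI {lo:.2f}, {hi:.2f}), p = {p:.3f}")
```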

  10. Bioinformatics functional analysis of let-7a, miR-34a, and miR-199a/b reveals novel insights into immune system pathways and cancer hallmarks for hepatocellular carcinoma.

    PubMed

    Soliman, Bangly; Salem, Ahmed; Ghazy, Mohamed; Abu-Shahba, Nourhan; El Hefnawi, Mahmoud

    2018-05-01

    Let-7a, miR-34a, and miR-199a/b have gained great attention as master regulators for cellular processes. In particular, these three micro-RNAs act as potential onco-suppressors for hepatocellular carcinoma. Bioinformatics can reveal the functionality of these micro-RNAs through target prediction and functional annotation analysis. In the current study, in silico analysis using innovative servers (miRror Suite, DAVID, miRGator V3.0, GeneTrail) has demonstrated the combinatorial and the individual target genes of these micro-RNAs and further explored their roles in hepatocellular carcinoma progression. There were 87 common target messenger RNAs (p ≤ 0.05) that were predicted to be regulated by the three micro-RNAs using the miRror 2.0 target prediction tool. In addition, the functional enrichment analysis of these targets, performed with the DAVID functional annotation and REACTOME tools, revealed two major immune-related pathways, eight hepatocellular carcinoma hallmark-linked pathways, and two pathways that mediate interconnected processes between immune system and hepatocellular carcinoma hallmarks. Moreover, a protein-protein interaction network for the predicted common targets was obtained using the STRING database. The individual analysis of target genes and pathways for the three micro-RNAs of interest using the miRGator V3.0 and GeneTrail servers revealed some novel predicted target oncogenes such as SOX4, which we validated experimentally, in addition to some regulated pathways of the immune system and hepatocarcinogenesis such as the insulin signaling pathway and the adipocytokine signaling pathway. In general, our results demonstrate that let-7a, miR-34a, and miR-199a/b have novel interactions in different immune system pathways and major hepatocellular carcinoma hallmarks. Thus, our findings shed more light on the roles of these miRNAs as cancer silencers.

  11. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
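
    A minimal sketch of architecture-based reliability computed from a discrete-time Markov chain, in the spirit of Cheung's model, is shown below. The component reliabilities and transfer-of-control probabilities are assumed values; the COSMIC-FFP sizing step described in the abstract is not represented.

```python
# Sketch of architecture-based reliability from a DTMC (Cheung-style model,
# not the paper's COSMIC-FFP calibration): component i succeeds with
# reliability R[i], then control moves to component j with probability P[i, j].
import numpy as np

R = np.array([0.99, 0.97, 0.995])          # per-component reliabilities (assumed)
P = np.array([[0.0, 0.7, 0.3],             # transfer-of-control probabilities
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])             # the last component ends the run

Q = np.diag(R) @ P                          # transitions that survive a component
S = np.linalg.inv(np.eye(len(R)) - Q)       # expected visits before failure/exit
system_reliability = S[0, -1] * R[-1]       # start in component 0, exit from last
print(f"estimated system reliability: {system_reliability:.4f}")
```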

  12. Evolutionary and Functional Relationships in the Truncated Hemoglobin Family.

    PubMed

    Bustamante, Juan P; Radusky, Leandro; Boechi, Leonardo; Estrin, Darío A; Ten Have, Arjen; Martí, Marcelo A

    2016-01-01

    Predicting function from sequence is an important goal in current biological research, and although broad functional assignment is possible when a protein is assigned to a family, predicting functional specificity with accuracy is not straightforward. If function is provided by key structural properties and the relevant properties can be computed using the sequence as the starting point, it should in principle be possible to predict function in detail. The truncated hemoglobin family presents an interesting benchmark study due to their ubiquity, sequence diversity in the context of a conserved fold and the number of characterized members. Their functions are tightly related to O2 affinity and reactivity, as determined by the association and dissociation rate constants, both of which can be predicted and analyzed using in-silico based tools. In the present work we have applied a strategy, which combines homology modeling with molecular based energy calculations, to predict and analyze function of all known truncated hemoglobins in an evolutionary context. Our results show that truncated hemoglobins present conserved family features, but that their structure is flexible enough to allow the switch from high to low affinity in a few evolutionary steps. Most proteins display moderate to high oxygen affinities and multiple ligand migration paths, which, besides some minor trends, show heterogeneous distributions throughout the phylogenetic tree, again suggesting fast functional adaptation. Our data not only deepen our comprehension of the structural basis governing ligand affinity, but also highlight some interesting functional evolutionary trends.

  13. Evolutionary and Functional Relationships in the Truncated Hemoglobin Family

    PubMed Central

    Bustamante, Juan P.; Radusky, Leandro; Boechi, Leonardo; Estrin, Darío A.; ten Have, Arjen; Martí, Marcelo A.

    2016-01-01

    Predicting function from sequence is an important goal in current biological research, and although broad functional assignment is possible when a protein is assigned to a family, predicting functional specificity with accuracy is not straightforward. If function is provided by key structural properties and the relevant properties can be computed using the sequence as the starting point, it should in principle be possible to predict function in detail. The truncated hemoglobin family presents an interesting benchmark study due to their ubiquity, sequence diversity in the context of a conserved fold and the number of characterized members. Their functions are tightly related to O2 affinity and reactivity, as determined by the association and dissociation rate constants, both of which can be predicted and analyzed using in-silico based tools. In the present work we have applied a strategy, which combines homology modeling with molecular based energy calculations, to predict and analyze function of all known truncated hemoglobins in an evolutionary context. Our results show that truncated hemoglobins present conserved family features, but that their structure is flexible enough to allow the switch from high to low affinity in a few evolutionary steps. Most proteins display moderate to high oxygen affinities and multiple ligand migration paths, which, besides some minor trends, show heterogeneous distributions throughout the phylogenetic tree, again suggesting fast functional adaptation. Our data not only deepen our comprehension of the structural basis governing ligand affinity, but also highlight some interesting functional evolutionary trends. PMID:26788940

  14. Daily functioning profile of children with attention deficit hyperactive disorder: A pilot study using an ecological assessment.

    PubMed

    Rosenblum, Sara; Frisch, Carmit; Deutsh-Castel, Tsofia; Josman, Naomi

    2015-01-01

    Children with attention-deficit hyperactivity disorder (ADHD) often present with activities of daily living (ADL) performance deficits. This study aimed to compare the performance characteristics of children with ADHD to those of controls based on the Do-Eat assessment tool, and to establish the tool's validity. Participants were 23 children with ADHD and 24 matched controls, aged 6-9 years. In addition to the Do-Eat, the Children Activity Scale-Parent (ChAS-P) and the Behavioral Rating Inventory of Executive Function (BRIEF) were used to measure sensorimotor abilities and executive function (EF). Significant differences were found in the Do-Eat scores between children with ADHD and controls. Significant moderate correlations were found between the Do-Eat sensorimotor scores, the ChAS-P and the BRIEF scores in the ADHD group. Significant correlations were found between performance on the Do-Eat and the ChAS-P questionnaire scores, verifying the tool's ecological validity. A single discriminant function, described primarily by four Do-Eat variables, correctly classified 95.5% of the study participants into their respective study groups, establishing the tool's predictive validity within this population. These preliminary findings indicate that the Do-Eat may serve as a reliable and valid tool that provides insight into the daily functioning characteristics of children with ADHD. However, further research on larger samples is indicated.

  15. Evaluation of Radiation Belt Space Weather Forecasts for Internal Charging Analyses

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Coffey, Victoria N.; Jun, Insoo; Garrett, Henry B.

    2007-01-01

    A variety of static electron radiation belt models, space weather prediction tools, and energetic electron datasets are used by spacecraft designers and operations support personnel as internal charging code inputs to evaluate electrostatic discharge risks in space systems due to exposure to relativistic electron environments. Evaluating the environment inputs is often accomplished by comparing whether the data set or forecast tool reliably predicts the measured electron flux (or fluence) over some chosen period. While this technique is useful as a model metric, it does not provide the information necessary to evaluate whether short-term deviations of the predicted flux are important in the charging evaluations. In this paper, we use a 1-D internal charging model to compute electric fields generated in insulating materials as a function of time when exposed to relativistic electrons in the Earth's magnetosphere. The resulting fields are assumed to represent the "true" electric fields and are compared with electric field values computed from relativistic electron environments derived from a variety of space environment and forecast tools. Deviations of the predicted fields from the "true" fields, which depend on insulator charging time constants, will be evaluated as a potential metric for determining the importance of predicted and measured relativistic electron flux deviations over a range of time scales.
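
    A toy version of a 1-D internal charging calculation is sketched below: the field in an insulator driven by a deposited current density and relaxing with the dielectric time constant. The material constants and flux profile are assumptions for illustration, not the values used in the paper.

```python
# Toy 1-D internal charging sketch (not the paper's code): field in an insulator
# driven by a deposited current density J(t) and relaxing with the dielectric
# time constant tau = eps * rho, i.e. dE/dt = J/eps - E/tau.
import numpy as np

eps0 = 8.854e-12
eps = 2.1 * eps0                 # assumed permittivity (PTFE-like)
rho = 1e16                       # assumed resistivity, ohm-m
tau = eps * rho                  # dielectric relaxation time, s

dt, t_end = 600.0, 5 * 86400.0   # 10-minute steps over five days
t = np.arange(0.0, t_end, dt)
J = np.where((t > 86400) & (t < 3 * 86400), 1e-12, 1e-13)  # step in e- current, A/m^2

E = np.zeros_like(t)
for i in range(1, len(t)):       # explicit Euler integration
    dEdt = J[i - 1] / eps - E[i - 1] / tau
    E[i] = E[i - 1] + dEdt * dt

print(f"tau = {tau / 3600:.1f} h, peak field = {E.max():.2e} V/m")
```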

  16. Differential Forms: A New Tool in Economics

    NASA Astrophysics Data System (ADS)

    Mimkes, Jürgen

    Econophysics is the transfer of methods from natural to socio-economic sciences. This concept was first applied to finance [1], but it is now also used in various applications of economics and social sciences [2,3]. The present paper focuses on problems in macroeconomics and growth. 1. Neoclassical theory [4, 5] neglects the “ex post” property of income and growth. Income Y(K, L) is assumed to be a function of capital and labor. But functions cannot model the “ex post” character of income. 2. Neoclassical theory is based on a Cobb-Douglas function [6] with variable elasticity α, which may be fitted to economic data. But an undefined elasticity α leads to a descriptive rather than a predictive economic theory. The present paper introduces a new tool - differential forms and path-dependent integrals - to macroeconomics. This is a solution to the problems above: 1. The integral of a non-exact differential form is path-dependent and can only be calculated “ex post”, like income and economic growth. 2. Non-exact differential forms can be made exact by an integrating factor; this leads to a new, well-defined, unique production function F and a predictive economic theory.
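
    The integrating-factor idea can be illustrated with the standard textbook example of a non-exact one-form; this illustration is generic and not taken from the paper itself.

```latex
% Standard illustration of the integrating-factor idea (not from the paper):
% the one-form \delta q = dU + p\,dV is not exact, so its integral over a
% closed path need not vanish (path dependence, the "ex post" character).
% Dividing by the integrating factor T yields an exact form with potential S.
\[
  \delta q \;=\; dU + p\,dV, \qquad \oint \delta q \neq 0, \qquad
  dS \;=\; \frac{\delta q}{T}, \qquad \oint dS = 0 .
\]
```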

  17. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression.

    PubMed

    Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y

    2011-05-15

    There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
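
    The core of a transductive conformal predictor can be sketched in a few lines: each candidate label is temporarily assigned to the test example, a nonconformity score is computed for every example, and the label's p-value is the fraction of scores at least as large as the test example's. The nonconformity measure below (distance to the class mean) and the synthetic data are assumptions for illustration, not the MRI pipeline of the paper.

```python
# Minimal transductive conformal classification sketch (illustrative only):
# a label enters the prediction set if its p-value exceeds 1 - confidence.
import numpy as np

def conformal_predict(X, y, x_new, confidence=0.95):
    labels, p_values = np.unique(y), {}
    for label in labels:
        X_aug = np.vstack([X, x_new])          # transduction: add the test point
        y_aug = np.append(y, label)            # ...with the hypothesised label
        scores = np.array([
            np.linalg.norm(xi - X_aug[y_aug == yi].mean(axis=0))
            for xi, yi in zip(X_aug, y_aug)
        ])
        # p-value: fraction of examples at least as nonconforming as the test point
        p_values[label] = np.mean(scores >= scores[-1])
    region = [l for l, p in p_values.items() if p > 1 - confidence]
    best = max(p_values, key=p_values.get)
    return best, p_values, region

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
print(conformal_predict(X, y, rng.normal(2, 1, 5)))
```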

  18. Development and Validation of a Prediction Model for Pain and Functional Outcomes After Lumbar Spine Surgery.

    PubMed

    Khor, Sara; Lavallee, Danielle; Cizik, Amy M; Bellabarba, Carlo; Chapman, Jens R; Howe, Christopher R; Lu, Dawei; Mohit, A Alex; Oskouian, Rod J; Roh, Jeffrey R; Shonnard, Neal; Dagal, Armagan; Flum, David R

    2018-03-07

    Functional impairment and pain are common indications for the initiation of lumbar spine surgery, but information about expected improvement in these patient-reported outcome (PRO) domains is not readily available to most patients and clinicians considering this type of surgery. To assess population-level PRO response after lumbar spine surgery, and develop/validate a prediction tool for PRO improvement. This statewide multicenter cohort was based at 15 Washington state hospitals representing approximately 75% of the state's spine fusion procedures. The Spine Surgical Care and Outcomes Assessment Program and the survey center at the Comparative Effectiveness Translational Network prospectively collected clinical and PRO data from adult candidates for lumbar surgery, preoperatively and postoperatively, between 2012 and 2016. Prediction models were derived for PRO improvement 1 year after lumbar fusion surgeries on a random sample of 85% of the data and were validated in the remaining 15%. Surgical candidates from 2012 through 2015 were included; follow-up surveying continued until December 31, 2016, and data analysis was completed from July 2016 to April 2017. Functional improvement, defined as a reduction in Oswestry Disability Index score of 15 points or more; and back pain and leg pain improvement, defined as a reduction in Numeric Rating Scale score of 2 points or more. A total of 1965 adult lumbar surgical candidates (mean [SD] age, 61.3 [12.5] years; 944 [59.6%] female) completed baseline surveys before surgery and at least 1 postoperative follow-up survey within 3 years. Of these, 1583 (80.6%) underwent elective lumbar fusion procedures; 1223 (77.3%) had stenosis, and 1033 (65.3%) had spondylolisthesis. Twelve-month follow-up participation rates for each outcome were between 66% and 70%. Improvements were reported in function, back pain, and leg pain at 12 months by 306 of 528 surgical patients (58.0%), 616 of 899 patients (68.5%), and 355 of 464 patients (76.5%), respectively, whose baseline scores indicated moderate to severe symptoms. Among nonoperative patients, 35 (43.8%), 47 (53.4%), and 53 (63.9%) reported improvements in function, back pain, and leg pain, respectively. Demographic and clinical characteristics included in the final prediction models were age, sex, race, insurance status, American Society of Anesthesiologists score, smoking status, diagnoses, prior surgery, prescription opioid use, asthma, and baseline PRO scores. The models had good predictive performance in the validation cohort (concordance statistic, 0.66-0.79) and were incorporated into a patient-facing, web-based interactive tool (https://becertain.shinyapps.io/lumbar_fusion_calculator). The PRO response prediction tool, informed by population-level data, explained most of the variability in pain reduction and functional improvement after surgery. Giving patients accurate information about their likelihood of outcomes may be a helpful component in surgery decision making.
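
    The modeling step can be outlined as a logistic model predicting the binary improvement outcome, evaluated with the concordance statistic on a held-out validation split. Everything below (features, coefficients, data) is synthetic and purely illustrative of the workflow, not the registry data or the published model.

```python
# Hedged sketch of the modeling step: a logistic model predicting whether a
# patient improves, evaluated with the concordance statistic (AUC) on a
# held-out 15% validation split. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(61, 12, n),          # age
    rng.integers(0, 2, n),          # sex
    rng.integers(0, 2, n),          # prescription opioid use
    rng.normal(45, 15, n),          # baseline PRO score
])
logit = -4 + 0.01 * X[:, 0] - 0.8 * X[:, 2] + 0.08 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = improved by the threshold

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.15, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print("concordance statistic:", round(auc, 3))
```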

  19. The Optimal Screening for Prediction of Referral and Outcome (OSPRO) in patients with musculoskeletal pain conditions: a longitudinal validation cohort from the USA

    PubMed Central

    George, Steven Z; Beneciuk, Jason M; Lentz, Trevor A; Wu, Samuel S

    2017-01-01

    Purpose: There is an increased need for determining which patients with musculoskeletal pain benefit from additional diagnostic testing or psychologically informed intervention. The Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort studies were designed to develop and validate standard assessment tools for review of systems and yellow flags. This cohort profile paper provides a description of and future plans for the validation cohort. Participants: Patients (n=440) with primary complaint of spine, shoulder or knee pain were recruited into the OSPRO validation cohort via a national Orthopaedic Physical Therapy-Investigative Network. Patients were followed up at 4 weeks, 6 months and 12 months for pain, functional status and quality of life outcomes. Healthcare utilisation outcomes were also collected at 6 and 12 months. Findings to date: There are no longitudinal findings reported to date from the ongoing OSPRO validation cohort. The previously completed cross-sectional OSPRO development cohort yielded two assessment tools that were investigated in the validation cohort. Future plans: Follow-up data collection was completed in January 2017. Primary analyses will investigate how accurately the OSPRO review of systems and yellow flag tools predict 12-month pain, functional status, quality of life and healthcare utilisation outcomes. Planned secondary analyses include prediction of pain interference and/or development of chronic pain, investigation of treatment expectation on patient outcomes and analysis of patient satisfaction following an episode of physical therapy. Trial registration number: The OSPRO validation cohort was not registered. PMID:28600371

  20. A New Scheme to Characterize and Identify Protein Ubiquitination Sites.

    PubMed

    Nguyen, Van-Nui; Huang, Kai-Yao; Huang, Chien-Hsun; Lai, K Robert; Lee, Tzong-Yi

    2017-01-01

    Protein ubiquitination, involving the conjugation of ubiquitin on lysine residues, serves as an important modulator of many cellular functions in eukaryotes. Recent advancements in proteomic technology have stimulated increasing interest in identifying ubiquitination sites. However, most computational tools for predicting ubiquitination sites are focused on small-scale data. With an increasing number of experimentally verified ubiquitination sites, we were motivated to design a predictive model for identifying lysine ubiquitination sites in large-scale proteome datasets. This work assessed not only single features, such as amino acid composition (AAC), amino acid pair composition (AAPC) and evolutionary information, but also the effectiveness of incorporating two or more features into a hybrid approach to model construction. The support vector machine (SVM) was applied to generate the prediction models for ubiquitination site identification. Evaluation by five-fold cross-validation showed that the SVM models learned from the combination of hybrid features delivered a better prediction performance. Additionally, a motif discovery tool, MDDLogo, was adopted to characterize the potential substrate motifs of ubiquitination sites. The SVM models integrating the MDDLogo-identified substrate motifs could yield an average accuracy of 68.70 percent. Furthermore, the independent testing result showed that the MDDLogo-clustered SVM models could provide a promising accuracy (78.50 percent) and perform better than other prediction tools. Two cases have demonstrated the effective prediction of ubiquitination sites with corresponding substrate motifs.
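
    The simplest feature described above, amino acid composition (AAC), combined with an SVM under five-fold cross-validation, can be sketched as follows; the sequence windows are randomly generated stand-ins for real ubiquitination-site data.

```python
# Sketch of the AAC feature fed to an SVM with five-fold cross-validation.
# Sequence windows are synthetic placeholders, not curated ubiquitination data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(window: str) -> np.ndarray:
    """20-dimensional amino acid composition of a sequence window."""
    return np.array([window.count(a) / len(window) for a in AMINO_ACIDS])

rng = np.random.default_rng(42)
def random_window(bias=None, k=21):
    p = np.ones(20) / 20 if bias is None else bias
    return "".join(rng.choice(list(AMINO_ACIDS), size=k, p=p))

bias = np.ones(20) / 21
bias[AMINO_ACIDS.index("E")] += 1 / 21          # toy bias for the positive class
X = np.array([aac(random_window()) for _ in range(100)] +
             [aac(random_window(bias)) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```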

  1. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth-efficient modulations suitable for use on the linear satellite channel. The underlying principle is the development of receiver structures based on the maximum-likelihood decision rule. The application of performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, to these modulation/demodulation techniques is also presented.

  2. An Alternative Procedure for Estimating Unit Learning Curves,

    DTIC Science & Technology

    1985-09-01

    the model accurately describes the real-life situation, i.e., when the model is properly applied to the data, it can be a powerful tool for...predicting unit production costs. There are, however, some unique estimation problems inherent in the model. The usual method of generating predicted unit...production costs attempts to extend properties of least squares estimators to non-linear functions of these estimators. The result is biased estimates of

  3. SUPER-FOCUS: a tool for agile functional analysis of shotgun metagenomic data

    PubMed Central

    Green, Kevin T.; Dutilh, Bas E.; Edwards, Robert A.

    2016-01-01

    Summary: Analyzing the functional profile of a microbial community from unannotated shotgun sequencing reads is one of the important goals in metagenomics. Functional profiling has valuable applications in biological research because it identifies the abundances of the functional genes of the organisms present in the original sample, answering the question what they can do. Currently, available tools do not scale well with increasing data volumes, which is important because both the number and lengths of the reads produced by sequencing platforms keep increasing. Here, we introduce SUPER-FOCUS, SUbsystems Profile by databasE Reduction using FOCUS, an agile homology-based approach using a reduced reference database to report the subsystems present in metagenomic datasets and profile their abundances. SUPER-FOCUS was tested with over 70 real metagenomes, the results showing that it accurately predicts the subsystems present in the profiled microbial communities, and is up to 1000 times faster than other tools. Availability and implementation: SUPER-FOCUS was implemented in Python, and its source code and the tool website are freely available at https://edwards.sdsu.edu/SUPERFOCUS. Contact: redwards@mail.sdsu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26454280

  4. A study on die wear model of warm and hot forgings

    NASA Astrophysics Data System (ADS)

    Kang, J. H.; Park, I. W.; Jae, J. S.; Kang, S. S.

    1998-05-01

    Factors influencing the service lives of tools in warm and hot forging processes include wear, mechanical fatigue, plastic deformation and thermal fatigue; among these, wear is the predominant cause of tool failure. To predict tool life by wear, Archard's model, in which hardness is considered constant or a function of temperature, is generally applied. In practice, however, die hardness is a function not only of temperature but also of the die's operating time. To account for softening of the die by repeated operation, it is necessary to express die hardness as a function of temperature and time. In this study, wear coefficients were measured for various temperatures and heat treatments of H13 tool steel, and die softening curves were obtained from die reheating experiments. From the experimental results, relationships between tempering parameters and hardness were established to capture the decrease in hardness caused by temperature and time. Finally, a modified Archard wear model in which hardness is a function of the main tempering curve was proposed, and finite element analyses were conducted using the suggested wear model. The proposed wear model was verified by comparing simulations with measured profiles of worn dies.
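
    A minimal sketch of a modified Archard-type calculation, in which die hardness decays with temperature and accumulated operating time, is given below. The softening law and all constants are illustrative assumptions, not the tempering curves fitted in the paper.

```python
# Sketch of a modified Archard-type wear estimate with time- and
# temperature-dependent hardness. All constants are illustrative assumptions.
import numpy as np

def hardness(T_kelvin, t_seconds, H0=600.0, H_inf=350.0, tau0=1e-3, Q=1.2e5):
    """Die hardness (HV) softening toward a tempered floor with thermally
    activated kinetics; a placeholder, not the paper's fitted curve."""
    R_gas = 8.314
    tau = tau0 * np.exp(Q / (R_gas * T_kelvin))      # softening time constant, s
    return H_inf + (H0 - H_inf) * np.exp(-t_seconds / tau)

def archard_wear_increment(K, pressure, sliding, T_kelvin, t_seconds):
    """Incremental wear depth dw = K * p * ds / H(T, t), consistent units assumed."""
    return K * pressure * sliding / hardness(T_kelvin, t_seconds)

# accumulate wear over repeated forging cycles (2 s of hot contact per cycle)
K, p, ds, T = 1e-6, 400.0, 0.05, 800.0               # assumed values
total_depth = sum(archard_wear_increment(K, p, ds, T, n * 2.0) for n in range(1, 5001))
print(f"illustrative wear depth after 5000 cycles: {total_depth * 1e3:.3f} mm")
```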

  5. SUPER-FOCUS: a tool for agile functional analysis of shotgun metagenomic data.

    PubMed

    Silva, Genivaldo Gueiros Z; Green, Kevin T; Dutilh, Bas E; Edwards, Robert A

    2016-02-01

    Analyzing the functional profile of a microbial community from unannotated shotgun sequencing reads is one of the important goals in metagenomics. Functional profiling has valuable applications in biological research because it identifies the abundances of the functional genes of the organisms present in the original sample, answering the question what they can do. Currently, available tools do not scale well with increasing data volumes, which is important because both the number and lengths of the reads produced by sequencing platforms keep increasing. Here, we introduce SUPER-FOCUS, SUbsystems Profile by databasE Reduction using FOCUS, an agile homology-based approach using a reduced reference database to report the subsystems present in metagenomic datasets and profile their abundances. SUPER-FOCUS was tested with over 70 real metagenomes, the results showing that it accurately predicts the subsystems present in the profiled microbial communities, and is up to 1000 times faster than other tools. SUPER-FOCUS was implemented in Python, and its source code and the tool website are freely available at https://edwards.sdsu.edu/SUPERFOCUS. redwards@mail.sdsu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
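
    The profiling step itself, aggregating best-hit alignments against a functionally annotated reference into relative subsystem abundances, can be sketched generically as follows. This is not SUPER-FOCUS code; the gene-to-subsystem table and read hits are made up.

```python
# Generic sketch of functional profiling (not SUPER-FOCUS itself): aggregate
# best-hit alignments of reads into relative subsystem abundances.
from collections import Counter

gene_to_subsystem = {                 # hypothetical annotation table
    "gene_001": "Nitrogen metabolism",
    "gene_002": "Photosynthesis",
    "gene_003": "Nitrogen metabolism",
}
best_hits = ["gene_001", "gene_003", "gene_001", "gene_002", "gene_003"]  # one per read

counts = Counter(gene_to_subsystem[g] for g in best_hits if g in gene_to_subsystem)
total = sum(counts.values())
profile = {subsystem: 100.0 * n / total for subsystem, n in counts.items()}
print(profile)   # e.g. {'Nitrogen metabolism': 80.0, 'Photosynthesis': 20.0}
```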

  6. Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology.

    PubMed

    Cock, Peter J A; Grüning, Björn A; Paszkiewicz, Konrad; Pritchard, Leighton

    2013-01-01

    The Galaxy Project offers the popular web browser-based platform Galaxy for running bioinformatics tools and constructing simple workflows. Here, we present a broad collection of additional Galaxy tools for large scale analysis of gene and protein sequences. The motivating research theme is the identification of specific genes of interest in a range of non-model organisms, and our central example is the identification and prediction of "effector" proteins produced by plant pathogens in order to manipulate their host plant. This functional annotation of a pathogen's predicted capacity for virulence is a key step in translating sequence data into potential applications in plant pathology. This collection includes novel tools, and widely-used third-party tools such as NCBI BLAST+ wrapped for use within Galaxy. Individual bioinformatics software tools are typically available separately as standalone packages, or in online browser-based form. The Galaxy framework enables the user to combine these and other tools to automate organism scale analyses as workflows, without demanding familiarity with command line tools and scripting. Workflows created using Galaxy can be saved and are reusable, so may be distributed within and between research groups, facilitating the construction of a set of standardised, reusable bioinformatic protocols. The Galaxy tools and workflows described in this manuscript are open source and freely available from the Galaxy Tool Shed (http://usegalaxy.org/toolshed or http://toolshed.g2.bx.psu.edu).

  7. Use of protection motivation theory, affect, and barriers to understand and predict adherence to outpatient rehabilitation.

    PubMed

    Grindley, Emma J; Zizzi, Samuel J; Nasypany, Alan M

    2008-12-01

    Protection motivation theory (PMT) has been used in more than 20 different health-related fields to study intentions and behavior, albeit primarily outside the area of injury rehabilitation. In order to examine and predict patient adherence behavior, this study was carried out to explore the use of PMT as a screening tool in a general sample of people with orthopedic conditions. New patients who were more than 18 years old and who were prescribed 4 to 8 weeks of physical therapy treatment (n=229) were administered a screening tool (Sports Injury Rehabilitation Beliefs Scale, Positive and Negative Affect Schedule, and a barriers checklist) prior to treatment. Participants' adherence was assessed with several attendance measures and an in-clinic assessment of behavior. Statistical analyses included correlation, chi-square, multiple regression, and discriminant function analyses. A variety of relationships among affect, barriers, and PMT components were evident. In-clinic behavior and attendance were influenced by affect, whereas dropout status was predicted by affect, severity, self-efficacy, and age. The screening tool used in this study may assist in identifying patients who are at risk for poor adherence and provide valuable information to enhance provider-patient relationships and foster patient adherence. However, it is recommended that more research be conducted to further understand the impact of variables on patient adherence and that the screening tool be enhanced to increase its predictive ability.

  8. Sex Differences in Object Manipulation in Wild Immature Chimpanzees (Pan troglodytes schweinfurthii) and Bonobos (Pan paniscus): Preparation for Tool Use?

    PubMed

    Koops, Kathelijne; Furuichi, Takeshi; Hashimoto, Chie; van Schaik, Carel P

    2015-01-01

    Sex differences in immatures predict behavioural differences in adulthood in many mammal species. Because most studies have focused on sex differences in social interactions, little is known about possible sex differences in 'preparation' for adult life with regards to tool use skills. We investigated sex and age differences in object manipulation in immature apes. Chimpanzees use a variety of tools across numerous contexts, whereas bonobos use few tools and none in foraging. In both species, a female bias in adult tool use has been reported. We studied object manipulation in immature chimpanzees at Kalinzu (Uganda) and bonobos at Wamba (Democratic Republic of Congo). We tested predictions of the 'preparation for tool use' hypothesis. We confirmed that chimpanzees showed higher rates and more diverse types of object manipulation than bonobos. Against expectation, male chimpanzees showed higher object manipulation rates than females, whereas in bonobos no sex difference was found. However, object manipulation by male chimpanzees was play-dominated, whereas manipulation types of female chimpanzees were more diverse (e.g., bite, break, carry). Manipulation by young immatures of both species was similarly dominated by play, but only in chimpanzees did it become more diverse with age. Moreover, in chimpanzees, object types became more tool-like (i.e., sticks) with age, further suggesting preparation for tool use in adulthood. The male bias in object manipulation in immature chimpanzees, along with the late onset of tool-like object manipulation, indicates that not all (early) object manipulation (i.e., object play) in immatures prepares for subsistence tool use. Instead, given the similarity with gender differences in human children, object play may also function in motor skill practice for male-specific behaviours (e.g., dominance displays). In conclusion, even though immature behaviours almost certainly reflect preparation for adult roles, more detailed future work is needed to disentangle possible functions of object manipulation during development.

  9. Sex Differences in Object Manipulation in Wild Immature Chimpanzees (Pan troglodytes schweinfurthii) and Bonobos (Pan paniscus): Preparation for Tool Use?

    PubMed Central

    Koops, Kathelijne; Furuichi, Takeshi; Hashimoto, Chie; van Schaik, Carel P.

    2015-01-01

    Sex differences in immatures predict behavioural differences in adulthood in many mammal species. Because most studies have focused on sex differences in social interactions, little is known about possible sex differences in ‘preparation’ for adult life with regards to tool use skills. We investigated sex and age differences in object manipulation in immature apes. Chimpanzees use a variety of tools across numerous contexts, whereas bonobos use few tools and none in foraging. In both species, a female bias in adult tool use has been reported. We studied object manipulation in immature chimpanzees at Kalinzu (Uganda) and bonobos at Wamba (Democratic Republic of Congo). We tested predictions of the ‘preparation for tool use’ hypothesis. We confirmed that chimpanzees showed higher rates and more diverse types of object manipulation than bonobos. Against expectation, male chimpanzees showed higher object manipulation rates than females, whereas in bonobos no sex difference was found. However, object manipulation by male chimpanzees was play-dominated, whereas manipulation types of female chimpanzees were more diverse (e.g., bite, break, carry). Manipulation by young immatures of both species was similarly dominated by play, but only in chimpanzees did it become more diverse with age. Moreover, in chimpanzees, object types became more tool-like (i.e., sticks) with age, further suggesting preparation for tool use in adulthood. The male bias in object manipulation in immature chimpanzees, along with the late onset of tool-like object manipulation, indicates that not all (early) object manipulation (i.e., object play) in immatures prepares for subsistence tool use. Instead, given the similarity with gender differences in human children, object play may also function in motor skill practice for male-specific behaviours (e.g., dominance displays). In conclusion, even though immature behaviours almost certainly reflect preparation for adult roles, more detailed future work is needed to disentangle possible functions of object manipulation during development. PMID:26444011

  10. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    PubMed

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  11. Gaussian process regression for tool wear prediction

    NASA Astrophysics Data System (ADS)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

    To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove noise and weaken its negative effects, so that the confidence interval is greatly compressed and smoothed, which is conducive to accurate tool wear monitoring. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately using the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
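
    The regression step can be sketched with scikit-learn's Gaussian process regressor, which returns both a predicted flank wear value and a standard deviation for the confidence interval. The KPCA_IRBF feature-fusion stage is omitted here, and the training data are synthetic.

```python
# Sketch of the GPR step only (the paper's KPCA_IRBF feature fusion is omitted):
# predict flank wear width with a confidence band. Training data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
cut_time = np.linspace(0, 30, 40).reshape(-1, 1)                          # minutes
flank_wear = 0.02 + 0.008 * cut_time.ravel() + rng.normal(0, 0.005, 40)  # mm

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(cut_time, flank_wear)

t_new = np.array([[35.0]])
mean, std = gpr.predict(t_new, return_std=True)
print(f"predicted wear {mean[0]:.3f} mm, 95% CI +/- {1.96 * std[0]:.3f} mm")
```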

  12. Support vector machine prediction of enzyme function with conjoint triad feature and hierarchical context.

    PubMed

    Wang, Yong-Cui; Wang, Yong; Yang, Zhi-Xia; Deng, Nai-Yang

    2011-06-20

    Enzymes are known as the largest class of proteins and their functions are usually annotated by the Enzyme Commission (EC), which uses a hierarchical structure, i.e., four numbers separated by periods, to classify the function of enzymes. Automatically categorizing enzymes into the EC hierarchy is crucial to understanding their specific molecular mechanisms. In this paper, we introduce two key improvements in predicting enzyme function within the machine learning framework. One is to introduce efficient sequence encoding methods for representing given proteins. The second is to develop a structure-based prediction method with low computational complexity. In particular, we propose to use the conjoint triad feature (CTF) to represent the given protein sequences by considering not only the composition of amino acids but also the neighbor relationships in the sequence. Then we develop a support vector machine (SVM)-based method, named SVMHL (SVM for hierarchy labels), to output enzyme function by fully considering the hierarchical structure of the EC. The experimental results show that our SVMHL with the CTF outperforms SVMHL with the amino acid composition (AAC) feature both in predictive accuracy and Matthews correlation coefficient (MCC). In addition, SVMHL with the CTF obtains accuracy and MCC ranging from 81% to 98% and 0.82 to 0.98 when predicting the first three EC digits on a low-homology enzyme dataset. We further demonstrate that our method outperforms methods which do not take into account the hierarchical relationship among enzyme categories and alternative methods which incorporate prior knowledge about inter-class relationships. Our structure-based prediction model, SVMHL with the CTF, reduces the computational complexity and outperforms the alternative approaches in enzyme function prediction. Therefore our new method will be a useful tool for the enzyme function prediction community.
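
    The conjoint triad feature (CTF) groups the 20 amino acids into seven classes and counts every overlapping triad of classes, yielding a 343-dimensional vector. A sketch is given below; the class grouping follows the commonly used dipole/side-chain-volume scheme, and the example sequence is arbitrary.

```python
# Sketch of the conjoint triad feature (CTF): amino acids grouped into 7
# classes, every overlapping triad of classes counted -> 7*7*7 = 343 dimensions.
import numpy as np

CLASSES = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
AA_TO_CLASS = {aa: i for i, group in enumerate(CLASSES) for aa in group}

def conjoint_triad(seq: str) -> np.ndarray:
    v = np.zeros(7 * 7 * 7)
    classes = [AA_TO_CLASS[a] for a in seq.upper() if a in AA_TO_CLASS]
    for i in range(len(classes) - 2):
        a, b, c = classes[i], classes[i + 1], classes[i + 2]
        v[a * 49 + b * 7 + c] += 1
    return v / max(v.sum(), 1)        # normalise counts to frequencies

print(conjoint_triad("MKTLLVLAVLCLGFAEE").sum())   # 1.0
```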

  13. Boundary-Layer Receptivity and Integrated Transition Prediction

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan

    2005-01-01

    The adjoint parabolized stability equations (PSE) formulation is used to calculate the boundary layer receptivity to localized surface roughness and suction for compressible boundary layers. Receptivity efficiency functions predicted by the adjoint PSE approach agree well with results based on other nonparallel methods including linearized Navier-Stokes equations for both Tollmien-Schlichting waves and crossflow instability in swept wing boundary layers. The receptivity efficiency function can be regarded as the Green's function for the disturbance amplitude evolution in a nonparallel (growing) boundary layer. Given the Fourier-transformed geometry factor distribution along the chordwise direction, the linear disturbance amplitude evolution for a finite size, distributed nonuniformity can be computed by evaluating the integral effects of both disturbance generation and linear amplification. The synergistic approach via the linear adjoint PSE for receptivity and nonlinear PSE for disturbance evolution downstream of the leading edge forms the basis for an integrated transition prediction tool. Eventually, such physics-based, high fidelity prediction methods could simulate the transition process from the disturbance generation through the nonlinear breakdown in a holistic manner.

  14. The identification of protein domains that mediate functional interactions between Rab-GTPases and RabGAPs using 3D protein modeling.

    PubMed

    Davie, Jeremiah J; Faitar, Silviu L

    2017-01-01

    Currently, time-consuming serial in vitro experimentation involving immunocytochemistry or radiolabeled materials is required to identify which of the numerous Rab-GTPases (Rab) and Rab-GTPase activating proteins (RabGAP) are capable of functional interactions. These interactions are essential for numerous cellular functions, and in silico methods of reducing in vitro trial and error would accelerate the pace of research in cell biology. We have utilized a combination of three-dimensional protein modeling and protein bioinformatics to identify domains present in Rab proteins that are predictive of their functional interaction with a specific RabGAP. The RabF2 and RabSF1 domains appear to play functional roles in mediating the interaction between Rabs and RabGAPs. Moreover, the RabSF1 domain can be used to make in silico predictions of functional Rab/RabGAP pairs. This method is expected to be a broadly applicable tool for predicting protein-protein interactions where existing crystal structures for homologs of the proteins of interest are available.

  15. High Precision Prediction of Functional Sites in Protein Structures

    PubMed Central

    Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin

    2014-01-01

    We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601

  16. Chimpanzees create and modify probe tools functionally: A study with zoo-housed chimpanzees.

    PubMed

    Hopper, Lydia M; Tennie, Claudio; Ross, Stephen R; Lonsdorf, Elizabeth V

    2015-02-01

    Chimpanzees (Pan troglodytes) use tools to probe for out-of-reach food, both in the wild and in captivity. Beyond gathering appropriately-sized materials to create tools, chimpanzees also perform secondary modifications in order to create an optimized tool. In this study, we recorded the behavior of a group of zoo-housed chimpanzees when presented with opportunities to use tools to probe for liquid foods in an artificial termite mound within their enclosure. Previous research with this group of chimpanzees has shown that they are proficient at gathering materials from within their environment in order to create tools to probe for the liquid food within the artificial mound. Extending beyond this basic question, we first asked whether they only made and modified probe tools when it was appropriate to do so (i.e. when the mound was baited with food). Second, by collecting continuous data on their behavior, we also asked whether the chimpanzees first (intentionally) modified their tools prior to probing for food or whether such modifications occurred after tool use, possibly as a by-product of chewing and eating the food from the tools. Following our predictions, we found that tool modification predicted tool use; the chimpanzees began using their tools within a short delay of creating and modifying them, and the chimpanzees performed more tool modifying behaviors when food was available than when they could not gain food through the use of probe tools. We also discuss our results in terms of the chimpanzees' acquisition of the skills, and their flexibility of tool use and learning. © 2014 Wiley Periodicals, Inc.

  17. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
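
    The conditioning operation at the heart of this workflow can be sketched for a single Gaussian component using the standard conditional-Gaussian formulas; in a full mixture each component is additionally reweighted by the likelihood of the observed values. The numbers below are placeholders, not fitted supernova/host parameters.

```python
# Sketch of the conditioning idea for one Gaussian component: given known
# values x_a of some dimensions, the remaining dimensions are Gaussian with the
# standard conditional mean and covariance. (A full GMM also reweights each
# component by the likelihood of x_a; that step is omitted here.)
import numpy as np

mu = np.array([1.0, 2.0, 3.0])                    # placeholder component mean
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])               # placeholder covariance

known_idx, unknown_idx = [0, 1], [2]              # condition on the first two dims
x_known = np.array([1.5, 1.8])

S_aa = Sigma[np.ix_(known_idx, known_idx)]
S_ba = Sigma[np.ix_(unknown_idx, known_idx)]
S_bb = Sigma[np.ix_(unknown_idx, unknown_idx)]

gain = S_ba @ np.linalg.inv(S_aa)
mu_cond = mu[unknown_idx] + gain @ (x_known - mu[known_idx])
Sigma_cond = S_bb - gain @ S_ba.T
print(mu_cond, Sigma_cond)
```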

  18. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  19. The designer of the 90's: A live demonstration

    NASA Technical Reports Server (NTRS)

    Green, Tommy L.; Jordan, Basil M., Jr.; Oglesby, Timothy L.

    1989-01-01

    A survey of design tools to be used by the aircraft designer is given. Structural reliability, maintainability, cost and predictability, and acoustics expert systems are discussed, as well as scheduling, drawing, engineering systems, sizing functions, and standard parts and materials data bases.

  20. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  1. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  2. RAG-3D: A search tool for RNA 3D substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  3. State of the art on nailfold capillaroscopy: a reliable diagnostic tool and putative biomarker in rheumatology?

    PubMed

    Cutolo, Maurizio; Smith, Vanessa

    2013-11-01

    Capillaroscopy is a non-invasive and safe tool to morphologically study the microcirculation. In rheumatology it has a dual use. First, it has a role in the differential diagnosis of patients with RP. Second, it may have a role in the prediction of clinical complications in CTDs. In SSc, pilot studies have shown predictive associations with peripheral vascular and lung involvement, hinting at a role of capillaroscopy as a putative biomarker. Also, and logically, in SSc, microangiopathy as assessed by capillaroscopy has been associated with markers of the disease such as angiogenic/static factors and SSc-specific antibodies. Moreover, morphological assessments of the microcirculation (capillaroscopy) seem to correlate with functional assessments (such as laser Doppler). Because of its clinical and research role, efforts in Europe are geared towards expanding knowledge of this tool. Both the European League Against Rheumatism (EULAR) and the ACR are stepping forward to meet this need.

  4. Complete fold annotation of the human proteome using a novel structural feature space

    DOE PAGES

    Middleton, Sarah A.; Illuminati, Joseph; Kim, Junhyong

    2017-04-13

    Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding and recognizing novel folds is difficult such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Finally, our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families.

  5. Structure Prediction of the Second Extracellular Loop in G-Protein-Coupled Receptors

    PubMed Central

    Kmiecik, Sebastian; Jamroz, Michal; Kolinski, Michal

    2014-01-01

    G-protein-coupled receptors (GPCRs) play key roles in living organisms. Therefore, it is important to determine their functional structures. The second extracellular loop (ECL2) is a functionally important region of GPCRs, which poses significant challenge for computational structure prediction methods. In this work, we evaluated CABS, a well-established protein modeling tool for predicting ECL2 structure in 13 GPCRs. The ECL2s (with between 13 and 34 residues) are predicted in an environment of other extracellular loops being fully flexible and the transmembrane domain fixed in its x-ray conformation. The modeling procedure used theoretical predictions of ECL2 secondary structure and experimental constraints on disulfide bridges. Our approach yielded ensembles of low-energy conformers and the most populated conformers that contained models close to the available x-ray structures. The level of similarity between the predicted models and x-ray structures is comparable to that of other state-of-the-art computational methods. Our results extend other studies by including newly crystallized GPCRs. PMID:24896119

  6. Predicting Diameter Distributions of Longleaf Pine Plantations: A Comparison Between Artificial Neural Networks and Other Accepted Methodologies

    Treesearch

    Daniel J. Leduc; Thomas G. Matney; Keith L. Belli; V. Clark Baldwin

    2001-01-01

    Artificial neural networks (NN) are becoming a popular estimation tool. Because they require no assumptions about the form of a fitting function, they can free the modeler from reliance on parametric approximating functions that may or may not satisfactorily fit the observed data. To date there have been few applications in forestry science, but as better NN software...

  7. Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki

    2018-05-01

    We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.

  8. `spup' - An R Package for Analysis of Spatial Uncertainty Propagation and Application to Trace Gas Emission Simulations

    NASA Astrophysics Data System (ADS)

    Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.

    2016-12-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation analysis and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we will demonstrate that the 'spup' package is an effective and easy-to-use tool to be applied even in a very complex case study, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse propagation of uncertainties associated with spatial variability of the model driving forces such as rainfall, nitrogen deposition and fertilizer inputs. The uncertainty propagation is analysed for the prediction of emissions of N2O and CO2 for a German low mountainous, agriculturally developed catchment. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could inform the development of best management practices and model improvement strategies.
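
    A minimal sketch, in Python rather than R and independent of the actual 'spup' API, of the Monte Carlo uncertainty-propagation workflow described above: specify an uncertainty model for a spatially correlated input, sample it repeatedly, push each sample through an environmental model, and summarize the spread of the aggregated prediction. The toy emission model, correlation length and parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def emission_model(rainfall, fertilizer):
        """Toy stand-in for an environmental model (e.g. an N2O emission model)."""
        return 0.3 * rainfall + 0.02 * rainfall * fertilizer

    # Uncertainty model for the spatial input: mean field plus spatially correlated noise.
    n_cells = 100
    rain_mean = np.full(n_cells, 800.0)          # mm/yr, hypothetical
    rain_sd = 80.0
    # Simple exponential spatial correlation between cells along a 1-D transect.
    dist = np.abs(np.subtract.outer(np.arange(n_cells), np.arange(n_cells)))
    cov = rain_sd ** 2 * np.exp(-dist / 10.0)

    fert = 120.0                                  # kg N/ha, hypothetical

    # Monte Carlo propagation: sample inputs, run the model, aggregate spatially.
    n_mc = 2000
    samples = rng.multivariate_normal(rain_mean, cov, size=n_mc)
    catchment_emission = emission_model(samples, fert).mean(axis=1)

    print("mean emission:", catchment_emission.mean())
    print("95% interval:", np.percentile(catchment_emission, [2.5, 97.5]))
    ```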

  9. Synthetic biology: tools to design microbes for the production of chemicals and fuels.

    PubMed

    Seo, Sang Woo; Yang, Jina; Min, Byung Eun; Jang, Sungho; Lim, Jae Hyung; Lim, Hyun Gyu; Kim, Seong Cheol; Kim, Se Yeon; Jeong, Jun Hong; Jung, Gyoo Yeol

    2013-11-01

    The engineering of biological systems to achieve specific purposes requires design tools that function in a predictable and quantitative manner. Recent advances in the field of synthetic biology, particularly in the programmable control of gene expression at multiple levels of regulation, have increased our ability to efficiently design and optimize biological systems to perform designed tasks. Furthermore, implementation of these designs in biological systems highlights the potential of using these tools to build microbial cell factories for the production of chemicals and fuels. In this paper, we review current developments in the design of tools for controlling gene expression at transcriptional, post-transcriptional and post-translational levels, and consider potential applications of these tools. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    NASA Astrophysics Data System (ADS)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool for capturing the complex time variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from GPs depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and used it to predict the time to clinical onset for subjects carrying the genetic mutation.
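
    A minimal sketch, assuming scikit-learn rather than the authors' implementation, of the kernel-selection idea described above: candidate composite kernels are fitted and scored with an energy that combines a BIC-style penalty on the log marginal likelihood with the explained-variance score. The candidate kernels, weighting and toy progression data are illustrative.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, RationalQuadratic, DotProduct
    from sklearn.metrics import explained_variance_score

    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0, 10, 60)).reshape(-1, 1)                 # e.g. years since baseline
    y = 0.5 * X.ravel() + np.sin(X.ravel()) + rng.normal(0, 0.2, 60)   # toy progression marker

    # Candidate composite kernels (base kernels combined by + and *).
    candidates = {
        "RBF + noise": RBF() + WhiteKernel(),
        "RBF * linear + noise": RBF() * DotProduct() + WhiteKernel(),
        "RQ + noise": RationalQuadratic() + WhiteKernel(),
    }

    def energy(gp, X, y, w=0.5):
        """Hypothetical energy: BIC on the marginal likelihood minus explained variance."""
        n, k = len(y), gp.kernel_.theta.size
        bic = k * np.log(n) - 2.0 * gp.log_marginal_likelihood_value_
        ev = explained_variance_score(y, gp.predict(X))
        return w * bic - (1 - w) * ev

    scores = {}
    for name, kern in candidates.items():
        gp = GaussianProcessRegressor(kernel=kern, normalize_y=True).fit(X, y)
        scores[name] = energy(gp, X, y)

    best = min(scores, key=scores.get)
    print(scores, "-> selected:", best)
    ```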

  11. An in silico pipeline to filter the Toxoplasma gondii proteome for proteins that could traffic to the host cell nucleus and influence host cell epigenetic regulation.

    PubMed

    Syn, Genevieve; Blackwell, Jenefer M; Jamieson, Sarra E; Francis, Richard W

    2018-01-01

    Toxoplasma gondii uses epigenetic mechanisms to regulate both endogenous and host cell gene expression. To identify genes with putative epigenetic functions, we developed an in silico pipeline to interrogate the T. gondii proteome of 8313 proteins. Step 1 employs PredictNLS and NucPred to identify genes predicted to target eukaryotic nuclei. Step 2 uses GOLink to identify proteins of epigenetic function based on Gene Ontology terms. This resulted in 611 putative nuclear localised proteins with predicted epigenetic functions. Step 3 filtered for secretory proteins using SignalP, SecretomeP, and experimental data. This identified 57 of the 611 putative epigenetic proteins as likely to be secreted. The pipeline is freely available online, uses open access tools and software with user-friendly Perl scripts to automate and manage the results, and is readily adaptable to undertake any such in silico search for genes contributing to particular functions.

  12. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    PubMed Central

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-01

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine's improvements. Finally, the conclusions of this study are presented. PMID:28787882
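
    A minimal sketch of the artificial bee colony (ABC) search loop that the study uses to tune MARS hyperparameters. To keep the example self-contained, the objective below is a stand-in for the MARS cross-validation error (a simple quadratic over two hypothetical hyperparameters), the employed and onlooker phases are merged, and the colony size, abandonment limit and bounds are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):
        """Stand-in for the MARS cross-validation error over two hyperparameters."""
        return (x[0] - 3.0) ** 2 + (x[1] - 0.5) ** 2

    lo, hi = np.array([0.0, 0.0]), np.array([10.0, 1.0])    # hyperparameter bounds
    n_food, limit, n_iter = 10, 20, 100

    food = rng.uniform(lo, hi, size=(n_food, 2))             # candidate solutions (food sources)
    fit = np.array([objective(f) for f in food])
    trials = np.zeros(n_food)

    def neighbor(i):
        """Perturb one dimension of food source i toward a random partner."""
        k = rng.integers(n_food)
        d = rng.integers(2)
        cand = food[i].copy()
        cand[d] += rng.uniform(-1, 1) * (food[i, d] - food[k, d])
        return np.clip(cand, lo, hi)

    for _ in range(n_iter):
        # Employed/onlooker phases (merged for brevity): greedy local moves.
        for i in range(n_food):
            cand = neighbor(i)
            f = objective(cand)
            if f < fit[i]:
                food[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout phase: abandon stagnant sources and re-seed them randomly.
        for i in np.where(trials > limit)[0]:
            food[i] = rng.uniform(lo, hi)
            fit[i] = objective(food[i])
            trials[i] = 0

    best = food[np.argmin(fit)]
    print("best hyperparameters:", best, "objective:", fit.min())
    ```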

  13. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data.

    PubMed

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-28

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC-MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC-MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine's improvements. Finally, the conclusions of this study are presented.

  14. Engineering bacterial translation initiation - Do we have all the tools we need?

    PubMed

    Vigar, Justin R J; Wieden, Hans-Joachim

    2017-11-01

    Reliable tools that allow precise and predictable control over gene expression are critical for the success of nearly all bioengineering applications. Translation initiation is the most regulated phase during protein biosynthesis, and is therefore a promising target for exerting control over gene expression. At the translational level, the copy number of a protein can be fine-tuned by altering the interaction between the translation initiation region of an mRNA and the ribosome. These interactions can be controlled by modulating the mRNA structure using numerous approaches, including small molecule ligands, RNAs, or RNA-binding proteins. A variety of naturally occurring regulatory elements have been repurposed, facilitating advances in synthetic gene regulation strategies. The pursuit of a comprehensive understanding of mechanisms governing translation initiation provides the framework for future engineering efforts. Here we outline state-of-the-art strategies used to predictably control translation initiation in bacteria. We also discuss current limitations in the field and future goals. Due to its function as the rate-determining step, initiation is the ideal point to exert effective translation regulation. Several engineering tools are currently available to rationally design the initiation characteristics of synthetic mRNAs. However, improvements are required to increase the predictability, effectiveness, and portability of these tools. Predictable and reliable control over translation initiation will allow greater predictability when designing, constructing, and testing genetic circuits. The ability to build more complex circuits predictably will advance synthetic biology and contribute to our fundamental understanding of the underlying principles of these processes. "This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. The Software Management Environment (SME)

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.; Decker, William; Buell, John

    1988-01-01

    The Software Management Environment (SME) is a research effort designed to utilize the past experiences and results of the Software Engineering Laboratory (SEL) and to incorporate this knowledge into a tool for managing projects. SME provides the software development manager with the ability to observe, compare, predict, analyze, and control key software development parameters such as effort, reliability, and resource utilization. The major components of the SME, the architecture of the system, and examples of the functionality of the tool are discussed.

  16. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIF) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) shows some discrepancy from the practical TIF (p-TIF), and the removal rate cannot be predicted by simulations. Prior to the polishing of a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to applying these e-TIFs in practical engineering by making improvements in two respects: (1) it modifies the pressure distribution model of a flat-pitch polisher by finite element analysis and least-squares fitting to bring the removal shape of e-TIFs closer to p-TIFs (less than 5% relative deviation, validated by experiments); (2) it predicts the removal rate of e-TIFs by reverse-calculating the material removal volume of a pre-polishing run on the formal workpiece (relative deviations of peak and volume removal rate validated to be less than 5%). This makes it possible to omit TIF spot experiments for the particular flat-pitch tool employed and promotes the direct use of e-TIFs in the optimization of a dwell-time map, which largely saves cost and increases fabrication efficiency.

  17. A point-based tool to predict conversion from mild cognitive impairment to probable Alzheimer's disease.

    PubMed

    Barnes, Deborah E; Cenzer, Irena S; Yaffe, Kristine; Ritchie, Christine S; Lee, Sei J

    2014-11-01

    Our objective in this study was to develop a point-based tool to predict conversion from amnestic mild cognitive impairment (MCI) to probable Alzheimer's disease (AD). Subjects were participants in the first part of the Alzheimer's Disease Neuroimaging Initiative. Cox proportional hazards models were used to identify factors associated with development of AD, and a point score was created from predictors in the final model. The final point score could range from 0 to 9 (mean 4.8) and included: the Functional Assessment Questionnaire (2‒3 points); magnetic resonance imaging (MRI) middle temporal cortical thinning (1 point); MRI hippocampal subcortical volume (1 point); Alzheimer's Disease Cognitive Scale-cognitive subscale (2‒3 points); and the Clock Test (1 point). Prognostic accuracy was good (Harrell's c = 0.78; 95% CI 0.75, 0.81); 3-year conversion rates were 6% (0‒3 points), 53% (4‒6 points), and 91% (7‒9 points). A point-based risk score combining functional dependence, cerebral MRI measures, and neuropsychological test scores provided good accuracy for prediction of conversion from amnestic MCI to AD. Copyright © 2014 The Alzheimer's Association. All rights reserved.
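
    A minimal sketch of how the point-based tool described above could be applied. The point values and the 3-year conversion bands are taken from the abstract; the exact cut-offs that assign a patient's raw measurements to each point band are not given there, so the inputs below are assumed to be pre-scored.

    ```python
    def mci_to_ad_points(faq_points, mtl_thinning, hippo_small, cog_subscale_points, clock_abnormal):
        """Sum the point-based predictors listed in the abstract.

        faq_points:            0, 2 or 3 (Functional Assessment Questionnaire band)
        mtl_thinning:          1 if MRI middle temporal cortical thinning, else 0
        hippo_small:           1 if low MRI hippocampal subcortical volume, else 0
        cog_subscale_points:   0, 2 or 3 (cognitive subscale band)
        clock_abnormal:        1 if abnormal Clock Test, else 0
        """
        return faq_points + mtl_thinning + hippo_small + cog_subscale_points + clock_abnormal

    def three_year_conversion_band(points):
        """Map the total score to the 3-year conversion rates reported in the abstract."""
        if points <= 3:
            return "low risk (~6% conversion at 3 years)"
        if points <= 6:
            return "intermediate risk (~53% conversion at 3 years)"
        return "high risk (~91% conversion at 3 years)"

    score = mci_to_ad_points(faq_points=2, mtl_thinning=1, hippo_small=0,
                             cog_subscale_points=2, clock_abnormal=0)
    print(score, "->", three_year_conversion_band(score))
    ```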

  18. An expert system based software sizing tool, phase 2

    NASA Technical Reports Server (NTRS)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
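
    A minimal, hypothetical sketch of the pipeline the abstract outlines: rules map specification items to generic software components, and a nonlinear sizing function turns those components into a size estimate. The keyword rules, component sizes and exponent are invented for illustration and are not the system's actual knowledge base.

    ```python
    # Hypothetical rule base: keyword in a specification item -> generic component.
    RULES = {
        "telemetry": "data_acquisition",
        "orbit": "numerical_computation",
        "display": "user_interface",
        "report": "output_formatting",
    }

    # Hypothetical baseline sizes (lines of source code) per generic component.
    COMPONENT_SLOC = {
        "data_acquisition": 900,
        "numerical_computation": 1500,
        "user_interface": 1200,
        "output_formatting": 400,
    }

    def classify(spec_items):
        """Convert specification items to generic components via keyword rules."""
        comps = []
        for item in spec_items:
            for keyword, comp in RULES.items():
                if keyword in item.lower():
                    comps.append(comp)
        return comps

    def predict_size(components, interaction_exponent=1.08):
        """Nonlinear sizing: component sizes summed, then scaled superlinearly to
        reflect integration overhead (the exponent is a placeholder)."""
        base = sum(COMPONENT_SLOC[c] for c in components)
        return base ** interaction_exponent

    specs = ["Acquire telemetry frames", "Compute orbit ephemeris", "Display ground track"]
    print(round(predict_size(classify(specs))), "estimated source lines")
    ```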

  19. Prediction of Thermal Fatigue in Tooling for Die-casting Copper via Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Sakhuja, Amit; Brevick, Jerald R.

    2004-06-01

    Recent research by the Copper Development Association (CDA) has demonstrated the feasibility of die-casting electric motor rotors using copper. Electric motors using copper rotors are significantly more energy efficient relative to motors using aluminum rotors. However, one of the challenges in copper rotor die-casting is low tool life. Experiments have shown that the higher molten metal temperature of copper (1085 °C), as compared to aluminum (660 °C) accelerates the onset of thermal fatigue or heat checking in traditional H-13 tool steel. This happens primarily because the mechanical properties of H-13 tool steel decrease significantly above 650 °C. Potential approaches to mitigate the heat checking problem include: 1) identification of potential tool materials having better high temperature mechanical properties than H-13, and 2) reduction of the magnitude of cyclic thermal excursions experienced by the tooling by increasing the bulk die temperature. A preliminary assessment of alternative tool materials has led to the selection of nickel-based alloys Haynes 230 and Inconel 617 as potential candidates. These alloys were selected based on their elevated temperature physical and mechanical properties. Therefore, the overall objective of this research work was to predict the number of copper rotor die-casting cycles to the onset of heat checking (tool life) as a function of bulk die temperature (up to 650 °C) for Haynes 230 and Inconel 617 alloys. To achieve these goals, a 2D thermo-mechanical FEA was performed to evaluate strain ranges on selected die surfaces. The method of Universal Slopes (Strain Life Method) was then employed for thermal fatigue life predictions.
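
    A minimal sketch of the Method of Universal Slopes step mentioned above: given a total strain range from the thermo-mechanical FEA, the strain-life relation is inverted numerically for cycles to failure. The material constants below are placeholders, not the elevated-temperature properties of Haynes 230 or Inconel 617.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def universal_slopes_strain(N_f, sigma_u, E, ductility):
        """Manson's Universal Slopes relation: total strain range as a function of
        cycles to failure (elastic term + plastic term)."""
        elastic = 3.5 * (sigma_u / E) * N_f ** -0.12
        plastic = ductility ** 0.6 * N_f ** -0.6
        return elastic + plastic

    def cycles_to_failure(strain_range, sigma_u, E, ductility):
        """Invert the strain-life curve for N_f by root finding."""
        f = lambda N: universal_slopes_strain(N, sigma_u, E, ductility) - strain_range
        return brentq(f, 1.0, 1e9)

    # Placeholder elevated-temperature properties (MPa) and ductility, not the study's values.
    sigma_u, E, ductility = 700.0, 180_000.0, 0.45
    for d_eps in (0.004, 0.006, 0.008):   # strain ranges from a thermo-mechanical FEA
        print(d_eps, "->", round(cycles_to_failure(d_eps, sigma_u, E, ductility)), "cycles")
    ```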

  20. Multivariate Strategies in Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hansen, Lars Kai

    2007-01-01

    We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.

  1. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis

    PubMed Central

    Gong, Xiajing; Hu, Meng

    2018-01-01

    Abstract Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
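
    A minimal sketch of the concordance index used to compare the methods described above, computed directly from simulated time-to-event data; the single-predictor hazard model and censoring scheme are illustrative.

    ```python
    import numpy as np

    def concordance_index(times, events, risk_scores):
        """Harrell's c-index: among comparable pairs (the earlier time is an observed
        event), count pairs where the higher predicted risk fails earlier."""
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            if not events[i]:
                continue
            for j in range(n):
                if times[i] < times[j]:            # subject i failed before subject j's time
                    comparable += 1
                    if risk_scores[i] > risk_scores[j]:
                        concordant += 1
                    elif risk_scores[i] == risk_scores[j]:
                        concordant += 0.5
        return concordant / comparable

    rng = np.random.default_rng(7)
    x = rng.normal(size=200)                       # a single predictor
    true_hazard = np.exp(0.8 * x)
    times = rng.exponential(1.0 / true_hazard)     # event times driven by the hazard
    censor = rng.exponential(2.0, size=200)
    events = times <= censor
    obs_times = np.minimum(times, censor)

    print("c-index of the true risk score:",
          round(concordance_index(obs_times, events, true_hazard), 3))
    ```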

  2. The Physalis peruviana leaf transcriptome: assembly, annotation and gene model prediction

    PubMed Central

    2012-01-01

    Background Physalis peruviana, commonly known as Cape gooseberry, is a member of the Solanaceae family that has become increasingly popular due to its nutritional and medicinal value. A broad range of genomic tools is available for other Solanaceae, including tomato and potato. However, limited genomic resources are currently available for Cape gooseberry. Results We report the generation of a total of 652,614 P. peruviana Expressed Sequence Tags (ESTs), using 454 GS FLX Titanium technology. ESTs, with an average length of 371 bp, were obtained from a normalized leaf cDNA library prepared using a Colombian commercial variety. De novo assembly was performed to generate a collection of 24,014 isotigs and 110,921 singletons, with an average length of 1,638 bp and 354 bp, respectively. Functional annotation was performed using NCBI’s BLAST tools and Blast2GO, which identified putative functions for 21,191 assembled sequences, including gene families involved in all the major biological processes and molecular functions as well as defense response and amino acid metabolism pathways. Gene model predictions in P. peruviana were obtained by using the genomes of Solanum lycopersicum (tomato) and Solanum tuberosum (potato). We predict 9,436 P. peruviana sequences with multiple-exon models and conserved intron positions with respect to the potato and tomato genomes. Additionally, to study species diversity we developed 5,971 SSR markers from assembled ESTs. Conclusions We present the first comprehensive analysis of the Physalis peruviana leaf transcriptome, which will provide valuable resources for the development of genetic tools in the species. Assembled transcripts with gene models could serve as potential candidates for marker discovery with a variety of applications including functional diversity, conservation and improvement to increase productivity and fruit quality. P. peruviana was estimated to have phylogenetically branched out before the divergence of five other Solanaceae family members, S. lycopersicum, S. tuberosum, Capsicum spp, S. melongena and Petunia spp. PMID:22533342

  3. The Physalis peruviana leaf transcriptome: assembly, annotation and gene model prediction.

    PubMed

    Garzón-Martínez, Gina A; Zhu, Z Iris; Landsman, David; Barrero, Luz S; Mariño-Ramírez, Leonardo

    2012-04-25

    Physalis peruviana, commonly known as Cape gooseberry, is a member of the Solanaceae family that has become increasingly popular due to its nutritional and medicinal value. A broad range of genomic tools is available for other Solanaceae, including tomato and potato. However, limited genomic resources are currently available for Cape gooseberry. We report the generation of a total of 652,614 P. peruviana Expressed Sequence Tags (ESTs), using 454 GS FLX Titanium technology. ESTs, with an average length of 371 bp, were obtained from a normalized leaf cDNA library prepared using a Colombian commercial variety. De novo assembly was performed to generate a collection of 24,014 isotigs and 110,921 singletons, with an average length of 1,638 bp and 354 bp, respectively. Functional annotation was performed using NCBI's BLAST tools and Blast2GO, which identified putative functions for 21,191 assembled sequences, including gene families involved in all the major biological processes and molecular functions as well as defense response and amino acid metabolism pathways. Gene model predictions in P. peruviana were obtained by using the genomes of Solanum lycopersicum (tomato) and Solanum tuberosum (potato). We predict 9,436 P. peruviana sequences with multiple-exon models and conserved intron positions with respect to the potato and tomato genomes. Additionally, to study species diversity we developed 5,971 SSR markers from assembled ESTs. We present the first comprehensive analysis of the Physalis peruviana leaf transcriptome, which will provide valuable resources for the development of genetic tools in the species. Assembled transcripts with gene models could serve as potential candidates for marker discovery with a variety of applications including functional diversity, conservation and improvement to increase productivity and fruit quality. P. peruviana was estimated to have phylogenetically branched out before the divergence of five other Solanaceae family members, S. lycopersicum, S. tuberosum, Capsicum spp, S. melongena and Petunia spp.

  4. Assessment of the Clinical Relevance of BRCA2 Missense Variants by Functional and Computational Approaches.

    PubMed

    Guidugli, Lucia; Shimelis, Hermela; Masica, David L; Pankratz, Vernon S; Lipton, Gary B; Singh, Namit; Hu, Chunling; Monteiro, Alvaro N A; Lindor, Noralane M; Goldgar, David E; Karchin, Rachel; Iversen, Edwin S; Couch, Fergus J

    2018-01-17

    Many variants of uncertain significance (VUS) have been identified in BRCA2 through clinical genetic testing. VUS pose a significant clinical challenge because the contribution of these variants to cancer risk has not been determined. We conducted a comprehensive assessment of VUS in the BRCA2 C-terminal DNA binding domain (DBD) by using a validated functional assay of BRCA2 homologous recombination (HR) DNA-repair activity and defined a classifier of variant pathogenicity. Among 139 variants evaluated, 54 had ≥99% probability of pathogenicity, and 73 had ≥95% probability of neutrality. Functional assay results were compared with predictions of variant pathogenicity from the Align-GVGD protein-sequence-based prediction algorithm, which has been used for variant classification. Relative to the HR assay, Align-GVGD significantly (p < 0.05) over-predicted pathogenic variants. We subsequently combined functional and Align-GVGD prediction results in a Bayesian hierarchical model (VarCall) to estimate the overall probability of pathogenicity for each VUS. In addition, to predict the effects of all other BRCA2 DBD variants and to prioritize variants for functional studies, we used the endoPhenotype-Optimized Sequence Ensemble (ePOSE) algorithm to train classifiers for BRCA2 variants by using data from the HR functional assay. Together, the results show that systematic functional assays in combination with in silico predictors of pathogenicity provide robust tools for clinical annotation of BRCA2 VUS. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  5. The Realization of Drilling Fault Diagnosis Based on Hybrid Programming with Matlab and VB

    NASA Astrophysics Data System (ADS)

    Wang, Jiangping; Hu, Yingcai

    This paper presents a method that uses hybrid programming with Matlab and VB, based on ActiveX, to design a system for drilling accident prediction and diagnosis, so that the powerful computation and graphical display functions of Matlab are fully combined with the visual development interface of VB. The main interface of the diagnosis system is built in VB, and the analysis and fault diagnosis are implemented with the neural network toolboxes in Matlab. The system has a favorable interactive interface, and fault example validation shows that the diagnosis results are feasible and meet the demands of drilling accident prediction and diagnosis.

  6. Genetic Epidemiology of Glucose-6-Phosphate Dehydrogenase Deficiency in the Arab World.

    PubMed

    Doss, C George Priya; Alasmar, Dima R; Bux, Reem I; Sneha, P; Bakhsh, Fadheela Dad; Al-Azwani, Iman; Bekay, Rajaa El; Zayed, Hatem

    2016-11-17

    A systematic search was implemented using four literature databases (PubMed, Embase, Science Direct and Web of Science) to capture all the causative mutations of Glucose-6-phosphate dehydrogenase (G6PD) deficiency (G6PDD) in the 22 Arab countries. Our search yielded 43 studies that captured 33 mutations (23 missense, one silent, two deletions, and seven intronic mutations) in 3,430 Arab patients with G6PDD. The 23 missense mutations were then subjected to phenotypic classification using in silico prediction tools, which were compared to the WHO pathogenicity scale as a reference. These in silico tools were tested for their prediction efficiency using rigorous statistical analyses. Of the 23 missense mutations, p.S188F, p.I48T, p.N126D, and p.V68M were identified as the most common mutations among Arab populations, but were not unique to the Arab world. Interestingly, our search strategy found four other mutations (p.N135T, p.S179N, p.R246L, and p.Q307P) that are unique to Arabs. These mutations were subjected to structural analysis and molecular dynamics simulation analysis (MDSA), which predicted that these mutant forms potentially affect enzyme function. The combination of the MDSA, structural analysis, in silico predictions and statistical tools used here provides a platform for improving the accuracy of future pathogenicity predictions for genetic mutations.

  7. PLncPRO for prediction of long non-coding RNAs (lncRNAs) in plants and its application for discovery of abiotic stress-responsive lncRNAs in rice and chickpea

    PubMed Central

    Singh, Urminder; Rajkumar, Mohan Singh; Garg, Rohini

    2017-01-01

    Abstract Long non-coding RNAs (lncRNAs) make up a significant portion of non-coding RNAs and are involved in a variety of biological processes. Accurate identification/annotation of lncRNAs is the primary step for gaining deeper insights into their functions. In this study, we report a novel tool, PLncPRO, for prediction of lncRNAs in plants using transcriptome data. PLncPRO is based on machine learning and uses a random forest algorithm to classify coding and long non-coding transcripts. PLncPRO has better prediction accuracy compared with other existing tools and is particularly well-suited for plants. We developed consensus models for dicots and monocots to facilitate prediction of lncRNAs in non-model/orphan plants. PLncPRO also performed well on vertebrate transcriptome data. Using PLncPRO, we discovered 3714 and 3457 high-confidence lncRNAs in rice and chickpea, respectively, under drought or salinity stress conditions. We investigated the characteristics and differential expression of these lncRNAs under drought/salinity stress conditions, and validated lncRNAs via RT-qPCR. Overall, we developed a new tool for the prediction of lncRNAs in plants and showed its utility via identification of lncRNAs in rice and chickpea. PMID:29036354
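
    A minimal sketch of the classification step described above: a random forest separating coding from long non-coding transcripts using a few simple sequence-derived features. The synthetic features (ORF fraction, GC content, transcript length) and training data are illustrative and far cruder than PLncPRO's actual feature set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 500

    # Hypothetical features per transcript: longest-ORF fraction, GC content, transcript length.
    coding = np.column_stack([rng.beta(8, 2, n), rng.normal(0.52, 0.05, n), rng.normal(2200, 500, n)])
    lnc    = np.column_stack([rng.beta(2, 8, n), rng.normal(0.45, 0.06, n), rng.normal(1200, 600, n)])
    X = np.vstack([coding, lnc])
    y = np.r_[np.ones(n), np.zeros(n)]            # 1 = coding, 0 = lncRNA

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

    clf.fit(X, y)
    new_transcript = [[0.15, 0.44, 900]]          # looks lncRNA-like
    print("P(coding):", clf.predict_proba(new_transcript)[0][1])
    ```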

  8. Accurate in silico prediction of species-specific methylation sites based on information gain feature optimization.

    PubMed

    Wen, Ping-Ping; Shi, Shao-Ping; Xu, Hao-Dong; Wang, Li-Na; Qiu, Jian-Ding

    2016-10-15

    As one of the most important reversible types of post-translational modification, protein methylation catalyzed by methyltransferases carries out many pivotal biological functions and is involved in many essential biological processes. Identification of methylation sites is a prerequisite for decoding methylation regulatory networks in living cells and understanding their physiological roles. Experimental methods are labor-intensive and time-consuming, while in silico approaches offer a cost-effective, high-throughput way to predict potential methylation sites; however, previous predictors rely on a single mixed model and their prediction performance is not yet fully satisfactory. Recently, with the increasing availability of quantitative methylation datasets in diverse species (especially in eukaryotes), there is a growing need to develop species-specific predictors. Here, we designed a tool named PSSMe based on an information gain (IG) feature optimization method for species-specific methylation site prediction. The IG method was adopted to analyze the importance and contribution of each feature, then select the valuable dimension feature vectors to reconstitute a new, ordered feature set, which was applied to build the final prediction model. Our method improves prediction accuracy by about 15% compared with single features. Furthermore, our species-specific model significantly improves predictive performance compared with other general methylation prediction tools. Hence, our prediction results serve as useful resources to elucidate the mechanism of arginine or lysine methylation and to facilitate hypothesis-driven experimental design and validation. The online tool is implemented in C# and is freely available at http://bioinfo.ncu.edu.cn/PSSMe.aspx. Contact: jdqiu@ncu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
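
    A minimal sketch of the information-gain feature ranking at the core of the method described above, computed for discretized features against a binary methylation label; the synthetic data and the choice of keeping the top five features are illustrative.

    ```python
    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(feature, labels):
        """IG(Y; X) = H(Y) - sum_x p(x) H(Y | X = x) for a discrete feature."""
        h_y = entropy(labels)
        cond = 0.0
        for v in np.unique(feature):
            mask = feature == v
            cond += mask.mean() * entropy(labels[mask])
        return h_y - cond

    rng = np.random.default_rng(5)
    n, n_features = 1000, 20
    y = rng.integers(0, 2, n)                              # methylated (1) vs not (0)
    X = rng.integers(0, 3, size=(n, n_features))           # discretized sequence features
    X[:, 0] = np.where(rng.random(n) < 0.8, y, X[:, 0])    # make feature 0 informative

    gains = np.array([information_gain(X[:, j], y) for j in range(n_features)])
    top_k = np.argsort(gains)[::-1][:5]
    print("top features by information gain:", top_k, gains[top_k].round(3))
    ```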

  9. FunCoup 3.0: database of genome-wide functional coupling networks

    PubMed Central

    Schmitt, Thomas; Ogris, Christoph; Sonnhammer, Erik L. L.

    2014-01-01

    We present an update of the FunCoup database (http://FunCoup.sbc.su.se) of functional couplings, or functional associations, between genes and gene products. Identifying these functional couplings is an important step in the understanding of higher level mechanisms performed by complex cellular processes. FunCoup distinguishes between four classes of couplings: participation in the same signaling cascade, participation in the same metabolic process, co-membership in a protein complex and physical interaction. For each of these four classes, several types of experimental and statistical evidence are combined by Bayesian integration to predict genome-wide functional coupling networks. The FunCoup framework has been completely re-implemented to allow for more frequent future updates. It contains many improvements, such as a regularization procedure to automatically downweight redundant evidences and a novel method to incorporate phylogenetic profile similarity. Several datasets have been updated and new data have been added in FunCoup 3.0. Furthermore, we have developed a new Web site, which provides powerful tools to explore the predicted networks and to retrieve detailed information about the data underlying each prediction. PMID:24185702

  10. FunCoup 3.0: database of genome-wide functional coupling networks.

    PubMed

    Schmitt, Thomas; Ogris, Christoph; Sonnhammer, Erik L L

    2014-01-01

    We present an update of the FunCoup database (http://FunCoup.sbc.su.se) of functional couplings, or functional associations, between genes and gene products. Identifying these functional couplings is an important step in the understanding of higher level mechanisms performed by complex cellular processes. FunCoup distinguishes between four classes of couplings: participation in the same signaling cascade, participation in the same metabolic process, co-membership in a protein complex and physical interaction. For each of these four classes, several types of experimental and statistical evidence are combined by Bayesian integration to predict genome-wide functional coupling networks. The FunCoup framework has been completely re-implemented to allow for more frequent future updates. It contains many improvements, such as a regularization procedure to automatically downweight redundant evidences and a novel method to incorporate phylogenetic profile similarity. Several datasets have been updated and new data have been added in FunCoup 3.0. Furthermore, we have developed a new Web site, which provides powerful tools to explore the predicted networks and to retrieve detailed information about the data underlying each prediction.

  11. Functional magnetic resonance imaging examination of two modular architectures for switching multiple internal models.

    PubMed

    Imamizu, Hiroshi; Kuroda, Tomoe; Yoshioka, Toshinori; Kawato, Mitsuo

    2004-02-04

    An internal model is a neural mechanism that can mimic the input-output properties of a controlled object such as a tool. Recent research interests have moved on to how multiple internal models are learned and switched under a given context of behavior. Two representative computational models for task switching propose distinct neural mechanisms, thus predicting different brain activity patterns in the switching of internal models. In one model, called the mixture-of-experts architecture, switching is commanded by a single executive called a "gating network," which is different from the internal models. In the other model, called the MOSAIC (MOdular Selection And Identification for Control), the internal models themselves play crucial roles in switching. Consequently, the mixture-of-experts model predicts that neural activities related to switching and internal models can be temporally and spatially segregated, whereas the MOSAIC model predicts that they are closely intermingled. Here, we directly examined the two predictions by analyzing functional magnetic resonance imaging activities during the switching of one common tool (an ordinary computer mouse) and two novel tools: a rotated mouse, the cursor of which appears in a rotated position, and a velocity mouse, the cursor velocity of which is proportional to the mouse position. The switching and internal model activities temporally and spatially overlapped each other in the cerebellum and in the parietal cortex, whereas the overlap was very small in the frontal cortex. These results suggest that switching mechanisms in the frontal cortex can be explained by the mixture-of-experts architecture, whereas those in the cerebellum and the parietal cortex are explained by the MOSAIC model.

  12. FMAP: Functional Mapping and Analysis Pipeline for metagenomics and metatranscriptomics studies.

    PubMed

    Kim, Jiwoong; Kim, Min Soo; Koh, Andrew Y; Xie, Yang; Zhan, Xiaowei

    2016-10-10

    Given the lack of a complete and comprehensive library of microbial reference genomes, determining the functional profile of diverse microbial communities is challenging. The available functional analysis pipelines lack several key features: (i) an integrated alignment tool, (ii) operon-level analysis, and (iii) the ability to process large datasets. Here we introduce our open-sourced, stand-alone functional analysis pipeline for analyzing whole metagenomic and metatranscriptomic sequencing data, FMAP (Functional Mapping and Analysis Pipeline). FMAP performs alignment, gene family abundance calculations, and statistical analysis (three levels of analyses are provided: differentially-abundant genes, operons and pathways). The resulting output can be easily visualized with heatmaps and functional pathway diagrams. FMAP functional predictions are consistent with currently available functional analysis pipelines. FMAP is a comprehensive tool for providing functional analysis of metagenomic/metatranscriptomic sequencing data. With the added features of integrated alignment, operon-level analysis, and the ability to process large datasets, FMAP will be a valuable addition to the currently available functional analysis toolbox. We believe that this software will be of great value to the wider biology and bioinformatics communities.

  13. Effects of the gap slope on the distribution of removal rate in Belt-MRF.

    PubMed

    Wang, Dekang; Hu, Haixiang; Li, Longxiang; Bai, Yang; Luo, Xiao; Xue, Donglin; Zhang, Xuejun

    2017-10-30

    Belt magnetorheological finishing (Belt-MRF) is a promising tool for large-optics processing. However, before using a spot, its shape should be designed and controlled by the polishing gap. Previous research revealed a remarkably nonlinear relationship between the removal function and the normal pressure distribution. The pressure is nonlinearly related to the gap geometry, precluding prediction of the removal function given the polishing gap. Here, we used the concepts of gap slope and virtual ribbon to develop a model of removal profiles in Belt-MRF. Between the belt and the workpiece in the main polishing area, a gap which changes linearly along the flow direction was created using a flat-bottom magnet box. The pressure distribution and removal function were calculated. Simulations were consistent with experiments. Different removal functions, consistent with theoretical calculations, were obtained by adjusting the gap slope. This approach makes it possible to predict removal functions in Belt-MRF.

  14. Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology

    PubMed Central

    Grüning, Björn A.; Paszkiewicz, Konrad; Pritchard, Leighton

    2013-01-01

    The Galaxy Project offers the popular web browser-based platform Galaxy for running bioinformatics tools and constructing simple workflows. Here, we present a broad collection of additional Galaxy tools for large scale analysis of gene and protein sequences. The motivating research theme is the identification of specific genes of interest in a range of non-model organisms, and our central example is the identification and prediction of “effector” proteins produced by plant pathogens in order to manipulate their host plant. This functional annotation of a pathogen’s predicted capacity for virulence is a key step in translating sequence data into potential applications in plant pathology. This collection includes novel tools, and widely-used third-party tools such as NCBI BLAST+ wrapped for use within Galaxy. Individual bioinformatics software tools are typically available separately as standalone packages, or in online browser-based form. The Galaxy framework enables the user to combine these and other tools to automate organism scale analyses as workflows, without demanding familiarity with command line tools and scripting. Workflows created using Galaxy can be saved and are reusable, so may be distributed within and between research groups, facilitating the construction of a set of standardised, reusable bioinformatic protocols. The Galaxy tools and workflows described in this manuscript are open source and freely available from the Galaxy Tool Shed (http://usegalaxy.org/toolshed or http://toolshed.g2.bx.psu.edu). PMID:24109552

  15. Mobility scores as a predictor of length of stay in general surgery: a prospective cohort study.

    PubMed

    Carroll, Georgia M; Hampton, Jacob; Carroll, Rosemary; Smith, Stephen R

    2018-05-22

    Post-operative length of stay (LOS) is an increasingly important clinical indicator in general surgery. Despite this, no tool has been validated to predict LOS or readiness for discharge in general surgical patients. The de Morton Mobility Index (DEMMI) is a functional mobility assessment tool that has been validated in rehabilitation patient populations. In this prospective cohort study, we aimed to identify if trends in DEMMI scores were associated with discharge within 1 week and overall LOS in general surgical patients. A total of 161 patients who underwent elective gastrointestinal resections were included. DEMMI scores were performed preoperatively, on days 1, 2, 3 and 30 post-operative. Statistical analysis was performed to identify any association between DEMMI scores and discharge within 1 week and LOS. Functional recovery (measured by achieving 80% of baseline DEMMI score by post-operative day 1) was significantly associated with discharge within 1 week. Presence of a stoma was associated with longer LOS. The area under the receiver operating characteristic curve using functional recovery on post-operative day 1 as a predictor of discharge within 1 week is 0.772. The DEMMI score is a fast, easy and useful tool to, on post-operative day 1, predict discharge within 1 week. The utility of this is to act as an anticipatory trigger for more proactive and efficient discharge planning in the early post-operative period, and there is potential to use the DEMMI as a comparator in clinical trials to assess functional recovery. © 2018 Royal Australasian College of Surgeons.

  16. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2011-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  17. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2013-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  18. Chimpanzees create and modify probe tools functionally: A study with zoo-housed chimpanzees

    PubMed Central

    Hopper, Lydia M; Tennie, Claudio; Ross, Stephen R; Lonsdorf, Elizabeth V

    2015-01-01

    Chimpanzees (Pan troglodytes) use tools to probe for out-of-reach food, both in the wild and in captivity. Beyond gathering appropriately-sized materials to create tools, chimpanzees also perform secondary modifications in order to create an optimized tool. In this study, we recorded the behavior of a group of zoo-housed chimpanzees when presented with opportunities to use tools to probe for liquid foods in an artificial termite mound within their enclosure. Previous research with this group of chimpanzees has shown that they are proficient at gathering materials from within their environment in order to create tools to probe for the liquid food within the artificial mound. Extending beyond this basic question, we first asked whether they only made and modified probe tools when it was appropriate to do so (i.e. when the mound was baited with food). Second, by collecting continuous data on their behavior, we also asked whether the chimpanzees first (intentionally) modified their tools prior to probing for food or whether such modifications occurred after tool use, possibly as a by-product of chewing and eating the food from the tools. Following our predictions, we found that tool modification predicted tool use; the chimpanzees began using their tools within a short delay of creating and modifying them, and the chimpanzees performed more tool modifying behaviors when food was available than when they could not gain food through the use of probe tools. We also discuss our results in terms of the chimpanzees’ acquisition of the skills, and their flexibility of tool use and learning. Am. J. Primatol. 77:162–170, 2015. © 2014 The Authors. American Journal of Primatology Published by Wiley Periodicals Inc. PMID:25220050

  19. Do doctors know what children know?

    PubMed

    Steward, Margaret; Regalbuto, Gary

    1975-01-01

    Adults often assume that if they explain something to a child calmly and rationally, the child will understand. Informed by Piaget's theory of cognitive development, the authors asked children of preschool and elementary age to use two common pediatric tools and to explain how they functioned. Predicted differences were found.

  20. The Phyre2 web portal for protein modelling, prediction and analysis

    PubMed Central

    Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE

    2017-01-01

    Summary Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user’s protein sequence. Users are guided through results by a simple interface at a level of detail determined by them. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional available tools is described to find a protein structure in a genome, to submit large numbers of sequences at once, and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237

  1. The Proteasix Ontology.

    PubMed

    Arguello Casteleiro, Mercedes; Klein, Julie; Stevens, Robert

    2016-06-04

    The Proteasix Ontology (PxO) is an ontology that supports the Proteasix tool, an open-source, peptide-centric tool that can be used to predict automatically, in silico and in a large-scale fashion, the proteases involved in the generation of proteolytic cleavage fragments (peptides). The PxO re-uses parts of the Protein Ontology, the three Gene Ontology sub-ontologies, the Chemical Entities of Biological Interest Ontology and the Sequence Ontology, together with bespoke extensions, in support of a series of roles: 1. to describe the known proteases and their target cleavage sites; 2. to enable the description of proteolytic cleavage fragments as the outputs of observed and predicted proteolysis; 3. to use knowledge about the function, species and cellular location of a protease and protein substrate to support the prioritisation of proteases in observed and predicted proteolysis. The PxO is designed to describe the biological underpinnings of the generation of peptides. The peptide-centric PxO seeks to support the Proteasix tool by separating domain knowledge from the operational knowledge used in protease prediction by Proteasix and to support the confirmation of its analyses and results. The Proteasix Ontology may be found at: http://bioportal.bioontology.org/ontologies/PXO . This ontology is free and open for use by everyone.

  2. In silico study of breast cancer associated gene 3 using LION Target Engine and other tools.

    PubMed

    León, Darryl A; Cànaves, Jaume M

    2003-12-01

    Sequence analysis of individual targets is an important step in annotation and validation. As a test case, we investigated human breast cancer associated gene 3 (BCA3) with LION Target Engine and with other bioinformatics tools. LION Target Engine confirmed that the BCA3 gene is located on 11p15.4 and that the two most likely splice variants (lacking exon 3 and exons 3 and 5, respectively) exist. Based on our manual curation of sequence data, it is proposed that an additional variant (missing only exon 5) published in a public sequence repository, is a prediction artifact. A significant number of new orthologs were also identified, and these were the basis for a high-quality protein secondary structure prediction. Moreover, our research confirmed several distinct functional domains as described in earlier reports. Sequence conservation from multiple sequence alignments, splice variant identification, secondary structure predictions, and predicted phosphorylation sites suggest that the removal of interaction sites through alternative splicing might play a modulatory role in BCA3. This in silico approach shows the depth and relevance of an analysis that can be accomplished by including a variety of publicly available tools with an integrated and customizable life science informatics platform.

  3. Development and Validation of a Predictive Model for Functional Outcome After Stroke Rehabilitation: The Maugeri Model.

    PubMed

    Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi

    2017-12-01

    Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure scores. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence interval, 0.858-0.910) for model 1 and 0.913 (95% confidence interval, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ² was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence interval, 0.840-0.892) for model 1 and 0.850 (95% confidence interval, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ² was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratio, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratio, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.
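
    To make the modelling approach concrete, the following is a minimal sketch of a multivariable logistic regression assessed by the area under the ROC curve, as described above. The data, predictor values and coefficients are synthetic placeholders, not the Maugeri models.

```python
# Minimal sketch: multivariable logistic regression with AUC assessment.
# Synthetic data only; predictors are hypothetical stand-ins for the variables
# named in the abstract (age, onset-to-admission time, FIM scores).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 717
age = rng.normal(70, 10, n)
onset_to_admission = rng.gamma(2.0, 10.0, n)     # days, hypothetical
motor_fim = rng.uniform(13, 91, n)
cognitive_fim = rng.uniform(5, 35, n)
X = np.column_stack([age, onset_to_admission, motor_fim, cognitive_fim])

# Synthetic outcome: improvement in physical functioning (1 = yes).
logit = -1.0 - 0.03 * age - 0.02 * onset_to_admission + 0.05 * motor_fim + 0.05 * cognitive_fim
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC on the derivation data: {auc:.3f}")
```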

  4. Prediction of Detailed Enzyme Functions and Identification of Specificity Determining Residues by Random Forests

    PubMed Central

    Nagao, Chioko; Nagano, Nozomi; Mizuguchi, Kenji

    2014-01-01

    Determining enzyme functions is essential for a thorough understanding of cellular processes. Although many prediction methods have been developed, it remains a significant challenge to predict enzyme functions at the fourth-digit level of the Enzyme Commission numbers. Functional specificity of enzymes often changes drastically with mutations of a small number of residues, and therefore information about these critical residues can potentially help discriminate detailed functions. However, because these residues must be identified by mutagenesis experiments, the available information is limited, and the lack of experimentally verified specificity determining residues (SDRs) has hindered the development of detailed function prediction methods and the computational identification of SDRs. Here we present a novel method for predicting enzyme functions by random forests, EFPrf, along with a set of putative SDRs, the random forests derived SDRs (rf-SDRs). EFPrf consists of a set of binary predictors for enzymes in each CATH superfamily, and the rf-SDRs are the residue positions corresponding to the most highly contributing attributes obtained from each predictor. EFPrf showed a precision of 0.98 and a recall of 0.89 in a cross-validated benchmark assessment. The rf-SDRs included many residues whose importance for specificity had been validated experimentally. The analysis of the rf-SDRs revealed both a general tendency for functionally diverged superfamilies to include more active-site residues among their rf-SDRs than less diverged superfamilies, and superfamily-specific conservation patterns for each functional residue. EFPrf and the rf-SDRs will be an effective tool for annotating enzyme functions and for understanding how enzyme functions have diverged within each superfamily. PMID:24416252
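
    A minimal sketch of the general strategy (a binary random-forest classifier whose most highly contributing attributes point to putative specificity-determining positions) is shown below. The feature encoding and data are synthetic placeholders, not the EFPrf pipeline.

```python
# Sketch: binary random forest on synthetic residue features; the top-ranked
# feature importances play the role of the "rf-SDR" positions in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)
n_seqs, n_positions = 200, 50          # hypothetical alignment positions
X = rng.integers(0, 20, size=(n_seqs, n_positions)).astype(float)
# Make positions 7 and 23 "specificity determining" in the synthetic labels.
y = ((X[:, 7] > 10) & (X[:, 23] < 8)).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
y_cv = cross_val_predict(clf, X, y, cv=5)
print("precision:", precision_score(y, y_cv), "recall:", recall_score(y, y_cv))

clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top contributing positions (putative SDR analogues):", top)
```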

  5. Web-based applications for building, managing and analysing kinetic models of biological systems.

    PubMed

    Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A

    2009-01-01

    Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. Modelling and the concomitant simulation make it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists, bioengineers and bioinformaticians to develop new tools and applications that allow non-experts to easily conduct such modelling and analysis. However, among the multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.
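
    To make concrete what a small kinetic model looks like, here is a minimal sketch simulating an irreversible Michaelis-Menten reaction with assumed parameter values; it is a generic illustration, not the implementation of any of the reviewed web tools.

```python
# Sketch: simulate a minimal kinetic model (irreversible Michaelis-Menten
# substrate consumption) of the kind the reviewed web platforms manage.
import numpy as np
from scipy.integrate import odeint

def mm_rate(y, t, vmax, km):
    s, p = y                      # substrate and product concentrations
    v = vmax * s / (km + s)       # Michaelis-Menten rate law
    return [-v, v]

t = np.linspace(0, 60, 200)       # minutes
y0 = [10.0, 0.0]                  # initial [S], [P] in mM (hypothetical)
vmax, km = 1.0, 2.5               # hypothetical kinetic parameters
sol = odeint(mm_rate, y0, t, args=(vmax, km))
print("final substrate and product:", sol[-1])
```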

  6. Rigidity controllable polishing tool based on magnetorheological effect

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Wan, Yongjian; Shi, Chunyan

    2012-10-01

    A stable and predictable material removal function (MRF) plays a crucial role in computer controlled optical surfacing (CCOS). For physical-contact polishing, the stability of the MRF depends on intimate contact between the polishing interface and the workpiece. Rigid laps maintain this contact when polishing spherical surfaces, whose curvature does not vary with position on the surface. Such rigid laps provide a smoothing effect for mid-spatial-frequency errors, but they cannot be used on aspherical surfaces because they would destroy the surface figure. Flexible tools such as magnetorheological fluid or air bonnets conform to the surface [1], but they lack rigidity and provide little natural smoothing effect. We present a rigidity-controllable polishing tool that uses a magnetorheological elastomer (MRE) medium [2]. It can both conform to an aspheric surface and maintain a natural smoothing effect; moreover, its rigidity can be controlled by the magnetic field. This paper presents the design, analysis, and stiffness variation mechanism model of this polishing tool [3].

  7. Automated support for experience-based software management

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.

    1992-01-01

    To effectively manage a software development project, the software manager must have access to key information concerning a project's status. This information includes not only data relating to the project of interest, but also the experience of past development efforts within the environment. This paper describes the concepts and functionality of a software management tool designed to provide this information. This tool, called the Software Management Environment (SME), enables the software manager to compare an ongoing development effort with previous efforts and with models of the 'typical' project within the environment, to predict future project status, to analyze a project's strengths and weaknesses, and to assess the project's quality. In order to provide these functions, the tool utilizes a vast corporate memory that includes a database of software metrics, a set of models and relationships that describe the software development environment, and a set of rules that capture other knowledge and experience of software managers within the environment. Integrating these major concepts into one software management tool, the SME is a model of the type of management tool needed by all software development organizations.

  8. Metagenomics approach to the study of the gut microbiome structure and function in zebrafish Danio rerio fed with gluten formulated diet.

    PubMed

    Koo, Hyunmin; Hakim, Joseph A; Powell, Mickie L; Kumar, Ranjit; Eipers, Peter G; Morrow, Casey D; Crowley, Michael; Lefkowitz, Elliot J; Watts, Stephen A; Bej, Asim K

    2017-04-01

    In this study, we report the gut microbial composition and predictive functional profiles of zebrafish, Danio rerio, fed with a control formulated diet (CFD) and a gluten formulated diet (GFD), using a metagenomics approach and bioinformatics tools. The microbial communities of the GFD-fed D. rerio displayed heightened abundances of Legionellales, Rhizobiaceae, and Rhodobacter, as compared to the CFD-fed counterparts. PICRUSt-based prediction of the metagenomes of the microbial communities in GFD-fed D. rerio showed KEGG functional categories corresponding to bile secretion, secondary bile acid biosynthesis, and the metabolism of glycine, serine, and threonine. The CFD-fed D. rerio exhibited KEGG functional categories of bacteria-mediated cobalamin biosynthesis, which was supported by the presence of cobalamin synthesizers such as Bacteroides and Lactobacillus. Though these bacteria were absent in GFD-fed D. rerio, a comparable level of the cobalamin biosynthesis KEGG functional category was observed, which may be attributable to the compensatory enrichment of Cetobacterium. Based on these results, we conclude that D. rerio is a suitable alternative animal model for using a targeted metagenomics approach along with bioinformatics tools to further investigate the relationship between a gluten diet and the gut microbiome profile, and its links to gastrointestinal diseases and other adverse health effects. Copyright © 2017. Published by Elsevier B.V.

  9. Polarization modeling and predictions for DKIST part 2: application of the Berreman calculus to spectral polarization fringes of beamsplitters and crystal retarders

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Snik, Frans; Keller, Christoph U.; Sueoka, Stacey R.; van Harten, Gerard

    2017-10-01

    We outline polarization fringe predictions derived from an application of the Berreman calculus for the Daniel K. Inouye Solar Telescope (DKIST) retarder optics. The DKIST retarder baseline design used six crystals, single-layer antireflection coatings, thick cover windows, and oil between all optical interfaces. This tool estimates polarization fringes and optic Mueller matrices as functions of all optical design choices. The amplitude and period of polarized fringes under design changes, manufacturing errors, tolerances, and several physical factors can now be estimated. This tool compares well with observations of fringes for data collected with the spectropolarimeter for infrared and optical regions at the Dunn Solar Telescope using bicrystalline achromatic retarders as well as laboratory tests. With this tool, we show impacts of design decisions on polarization fringes as impacted by antireflection coatings, oil refractive indices, cover window presence, and part thicknesses. This tool helped DKIST decide to remove retarder cover windows and also recommends reconsideration of coating strategies for DKIST. We anticipate this tool to be essential in designing future retarders for mitigation of polarization and intensity fringe errors in other high spectral resolution astronomical systems.

  10. Learning, remembering, and predicting how to use tools: Distributed neurocognitive mechanisms

    PubMed Central

    Buxbaum, Laurel J.

    2016-01-01

    The reasoning-based approach championed by Francois Osiurak and Arnaud Badets (Osiurak & Badets, 2016) denies the existence of sensory-motor memories of tool use except in limited circumstances, and suggests instead that most tool use is subserved solely by online technical reasoning about tool properties. In this commentary, I highlight the strengths and limitations of the reasoning-based approach and review a number of lines of evidence that manipulation knowledge is in fact used in tool action tasks. In addition, I present a “two route” neurocognitive model of tool use called the “Two Action Systems Plus (2AS+)” framework that posits a complementary role for online and stored information and specifies the neurocognitive substrates of task-relevant action selection. This framework, unlike the reasoning based approach, has the potential to integrate the existing psychological and functional neuroanatomic data in the tool use domain. PMID:28358565

  11. Annotation of gene function in citrus using gene expression information and co-expression networks

    PubMed Central

    2014-01-01

    Background The genus Citrus encompasses major cultivated plants such as sweet orange, mandarin, lemon and grapefruit, which are among the world's most economically important fruit crops. With increasing volumes of transcriptomics data available for these species, Gene Co-expression Network (GCN) analysis is a viable option for predicting gene function at a genome-wide scale. GCN analysis is based on a "guilt-by-association" principle whereby genes encoding proteins involved in similar and/or related biological processes may exhibit similar expression patterns across diverse sets of experimental conditions. While bioinformatics resources such as GCN analysis are widely available for efficient gene function prediction in model plant species including Arabidopsis, soybean and rice, such tools have not yet been developed for citrus. Results We have constructed a comprehensive GCN for citrus inferred from 297 publicly available Affymetrix GeneChip Citrus Genome microarray datasets, providing gene co-expression relationships at a genome-wide scale (33,000 transcripts). The comprehensive citrus GCN consists of a global GCN (condition-independent) and four condition-dependent GCNs that survey the sweet orange species only, all citrus fruit tissues, all citrus leaf tissues, or stress-exposed plants. All of these GCNs are clustered using genome-wide, gene-centric (guide) and graph clustering algorithms for flexibility of gene function prediction. For each putative cluster, gene ontology (GO) enrichment and gene expression specificity analyses were performed to enhance gene function, expression and regulation pattern prediction. The guide-gene approach was used to infer novel roles of genes involved in disease susceptibility and vitamin C metabolism, and graph-clustering approaches were used to investigate isoprenoid/phenylpropanoid metabolism in citrus peel and citric acid catabolism via the GABA shunt in citrus fruit. Conclusions Integration of citrus gene co-expression networks, functional enrichment analysis and gene expression information provides opportunities to infer gene function in citrus. We present a publicly accessible tool, Network Inference for Citrus Co-Expression (NICCE, http://citrus.adelaide.edu.au/nicce/home.aspx), for gene co-expression analysis in citrus. PMID:25023870
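
    The guilt-by-association principle mentioned above can be illustrated with a minimal sketch: compute pairwise expression correlations and connect genes whose correlation exceeds a cutoff. The data, gene names and cutoff are hypothetical and do not reproduce the NICCE pipeline.

```python
# Sketch: a toy gene co-expression network via Pearson correlation.
import numpy as np

rng = np.random.default_rng(2)
genes = ["geneA", "geneB", "geneC", "geneD"]        # hypothetical identifiers
expr = rng.normal(size=(4, 30))                     # 4 genes x 30 conditions
expr[1] = expr[0] + rng.normal(scale=0.2, size=30)  # make geneB co-expressed with geneA

corr = np.corrcoef(expr)                            # gene-by-gene Pearson correlations
cutoff = 0.8                                        # hypothetical edge threshold
edges = [(genes[i], genes[j], round(corr[i, j], 2))
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if abs(corr[i, j]) >= cutoff]
print("co-expression edges:", edges)
```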

  12. PlantTFDB 4.0: toward a central hub for transcription factors and regulatory interactions in plants.

    PubMed

    Jin, Jinpu; Tian, Feng; Yang, De-Chang; Meng, Yu-Qi; Kong, Lei; Luo, Jingchu; Gao, Ge

    2017-01-04

    With the goal of providing a comprehensive, high-quality resource for both plant transcription factors (TFs) and their regulatory interactions with target genes, we upgraded the plant TF database PlantTFDB to version 4.0 (http://planttfdb.cbi.pku.edu.cn/). In the new version, we identified 320 370 TFs from 165 species, presenting more comprehensive genomic TF repertoires of green plants. Besides updating the abundant pre-existing functional and evolutionary annotations for identified TFs, we generated three new types of annotation which provide more direct clues for investigating the underlying functional mechanisms: (i) a set of high-quality, non-redundant TF binding motifs derived from experiments; (ii) multiple types of regulatory elements identified from high-throughput sequencing data; (iii) regulatory interactions curated from the literature and inferred by combining TF binding motifs and regulatory elements. In addition, we upgraded the previous TF prediction server and set up four novel tools for regulation prediction and functional enrichment analyses. Finally, we set up a novel companion portal, PlantRegMap (http://plantregmap.cbi.pku.edu.cn), for users to access the regulation resource and analysis tools conveniently. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. A machine learning system to improve heart failure patient assistance.

    PubMed

    Guidi, Gabriele; Pettenati, Maria Chiara; Melillo, Paolo; Iadanza, Ernesto

    2014-11-01

    In this paper, we present a clinical decision support system (CDSS) for the analysis of heart failure (HF) patients, providing outputs such as an HF severity evaluation and HF-type prediction, as well as a management interface that compares the different patients' follow-ups. The system is composed of an intelligent core and an HF special-purpose management tool, which also acts as an interface for training and using the artificial intelligence components. To implement the intelligent functions, we adopted a machine learning approach. In this paper, we compare the performance of a neural network (NN), a support vector machine, a genetically produced fuzzy rule system, and a classification and regression tree together with its direct evolution, the random forest, in analyzing our database. The best performance in both the HF severity evaluation and HF-type prediction functions is obtained by using the random forest algorithm. The management tool allows the cardiologist to populate a "supervised database" suitable for machine learning during his or her regular outpatient consultations. The idea comes from the fact that few databases of this type exist in the literature, and those that do are not scalable to our case.

  14. Software Tools to Support Research on Airport Departure Planning

    NASA Technical Reports Server (NTRS)

    Carr, Francis; Evans, Antony; Feron, Eric; Clarke, John-Paul

    2003-01-01

    A simple, portable and useful collection of software tools has been developed for the analysis of airport surface traffic. The tools are based on a flexible and robust traffic-flow model, and include calibration, validation and simulation functionality for this model. Several different interfaces have been developed to help promote usage of these tools, including a portable Matlab(TM) implementation of the basic algorithms; a web-based interface which provides online access to automated analyses of airport traffic based on a database of real-world operations data which covers over 250 U.S. airports over a 5-year period; and an interactive simulation-based tool currently in use as part of a college-level educational module. More advanced applications for airport departure traffic include taxi-time prediction and evaluation of "windowing" congestion control.

  15. SLUG - stochastically lighting up galaxies - III. A suite of tools for simulated photometry, spectroscopy, and Bayesian inference with stochastic stellar populations

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Fumagalli, Michele; da Silva, Robert L.; Rendahl, Theodore; Parra, Jonathan

    2015-09-01

    Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include the following: a greatly-enhanced version of the SLUG code introduced by da Silva et al., which computes spectra and photometry for stochastically or deterministically sampled stellar populations with nearly arbitrary star formation histories, clustering properties, and initial mass functions; CLOUDY_SLUG, a tool that automatically couples SLUG-computed spectra with the CLOUDY radiative transfer code in order to predict stochastic nebular emission; BAYESPHOT, a general-purpose tool for performing Bayesian inference on the physical properties of stellar systems based on unresolved photometry; and CLUSTER_SLUG and SFR_SLUG, a pair of tools that use BAYESPHOT on a library of SLUG models to compute the mass, age, and extinction of mono-age star clusters, and the star formation rate of galaxies, respectively. The latter two tools make use of an extensive library of pre-computed stellar population models, which are included in the software. The complete package is available at http://www.slugsps.com.

  16. The Trail Making test: a study of its ability to predict falls in the acute neurological in-patient population.

    PubMed

    Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane

    2018-05-01

    To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care were included. The main measures were a binary (Y/N) indicator of falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function), and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay (n = 54). The Trail Making Test was identified as the best predictor of falls; moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine learning-based strategies, for example, random forests, neural networks, and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (± 7.7) sensitivity and 90% (± 2.3) specificity. This study identifies a simple yet powerful machine learning (random forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making Test.
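
    For illustration, the following minimal sketch trains a random forest on a single predictor and reports sensitivity and specificity derived from the confusion matrix, mirroring the type of evaluation reported above. The data are synthetic and the numbers will not match the study.

```python
# Sketch: single-predictor random forest with sensitivity/specificity,
# loosely mirroring the evaluation described in the abstract (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
trail_time = rng.gamma(shape=4.0, scale=20.0, size=337)      # seconds, hypothetical
fell = (trail_time + rng.normal(0, 15, 337) > 110).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    trail_time.reshape(-1, 1), fell, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```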

  17. SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.

    PubMed

    Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita

    2017-02-01

    Chloroplasts are organelles found in plants and are involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localization is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments: inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach, which is a good candidate for integration into more general large-scale annotation pipelines for protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it. Contact: gigi@biocomp.unibo.it.

  18. An empirical propellant response function for combustion stability predictions

    NASA Technical Reports Server (NTRS)

    Hessler, R. O.

    1980-01-01

    An empirical response function model was developed for ammonium perchlorate propellants to supplant T-burner testing at the preliminary design stage. The model was developed by fitting a limited T-burner data base, in terms of oxidizer size and concentration, to an analytical two parameter response function expression. Multiple peaks are predicted, but the primary effect is of a single peak for most formulations, with notable bulges for the various AP size fractions. The model was extended to velocity coupling with the assumption that dynamic response was controlled primarily by the solid phase described by the two parameter model. The magnitude of velocity coupling was then scaled using an erosive burning law. Routine use of the model for stability predictions on a number of propulsion units indicates that the model tends to overpredict propellant response. It is concluded that the model represents a generally conservative prediction tool, suited especially for the preliminary design stage when T-burner data may not be readily available. The model work included development of a rigorous summation technique for pseudopropellant properties and of a concept for modeling ordered packing of particulates.

  19. Predictive analysis of beer quality by correlating sensory evaluation with higher alcohol and ester production using multivariate statistics methods.

    PubMed

    Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru

    2014-10-15

    Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established using non-linear models such as partial least squares (PLS), a genetic algorithm back-propagation neural network (GA-BP), and a support vector machine (SVM). The SVM with a radial basis function (RBF) kernel achieved better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in model training: the prediction accuracy of the SVM with a polynomial kernel was only 32.9%. As a powerful multivariate statistical method, SVM holds great potential to assess beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
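
    A minimal sketch of the kernel comparison described above (RBF versus polynomial SVM) is given below on synthetic data; the feature names and values are hypothetical and the accuracies will not reproduce those of the study.

```python
# Sketch: compare RBF and polynomial SVM kernels on synthetic "flavour" data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 300
higher_alcohols = rng.normal(90, 20, n)     # mg/L, hypothetical
esters = rng.normal(25, 8, n)               # mg/L, hypothetical
X = np.column_stack([higher_alcohols, esters])
acceptable = ((esters / higher_alcohols) > 0.27).astype(int)   # toy sensory label

X_train, X_test, y_train, y_test = train_test_split(X, acceptable, random_state=0)
scaler = StandardScaler().fit(X_train)
for kernel in ("rbf", "poly"):
    clf = SVC(kernel=kernel).fit(scaler.transform(X_train), y_train)
    acc = clf.score(scaler.transform(X_test), y_test)
    print(kernel, "accuracy:", round(acc, 3))
```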

  20. Energy–density functional plus quasiparticle–phonon model theory as a powerful tool for nuclear structure and astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsoneva, N., E-mail: Nadia.Tsoneva@theo.physik.uni-giessen.de; Lenske, H.

    During the last decade, a theoretical method based on energy-density functional theory and the quasiparticle-phonon model, including up to three-phonon configurations, was developed. The main advantages of the method are that it incorporates a self-consistent mean-field and multi-configuration mixing, which are found to be of crucial importance for systematic investigations of nuclear low-energy excitations, pygmy and giant resonances in a unified way. In particular, the theoretical approach has proven very successful in predicting new modes of excitation, namely the pygmy quadrupole resonance, which has also lately been observed experimentally. Recently, our microscopically obtained dipole strength functions have been implemented in predictions of nucleon-capture reaction rates of astrophysical importance. A comparison to available experimental data is discussed.

  1. Web tools for predictive toxicology model building.

    PubMed

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry have accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. These web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of a GUI or programmatic access, and implementation details. The success of the web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases poses new challenges for the management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  2. Validation of RetroPath, a computer-aided design tool for metabolic pathway engineering.

    PubMed

    Fehér, Tamás; Planson, Anne-Gaëlle; Carbonell, Pablo; Fernández-Castané, Alfred; Grigoras, Ioana; Dariy, Ekaterina; Perret, Alain; Faulon, Jean-Loup

    2014-11-01

    Metabolic engineering has succeeded in the biosynthesis of numerous commodity and high-value compounds. However, the choice of pathways and enzymes used for production has often been made ad hoc, or has required expert knowledge of the specific biochemical reactions. In order to rationalize the process of engineering producer strains, we developed the computer-aided design (CAD) tool RetroPath, which explores and enumerates metabolic pathways connecting the endogenous metabolites of a chassis cell to the target compound. To experimentally validate our tool, we constructed 12 top-ranked enzyme combinations producing the flavonoid pinocembrin, four of which displayed significant yields. Specifically, our tool queried the enzymes found in metabolic databases based on their annotated and predicted activities. Next, it ranked pathways based on the predicted efficiency of the available enzymes, the toxicity of the intermediate metabolites and the calculated maximum product flux. To implement the top-ranking pathway, our procedure narrowed down a list of nine million possible enzyme combinations to 12, a number easily assembled and tested. One round of metabolic network optimization based on RetroPath output further increased pinocembrin titers 17-fold. In total, 12 out of the 13 enzymes tested in this work displayed a relative performance that was in accordance with their predicted scores. These results validate the ranking function of our CAD tool and open the way to its utilization in the biosynthesis of novel compounds. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
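
    The ranking idea can be illustrated with a toy scoring sketch that combines predicted enzyme efficiency, intermediate toxicity and maximum product flux. The weights, score form and pathway names below are hypothetical and are not RetroPath's actual ranking function.

```python
# Toy sketch: rank candidate pathways by a weighted combination of predicted
# enzyme efficiency, intermediate toxicity and maximum product flux.
# Weights and score form are hypothetical, not RetroPath's ranking function.
pathways = [
    {"name": "pathway_1", "enzyme_efficiency": 0.8, "toxicity": 0.2, "max_flux": 1.4},
    {"name": "pathway_2", "enzyme_efficiency": 0.6, "toxicity": 0.1, "max_flux": 1.9},
    {"name": "pathway_3", "enzyme_efficiency": 0.9, "toxicity": 0.7, "max_flux": 1.1},
]

def score(p, w_eff=1.0, w_tox=1.0, w_flux=0.5):
    # Higher efficiency and flux raise the score; toxicity lowers it.
    return w_eff * p["enzyme_efficiency"] - w_tox * p["toxicity"] + w_flux * p["max_flux"]

for p in sorted(pathways, key=score, reverse=True):
    print(p["name"], round(score(p), 2))
```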

  3. Screening of mutations affecting protein stability and dynamics of FGFR1—A simulation analysis

    PubMed Central

    Doss, C. George Priya; Rajith, B.; Garwasis, Nimisha; Mathew, Pretty Raju; Raju, Anand Solomon; Apoorva, K.; William, Denise; Sadhana, N.R.; Himani, Tanwar; Dike, IP.

    2012-01-01

    Single amino acid substitutions in Fibroblast Growth Factor Receptor 1 (FGFR1) destabilize the protein and have been implicated in several genetic disorders, including various forms of cancer, Kallmann syndrome, Pfeiffer syndrome and Jackson-Weiss syndrome. In order to gain insight into the functional impact of these amino acid substitutions on protein function and expression, special emphasis was laid on molecular dynamics simulation techniques in combination with in silico tools such as SIFT, PolyPhen 2.0, I-Mutant 3.0 and SNAP. It was estimated that 68% of nsSNPs were predicted to be deleterious by I-Mutant, slightly higher than by SIFT (37%), PolyPhen 2.0 (61%) and SNAP (58%). Comparing the results of all in silico tools, the P722S mutation was found to be the most deleterious. Using a molecular dynamics approach, we showed that the P722S mutation leads to increased flexibility and greater deviation from the native structure, supported by a decrease in the number of hydrogen bonds. In addition, biophysical analysis revealed a clear loss of stability due to the P722S mutation in the FGFR1 protein. The majority of mutations predicted by these in silico tools were in good concordance with the experimental results. PMID:27896051

  4. LIBRA-WA: a web application for ligand binding site detection and protein function recognition.

    PubMed

    Toti, Daniele; Viet Hung, Le; Tortosa, Valentina; Brandi, Valentina; Polticelli, Fabio

    2018-03-01

    Recently, LIBRA, a tool for active/ligand binding site prediction, was described. LIBRA's effectiveness was comparable to similar state-of-the-art tools; however, its scoring scheme, output presentation, dependence on local resources and overall convenience were amenable to improvements. To solve these issues, LIBRA-WA, a web application based on an improved LIBRA engine, has been developed, featuring a novel scoring scheme that consistently improves LIBRA's performance, and a refined algorithm that can identify binding sites hosted at the interface between different subunits. LIBRA-WA also offers additional functionalities such as ligand clustering and a completely redesigned interface for easier analysis of the output. Extensive tests on 373 apoprotein structures indicate that LIBRA-WA is able to identify the biologically relevant ligand/ligand binding site in 357 cases (∼96%), with the correct prediction ranking first in 349 cases (∼98% of the latter, ∼94% of the total). The earlier stand-alone tool has also been updated and dubbed LIBRA+ by integrating LIBRA-WA's improved engine for cross-compatibility purposes. LIBRA-WA and LIBRA+ are available at http://www.computationalbiology.it/software.html. Contact: polticel@uniroma3.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  5. Screening of mutations affecting protein stability and dynamics of FGFR1-A simulation analysis.

    PubMed

    Doss, C George Priya; Rajith, B; Garwasis, Nimisha; Mathew, Pretty Raju; Raju, Anand Solomon; Apoorva, K; William, Denise; Sadhana, N R; Himani, Tanwar; Dike, I P

    2012-12-01

    Single amino acid substitutions in Fibroblast Growth Factor Receptor 1 (FGFR1) destabilize the protein and have been implicated in several genetic disorders, including various forms of cancer, Kallmann syndrome, Pfeiffer syndrome and Jackson-Weiss syndrome. In order to gain insight into the functional impact of these amino acid substitutions on protein function and expression, special emphasis was laid on molecular dynamics simulation techniques in combination with in silico tools such as SIFT, PolyPhen 2.0, I-Mutant 3.0 and SNAP. It was estimated that 68% of nsSNPs were predicted to be deleterious by I-Mutant, slightly higher than by SIFT (37%), PolyPhen 2.0 (61%) and SNAP (58%). Comparing the results of all in silico tools, the P722S mutation was found to be the most deleterious. Using a molecular dynamics approach, we showed that the P722S mutation leads to increased flexibility and greater deviation from the native structure, supported by a decrease in the number of hydrogen bonds. In addition, biophysical analysis revealed a clear loss of stability due to the P722S mutation in the FGFR1 protein. The majority of mutations predicted by these in silico tools were in good concordance with the experimental results.

  6. Advances in the quantification of mitochondrial function in primary human immune cells through extracellular flux analysis.

    PubMed

    Nicholas, Dequina; Proctor, Elizabeth A; Raval, Forum M; Ip, Blanche C; Habib, Chloe; Ritou, Eleni; Grammatopoulos, Tom N; Steenkamp, Devin; Dooms, Hans; Apovian, Caroline M; Lauffenburger, Douglas A; Nikolajczyk, Barbara S

    2017-01-01

    Numerous studies show that mitochondrial energy generation determines the effectiveness of immune responses. Furthermore, changes in mitochondrial function may regulate lymphocyte function in inflammatory diseases like type 2 diabetes. Analysis of lymphocyte mitochondrial function has been facilitated by introduction of 96-well format extracellular flux (XF96) analyzers, but the technology remains imperfect for analysis of human lymphocytes. Limitations in XF technology include the lack of practical protocols for analysis of archived human cells, and inadequate data analysis tools that require manual quality checks. Current analysis tools for XF outcomes are also unable to automatically assess data quality and delete untenable data from the relatively high number of biological replicates needed to power complex human cell studies. The objectives of work presented herein are to test the impact of common cellular manipulations on XF outcomes, and to develop and validate a new automated tool that objectively analyzes a virtually unlimited number of samples to quantitate mitochondrial function in immune cells. We present significant improvements on previous XF analyses of primary human cells that will be absolutely essential to test the prediction that changes in immune cell mitochondrial function and fuel sources support immune dysfunction in chronic inflammatory diseases like type 2 diabetes.

  7. PDB-UF: database of predicted enzymatic functions for unannotated protein structures from structural genomics.

    PubMed

    von Grotthuss, Marcin; Plewczynski, Dariusz; Ginalski, Krzysztof; Rychlewski, Leszek; Shakhnovich, Eugene I

    2006-02-06

    The number of protein structures from structural genomics centers in the Protein Data Bank (PDB) is increasing dramatically. Many of these structures are functionally unannotated because they have no sequence similarity to proteins of known function. However, it is possible to successfully infer function using only structural similarity. Here we present the PDB-UF database, a web-accessible collection of predictions of enzymatic properties based on structure-function relationships. The assignments were conducted for three-dimensional protein structures of unknown function that come from structural genomics initiatives. We show that four hypothetical proteins (with PDB accession codes 1VH0, 1NS5, 1O6D, and 1TO0), for which standard BLAST tools such as PSI-BLAST or RPS-BLAST failed to assign any function, are probably methyltransferase enzymes. We suggest that structure-based prediction of an EC number should be conducted using different similarity score cutoffs for different protein folds. Moreover, performing the annotation using two different algorithms can reduce the rate of false positive assignments. We believe that the presented web-based repository will help to decrease the number of protein structures whose functions are marked as "unknown" in the PDB file. The database is available at http://paradox.harvard.edu/PDB-UF and http://bioinfo.pl/PDB-UF.

  8. BioFuelDB: a database and prediction server of enzymes involved in biofuels production.

    PubMed

    Chaudhary, Nikhil; Gupta, Ankit; Gupta, Sudheer; Sharma, Vineet K

    2017-01-01

    In light of the rapid decrease in fossil fuel reserves and an increasing demand for energy, novel methods are required to explore alternative biofuel production processes to alleviate these pressures. A wide variety of molecules which can either be used as biofuels or as biofuel precursors are produced using microbial enzymes. However, the common challenges in the industrial implementation of enzyme catalysis for biofuel production are the unavailability of a comprehensive biofuel enzyme resource, the low efficiency of known enzymes, and the limited availability of enzymes which can function under the extreme conditions of industrial processes. We have developed a comprehensive database of known enzymes with proven or potential applications in biofuel production through text mining of PubMed abstracts and other publicly available information. A total of 131 enzymes with a role in biofuel production were identified and classified into six enzyme classes and four broad application categories, namely 'Alcohol production', 'Biodiesel production', 'Fuel Cell' and 'Alternate biofuels'. A prediction tool, 'Benz', was developed to identify and classify novel homologues of the known biofuel enzyme sequences from sequenced genomes and metagenomes. Benz employs a hybrid approach incorporating the HMMER 3.0 and RAPSearch2 programs to provide high accuracy and high speed of prediction. Using the Benz tool, 153,754 novel homologues of biofuel enzymes were identified from 23 diverse metagenomic sources. The comprehensive data of curated biofuel enzymes, their novel homologs identified from diverse metagenomes, and the hybrid prediction tool Benz are presented as a web server which can be used for the prediction of biofuel enzymes from genomic and metagenomic datasets. The database and the Benz tool are publicly available at http://metabiosys.iiserb.ac.in/biofueldb and http://metagenomics.iiserb.ac.in/biofueldb.

  9. Development of a Windbreak Dust Predictive Model and Mitigation Planning Tool

    DTIC Science & Technology

    2013-12-01

    Figure captions recovered from the report: deposition fraction (DF) as a function of the modified Stokes number (Stk*), showing the collapse of the data for laminar and turbulent flow (Uo = 5 m/s, Ls = 1 cm), and the measured decrease in horizontal PM10 flux. Abbreviations: Sb, particle travel distance; SERDP, Strategic Environmental Research and Development Program; Stk, Stokes number; Stk*, modified Stokes number.

  10. Prognostic and Prediction Tools in Bladder Cancer: A Comprehensive Review of the Literature.

    PubMed

    Kluth, Luis A; Black, Peter C; Bochner, Bernard H; Catto, James; Lerner, Seth P; Stenzl, Arnulf; Sylvester, Richard; Vickers, Andrew J; Xylinas, Evanguelos; Shariat, Shahrokh F

    2015-08-01

    This review focuses on risk assessment and prediction tools for bladder cancer (BCa). To review the current knowledge on risk assessment and prediction tools to enhance clinical decision making and counseling of patients with BCa. A literature search in English was performed using PubMed in July 2013. Relevant risk assessment and prediction tools for BCa were selected. More than 1600 publications were retrieved. Special attention was given to studies that investigated the clinical benefit of a prediction tool. Most prediction tools for BCa focus on the prediction of disease recurrence and progression in non-muscle-invasive bladder cancer or disease recurrence and survival after radical cystectomy. Although these tools are helpful, recent prediction tools aim to address a specific clinical problem, such as the prediction of organ-confined disease and lymph node metastasis to help identify patients who might benefit from neoadjuvant chemotherapy. Although a large number of prediction tools have been reported in recent years, many of them lack external validation. Few studies have investigated the clinical utility of any given model as measured by its ability to improve clinical decision making. There is a need for novel biomarkers to improve the accuracy and utility of prediction tools for BCa. Decision tools hold the promise of facilitating the shared decision process, potentially improving clinical outcomes for BCa patients. Prediction models need external validation and assessment of clinical utility before they can be incorporated into routine clinical care. We looked at models that aim to predict outcomes for patients with bladder cancer (BCa). We found a large number of prediction models that hold the promise of facilitating treatment decisions for patients with BCa. However, many models are missing confirmation in a different patient cohort, and only a few studies have tested the clinical utility of any given model as measured by its ability to improve clinical decision making. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  11. Structure prediction of the second extracellular loop in G-protein-coupled receptors.

    PubMed

    Kmiecik, Sebastian; Jamroz, Michal; Kolinski, Michal

    2014-06-03

    G-protein-coupled receptors (GPCRs) play key roles in living organisms. Therefore, it is important to determine their functional structures. The second extracellular loop (ECL2) is a functionally important region of GPCRs which poses a significant challenge for computational structure prediction methods. In this work, we evaluated CABS, a well-established protein modeling tool, for predicting the ECL2 structure in 13 GPCRs. The ECL2s (between 13 and 34 residues long) are predicted with the other extracellular loops fully flexible and the transmembrane domain fixed in its x-ray conformation. The modeling procedure used theoretical predictions of ECL2 secondary structure and experimental constraints on disulfide bridges. Our approach yielded ensembles of low-energy conformers, and the most populated conformers contained models close to the available x-ray structures. The level of similarity between the predicted models and x-ray structures is comparable to that of other state-of-the-art computational methods. Our results extend other studies by including newly crystallized GPCRs. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Mutagenicity in a Molecule: Identification of Core Structural Features of Mutagenicity Using a Scaffold Analysis

    PubMed Central

    Hsu, Kuo-Hsiang; Su, Bo-Han; Tu, Yi-Shu; Lin, Olivia A.; Tseng, Yufeng J.

    2016-01-01

    With advances in the development and application of in silico Ames mutagenicity prediction tools, the International Conference on Harmonisation (ICH) has amended its M7 guideline to reflect the use of such prediction models for the detection of mutagenic activity in early drug safety evaluation processes. Since current Ames mutagenicity prediction tools focus only on functional group alerts or side-chain modifications of an analog series, these tools are unable to identify mutagenicity derived from core structures or specific scaffolds of a compound. In this study, a large collection of 6512 compounds is used to perform scaffold tree analysis. By relating different scaffolds on the constructed scaffold trees to Ames mutagenicity, four major and one minor novel mutagenic scaffold groups are identified. The recognized mutagenic scaffold groups can serve as a guide for medicinal chemists to prevent the development of potentially mutagenic therapeutic agents in early drug design or development phases, by modifying the core structures of mutagenic compounds to form non-mutagenic compounds. In addition, five series of substructures are provided as recommendations for direct modification of potentially mutagenic scaffolds to decrease the associated mutagenic activities. PMID:26863515

  13. Artificial neural networks as a useful tool to predict the risk level of Betula pollen in the air

    NASA Astrophysics Data System (ADS)

    Castellano-Méndez, M.; Aira, M. J.; Iglesias, I.; Jato, V.; González-Manteiga, W.

    2005-05-01

    An increasing percentage of the European population suffers from allergies to pollen. The study of the evolution of air pollen concentration supplies prior knowledge of the levels of pollen in the air, which can be useful for the prevention and treatment of allergic symptoms, and the management of medical resources. The symptoms of Betula pollinosis can be associated with certain levels of pollen in the air. The aim of this study was to predict the risk of the concentration of pollen exceeding a given level, using previous pollen and meteorological information, by applying neural network techniques. Neural networks are a widespread statistical tool useful for the study of problems associated with complex or poorly understood phenomena. The binary response variable associated with each level requires a careful selection of the neural network and the error function associated with the learning algorithm used during the training phase. The performance of the neural network with the validation set showed that the risk of the pollen level exceeding a certain threshold can be successfully forecasted using artificial neural networks. This prediction tool may be implemented to create an automatic system that forecasts the risk of suffering allergic symptoms.
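
    A minimal sketch of a small neural network predicting whether the Betula pollen level will exceed a risk threshold, using the previous day's pollen count and meteorological variables, is shown below. The data, features and threshold are hypothetical and do not reproduce the study's network or results.

```python
# Sketch: a small neural network classifier for "pollen above threshold tomorrow",
# trained on synthetic previous-day pollen and weather features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 500
prev_pollen = rng.gamma(2.0, 30.0, n)          # grains/m^3, hypothetical
temperature = rng.normal(15, 5, n)             # degrees C
rainfall = rng.exponential(2.0, n)             # mm
X = np.column_stack([prev_pollen, temperature, rainfall])
exceeds = (0.8 * prev_pollen + 3 * temperature - 5 * rainfall
           + rng.normal(0, 10, n) > 80).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, exceeds, random_state=0)
scaler = StandardScaler().fit(X_tr)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)
print("held-out accuracy:", round(net.score(scaler.transform(X_te), y_te), 3))
```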

  14. [Neuroimaging and Blood Biomarkers in Functional Prognosis after Stroke].

    PubMed

    Branco, João Paulo; Costa, Joana Santos; Sargento-Freitas, João; Oliveira, Sandra; Mendes, Bruno; Laíns, Jorge; Pinheiro, João

    2016-11-01

    Stroke remains one of the leading causes of morbidity and mortality around the world, and it is associated with important long-term functional disability. Some neuroimaging resources and certain peripheral blood or cerebrospinal fluid proteins can give important information about etiology, therapeutic approach, follow-up and functional prognosis in acute ischemic stroke patients. However, among the scientific community there is currently more interest in the vital prognosis of stroke than in the functional prognosis. Predicting the functional prognosis during the acute phase would allow more objective rehabilitation programs and better management of the available resources. The aim of this work is to review the potential role of acute-phase neuroimaging and blood biomarkers as predictors of functional recovery after ischemic stroke. We reviewed the literature published between 2005 and 2015, in English, using the terms "ischemic stroke", "neuroimaging" and "blood biomarkers". We included nine studies, selected on the basis of abstract reading. Computerized tomography, transcranial Doppler ultrasound and diffusion magnetic resonance imaging show potential predictive value, based on the study of blood flow and the evaluation of stroke volume and localization, especially when combined with the National Institutes of Health Stroke Scale. Several biomarkers have been studied as diagnostic, risk stratification and prognostic tools, namely S100 calcium binding protein B, C-reactive protein, matrix metalloproteinases and brain natriuretic peptide. Although some biomarkers and neuroimaging techniques have potential predictive value, none of the studies was able to support their use, alone or in combination, as a clinically useful predictor of functional outcome. All the evaluated markers were considered insufficient to predict functional prognosis at three months when applied in the first hours after stroke. Additional studies are necessary to identify reliable predictive markers of functional prognosis after ischemic stroke.

  15. A critical assessment of topologically associating domain prediction tools

    PubMed Central

    Dali, Rola

    2017-01-01

    Abstract Topologically associating domains (TADs) have been proposed to be the basic unit of chromosome folding and have been shown to play key roles in genome organization and gene regulation. Several different tools are available for TAD prediction, but their properties have never been thoroughly assessed. In this manuscript, we compare the output of seven different TAD prediction tools on two published Hi-C data sets. TAD predictions varied greatly between tools in number, size distribution and other biological properties. Assessed against a manual annotation of TADs, individual TAD boundary predictions were found to be quite reliable, but their assembly into complete TAD structures was much less so. In addition, many tools were sensitive to sequencing depth and resolution of the interaction frequency matrix. This manuscript provides users and designers of TAD prediction tools with information that will help guide the choice of tools and the interpretation of their predictions. PMID:28334773

  16. GREAT: a web portal for Genome Regulatory Architecture Tools

    PubMed Central

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-01-01

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system running a modern browser. GREAT is based on the analysis of genome layout, defined as the respective positioning of co-functional genes, and its relation to chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way, in three individual steps with corresponding interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information to improve the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in interactive web graphs and are available for download either as individual plots, self-contained interactive pages, or machine-readable tables for downstream analysis. The GREAT portal can be reached at https://absynth.issb.genopole.fr/GREAT, and each individual GREAT tool is available for download. PMID:27151196

  17. Identification of functional candidates amongst hypothetical proteins of Treponema pallidum ssp. pallidum.

    PubMed

    Naqvi, Ahmad Abu Turab; Shahbaaz, Mohd; Ahmad, Faizan; Hassan, Md Imtaiyaz

    2015-01-01

    Syphilis is a globally occurring venereal disease whose infection is propagated through sexual contact. The causative agent of syphilis, Treponema pallidum ssp. pallidum, a Gram-negative spirochaete, is an obligate human parasite. The genome of the T. pallidum ssp. pallidum SS14 strain (RefSeq NC_010741.1) encodes 1,027 proteins, of which 444 are known as hypothetical proteins (HPs), i.e., proteins of unknown function. Here, we performed functional annotation of the HPs of T. pallidum ssp. pallidum using various databases, domain architecture predictors, protein function annotators and clustering tools. We analyzed the sequences of the 444 HPs and subsequently predicted the function of 207 HPs with a high level of confidence; the functions of the remaining 237 HPs are predicted with less accuracy. We found various enzymes, transporters and binding proteins in the annotated group of HPs that may be possible molecular targets facilitating the survival of the pathogen. Our comprehensive analysis helps to understand the mechanism of pathogenesis and may enable many novel therapeutic interventions.

  18. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes the development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing the accuracy of its thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger-mode LADAR, in addition to the already existing functionality for linear-mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
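
    To illustrate what one step of 3D heat diffusion involves (not the VIRSuite solver itself), here is a minimal explicit finite-difference sketch on a uniform grid, with a hypothetical diffusivity, grid spacing and fixed boundary temperatures.

```python
# Sketch: one explicit finite-difference update of the 3D heat equation
#   dT/dt = alpha * laplacian(T)
# on a uniform grid; parameters are hypothetical, boundary cells held fixed.
import numpy as np

alpha, dx = 1e-5, 0.01                      # m^2/s, m (hypothetical)
dt = 0.2 * dx**2 / (6 * alpha)              # well inside the explicit stability limit
T = np.full((20, 20, 20), 300.0)            # temperature field in K
T[10, 10, 10] = 400.0                       # a hot interior cell

# 7-point Laplacian on the interior, then one forward-Euler step.
lap = (T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1] +
       T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1] +
       T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2] - 6 * T[1:-1, 1:-1, 1:-1]) / dx**2
T[1:-1, 1:-1, 1:-1] += alpha * dt * lap
print("peak temperature after one step:", T.max())
```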

  19. Operon-mapper: A Web Server for Precise Operon Identification in Bacterial and Archaeal Genomes.

    PubMed

    Taboada, Blanca; Estrada, Karel; Ciria, Ricardo; Merino, Enrique

    2018-06-19

    Operon-mapper is a web server that accurately, easily, and directly predicts the operons of any bacterial or archaeal genome sequence. The operon predictions are based on the intergenic distance of neighboring genes as well as the functional relationships of their protein-coding products. To this end, Operon-mapper finds all the ORFs within a given nucleotide sequence, along with their genomic coordinates, orthology groups, and functional relationships. We believe that Operon-mapper, due to its accuracy, simplicity and speed, as well as the relevant information that it generates, will be a useful tool for annotating and characterizing genomic sequences. http://biocomputo.ibt.unam.mx/operon_mapper/.
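    As a rough illustration of one of the two evidence types mentioned above (intergenic distance; the functional-relationship component is omitted), the sketch below groups adjacent same-strand ORFs separated by less than a distance threshold. The gene tuples and the 50-bp threshold are invented for the example and are not Operon-mapper's actual model or parameters.

```python
"""Toy operon grouping by strand and intergenic distance (illustrative only)."""

def group_operons(genes, max_gap=50):
    """genes: list of (name, start, end, strand) tuples sorted by start.
    Adjacent genes on the same strand separated by <= max_gap bp are grouped."""
    operons, current = [], []
    for gene in genes:
        if current and gene[3] == current[-1][3] and gene[1] - current[-1][2] <= max_gap:
            current.append(gene)
        else:
            if current:
                operons.append(current)
            current = [gene]
    if current:
        operons.append(current)
    return operons

if __name__ == "__main__":
    toy_genes = [("geneA", 100, 1000, "+"), ("geneB", 1030, 1900, "+"),
                 ("geneC", 1920, 2800, "+"), ("geneD", 3500, 4200, "-")]
    for operon in group_operons(toy_genes):
        print([g[0] for g in operon])   # -> [geneA, geneB, geneC] and [geneD]
```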

  20. Empirical scoring functions for advanced protein-ligand docking with PLANTS.

    PubMed

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2009-01-01

In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean list_nc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.
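    The 2 Å success criterion above is a plain heavy-atom RMSD between the predicted and the experimentally determined ligand pose. A minimal sketch of that check follows; it assumes the two coordinate sets are already atom-matched and in the same order, and it uses made-up coordinates.

```python
"""Heavy-atom RMSD between a predicted and a reference ligand pose (toy data)."""
import numpy as np

def pose_rmsd(predicted, reference):
    """Root-mean-square deviation over matched atom coordinates, in the
    same units as the inputs (here Angstroms)."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(((predicted - reference) ** 2).sum(axis=1).mean()))

if __name__ == "__main__":
    reference = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.1, 0.3]])
    predicted = reference + np.array([0.4, -0.3, 0.2])   # rigid shift of ~0.5 A
    print(f"RMSD = {pose_rmsd(predicted, reference):.2f} A")  # well under the 2 A cutoff
```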

  1. Density functional calculations on structural materials for nuclear energy applications and functional materials for photovoltaic energy applications (abstract only).

    PubMed

    Domain, C; Olsson, P; Becquart, C S; Legris, A; Guillemoles, J F

    2008-02-13

    Ab initio density functional theory calculations are carried out in order to predict the evolution of structural materials under aggressive working conditions such as cases with exposure to corrosion and irradiation, as well as to predict and investigate the properties of functional materials for photovoltaic energy applications. Structural metallic materials used in nuclear facilities are subjected to irradiation which induces the creation of large amounts of point defects. These defects interact with each other as well as with the different elements constituting the alloys, which leads to modifications of the microstructure and the mechanical properties. VASP (Vienna Ab initio Simulation Package) has been used to determine the properties of point defect clusters and also those of extended defects such as dislocations. The resulting quantities, such as interaction energies and migration energies, are used in larger scale simulation methods in order to build predictive tools. For photovoltaic energy applications, ab initio calculations are used in order to search for new semiconductors and possible element substitutions for existing ones in order to improve their efficiency.

  2. An evaluation of the accuracy and speed of metagenome analysis tools

    PubMed Central

    Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510

  3. Protein Structure Prediction by Protein Threading

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Liu, Zhijie; Cai, Liming; Xu, Dong

    The seminal work of Bowie, Lüthy, and Eisenberg (Bowie et al., 1991) on "the inverse protein folding problem" laid the foundation of protein structure prediction by protein threading. By using simple measures for fitness of different amino acid types to local structural environments defined in terms of solvent accessibility and protein secondary structure, the authors derived a simple and yet profoundly novel approach to assessing if a protein sequence fits well with a given protein structural fold. Their follow-up work (Elofsson et al., 1996; Fischer and Eisenberg, 1996; Fischer et al., 1996a,b) and the work by Jones, Taylor, and Thornton (Jones et al., 1992) on protein fold recognition led to the development of a new brand of powerful tools for protein structure prediction, which we now term "protein threading." These computational tools have played a key role in extending the utility of all the experimentally solved structures by X-ray crystallography and nuclear magnetic resonance (NMR), providing structural models and functional predictions for many of the proteins encoded in the hundreds of genomes that have been sequenced up to now.

  4. Streamflow prediction using multi-site rainfall obtained from hydroclimatic teleconnection

    NASA Astrophysics Data System (ADS)

    Kashid, S. S.; Ghosh, Subimal; Maity, Rajib

    2010-12-01

Simultaneous variations in weather and climate over widely separated regions are commonly known as "hydroclimatic teleconnections". Rainfall and runoff patterns over continents are found to be significantly teleconnected with large-scale circulation patterns through such hydroclimatic teleconnections. Though such teleconnections exist in nature, they are very difficult to model due to their inherent complexity. Statistical techniques and Artificial Intelligence (AI) tools have gained popularity in modeling hydroclimatic teleconnections owing to their ability to capture the complicated relationship between predictors (e.g., sea surface temperatures) and a predictand (e.g., rainfall). Genetic Programming is one such AI tool; it is capable of capturing nonlinear relationships between predictors and predictand due to its flexible functional structure. In the present study, gridded multi-site weekly rainfall is predicted from El Niño Southern Oscillation (ENSO) indices, Equatorial Indian Ocean Oscillation (EQUINOO) indices, Outgoing Longwave Radiation (OLR) and lagged rainfall at grid points over the catchment, using Genetic Programming. The predicted rainfall is then used in a second Genetic Programming model to predict streamflows. The model is applied to weekly forecasting of streamflow in the Mahanadi River, India, and satisfactory performance is observed.
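    A simplified two-stage sketch of this idea (climate indices to rainfall, then rainfall plus lagged rainfall to streamflow) is shown below. It uses the gplearn library as a stand-in for the authors' Genetic Programming implementation, and all data are synthetic placeholders, not the ENSO/EQUINOO/OLR or Mahanadi series.

```python
"""Two-stage genetic-programming sketch: climate indices -> rainfall -> streamflow.
Synthetic data; gplearn stands in for the GP models described in the study."""
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(42)
n_weeks = 300
climate = rng.normal(size=(n_weeks, 3))          # stand-ins for ENSO, EQUINOO, OLR indices
rainfall = 2.0 * climate[:, 0] - climate[:, 1] + 0.3 * rng.normal(size=n_weeks)
lag_rain = np.roll(rainfall, 1)
lag_rain[0] = 0.0
streamflow = 0.6 * rainfall + 0.3 * lag_rain + 0.1 * rng.normal(size=n_weeks)

# Stage 1: evolve an expression mapping climate indices to rainfall.
gp_rain = SymbolicRegressor(population_size=500, generations=10,
                            function_set=('add', 'sub', 'mul'), random_state=0)
gp_rain.fit(climate, rainfall)

# Stage 2: feed predicted rainfall (plus lagged rainfall) into a second GP model.
X_flow = np.column_stack([gp_rain.predict(climate), lag_rain])
gp_flow = SymbolicRegressor(population_size=500, generations=10,
                            function_set=('add', 'sub', 'mul'), random_state=0)
gp_flow.fit(X_flow, streamflow)

pred = gp_flow.predict(X_flow)
r2 = 1.0 - ((streamflow - pred) ** 2).sum() / ((streamflow - streamflow.mean()) ** 2).sum()
print(f"stage-2 in-sample R^2 = {r2:.2f}")
```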

  5. Individualized relapse prediction: Personality measures and striatal and insular activity during reward-processing robustly predict relapse.

    PubMed

    Gowin, Joshua L; Ball, Tali M; Wittmann, Marc; Tapert, Susan F; Paulus, Martin P

    2015-07-01

Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. 68 methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, and a combined model) to generate predictions for each participant regarding their relapse likelihood. 18 individuals relapsed. There were significant group-by-reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models showed good test characteristics: a positive test for relapse yielded a likelihood ratio of 2.63, whereas a negative test had a likelihood ratio of 0.48. These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. Published by Elsevier Ireland Ltd.
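    The sketch below shows the shape of the prediction-and-evaluation step: a random forest trained on a feature table and summarised as positive and negative likelihood ratios, the statistics reported above. The features, outcome labels and data split are synthetic placeholders, not the study's clinical, personality or fMRI data.

```python
"""Random-forest relapse classifier summarised as likelihood ratios (synthetic data)."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 68                                    # cohort size in the study; features here are synthetic
X = rng.normal(size=(n, 10))              # stand-ins for striatal/insular activation, personality scores
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0.8).astype(int)   # 1 = relapse

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
fp = np.sum((pred == 1) & (y_te == 0)); tn = np.sum((pred == 0) & (y_te == 0))
sens, spec = tp / (tp + fn), tn / (tn + fp)
# Guard against division by zero on this tiny illustrative test split.
lr_pos = sens / max(1.0 - spec, 1e-6)
lr_neg = (1.0 - sens) / max(spec, 1e-6)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  LR+={lr_pos:.2f}  LR-={lr_neg:.2f}")
```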

  6. Statistical Tools And Artificial Intelligence Approaches To Predict Fracture In Bulk Forming Processes

    NASA Astrophysics Data System (ADS)

    Di Lorenzo, R.; Ingarao, G.; Fonti, V.

    2007-05-01

The crucial task in the prevention of ductile fracture is the availability of a tool for predicting the occurrence of such defects. The technical literature reports extensive investigation of this topic, with contributions from many authors following different approaches. The main class of approaches concerns the development of fracture criteria: generally, such criteria are expressed by determining a critical value of a damage function that depends on the stress and strain paths, and ductile fracture is assumed to occur when this critical value is reached during the analysed process. A relevant drawback is related to the use of ductile fracture criteria: each criterion usually performs well in predicting fracture for particular stress-strain paths, i.e., it works very well for certain processes but may give poor results for others. On the other hand, approaches based on damage mechanics formulations are very effective from a theoretical point of view, but they are complex and their proper calibration is quite difficult. In this paper, two different approaches are investigated to predict fracture occurrence in cold forming operations. The final aim of the proposed method is a tool of general reliability, i.e., one able to predict fracture for different forming processes. The proposed approach represents a step forward within a research project focused on the utilization of innovative predictive tools for ductile fracture. The paper presents a comparison between an artificial neural network design procedure and an approach based on statistical tools; both approaches aim to predict fracture occurrence or absence based on a set of stress and strain path data. The proposed approach is based on the utilization of available experimental data, for a given material, on fracture occurrence in different processes. More specifically, the approach consists of analyzing experimental tests in which fracture occurs, followed by numerical simulations of those processes in order to track the stress-strain paths in the workpiece region where fracture is expected. These data are used to build a data set that serves both to train an artificial neural network and to perform a statistical analysis aimed at predicting fracture occurrence. The resulting statistical tool is designed and optimized to recognize fracture occurrence. The reliability and predictive capability of the statistical method were compared with those obtained from an artificial neural network developed to predict fracture occurrence. Moreover, the approach is also validated on forming processes characterized by complex fracture mechanics.

  7. Investigation of fatigue strength of tool steels in sheet-bulk metal forming

    NASA Astrophysics Data System (ADS)

    Pilz, F.; Gröbel, D.; Merklein, M.

    2018-05-01

To meet the trend toward efficient production of complex functional components in forming technology, the process class of sheet-bulk metal forming (SBMF) can be applied. SBMF is characterized by the application of bulk forming operations on sheet metal, often in combination with sheet forming operations [1]. The combination of these conventional process classes leads to locally varying load conditions. The resulting load conditions cause high tool loads, which lead to reduced tool life, and an uncontrolled material flow. Several studies have shown that locally modified tool surfaces, so-called tailored surfaces, have the potential to control the material flow and thus to increase the die filling of functional elements [2]. The combination of these modified tool surfaces and high tool loads in SBMF is, moreover, critical for tool life and leads to fatigue. Tool fatigue is hard to predict and, due to a lack of data [3], remains a challenge in tool design. Thus, it is necessary to provide such data for tool steels used in SBMF. The aim of this study is to investigate the influence of tailored surfaces on the fatigue strength of the powder-metallurgical tool steel ASP2023 (1.3344, AISI M3:2), which is typically used in cold forging applications, with a hardness of 60 ± 1 HRC. For this investigation, the rotating bending test is chosen. As tailored surfaces, a DLC coating and a surface manufactured by a high-feed milling process are chosen; as a reference, a polished surface typical of cold forging tools is used. Before the rotating bending test, the surface integrity is characterized by measuring topography and residual stresses. After testing, the measured surface integrity values are correlated with the fracture load cycles reached in order to derive functional relations. Based on these results, the investigated tailored surfaces are evaluated regarding their suitability for modifying tool surfaces within SBMF.

  8. Molecular Classifiers for Acute Kidney Transplant Rejection in Peripheral Blood by Whole Genome Gene Expression Profiling

    PubMed Central

    Kurian, S. M.; Williams, A. N.; Gelbart, T.; Campbell, D.; Mondala, T. S.; Head, S. R.; Horvath, S.; Gaber, L.; Thompson, R.; Whisenant, T.; Lin, W.; Langfelder, P.; Robison, E. H.; Schaffer, R. L.; Fisher, J. S.; Friedewald, J.; Flechner, S. M.; Chan, L. K.; Wiseman, A. C.; Shidban, H.; Mendez, R.; Heilman, R.; Abecassis, M. M.; Marsh, C. L.; Salomon, D. R.

    2015-01-01

    There are no minimally invasive diagnostic metrics for acute kidney transplant rejection (AR), especially in the setting of the common confounding diagnosis, acute dysfunction with no rejection (ADNR). Thus, though kidney transplant biopsies remain the gold standard, they are invasive, have substantial risks, sampling error issues and significant costs and are not suitable for serial monitoring. Global gene expression profiles of 148 peripheral blood samples from transplant patients with excellent function and normal histology (TX; n = 46), AR (n = 63) and ADNR (n = 39), from two independent cohorts were analyzed with DNA microarrays. We applied a new normalization tool, frozen robust multi-array analysis, particularly suitable for clinical diagnostics, multiple prediction tools to discover, refine and validate robust molecular classifiers and we tested a novel one-by-one analysis strategy to model the real clinical application of this test. Multiple three-way classifier tools identified 200 highest value probesets with sensitivity, specificity, positive predictive value, negative predictive value and area under the curve for the validation cohort ranging from 82% to 100%, 76% to 95%, 76% to 95%, 79% to 100%, 84% to 100% and 0.817 to 0.968, respectively. We conclude that peripheral blood gene expression profiling can be used as a minimally invasive tool to accurately reveal TX, AR and ADNR in the setting of acute kidney transplant dysfunction. PMID:24725967

  9. Resting Heart Rate Predicts Depression and Cognition Early after Ischemic Stroke: A Pilot Study.

    PubMed

    Tessier, Arnaud; Sibon, Igor; Poli, Mathilde; Audiffren, Michel; Allard, Michèle; Pfeuty, Micha

    2017-10-01

    Early detection of poststroke depression (PSD) and cognitive impairment (PSCI) remains challenging. It is well documented that the function of autonomic nervous system is associated with depression and cognition. However, their relationship has never been investigated in the early poststroke phase. This pilot study aimed at determining whether resting heart rate (HR) parameters measured in early poststroke phase (1) are associated with early-phase measures of depression and cognition and (2) could be used as new tools for early objective prediction of PSD or PSCI, which could be applicable to patients unable to answer usual questionnaires. Fifty-four patients with first-ever ischemic stroke, without cardiac arrhythmia, were assessed for resting HR and heart rate variability (HRV) within the first week after stroke and for depression and cognition during the first week and at 3 months after stroke. Multiple regression analyses controlled for age, gender, and stroke severity revealed that higher HR, lower HRV, and higher sympathovagal balance (low-frequency/high-frequency ratio of HRV) were associated with higher severity of depressive symptoms within the first week after stroke. Furthermore, higher sympathovagal balance in early phase predicted higher severity of depressive symptoms at the 3-month follow-up, whereas higher HR and lower HRV in early phase predicted lower global cognitive functioning at the 3-month follow-up. Resting HR measurements obtained in early poststroke phase could serve as an objective tool, applicable to patients unable to complete questionnaires, to help in the early prediction of PSD and PSCI. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  10. Integrative Identification of Arabidopsis Mitochondrial Proteome and Its Function Exploitation through Protein Interaction Network

    PubMed Central

    Cui, Jian; Liu, Jinghua; Li, Yuhua; Shi, Tieliu

    2011-01-01

Mitochondria are major players in the production of energy and host several key reactions involved in basic metabolism and the biosynthesis of essential molecules. Currently, the majority of nucleus-encoded mitochondrial proteins are unknown, even for the model plant Arabidopsis. We report a computational framework for predicting Arabidopsis mitochondrial proteins based on a probabilistic model, called a Naive Bayesian Network, which integrates disparate genomic data generated from eight bioinformatics tools, multiple orthologous mappings, protein domain properties and co-expression patterns derived from 1,027 microarray profiles. Through this approach, we predicted 2,311 candidate mitochondrial proteins with an accuracy of 84.67% and a false positive rate of 2.53%. Together with the experimentally confirmed proteins, 2,585 mitochondrial proteins (named CoreMitoP) were identified. We explored the proteins with unknown functions based on a protein-protein interaction network (PIN) and annotated novel functions for 26.65% of the CoreMitoP proteins. Moreover, we found that newly predicted mitochondrial proteins are embedded in particular subnetworks of the PIN, mainly functioning in responses to diverse environmental stresses such as salt, drought, cold, and wounding. Candidate mitochondrial proteins involved in those physiological activities provide useful targets for further investigation. The assigned functions also provide comprehensive information for the Arabidopsis mitochondrial proteome. PMID:21297957
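    A naive Bayes integration of heterogeneous evidence, as used above, amounts to multiplying per-source likelihood ratios onto a prior. The toy sketch below illustrates that arithmetic; the evidence sources, their likelihoods and the prior are invented for the example, not the study's trained parameters.

```python
"""Toy naive-Bayes integration of evidence for 'mitochondrial' vs 'not mitochondrial'."""
import numpy as np

def posterior_mito(evidence, likelihoods, prior=0.1):
    """evidence: dict source -> bool (did the source call the protein mitochondrial).
    likelihoods: dict source -> (P(call | mito), P(call | not mito))."""
    log_odds = np.log(prior / (1.0 - prior))
    for source, called in evidence.items():
        p_pos, p_neg = likelihoods[source]
        if called:
            log_odds += np.log(p_pos / p_neg)
        else:
            log_odds += np.log((1.0 - p_pos) / (1.0 - p_neg))
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical per-source likelihoods (targeting predictor, orthology, co-expression).
likelihoods = {"targeting_predictor": (0.80, 0.10),
               "orthology_mapping":   (0.60, 0.05),
               "coexpression":        (0.70, 0.30)}
evidence = {"targeting_predictor": True, "orthology_mapping": True, "coexpression": False}
print(f"P(mitochondrial | evidence) = {posterior_mito(evidence, likelihoods):.2f}")
```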

  11. Applicability of "MEGA"[Eighth Note] to Sexually Abusive Youth with Low Intellectual Functioning

    ERIC Educational Resources Information Center

    Miccio-Fonseca, L. C.; Rasmussen, Lucinda A.

    2013-01-01

    The study explored the predictive validity of "Multiplex Empirically Guided Inventory of Ecological Aggregates for Assessing Sexually Abusive Children and Adolescents (Ages 4 to 19)" ("MEGA"[eighth note]; Miccio-Fonseca, 2006b), a comprehensive developmentally sensitive risk assessment outcome tool. "MEGA"[eighth note] assesses risk for coarse…

  12. Variations in Driver Behavior: An Analysis of Car-Following Behavior Heterogeneity as a Function of Road Type and Traffic Condition

    DOT National Transportation Integrated Search

    2017-11-15

    Microsimulation modeling is a tool used by practitioners and researchers to predict and evaluate the flow of traffic on real transportation networks. These models are used in practice to inform decisions and thus must reflect a high level of accuracy...

  13. Controlling Release Kinetics of PLG Microspheres Using a Manufacturing Technique

    NASA Astrophysics Data System (ADS)

    Berchane, Nader

    2005-11-01

Controlled drug delivery offers numerous advantages compared with conventional free dosage forms, in particular improved efficacy and patient compliance. Emulsification is a widely used technique to entrap drugs in biodegradable microspheres for controlled drug delivery. The size of the formed microspheres has a significant influence on drug release kinetics. Despite the advantages of controlled drug delivery, previous attempts to achieve predetermined release rates have seen limited success. This study develops a tool to tailor desired release kinetics by combining microsphere batches of specified mean diameter and size distribution. A fluid-mechanics-based correlation that predicts the average size of poly(lactide-co-glycolide) [PLG] microspheres from the manufacturing technique is constructed and validated by comparison with experimental results. The microspheres produced are accurately represented by the Rosin-Rammler mathematical distribution function. A mathematical model is formulated that incorporates the microsphere distribution function to predict the release kinetics from mono-dispersed and poly-dispersed populations. Through this mathematical model, different release kinetics can be achieved by combining different-sized populations in different ratios. The resulting design tool should prove useful for the pharmaceutical industry to achieve designer release kinetics.
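    The design idea can be sketched numerically: represent each batch by a Rosin-Rammler (Weibull) size distribution, assign each size a release rate, and blend batches to shape the overall profile. The first-order release model with rate proportional to 1/d^2 and all parameter values below are illustrative assumptions, not the paper's validated model.

```python
"""Blending Rosin-Rammler-distributed microsphere batches to shape a release curve."""
import numpy as np

def rosin_rammler_pdf(d, d_char, n):
    """Probability density of the Rosin-Rammler (Weibull) size distribution."""
    return (n / d_char) * (d / d_char) ** (n - 1) * np.exp(-(d / d_char) ** n)

def batch_release(t, d_char, n, k0=50.0):
    """Fraction released at times t for one batch: illustrative first-order
    kinetics with rate k = k0 / d**2 (smaller spheres release faster),
    averaged over the batch size distribution."""
    d = np.linspace(1.0, 200.0, 400)                    # diameters, micrometres
    weights = rosin_rammler_pdf(d, d_char, n)
    weights /= weights.sum()
    release = 1.0 - np.exp(-np.outer(t, k0 / d ** 2))   # shape (len(t), len(d))
    return release @ weights

t = np.linspace(0.0, 30.0, 7)                           # days
fine = batch_release(t, d_char=20.0, n=3.0)
coarse = batch_release(t, d_char=80.0, n=3.0)
blend = 0.4 * fine + 0.6 * coarse                       # mix batches to tailor the profile
print(np.round(blend, 2))
```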

  14. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
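    The evaluation metric named above, the concordance index, can be written in a few lines; the self-contained sketch below shows its standard definition for right-censored data on toy values (it is not the study's implementation, and real analyses would use a library routine).

```python
"""Concordance index (C-index) for right-censored time-to-event data (toy example)."""
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs in which the subject with the higher
    predicted risk experiences the event first. event = 1 observed, 0 censored."""
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:   # comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

time = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
event = np.array([1, 0, 1, 1, 1])                      # 0 = censored
risk = np.array([2.0, 0.5, 1.5, 3.0, 1.0])             # higher = predicted higher hazard
print(f"C-index = {concordance_index(time, event, risk):.2f}")   # 0.88 on this toy data
```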

  15. Plant membrane proteomics.

    PubMed

    Ephritikhine, Geneviève; Ferro, Myriam; Rolland, Norbert

    2004-12-01

Plant membrane proteins are involved in many different functions according to their location in the cell. For instance, the chloroplast has two membrane systems, the thylakoids and the envelope, with specialized membrane proteins for photosynthesis and for metabolite and ion transport, respectively. Although recent advances in sample preparation and analytical techniques have been achieved for the study of membrane proteins, the characterization of these proteins, especially the hydrophobic ones, is still challenging. The present review highlights recent advances in methodologies for the identification of plant membrane proteins from purified subcellular structures. The value of combining several complementary extraction procedures to take into account specific features of membrane proteins is discussed in the light of recent proteomics data, notably for the chloroplast envelope, mitochondrial membranes and the plasma membrane from Arabidopsis. These examples also illustrate how, on the one hand, proteomics can feed bioinformatics for a better definition of prediction tools and how, on the other hand, prediction tools, although not 100% reliable, can give valuable information for biological investigations. In particular, membrane proteomics brings new insights into plant membrane systems, concerning both the membrane compartment where proteins work and their putative cellular functions.

  16. The Cementitious Barriers Partnership (CBP) Software Toolbox Capabilities in Assessing the Degradation of Cementitious Barriers - 13487

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.P.; Burns, H.H.; Langton, C.

    2013-07-01

The Cementitious Barriers Partnership (CBP) Project is a multi-disciplinary, multi-institutional collaboration supported by the U.S. Department of Energy (US DOE) Office of Tank Waste and Nuclear Materials Management. The CBP program has developed a set of integrated tools (based on state-of-the-art models and leaching test methods) that help improve understanding and predictions of the long-term structural, hydraulic and chemical performance of cementitious barriers used in nuclear applications. Tools selected for and developed under this program have been used to evaluate and predict the behavior of cementitious barriers used in near-surface engineered waste disposal systems for periods of performance up to 100 years and longer for operating facilities and longer than 1000 years for waste disposal. The CBP Software Toolbox has produced tangible benefits to the DOE Performance Assessment (PA) community. A review of prior DOE PAs has provided a list of potential opportunities for improving cementitious barrier performance predictions through the use of the CBP software tools. These opportunities include: 1) impact of atmospheric exposure to concrete and grout before closure, such as accelerated slag and Tc-99 oxidation, 2) prediction of changes in Kd/mobility as a function of time that result from changing pH and redox conditions, 3) concrete degradation from rebar corrosion due to carbonation, 4) early age cracking from drying and/or thermal shrinkage and 5) degradation due to sulfate attack. The CBP has already had opportunity to provide near-term, tangible support to ongoing DOE-EM PAs such as the Savannah River Saltstone Disposal Facility (SDF) by providing a sulfate attack analysis that predicts the extent and damage that sulfate ingress will have on the concrete vaults over extended time (i.e., > 1000 years). This analysis is one of the many technical opportunities in cementitious barrier performance that can be addressed by the DOE-EM sponsored CBP software tools. Modification of the existing tools can provide many opportunities to bring defense in depth in prediction of the performance of cementitious barriers over time. (authors)

  17. Prediction of liver disease in patients whose liver function tests have been checked in primary care: model development and validation using population-based observational cohorts.

    PubMed

    McLernon, David J; Donnan, Peter T; Sullivan, Frank M; Roderick, Paul; Rosenberg, William M; Ryder, Steve D; Dillon, John F

    2014-06-02

    To derive and validate a clinical prediction model to estimate the risk of liver disease diagnosis following liver function tests (LFTs) and to convert the model to a simplified scoring tool for use in primary care. Population-based observational cohort study of patients in Tayside Scotland identified as having their LFTs performed in primary care and followed for 2 years. Biochemistry data were linked to secondary care, prescriptions and mortality data to ascertain baseline characteristics of the derivation cohort. A separate validation cohort was obtained from 19 general practices across the rest of Scotland to externally validate the final model. Primary care, Tayside, Scotland. Derivation cohort: LFT results from 310 511 patients. After exclusions (including: patients under 16 years, patients having initial LFTs measured in secondary care, bilirubin >35 μmol/L, liver complications within 6 weeks and history of a liver condition), the derivation cohort contained 95 977 patients with no clinically apparent liver condition. Validation cohort: after exclusions, this cohort contained 11 653 patients. Diagnosis of a liver condition within 2 years. From the derivation cohort (n=95 977), 481 (0.5%) were diagnosed with a liver disease. The model showed good discrimination (C-statistic=0.78). Given the low prevalence of liver disease, the negative predictive values were high. Positive predictive values were low but rose to 20-30% for high-risk patients. This study successfully developed and validated a clinical prediction model and subsequent scoring tool, the Algorithm for Liver Function Investigations (ALFI), which can predict liver disease risk in patients with no clinically obvious liver disease who had their initial LFTs taken in primary care. ALFI can help general practitioners focus referral on a small subset of patients with higher predicted risk while continuing to address modifiable liver disease risk factors in those at lower risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
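    A compressed sketch of the modelling step behind such a scoring tool is given below: fit a logistic model on routinely collected predictors and report discrimination as a C-statistic (ROC AUC). The predictors, coefficients and cohort are synthetic placeholders and do not reproduce ALFI.

```python
"""Logistic risk model with a C-statistic, as a stand-in for a clinical scoring tool."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([
    rng.normal(45, 15, n),        # age (years)
    rng.normal(30, 20, n),        # an LFT analyte, illustrative units
    rng.binomial(1, 0.2, n),      # a binary risk-factor flag
])
# Synthetic rare outcome generated from a known linear predictor.
logit = -6.0 + 0.02 * X[:, 0] + 0.03 * X[:, 1] + 1.0 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent C-statistic = {auc:.2f}")
```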

  18. Genome-wide protein-protein interactions and protein function exploration in cyanobacteria

    PubMed Central

    Lv, Qi; Ma, Weimin; Liu, Hui; Li, Jiang; Wang, Huan; Lu, Fang; Zhao, Chen; Shi, Tieliu

    2015-01-01

Genome-wide network analysis is well suited to the study of proteins of unknown function. Here, we explored protein functions and biological mechanisms based on an inferred high-confidence protein-protein interaction (PPI) network in cyanobacteria. We integrated data from seven different sources and predicted 1,997 PPIs, which were evaluated against experimental evidence on molecular mechanisms, text mining of the literature for direct and indirect evidence, and conservation of "interologs". Combining the predicted PPIs with known PPIs, we obtained 4,715 non-redundant PPIs (involving 3,231 proteins and covering over 90% of the genome) and used them to generate the PPI network. Based on the PPI network, Gene Ontology (GO) terms were assigned to function-unknown proteins. Functional modules were identified by dissecting the PPI network into sub-networks and analyzing pathway enrichment, with which we investigated novel functions of the underlying proteins in protein complexes and pathways. Examples from photosynthesis and DNA repair indicate that the network approach is a powerful tool for protein function analysis. Overall, this systems biology approach provides new insight for subsequent functional analysis of PPIs in cyanobacteria. PMID:26490033

  19. A thermal sensation prediction tool for use by the profession

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fountain, M.E.; Huizenga, C.

    1997-12-31

    As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.

  20. Feature-based classification of amino acid substitutions outside conserved functional protein domains.

    PubMed

    Gemovic, Branislava; Perovic, Vladimir; Glisic, Sanja; Veljkovic, Nevena

    2013-01-01

There are more than 500 amino acid substitutions in each human genome, and bioinformatics tools contribute irreplaceably to determining their functional effects. We have developed a feature-based algorithm for the detection of mutations outside conserved functional domains (CFDs) and compared its classification efficacy with the most commonly used phylogeny-based tools, PolyPhen-2 and SIFT. The new algorithm is based on the informational spectrum method (ISM), a feature-based technique, and statistical analysis. Our dataset contained neutral polymorphisms and mutations associated with myeloid malignancies from the epigenetic regulators ASXL1, DNMT3A, EZH2, and TET2. PolyPhen-2 and SIFT had significantly lower accuracies in predicting the effects of amino acid substitutions outside CFDs than expected, with especially low sensitivity. On the other hand, only the ISM algorithm showed statistically significant classification of these sequences, outperforming PolyPhen-2 and SIFT by 15% and 13%, respectively. These results suggest that feature-based methods, like ISM, are more suitable than phylogeny-based tools for the classification of amino acid substitutions outside CFDs.

  1. Neural network models - a novel tool for predicting the efficacy of growth hormone (GH) therapy in children with short stature.

    PubMed

    Smyczynska, Joanna; Hilczer, Maciej; Smyczynska, Urszula; Stawerska, Renata; Tadeusiewicz, Ryszard; Lewinski, Andrzej

    2015-01-01

The leading methods for predicting the effectiveness of growth hormone (GH) therapy are multiple linear regression (MLR) models. To the best of our knowledge, we are the first to apply artificial neural networks (ANNs) to this problem. For ANNs, there is no need to assume the functional form linking independent and dependent variables. The aim of the study is to compare ANN and MLR models of GH therapy effectiveness. The analysis comprised data from 245 GH-deficient children (170 boys) treated with GH up to final height (FH). Independent variables included: patients' height, pre-treatment height velocity, chronological age, bone age, gender, pubertal status, parental heights, GH peak in 2 stimulation tests, and IGF-I concentration. The output variable was FH. For the testing dataset, the MLR model predicted FH SDS with an average error (RMSE) of 0.64 SD, explaining 34.3% of its variability; the ANN model derived on the same pre-processed data predicted FH SDS with an RMSE of 0.60 SD, explaining 42.0% of its variability; and the ANN model derived on raw data predicted FH with an RMSE of 3.9 cm (0.63 SD), explaining 78.7% of its variability. ANNs seem to be a valuable tool for predicting GH treatment effectiveness, especially since they can be applied to raw clinical data.
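    A side-by-side sketch of the two model families compared above (multiple linear regression versus a small feed-forward network), with RMSE as the error metric, is given below. The predictors and outcome are synthetic stand-ins, not the clinical variables or cohort from the study.

```python
"""MLR vs. a small neural network on synthetic stand-ins for the clinical predictors."""
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 245                                   # cohort size in the study; data here are synthetic
X = rng.normal(size=(n, 6))               # stand-ins for height SDS, bone age, GH peak, IGF-I, ...
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] ** 2 + 0.2 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

for name, model in [("MLR", mlr), ("ANN", ann)]:
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.2f}")
```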

  2. Functional Investigations of HNF1A Identify Rare Variants as Risk Factors for Type 2 Diabetes in the General Population

    PubMed Central

    Najmi, Laeya Abdoli; Aukrust, Ingvild; Flannick, Jason; Molnes, Janne; Burtt, Noel; Molven, Anders; Groop, Leif; Altshuler, David; Johansson, Stefan; Njølstad, Pål Rasmus

    2017-01-01

Variants in HNF1A encoding hepatocyte nuclear factor 1α (HNF-1A) are associated with maturity-onset diabetes of the young form 3 (MODY 3) and type 2 diabetes. We investigated whether functional classification of HNF1A rare coding variants can inform models of diabetes risk prediction in the general population by analyzing the effect of 27 HNF1A variants identified in well-phenotyped populations (n = 4,115). Bioinformatics tools classified 11 variants as likely pathogenic, but these variants showed no association with diabetes risk (combined minor allele frequency [MAF] 0.22%; odds ratio [OR] 2.02; 95% CI 0.73–5.60; P = 0.18). However, a different set of 11 variants that reduced HNF-1A transcriptional activity to <60% of normal (wild-type) activity was strongly associated with diabetes in the general population (combined MAF 0.22%; OR 5.04; 95% CI 1.99–12.80; P = 0.0007). Our functional investigations indicate that 0.44% of the population carry HNF1A variants that result in a substantially increased risk for developing diabetes. These results suggest that functional characterization of variants within MODY genes may overcome the limitations of bioinformatics tools for the purposes of presymptomatic diabetes risk prediction in the general population. PMID:27899486

  3. Diagnosis of edge condition based on force measurement during milling of composites

    NASA Astrophysics Data System (ADS)

    Felusiak, Agata; Twardowski, Paweł

    2018-04-01

This paper presents comparative results for forecasting cutting tool wear with the application of different methods of diagnostic inference based on the measurement of cutting force components. The research was carried out during the milling of the Duralcan F3S.10S aluminum-ceramic composite. Prediction of the tool wear was based on one-variable and two-variable regression models and on Multilayer Perceptron (MLP) and Radial Basis Function (RBF) neural networks. Forecasting the condition of the cutting tool on the basis of cutting forces yielded very satisfactory results.

  4. Genomics Portals: integrative web-platform for mining genomics data.

    PubMed

    Shinde, Kaustubh; Phatak, Mukta; Johannes, Freudenberg M; Chen, Jing; Li, Qian; Vineet, Joshi K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario

    2010-01-13

    A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  5. Genomics Portals: integrative web-platform for mining genomics data

    PubMed Central

    2010-01-01

    Background A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Results Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. Conclusion The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org. PMID:20070909

  6. A Comparison of Predictive Thermo and Water Solvation Property Prediction Tools and Experimental Data for Selected Traditional Chemical Warfare Agents and Simulants II: COSMO RS and COSMOTherm

    DTIC Science & Technology

    2017-04-01


  7. Temperature and Material Flow Prediction in Friction-Stir Spot Welding of Advanced High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Miles, M.; Karki, U.; Hovanski, Y.

    2014-10-01

Friction-stir spot welding (FSSW) has been shown to be capable of joining advanced high-strength steel, with its flexibility in controlling the heat of welding and the resulting microstructure of the joint. This makes FSSW a potential alternative to resistance spot welding if tool life is sufficiently high, and if machine spindle loads are sufficiently low that the process can be implemented on an industrial robot. Robots for spot welding can typically sustain vertical loads of about 8 kN, but FSSW at tool speeds of less than 3000 rpm causes loads that are too high, in the range of 11-14 kN. Therefore, in the current work, tool speeds of 5000 rpm were employed to generate heat more quickly and to reduce welding loads to acceptable levels. Si3N4 tools were used for the welding experiments on 1.2-mm DP 980 steel. The FSSW process was modeled with a finite element approach using the Forge® software. An updated Lagrangian scheme with explicit time integration was employed to predict the flow of the sheet material, subjected to boundary conditions of a rotating tool and a fixed backing plate. Material flow was calculated from a velocity field that is two-dimensional, but heat generated by friction was computed by a novel approach, where the rotational velocity component imparted to the sheet by the tool surface was included in the thermal boundary conditions. An isotropic, viscoplastic Norton-Hoff law was used to compute the material flow stress as a function of strain, strain rate, and temperature. The model predicted welding temperatures to within 4%, and the position of the joint interface to within 10%, of the experimental results.

  8. Temperature and Material Flow Prediction in Friction-Stir Spot Welding of Advanced High-Strength Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles, Michael; Karki, U.; Hovanski, Yuri

Friction-stir spot welding (FSSW) has been shown to be capable of joining advanced high-strength steel, with its flexibility in controlling the heat of welding and the resulting microstructure of the joint. This makes FSSW a potential alternative to resistance spot welding if tool life is sufficiently high, and if machine spindle loads are sufficiently low that the process can be implemented on an industrial robot. Robots for spot welding can typically sustain vertical loads of about 8 kN, but FSSW at tool speeds of less than 3000 rpm causes loads that are too high, in the range of 11–14 kN. Therefore, in the current work, tool speeds of 5000 rpm were employed to generate heat more quickly and to reduce welding loads to acceptable levels. Si3N4 tools were used for the welding experiments on 1.2-mm DP 980 steel. The FSSW process was modeled with a finite element approach using the Forge® software. An updated Lagrangian scheme with explicit time integration was employed to predict the flow of the sheet material, subjected to boundary conditions of a rotating tool and a fixed backing plate. Material flow was calculated from a velocity field that is two-dimensional, but heat generated by friction was computed by a novel approach, where the rotational velocity component imparted to the sheet by the tool surface was included in the thermal boundary conditions. An isotropic, viscoplastic Norton-Hoff law was used to compute the material flow stress as a function of strain, strain rate, and temperature. The model predicted welding temperatures to within 4 percent, and the position of the joint interface to within 10 percent, of the experimental results.

  9. Computational prediction of hinge axes in proteins

    PubMed Central

    2014-01-01

    Background A protein's function is determined by the wide range of motions exhibited by its 3D structure. However, current experimental techniques are not able to reliably provide the level of detail required for elucidating the exact mechanisms of protein motion essential for effective drug screening and design. Computational tools are instrumental in the study of the underlying structure-function relationship. We focus on a special type of proteins called "hinge proteins" which exhibit a motion that can be interpreted as a rotation of one domain relative to another. Results This work proposes a computational approach that uses the geometric structure of a single conformation to predict the feasible motions of the protein and is founded in recent work from rigidity theory, an area of mathematics that studies flexibility properties of general structures. Given a single conformational state, our analysis predicts a relative axis of motion between two specified domains. We analyze a dataset of 19 structures known to exhibit this hinge-like behavior. For 15, the predicted axis is consistent with a motion to a second, known conformation. We present a detailed case study for three proteins whose dynamics have been well-studied in the literature: calmodulin, the LAO binding protein and the Bence-Jones protein. Conclusions Our results show that incorporating rigidity-theoretic analyses can lead to effective computational methods for understanding hinge motions in macromolecules. This initial investigation is the first step towards a new tool for probing the structure-dynamics relationship in proteins. PMID:25080829

  10. Statistical modelling coupled with LC-MS analysis to predict human upper intestinal absorption of phytochemical mixtures.

    PubMed

    Selby-Pham, Sophie N B; Howell, Kate S; Dunshea, Frank R; Ludbey, Joel; Lutz, Adrian; Bennett, Louise

    2018-04-15

A diet rich in phytochemicals confers benefits for health by reducing the risk of chronic diseases via regulation of oxidative stress and inflammation (OSI). For optimal protective bio-efficacy, the time required for phytochemicals and their metabolites to reach maximal plasma concentrations (Tmax) should be synchronised with the time of increased OSI. A statistical model has been reported to predict Tmax of individual phytochemicals based on molecular mass and lipophilicity. We report the application of the model for predicting the absorption profile of an uncharacterised phytochemical mixture, herein referred to as the 'functional fingerprint'. First, chemical profiles of phytochemical extracts were acquired using liquid chromatography mass spectrometry (LC-MS), then the molecular features for respective components were used to predict their plasma absorption maximum, based on molecular mass and lipophilicity. This method of 'functional fingerprinting' of plant extracts represents a novel tool for understanding and optimising the health efficacy of plant extracts. Copyright © 2017 Elsevier Ltd. All rights reserved.
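    The published model's coefficients are not reproduced here, but the sketch below shows how such a mass/lipophilicity model would be applied to an LC-MS feature table to build a 'functional fingerprint' of predicted absorption times. The linear form and all numbers are placeholders, not the reported model.

```python
"""Applying a (placeholder) mass/lipophilicity absorption model to LC-MS features."""

def predict_tmax_minutes(mass_da, logp, a=0.05, b=8.0, c=20.0):
    """Hypothetical model: heavier and more lipophilic compounds reach their
    plasma maximum later. Coefficients a, b, c are illustrative only."""
    return c + a * mass_da + b * max(logp, 0.0)

# Illustrative LC-MS features: (feature id, monoisotopic mass in Da, estimated logP).
features = [("F001", 290.1, 1.2), ("F002", 610.2, 2.8), ("F003", 180.0, -0.5)]
fingerprint = sorted((predict_tmax_minutes(m, lp), fid) for fid, m, lp in features)
for tmax, fid in fingerprint:
    print(f"{fid}: predicted Tmax ~ {tmax:.0f} min")
```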

  11. Identification of genetic variants predictive of early onset pancreatic cancer through a population science analysis of functional genomic datasets

    PubMed Central

    Chen, Jinyun; Wu, Xifeng; Huang, Yujing; Chen, Wei; Brand, Randall E.; Killary, Ann M.; Sen, Subrata; Frazier, Marsha L.

    2016-01-01

Biomarkers for the early detection of pancreatic cancer (PC) are urgently needed. Our purpose was to identify a panel of genetic variants that, combined, can predict increased risk for early-onset PC and thereby identify individuals who should begin screening at an early age. Previously, using a functional genomic approach, we identified genes that were aberrantly expressed in early pathways to PC tumorigenesis. We now report the discovery of single nucleotide polymorphisms (SNPs) in these genes associated with early age at diagnosis of PC using a two-phase study design. In silico and bioinformatics tools were used to examine the functional relevance of the identified SNPs. Eight SNPs were consistently associated with age at diagnosis in the discovery phase, validation phase and pooled analysis. Further analysis of the joint effects of these 8 SNPs showed that, compared to participants carrying none of these unfavorable genotypes (median age at PC diagnosis 70 years), those carrying 1–2, 3–4, or 5 or more unfavorable genotypes had median ages at diagnosis of 64, 63, and 62 years, respectively (P = 3.0E–04). A gene-dosage effect was observed, with age at diagnosis inversely related to the number of unfavorable genotypes (Ptrend = 1.0E–04). Using bioinformatics tools, we found that all 8 SNPs were predicted to play functional roles in the disruption of transcription factor and/or enhancer binding sites, and most of them were expression quantitative trait loci (eQTL) of the target genes. The panel of genetic markers identified may serve as susceptibility markers for earlier PC diagnosis. PMID:27486767

  12. Spot and Runway Departure Advisor (SARDA)

    NASA Technical Reports Server (NTRS)

    Jung, Yoon

    2016-01-01

    Spot and Runway Departure Advisor (SARDA) is a decision support tool to assist airline ramp controllers and ATC tower controllers to manage traffic on the airport surface to significantly improve efficiency and predictability in surface operations. The core function of the tool is the runway scheduler which generates an optimal solution for runway sequence and schedule of departure aircraft, which would minimize system delay and maximize runway throughput. The presentation also discusses the latest status of NASA's current surface research through a collaboration with an airline partner, where a tool is developed for airline ramp operators to assist departure pushback operations. The presentation describes the concept of the SARDA tool and results from human-in-the-loop simulations conducted in 2012 for Dallas-Ft. Worth International Airport and 2014 for Charlotte airport ramp tower.

  13. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

A method employing Web-based tools is presented for designing optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance depends on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public domain algorithm described here may be usefully employed to aid in molecular beacon design.

  14. Virus-Based MicroRNA Silencing in Plants

    PubMed Central

    Sha, Aihua; Zhao, Jinping; Yin, Kangquan; Tang, Yang; Wang, Yan; Wei, Xiang; Hong, Yiguo; Liu, Yule

    2014-01-01

    MicroRNAs (miRNAs) play pivotal roles in various biological processes across kingdoms. Many plant miRNAs have been experimentally identified or predicted by bioinformatics mining of small RNA databases. However, the functions of these miRNAs remain largely unknown due to the lack of effective genetic tools. Here, we report a virus-based microRNA silencing (VbMS) system that can be used for functional analysis of plant miRNAs. VbMS is performed through tobacco rattle virus-based expression of miRNA target mimics to silence endogenous miRNAs. VbMS of either miR172 or miR165/166 caused developmental defects in Nicotiana benthamiana. VbMS of miR319 reduced the complexity of tomato (Solanum lycopersicum) compound leaves. These results demonstrate that tobacco rattle virus-based VbMS is a powerful tool to silence endogenous miRNAs and to dissect their functions in different plant species. PMID:24296072

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan

There exist hundreds of building energy software tools, both web- and disk-based. These tools exhibit considerable range in approach and creativity, with some being highly specialized and others able to consider the building as a whole. However, users are faced with a dizzying array of choices and, often, conflicting results. The fragmentation of development and deployment efforts has hampered tool quality and market penetration. The purpose of this review is to provide information for defining the desired characteristics of residential energy tools, and to encourage future tool development that improves on current practice. This project entails (1) creating a framework for describing possible technical and functional characteristics of such tools, (2) mapping existing tools onto this framework, (3) exploring issues of tool accuracy, and (4) identifying "best practice" and strategic opportunities for tool design. We evaluated 50 web-based residential calculators, 21 of which we regard as "whole-house" tools (i.e., covering a range of end uses). Of the whole-house tools, 13 provide open-ended energy calculations, 5 normalize the results to actual costs (a.k.a. "bill-disaggregation tools"), and 3 provide both options. Across the whole-house tools, we found a range of 5 to 58 house-descriptive features (out of 68 identified in our framework) and 2 to 41 analytical and decision-support features (55 possible). We also evaluated 15 disk-based residential calculators, six of which are whole-house tools. Of these tools, 11 provide open-ended calculations, 1 normalizes the results to actual costs, and 3 provide both options. These tools offered ranges of 18 to 58 technical features (70 possible) and 10 to 40 user- and decision-support features (56 possible). The comparison shows that such tools can employ many approaches and levels of detail. Some tools require a relatively small number of well-considered inputs while others ask a myriad of questions and still miss key issues. The value of detail has a lot to do with the type of question(s) being asked by the user (e.g., the availability of dozens of miscellaneous appliances is immaterial for a user attempting to evaluate the potential for space-heating savings by installing a new furnace). More detail does not, according to our evaluation, automatically translate into a "better" or "more accurate" tool. Efforts to quantify and compare the "accuracy" of these tools are difficult at best, and prior tool-comparison studies have not undertaken this in a meaningful way. The ability to evaluate accuracy is inherently limited by the availability of measured data. Furthermore, certain tool outputs can only be measured against "actual" values that are themselves calculated (e.g., HVAC sizing), while others are rarely if ever available (e.g., measured energy use or savings for specific measures). It is similarly challenging to understand the sources of inaccuracies. There are many ways in which quantitative errors can occur in tools, ranging from programming errors to problems inherent in a tool's design. Due to hidden assumptions and non-variable "defaults", most tools cannot be fully tested across the desirable range of building configurations, operating conditions, weather locations, etc. Many factors conspire to confound performance comparisons among tools. Differences in inputs can range from weather city to types of HVAC systems, to appliance characteristics, to occupant-driven effects such as thermostat management.
Differences in results would thus no doubt emerge from an extensive comparative exercise, but the sources or implications of these differences for the purposes of accuracy evaluation or tool development would remain largely unidentifiable (especially given the paucity of technical documentation available for most tools). For the tools that we tested, the predicted energy bills for a single test building ranged widely (by nearly a factor of three), and far more so at the end-use level. Most tools over-predicted energy bills and all over-predicted consumption. Variability was lower among disk-based tools,but they more significantly over-predicted actual use. The deviations (over-predictions) we observed from actual bills corresponded to up to $1400 per year (approx. 250 percent of the actual bills). For bill-disaggregation tools, wherein the results are forced to equal actual bills, the accuracy issue shifts to whether or not the total is properly attributed to the various end uses and to whether savings calculations are done accurately (a challenge that demands relatively rare end-use data). Here, too, we observed a number of dubious results. Energy savings estimates automatically generated by the web-based tools varied from $46/year (5 percent of predicted use) to $625/year (52 percent of predicted use).« less

  16. An integrated computational approach can classify VHL missense mutations according to risk of clear cell renal carcinoma

    PubMed Central

    Gossage, Lucy; Pires, Douglas E. V.; Olivera-Nappa, Álvaro; Asenjo, Juan; Bycroft, Mark; Blundell, Tom L.; Eisen, Tim

    2014-01-01

    Mutations in the von Hippel–Lindau (VHL) gene are pathogenic in VHL disease, congenital polycythaemia and clear cell renal carcinoma (ccRCC). pVHL forms a ternary complex with elongin C and elongin B, critical for pVHL stability and function, which interacts with Cullin-2 and RING-box protein 1 to target hypoxia-inducible factor for polyubiquitination and proteasomal degradation. We describe a comprehensive database of missense VHL mutations linked to experimental and clinical data. We use predictions from in silico tools to link the functional effects of missense VHL mutations to phenotype. The risk of ccRCC in VHL disease is linked to the degree of destabilization resulting from missense mutations. An optimized binary classification system (symphony), which integrates predictions from five in silico methods, can predict the risk of ccRCC associated with VHL missense mutations with high sensitivity and specificity. We use symphony to generate predictions for risk of ccRCC for all possible VHL missense mutations and present these predictions, in association with clinical and experimental data, in a publicly available, searchable web server. PMID:24969085
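
    The abstract does not give the integration formula behind symphony; the sketch below is only a minimal illustration, assuming five hypothetical per-mutation predictor scores combined by logistic regression, of how such an integrated binary classifier could be set up.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows: VHL missense mutations; columns: scores from five in silico tools.
    # All values and labels below are illustrative placeholders, not the authors' data.
    X = np.array([
        [0.91, 0.72, 3.1, 0.88, 0.65],
        [0.12, 0.30, 0.4, 0.20, 0.10],
        [0.85, 0.66, 2.7, 0.91, 0.70],
        [0.05, 0.22, 0.2, 0.15, 0.08],
    ])
    y = np.array([1, 0, 1, 0])  # 1 = mutation associated with ccRCC risk, 0 = not

    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba(X)[:, 1])  # per-mutation probability of ccRCC risk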

  17. Ab initio RNA folding by discrete molecular dynamics: From structure prediction to folding mechanisms

    PubMed Central

    Ding, Feng; Sharma, Shantanu; Chalasani, Poornima; Demidov, Vadim V.; Broude, Natalia E.; Dokholyan, Nikolay V.

    2008-01-01

    RNA molecules with novel functions have revived interest in the accurate prediction of RNA three-dimensional (3D) structure and folding dynamics. However, existing methods are inefficient in automated 3D structure prediction. Here, we report a robust computational approach for rapid folding of RNA molecules. We develop a simplified RNA model for discrete molecular dynamics (DMD) simulations, incorporating base-pairing and base-stacking interactions. We demonstrate correct folding of 150 structurally diverse RNA sequences. The majority of DMD-predicted 3D structures have <4 Å deviations from experimental structures. The secondary structures corresponding to the predicted 3D structures consist of 94% native base-pair interactions. Folding thermodynamics and kinetics of tRNAPhe, pseudoknots, and mRNA fragments in DMD simulations are in agreement with previous experimental findings. Folding of RNA molecules features transient, non-native conformations, suggesting non-hierarchical RNA folding. Our method allows rapid conformational sampling of RNA folding, with computational time increasing linearly with RNA length. We envision this approach as a promising tool for RNA structural and functional analyses. PMID:18456842

  18. Identification of Extracellular Segments by Mass Spectrometry Improves Topology Prediction of Transmembrane Proteins.

    PubMed

    Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E

    2017-02-13

    Transmembrane proteins play a crucial role in signaling, ion transport and nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.

  19. Toward a preoperative planning tool for brain tumor resection therapies.

    PubMed

    Coffey, Aaron M; Miga, Michael I; Chen, Ishita; Thompson, Reid C

    2013-01-01

    Neurosurgical procedures involving tumor resection require surgical planning such that the surgical path to the tumor is determined to minimize the impact on healthy tissue and brain function. This work demonstrates a predictive tool to aid neurosurgeons in planning tumor resection therapies by finding an optimal model-selected patient orientation that minimizes lateral brain shift in the field of view. Such orientations may facilitate tumor access and removal, possibly reduce the need for retraction, and could minimize the impact of brain shift on image-guided procedures. In this study, preoperative magnetic resonance images were utilized in conjunction with pre- and post-resection laser range scans of the craniotomy and cortical surface to produce patient-specific finite element models of intraoperative shift for 6 cases. These cases were used to calibrate a model (i.e., provide general rules for the application of patient positioning parameters) as well as to determine the predictive capabilities of the current model-based framework. Finally, an objective function is proposed that minimizes shift subject to patient positioning parameters. Patient positioning parameters were then optimized and compared to our neurosurgeon's choices as a preliminary study. The proposed model-driven brain shift minimization objective function suggests an overall reduction of brain shift by 23 % over experiential methods. This work recasts surgical simulation from a trial-and-error process to one where options are presented to the surgeon arising from an optimization of surgical goals. To our knowledge, this is the first realization of an evaluative tool for surgical planning that attempts to optimize surgical approach by means of shift minimization in this manner.

  20. Prediction of N-nitrosodimethylamine (NDMA) formation as a disinfection by-product.

    PubMed

    Kim, Jongo; Clevenger, Thomas E

    2007-06-25

    This study investigated the possibility of a statistical model application for the prediction of N-nitrosodimethylamine (NDMA) formation. The NDMA formation was studied as a function of monochloramine concentration (0.001-5 mM) at fixed dimethylamine (DMA) concentrations of 0.01 mM or 0.05 mM. Excellent linear correlations were observed between the molar ratio of monochloramine to DMA and the NDMA formation on a log scale at pH 7 and 8. When the developed prediction equation was applied to a previously reported study, a good result was obtained. The statistical model appears to adequately predict NDMA concentrations if other NDMA precursors are excluded. Using the predictive tool, a simple and approximate calculation of NDMA formation can be obtained in drinking water systems.
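
    A minimal sketch of the kind of log-scale linear fit described above (NDMA formation versus the monochloramine:DMA molar ratio) is shown below; the data points and fitted coefficients are illustrative assumptions, not the study's measurements or its published equation.

    import numpy as np

    ratio = np.array([0.1, 1.0, 10.0, 100.0])       # monochloramine/DMA molar ratio (hypothetical)
    ndma = np.array([2e-9, 1.5e-8, 1.2e-7, 9e-7])   # NDMA formed, mol/L (hypothetical)

    # Fit log10(NDMA) = intercept + slope * log10(ratio)
    slope, intercept = np.polyfit(np.log10(ratio), np.log10(ndma), 1)

    def predict_ndma(molar_ratio):
        """Predict NDMA formation (mol/L) from the fitted log-log relationship."""
        return 10 ** (intercept + slope * np.log10(molar_ratio))

    print(predict_ndma(5.0))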

  1. Systematic analysis of snake neurotoxins' functional classification using a data warehousing approach.

    PubMed

    Siew, Joyce Phui Yee; Khan, Asif M; Tan, Paul T J; Koh, Judice L Y; Seah, Seng Hong; Koo, Chuay Yeng; Chai, Siaw Ching; Armugam, Arunmozhiarasi; Brusic, Vladimir; Jeyaseelan, Kandiah

    2004-12-12

    Sequence annotations, functional and structural data on snake venom neurotoxins (svNTXs) are scattered across multiple databases and literature sources. Sequence annotations and structural data are available in the public molecular databases, while functional data are almost exclusively available in published articles. There is a need for a specialized svNTX database that contains NTX entries which are organized, well annotated and classified in a systematic manner. We have systematically analyzed svNTXs and classified them using structure-function groups based on their structural, functional and phylogenetic properties. Using conserved motifs in each phylogenetic group, we built an intelligent module for the prediction of structural and functional properties of unknown NTXs. We also developed an annotation tool to aid the functional prediction of newly identified NTXs as an additional resource for the venom research community. We created a searchable online database of NTX protein sequences (http://research.i2r.a-star.edu.sg/Templar/DB/snake_neurotoxin). This database can also be found under the Swiss-Prot Toxin Annotation Project website (http://www.expasy.org/sprot/).

  2. cnvScan: a CNV screening and annotation tool to improve the clinical utility of computational CNV prediction from exome sequencing data.

    PubMed

    Samarakoon, Pubudu Saneth; Sorte, Hanne Sørmo; Stray-Pedersen, Asbjørg; Rødningen, Olaug Kristin; Rognes, Torbjørn; Lyle, Robert

    2016-01-14

    With advances in next generation sequencing technology and analysis methods, single nucleotide variants (SNVs) and indels can be detected with high sensitivity and specificity in exome sequencing data. Recent studies have demonstrated the ability to detect disease-causing copy number variants (CNVs) in exome sequencing data. However, exonic CNV prediction programs have shown high false positive CNV counts, which is the major limiting factor for the applicability of these programs in clinical studies. We have developed a tool (cnvScan) to improve the clinical utility of computational CNV prediction in exome data. cnvScan can accept input from any CNV prediction program. cnvScan consists of two steps: CNV screening and CNV annotation. CNV screening evaluates CNV prediction using quality scores and refines this using an in-house CNV database, which greatly reduces the false positive rate. The annotation step provides functionally and clinically relevant information using multiple source datasets. We assessed the performance of cnvScan on CNV predictions from five different prediction programs using 64 exomes from Primary Immunodeficiency (PIDD) patients, and identified PIDD-causing CNVs in three individuals from two different families. In summary, cnvScan reduces the time and effort required to detect disease-causing CNVs by reducing the false positive count and providing annotation. This improves the clinical utility of CNV detection in exome data.
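
    The screening step described above can be pictured as a simple filter over CNV calls; the sketch below is a hypothetical illustration (the field names, quality threshold, and in-house frequency cut-off are assumptions, not cnvScan's actual parameters).

    def screen_cnvs(calls, inhouse_freq, min_quality=30.0, max_db_freq=0.01):
        """Keep CNV calls that pass a quality threshold and are rare in an in-house database."""
        kept = []
        for cnv in calls:
            key = (cnv["chrom"], cnv["start"], cnv["end"], cnv["type"])
            if cnv["quality"] < min_quality:
                continue  # low-confidence prediction
            if inhouse_freq.get(key, 0.0) > max_db_freq:
                continue  # frequently seen in-house, likely artifact or benign
            kept.append(cnv)
        return kept

    calls = [{"chrom": "1", "start": 10000, "end": 25000, "type": "DEL", "quality": 45.0}]
    print(screen_cnvs(calls, inhouse_freq={}))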

  3. The role of nerve monitoring to predict postoperative recurrent laryngeal nerve function in thyroid and parathyroid surgery.

    PubMed

    Eid, Issam; Miller, Frank R; Rowan, Stephanie; Otto, Randal A

    2013-10-01

    To determine the role and efficacy of intraoperative recurrent laryngeal nerve (RLN) stimulation in the prediction of early and permanent postoperative nerve function in thyroid and parathyroid surgery. A retrospective review of thyroid and parathyroid surgeries was performed with calculation of sensitivity and specificity of the response to intraoperative stimulation for different pathological groups. A normal electromyography (EMG) response with 0.5 mA stimulation was considered a positive stimulation response, with postoperative function determined by laryngoscopy. No EMG response at >1-2 mA was considered a negative response. The rates of early and permanent paralysis, as well as sensitivity, specificity, and positive and negative predictive values for postoperative nerve function, were calculated for separate pathological groups. The number of nerves at risk analyzed was 909. The overall early and permanent paralysis rates were 3.1% and 1.2%, respectively, with the highest rate being for Graves' disease cases. The overall sensitivity was 98.4%. The specificity was lower at 62.5% but acceptable in thyroid carcinoma and Graves' disease patients. The majority of nerves with a positive stimulation result and postoperative paralysis on laryngoscopy recovered function in 3 to 12 weeks, showing positive stimulation to be a good predictor of eventual recovery. Stimulation of the RLN during thyroid and parathyroid surgery is a useful tool in predicting postoperative RLN function. The sensitivity of stimulation is high, showing positive stimulation to be an excellent predictor of normal nerve function. Negative stimulation is more predictive of paralysis in cases of thyroid carcinoma and Graves' disease. Level of evidence: 2b. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  4. New in protein structure and function annotation: hotspots, single nucleotide polymorphisms and the 'Deep Web'.

    PubMed

    Bromberg, Yana; Yachdav, Guy; Ofran, Yanay; Schneider, Reinhard; Rost, Burkhard

    2009-05-01

    The rapidly increasing quantity of protein sequence data continues to widen the gap between available sequences and annotations. Comparative modeling suggests some aspects of the 3D structures of approximately half of all known proteins; homology- and network-based inferences annotate some aspect of function for a similar fraction of the proteome. For most known protein sequences, however, there is detailed knowledge about neither their function nor their structure. Comprehensive efforts towards the expert curation of sequence annotations have failed to meet the demand of the rapidly increasing number of available sequences. Only the automated prediction of protein function in the absence of homology can close the gap between available sequences and annotations in the foreseeable future. This review focuses on two novel methods for automated annotation, and briefly presents an outlook on how modern web software may revolutionize the field of protein sequence annotation. First, predictions of protein binding sites and functional hotspots, and the evolution of these into the most successful type of prediction of protein function from sequence, will be discussed. Second, a new tool, comprehensive in silico mutagenesis, which contributes important novel predictions of function and at the same time prepares for the onset of the next sequencing revolution, will be described. While these two new sub-fields of protein prediction represent the breakthroughs that have been achieved methodologically, it will then be argued that a different development might further change the way biomedical researchers benefit from annotations: modern web software can connect the worldwide web in any browser with the 'Deep Web' (i.e., proprietary data resources). The availability of this direct connection, and the resulting access to a wealth of data, may impact drug discovery and development more than any existing method that contributes to protein annotation.

  5. GIANT API: an application programming interface for functional genomics

    PubMed Central

    Roberts, Andrew M.; Wong, Aaron K.; Fisk, Ian; Troyanskaya, Olga G.

    2016-01-01

    GIANT API provides biomedical researchers programmatic access to tissue-specific and global networks in humans and model organisms, and to associated tools, which include functional re-prioritization of existing genome-wide association study (GWAS) data. Using tissue-specific interaction networks, researchers are able to predict relationships between genes specific to a tissue or cell lineage, identify the changing roles of genes across tissues and uncover disease-gene associations. Additionally, GIANT API enables computational tools like NetWAS, which leverages tissue-specific networks for re-prioritization of GWAS results. The web services covered by the API include 144 tissue-specific functional gene networks in human, global functional networks for human and six common model organisms, and the NetWAS method. GIANT API conforms to the REST architecture, which makes it stateless, cacheable and highly scalable. It can be used by a diverse range of clients including web browsers, command terminals, programming languages and standalone apps for data analysis and visualization. The API is freely available for use at http://giant-api.princeton.edu. PMID:27098035
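
    Because the service is a stateless REST API, it can be queried from essentially any HTTP client; the short sketch below uses Python's requests library, with a hypothetical endpoint path and parameter names (consult the documentation at http://giant-api.princeton.edu for the actual routes and response schema).

    import requests

    BASE_URL = "http://giant-api.princeton.edu"

    def query_giant(endpoint, params=None):
        """GET a JSON resource from a REST endpoint (the endpoint path is an assumption)."""
        resp = requests.get(f"{BASE_URL}/{endpoint}", params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()

    # Hypothetical usage (the route and parameter names are placeholders):
    # edges = query_giant("networks/brain/edges", {"gene": "BDNF", "limit": 50})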

  6. A sampling-based method for ranking protein structural models by integrating multiple scores and features.

    PubMed

    Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong

    2011-09-01

    One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score are synthesized by a sampling method. At last, another integrated RBF model ranks the structural models according to the features of sampling distribution. We tested the proposed method by using two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test result shows that our method outperforms any individual scoring function on both best model selection, and overall correlation between the predicted ranking and the actual ranking of structural quality.
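
    As a rough illustration of the integration idea (several per-model scores combined by an RBF-based model into a single quality estimate), the sketch below uses an RBF-kernel support vector regressor; the score columns, values and quality labels are invented placeholders, not the CASP8 or MUFOLD data, and the paper's actual RBF-network training and sampling steps are not reproduced.

    import numpy as np
    from sklearn.svm import SVR

    # Columns: e.g. Opus-CA, Opus-PSP, DFIRE, RAPDF, Cheng Score (all values made up).
    scores = np.array([
        [-1.2, -0.8, -310.0, -95.0, 0.41],
        [-0.3, -0.1, -250.0, -70.0, 0.22],
        [-1.5, -1.1, -330.0, -99.0, 0.55],
    ])
    quality = np.array([0.62, 0.35, 0.71])  # true structural quality, e.g. GDT-TS (hypothetical)

    ranker = SVR(kernel="rbf", C=10.0, gamma="scale").fit(scores, quality)
    print(ranker.predict(scores))  # higher predicted value = better-ranked structural model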

  7. Model-driven discovery of underground metabolic functions in Escherichia coli.

    PubMed

    Guzmán, Gabriela I; Utrilla, José; Nurk, Sergey; Brunk, Elizabeth; Monk, Jonathan M; Ebrahim, Ali; Palsson, Bernhard O; Feist, Adam M

    2015-01-20

    Enzyme promiscuity toward substrates has been discussed in evolutionary terms as providing the flexibility to adapt to novel environments. In the present work, we describe an approach toward exploring such enzyme promiscuity in the space of a metabolic network. This approach leverages genome-scale models, which have been widely used for predicting growth phenotypes in various environments or following a genetic perturbation; however, these predictions occasionally fail. Failed predictions of gene essentiality offer an opportunity for targeting biological discovery, suggesting the presence of unknown underground pathways stemming from enzymatic cross-reactivity. We demonstrate a workflow that couples constraint-based modeling and bioinformatic tools with KO strain analysis and adaptive laboratory evolution for the purpose of predicting promiscuity at the genome scale. Three cases of genes that are incorrectly predicted as essential in Escherichia coli--aspC, argD, and gltA--are examined, and isozyme functions are uncovered for each to a different extent. Seven isozyme functions based on genetic and transcriptional evidence are suggested between the genes aspC and tyrB, argD and astC, gabT and puuE, and gltA and prpC. This study demonstrates how a targeted model-driven approach to discovery can systematically fill knowledge gaps, characterize underground metabolism, and elucidate regulatory mechanisms of adaptation in response to gene KO perturbations.
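
    A minimal sketch of the kind of model-driven essentiality check that underlies this workflow is shown below, using the COBRApy package: knock out a gene in a genome-scale model and compare the predicted growth rate from flux balance analysis. The SBML file path is an assumption (any locally available E. coli model would do), and b0720 (gltA) is used only as an example gene.

    import cobra

    model = cobra.io.read_sbml_model("e_coli_core.xml")  # local model file, assumed available
    wild_type_growth = model.optimize().objective_value

    with model:  # changes inside this block are reverted on exit
        model.genes.get_by_id("b0720").knock_out()  # b0720 = gltA in E. coli nomenclature
        ko_growth = model.optimize().objective_value

    print(f"wild type: {wild_type_growth:.3f}, gltA knockout: {ko_growth:.3f}")
    # A knockout predicted to be lethal that nonetheless grows in the lab points to an
    # unknown underground pathway or promiscuous isozyme, as discussed above.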

  8. Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild

    2010-08-01

    The aim of this study was to investigate the value of computer-based video analysis of general movements for predicting the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31 wks, SD 6 wks; range 23-42 wks) between 10 and 15 weeks post term, when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP for whom information about functional level was available were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.

  9. Link prediction boosted psychiatry disorder classification for functional connectivity network

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    The functional connectivity network (FCN) is an effective tool for psychiatric disorder classification, and represents the cross-correlation of regional blood oxygenation level dependent signals. However, FCNs are often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders and healthy controls from incomplete FCNs, we first 'repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, including Alzheimer's disease, schizophrenia and attention deficit hyperactivity disorder. The experimental results show that our method not only significantly improves classification accuracy, but also efficiently reconstructs the incomplete FCN.
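
    The feature-extraction and boosting steps lend themselves to a short sketch: compute per-node clustering coefficients from each (repaired) FCN and feed them to a boosted ensemble. The graphs and labels below are synthetic placeholders, not the psychiatric datasets used in the paper, and the link-prediction repair step is omitted.

    import numpy as np
    import networkx as nx
    from sklearn.ensemble import AdaBoostClassifier

    def clustering_features(adjacency, threshold=0.5):
        """Binarize an FCN adjacency matrix (ignoring self-connections) and return the
        vector of per-node clustering coefficients."""
        binary = (adjacency > threshold).astype(int)
        np.fill_diagonal(binary, 0)  # drop self-connections
        graph = nx.from_numpy_array(binary)
        coeffs = nx.clustering(graph)
        return np.array([coeffs[node] for node in sorted(coeffs)])

    rng = np.random.default_rng(0)
    fcns = []
    for _ in range(10):                 # ten toy FCNs with 20 brain regions each
        m = rng.random((20, 20))
        fcns.append((m + m.T) / 2)      # symmetrize to mimic a correlation matrix

    X = np.vstack([clustering_features(a) for a in fcns])
    y = rng.integers(0, 2, size=10)     # 1 = patient, 0 = control (toy labels)

    clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
    print(clf.score(X, y))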

  10. Prediction of microRNAs Associated with Human Diseases Based on Weighted k Most Similar Neighbors

    PubMed Central

    Guo, Maozu; Guo, Yahong; Li, Jinbao; Ding, Jian; Liu, Yong; Dai, Qiguo; Li, Jin; Teng, Zhixia; Huang, Yufei

    2013-01-01

    Background The identification of human disease-related microRNAs (disease miRNAs) is important for further investigating their involvement in the pathogenesis of diseases. More experimentally validated miRNA-disease associations have been accumulated recently. On the basis of these associations, it is essential to predict disease miRNAs for various human diseases. It is useful in providing reliable disease miRNA candidates for subsequent experimental studies. Methodology/Principal Findings It is known that miRNAs with similar functions are often associated with similar diseases and vice versa. Therefore, the functional similarity of two miRNAs has been successfully estimated by measuring the semantic similarity of their associated diseases. To effectively predict disease miRNAs, we calculated the functional similarity by incorporating the information content of disease terms and phenotype similarity between diseases. Furthermore, the members of miRNA family or cluster are assigned higher weight since they are more probably associated with similar diseases. A new prediction method, HDMP, based on weighted k most similar neighbors is presented for predicting disease miRNAs. Experiments validated that HDMP achieved significantly higher prediction performance than existing methods. In addition, the case studies examining prostatic neoplasms, breast neoplasms, and lung neoplasms, showed that HDMP can uncover potential disease miRNA candidates. Conclusions The superior performance of HDMP can be attributed to the accurate measurement of miRNA functional similarity, the weight assignment based on miRNA family or cluster, and the effective prediction based on weighted k most similar neighbors. The online prediction and analysis tool is freely available at http://nclab.hit.edu.cn/hdmpred. PMID:23950912
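
    The weighted k-most-similar-neighbours idea can be sketched compactly: a candidate miRNA's score for a disease is a weighted vote over its k most functionally similar reference miRNAs, with family or cluster members up-weighted. The function below is an illustrative reading of that idea; the similarity values, the family weight and the labels are placeholders, not HDMP's actual parameters.

    import numpy as np

    def weighted_knn_score(similarity, known_assoc, same_family, k=3, family_weight=1.5):
        """Score a candidate miRNA for one disease from its reference neighbours.
        similarity: functional similarity of the candidate to each reference miRNA;
        known_assoc: 1 if that reference miRNA is already associated with the disease;
        same_family: 1 if it shares a family/cluster with the candidate."""
        weights = similarity * np.where(same_family == 1, family_weight, 1.0)
        top = np.argsort(weights)[-k:]  # indices of the k most similar (weighted) neighbours
        return float(np.sum(weights[top] * known_assoc[top]) / np.sum(weights[top]))

    similarity = np.array([0.9, 0.7, 0.4, 0.2])
    known_assoc = np.array([1, 0, 1, 0])
    same_family = np.array([1, 0, 0, 0])
    print(weighted_knn_score(similarity, known_assoc, same_family))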

  11. GI-POP: a combinational annotation and genomic island prediction pipeline for ongoing microbial genome projects.

    PubMed

    Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi

    2013-04-10

    Sequencing of microbial genomes is important because microbes carry genes for antibiotic and pathogenic activities. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenicity or antibiotic genes are carried in genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for ongoing genome sequencing projects. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembly tool, a functional annotation pipeline, and a high-performance GI prediction module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). The draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which provides functional annotation and highly probable GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences and functional analysis, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Clinical parameters that predict the need for medium or intensive care admission in intentional drug overdose patients: A retrospective cohort study.

    PubMed

    van den Oever, Huub L A; van Dam, Mirja; van 't Riet, Esther; Jansman, Frank G A

    2017-02-01

    Many patients with intentional drug overdose (IDO) are admitted to a medium (MC) or intensive care unit (IC) without ever requiring MC/IC related interventions. The objective of this study was to develop a decision tool, using parameters readily available in the emergency room (ER) for patients with an IDO, to identify patients requiring admission to a monitoring unit. This was a retrospective cohort study among cases of IDO involving drugs with potentially acute effects on neurological, circulatory or ventilatory function, admitted to the MC/IC unit between 2007 and 2013. A decision tool was developed, using 6 criteria, representing intubation, breathing, oxygenation, cardiac conduction, blood pressure, and consciousness. Cases were labeled as 'high acuity' if one or more criteria were present. Among 255 cases of IDO that met the inclusion criteria, 197 were identified as 'high acuity'. Only 70 of 255 cases underwent one or more MC/IC related interventions, of which 67 were identified as 'high acuity' by the decision tool (sensitivity 95.7%). In a population of patients with intentional drug overdose with agents having potentially acute effects on vital functions, 95.7% of MC/IC interventions could be predicted by clinical assessment, supplemented with electrocardiogram and blood gas analysis, in the ER. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Towards a generalized energy prediction model for machine tools

    PubMed Central

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan

    2017-01-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687
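
    A minimal sketch of the data-driven modelling step described above is given below: Gaussian Process regression of energy consumption on a few process parameters, with the predictive standard deviation providing an uncertainty interval. The feature names, values and energy figures are illustrative assumptions, not measurements from the Mori Seiki NVD1500.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Columns: feed rate, spindle speed, depth of cut (hypothetical units and values).
    X = np.array([[200, 3000, 0.5], [400, 4500, 1.0], [300, 3500, 0.8], [500, 5000, 1.2]])
    y = np.array([1.8, 3.1, 2.4, 3.9])  # energy per operation, kJ (hypothetical)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
    mean, std = gp.predict(np.array([[350, 4000, 0.9]]), return_std=True)
    print(f"predicted energy: {mean[0]:.2f} kJ (+/- {1.96 * std[0]:.2f}, approx. 95% interval)")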

  14. Towards a generalized energy prediction model for machine tools.

    PubMed

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.

  15. Analysis tools for the interplay between genome layout and regulation.

    PubMed

    Bouyioukos, Costas; Elati, Mohamed; Képès, François

    2016-06-06

    Genome layout and gene regulation appear to be interdependent. Understanding this interdependence is key to exploring the dynamic nature of chromosome conformation and to engineering functional genomes. Evidence for non-random genome layout, defined as the relative positioning of either co-functional or co-regulated genes, stems from two main approaches. Firstly, the analysis of contiguous genome segments across species has highlighted the conservation of gene arrangement (synteny) along chromosomal regions. Secondly, the study of long-range interactions along a chromosome has emphasised regularities in the positioning of microbial genes that are co-regulated, co-expressed or evolutionarily correlated. While one-dimensional pattern analysis is a mature field, it is often powerless on biological datasets, which tend to be incomplete and partly incorrect. Moreover, there is a lack of comprehensive, user-friendly tools to systematically analyse, visualise, integrate and exploit regularities along genomes. Here we present the Genome REgulatory and Architecture Tools SCAN (GREAT:SCAN) software for the systematic study of the interplay between genome layout and gene expression regulation. SCAN is a collection of related and interconnected applications currently able to perform systematic analyses of genome regularities as well as to improve transcription factor binding site (TFBS) and gene regulatory network predictions based on gene positional information. We demonstrate the capabilities of these tools by studying, on the one hand, the regular patterns of genome layout in the major regulons of the bacterium Escherichia coli and, on the other hand, the improvement of TFBS prediction in microbes. Finally, we highlight, by visualisation with multivariate techniques, the interplay between position and sequence information for effective transcription regulation.

  16. GREAT: a web portal for Genome Regulatory Architecture Tools.

    PubMed

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-07-08

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Technical note: A mathematical function to predict daily milk yield of dairy cows in relation to the interval between milkings.

    PubMed

    Klopčič, M; Koops, W J; Kuipers, A

    2013-09-01

    The milk production of a dairy cow is characterized by lactation production, which is calculated from daily milk yields (DMY) during lactation. The DMY is calculated from one or more milkings a day collected at the farm. Various milking systems are in use today, resulting in one or many recorded milk yields a day, from which different calculations are used to determine DMY. The primary objective of this study was to develop a mathematical function that described milk production of a dairy cow in relation to the interval between 2 milkings. The function was partly based on the biology of the milk production process. This function, called the 3K-function, was able to predict milk production over an interval of 12h, so DMY was twice this estimate. No external information is needed to incorporate this function in methods to predict DMY. Application of the function on data from different milking systems showed a good fit. This function could be a universal tool to predict DMY for a variety of milking systems, and it seems especially useful for data from robotic milking systems. Further study is needed to evaluate the function under a wide range of circumstances, and to see how it can be incorporated in existing milk recording systems. A secondary objective of using the 3K-function was to compare how much DMY based on different milking systems differed from that based on a twice-a-day milking. Differences were consistent with findings in the literature. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Metabolome of human gut microbiome is predictive of host dysbiosis.

    PubMed

    Larsen, Peter E; Dai, Yang

    2015-01-01

    Humans live in constant and vital symbiosis with a closely linked bacterial ecosystem called the microbiome, which influences many aspects of human health. When this microbial ecosystem becomes disrupted, the health of the human host can suffer; a condition called dysbiosis. However, the community compositions of human microbiomes also vary dramatically from individual to individual, and over time, making it difficult to uncover the underlying mechanisms linking the microbiome to human health. We propose that a microbiome's interaction with its human host is not necessarily dependent upon the presence or absence of particular bacterial species, but instead is dependent on its community metabolome; an emergent property of the microbiome. Using data from a previously published, longitudinal study of microbiome populations of the human gut, we extrapolated information about microbiome community enzyme profiles and metabolome models. Using machine learning techniques, we demonstrated that the aggregate predicted community enzyme function profiles and modeled metabolomes of a microbiome are more predictive of dysbiosis than either observed microbiome community composition or predicted enzyme function profiles. Specific enzyme functions and metabolites predictive of dysbiosis provide insights into the molecular mechanisms of microbiome-host interactions. The ability to use machine learning to predict dysbiosis from microbiome community interaction data provides a potentially powerful tool for understanding the links between the human microbiome and human health, pointing to potential microbiome-based diagnostics and therapeutic interventions.
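
    A rough sketch of the machine-learning step described above is shown below: a classifier trained on predicted community enzyme-function profiles to separate dysbiotic from non-dysbiotic samples. The feature matrix and labels are random placeholders, not the published gut-microbiome dataset, and the metabolome-modelling step is not reproduced.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    enzyme_profiles = rng.random((40, 200))  # 40 samples x 200 predicted enzyme functions (toy)
    dysbiosis = rng.integers(0, 2, size=40)  # 1 = dysbiotic, 0 = healthy (toy labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, enzyme_profiles, dysbiosis, cv=5).mean())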

  19. Validation of a prediction model that allows direct comparison of the Oxford Knee Score and American Knee Society clinical rating system.

    PubMed

    Maempel, J F; Clement, N D; Brenkel, I J; Walmsley, P J

    2015-04-01

    This study demonstrates a significant correlation between the American Knee Society (AKS) Clinical Rating System and the Oxford Knee Score (OKS) and provides a validated prediction tool to estimate score conversion. A total of 1022 patients were prospectively clinically assessed five years after TKR and completed AKS assessments and an OKS questionnaire. Multivariate regression analysis demonstrated significant correlations between OKS and the AKS knee and function scores but a stronger correlation (r = 0.68, p < 0.001) when using the sum of the AKS knee and function scores. Addition of body mass index and age (other statistically significant predictors of OKS) to the algorithm did not significantly increase the predictive value. The simple regression model was used to predict the OKS in a group of 236 patients who were clinically assessed nine to ten years after TKR using the AKS system. The predicted OKS was compared with actual OKS in the second group. Intra-class correlation demonstrated excellent reliability (r = 0.81, 95% confidence intervals 0.75 to 0.85) for the combined knee and function score when used to predict OKS. Our findings will facilitate comparison of outcome data from studies and registries using either the OKS or the AKS scores and may also be of value for those undertaking meta-analyses and systematic reviews. ©2015 The British Editorial Society of Bone & Joint Surgery.
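
    A minimal sketch of the kind of conversion model described above, a simple linear regression of OKS on the sum of the AKS knee and function scores, is given below; the data points and the resulting coefficients are illustrative, not the validated published algorithm.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    aks_sum = np.array([[190], [160], [120], [80]])  # AKS knee + function score (toy values)
    oks = np.array([44, 38, 30, 20])                 # observed Oxford Knee Score (toy values)

    model = LinearRegression().fit(aks_sum, oks)
    print(model.predict(np.array([[150]])))          # predicted OKS for a combined AKS score of 150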

  20. Metabolome of human gut microbiome is predictive of host dysbiosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Peter E.; Dai, Yang

    Background: Humans live in constant and vital symbiosis with a closely linked bacterial ecosystem called the microbiome, which influences many aspects of human health. When this microbial ecosystem becomes disrupted, the health of the human host can suffer; a condition called dysbiosis. The community compositions of human microbiomes also vary dramatically from individual to individual, and over time, making it difficult to uncover the underlying mechanisms linking the microbiome to human health. We propose that a microbiome's interaction with its human host is not necessarily dependent upon the presence or absence of particular bacterial species, but instead is dependent on its community metabolome; an emergent property of the microbiome. Results: Using data from a previously published, longitudinal study of microbiome populations of the human gut, we extrapolated information about microbiome community enzyme profiles and metabolome models. Using machine learning techniques, we demonstrated that the aggregate predicted community enzyme function profiles and modeled metabolomes of a microbiome are more predictive of dysbiosis than either observed microbiome community composition or predicted enzyme function profiles. Conclusions: Specific enzyme functions and metabolites predictive of dysbiosis provide insights into the molecular mechanisms of microbiome–host interactions. The ability to use machine learning to predict dysbiosis from microbiome community interaction data provides a potentially powerful tool for understanding the links between the human microbiome and human health, pointing to potential microbiome-based diagnostics and therapeutic interventions.

  1. Metabolome of human gut microbiome is predictive of host dysbiosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Peter E.; Dai, Yang

    Background: Humans live in constant and vital symbiosis with a closely linked bacterial ecosystem called the microbiome, which influences many aspects of human health. When this microbial ecosystem becomes disrupted, the health of the human host can suffer; a condition called dysbiosis. However, the community compositions of human microbiomes also vary dramatically from individual to individual, and over time, making it difficult to uncover the underlying mechanisms linking the microbiome to human health. We propose that a microbiome's interaction with its human host is not necessarily dependent upon the presence or absence of particular bacterial species, but instead is dependent on its community metabolome; an emergent property of the microbiome. Results: Using data from a previously published, longitudinal study of microbiome populations of the human gut, we extrapolated information about microbiome community enzyme profiles and metabolome models. Using machine learning techniques, we demonstrated that the aggregate predicted community enzyme function profiles and modeled metabolomes of a microbiome are more predictive of dysbiosis than either observed microbiome community composition or predicted enzyme function profiles. Conclusions: Specific enzyme functions and metabolites predictive of dysbiosis provide insights into the molecular mechanisms of microbiome–host interactions. The ability to use machine learning to predict dysbiosis from microbiome community interaction data provides a potentially powerful tool for understanding the links between the human microbiome and human health, pointing to potential microbiome-based diagnostics and therapeutic interventions.

  2. Metabolome of human gut microbiome is predictive of host dysbiosis

    DOE PAGES

    Larsen, Peter E.; Dai, Yang

    2015-09-14

    Background: Humans live in constant and vital symbiosis with a closely linked bacterial ecosystem called the microbiome, which influences many aspects of human health. When this microbial ecosystem becomes disrupted, the health of the human host can suffer; a condition called dysbiosis. The community compositions of human microbiomes also vary dramatically from individual to individual, and over time, making it difficult to uncover the underlying mechanisms linking the microbiome to human health. We propose that a microbiome's interaction with its human host is not necessarily dependent upon the presence or absence of particular bacterial species, but instead is dependent on its community metabolome; an emergent property of the microbiome. Results: Using data from a previously published, longitudinal study of microbiome populations of the human gut, we extrapolated information about microbiome community enzyme profiles and metabolome models. Using machine learning techniques, we demonstrated that the aggregate predicted community enzyme function profiles and modeled metabolomes of a microbiome are more predictive of dysbiosis than either observed microbiome community composition or predicted enzyme function profiles. Conclusions: Specific enzyme functions and metabolites predictive of dysbiosis provide insights into the molecular mechanisms of microbiome–host interactions. The ability to use machine learning to predict dysbiosis from microbiome community interaction data provides a potentially powerful tool for understanding the links between the human microbiome and human health, pointing to potential microbiome-based diagnostics and therapeutic interventions.

  3. NaviGO: interactive tool for visualization and functional similarity and coherence analysis with gene ontology.

    PubMed

    Wei, Qing; Khan, Ishita K; Ding, Ziyun; Yerneni, Satwica; Kihara, Daisuke

    2017-03-20

    The number of genomics and proteomics experiments is growing rapidly, producing an ever-increasing amount of data that are awaiting functional interpretation. A number of function prediction algorithms were developed and improved to enable fast and automatic function annotation. With the well-defined structure and manual curation, Gene Ontology (GO) is the most frequently used vocabulary for representing gene functions. To understand relationship and similarity between GO annotations of genes, it is important to have a convenient pipeline that quantifies and visualizes the GO function analyses in a systematic fashion. NaviGO is a web-based tool for interactive visualization, retrieval, and computation of functional similarity and associations of GO terms and genes. Similarity of GO terms and gene functions is quantified with six different scores including protein-protein interaction and context based association scores we have developed in our previous works. Interactive navigation of the GO function space provides intuitive and effective real-time visualization of functional groupings of GO terms and genes as well as statistical analysis of enriched functions. We developed NaviGO, which visualizes and analyses functional similarity and associations of GO terms and genes. The NaviGO webserver is freely available at: http://kiharalab.org/web/navigo .

  4. Universal noise and Efimov physics

    NASA Astrophysics Data System (ADS)

    Nicholson, Amy N.

    2016-03-01

    Probability distributions for correlation functions of particles interacting via random-valued fields are discussed as a novel tool for determining the spectrum of a theory. In particular, this method is used to determine the energies of universal N-body clusters tied to Efimov trimers, for even N, by investigating the distribution of a correlation function of two particles at unitarity. Using numerical evidence that this distribution is log-normal, an analytical prediction for the N-dependence of the N-body binding energies is made.

  5. Advances and Computational Tools towards Predictable Design in Biological Engineering

    PubMed Central

    2014-01-01

    The design process for complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used or, when a part's functioning depends on the specific context, on the prediction of its context-dependent behaviour. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability to master this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694

  6. CowPI: A Rumen Microbiome Focussed Version of the PICRUSt Functional Inference Software.

    PubMed

    Wilkinson, Toby J; Huws, Sharon A; Edwards, Joan E; Kingston-Smith, Alison H; Siu-Ting, Karen; Hughes, Martin; Rubino, Francesco; Friedersdorff, Maximillian; Creevey, Christopher J

    2018-01-01

    Metataxonomic 16S rDNA-based studies are a commonplace and useful tool in research on the microbiome, but they do not provide the full investigative power of metagenomics and metatranscriptomics for revealing the functional potential of microbial communities. However, the use of metagenomic and metatranscriptomic technologies is hindered by the high costs and the skills barrier involved in generating and interpreting the data. To address this, a tool for Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) was developed for inferring the functional potential of an observed microbiome profile based on 16S data. This allows functional inferences to be made from metataxonomic 16S rDNA studies with little extra work or cost, but its accuracy relies on the availability of completely sequenced genomes of representative organisms from the community being investigated. The rumen microbiome is an example of a community traditionally underrepresented in genome and sequence databases, but recent efforts by projects such as the Global Rumen Census and Hungate 1000 have resulted in a wide sampling of 16S rDNA profiles and almost 500 fully sequenced microbial genomes from this environment. Using this information, we have developed "CowPI," a focused version of the PICRUSt tool provided for use by the wider scientific community in the study of the rumen microbiome. We evaluated the accuracy of CowPI and PICRUSt using two 16S datasets from the rumen microbiome: one generated from rDNA and the other from rRNA, where corresponding metagenomic and metatranscriptomic data were also available. We show that the functional profiles predicted by CowPI better match estimates for both the metagenomic and metatranscriptomic datasets than PICRUSt, and capture the higher degree of genetic variation and larger pangenomes of rumen organisms. Nonetheless, whilst being closer in terms of predictive power for the rumen microbiome, there were still differences when compared to both the metagenomic and metatranscriptomic data, and so we recommend that, where possible, functional inferences from 16S data should not replace metagenomic and metatranscriptomic approaches. The tool can be accessed at http://www.cowpi.org and is provided to the wider scientific community for use in the study of the rumen microbiome.

  7. Using the brain's fight-or-flight response for predicting mental illness on the human space flight program

    NASA Astrophysics Data System (ADS)

    Losik, L.

    A predictive medicine program allows disease and illness, including mental illness, to be predicted using tools originally created to identify the presence of accelerated aging (a.k.a. disease) in electrical and mechanical equipment. When illness and disease can be predicted, actions can be taken to prevent and eliminate them. A predictive medicine program uses the same tools and practices as a prognostic and health management program to process biological and engineering diagnostic data provided in analog telemetry during prelaunch readiness and space exploration missions. The biological and engineering diagnostic data necessary to predict illness and disease are collected during pre-launch spaceflight readiness activities and during spaceflight, so that the ground crew can perform a prognostic analysis on the results of the diagnostic analysis. The diagnostic biological data provided in telemetry are converted to prognostic (predictive) data using predictive algorithms. Predictive algorithms demodulate telemetry behavior and reveal the presence of accelerated aging/disease in systems that appear and function normally. Mental illness can be predicted using biological diagnostic measurements provided in CCSDS telemetry from a spacecraft such as the ISS or from a manned spacecraft in deep space. The measurements used to predict mental illness include biological and engineering data from an astronaut's circadian and ultradian rhythms. These data originate deep in the brain, in structures that are also damaged by long-term exposure to cortisol and adrenaline whenever the body's fight-or-flight response (FOFR) is activated. This paper defines the brain's FOFR and the diagnostic biological and engineering measurements needed to predict mental illness, and identifies the predictive algorithms necessary to process the behavior in CCSDS analog telemetry so that mental illness can be predicted, and thus prevented, on human spaceflight missions.

  8. Climate change and the future of seed zones

    Treesearch

    Francis Kilkenny; Brad St. Clair; Matt Horning

    2013-01-01

    The use of native plants in wildland restoration is critical to the recovery and health of ecosystems. Information from genecological and reciprocal transplant common garden studies can be used to develop seed transfer guidelines and to predict how plants will respond to future climate change. Tools developed from these data, such as universal response functions and...

  9. Neuroimaging in Pediatric Traumatic Brain Injury: Current and Future Predictors of Functional Outcome

    ERIC Educational Resources Information Center

    Suskauer, Stacy J.; Huisman, Thierry A. G. M.

    2009-01-01

    Although neuroimaging has long played a role in the acute management of pediatric traumatic brain injury (TBI), until recently, its use as a tool for understanding and predicting long-term brain-behavior relationships after TBI has been limited by the relatively poor sensitivity of routine clinical imaging for detecting diffuse axonal injury…

  10. A Comparison of QSAR Based Thermo and Water Solvation Property Prediction Tools and Experimental Data for Selected Traditional Chemical Warfare Agents and Simulants

    DTIC Science & Technology

    2014-07-01

    Labs uses parameterized Hammett-type equations to describe 1500 possible combinations of more than 650 ionizable functional groups. The change in...of the form Ypred = c0 + c1X1 + c2X2 + ⋯ + cnXn + ⋯, Equation (1), where Ypred is the predicted property, c0 is a constant, c1 to cn are coefficients from the...regression to the training set of measurements, X1 to Xn represent molecular, fragment, or field-based descriptors, and the final term in Equation 1
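
    The regression referred to as Equation (1) is the standard QSAR setup: a measured property is modeled as a linear combination of molecular descriptors. As an illustration with synthetic descriptors and property values (not the report's data or descriptor set), the coefficients c0 to cn can be obtained by ordinary least squares:

      # Synthetic illustration of Equation (1): Ypred = c0 + c1*X1 + ... + cn*Xn,
      # with coefficients fitted by least squares on a training set.
      import numpy as np

      rng = np.random.default_rng(0)
      n_molecules, n_descriptors = 40, 5
      X = rng.normal(size=(n_molecules, n_descriptors))       # descriptors X1..Xn
      true_c = np.array([0.8, -1.2, 0.3, 0.0, 2.0])
      y = 1.5 + X @ true_c + rng.normal(scale=0.1, size=n_molecules)  # measured property

      A = np.column_stack([np.ones(n_molecules), X])           # prepend the constant c0
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      c0, c = coef[0], coef[1:]
      y_pred = c0 + X @ c
      rmse = np.sqrt(np.mean((y - y_pred) ** 2))
      print("c0 =", round(c0, 3), " RMSE =", round(rmse, 3))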

  11. Occipital cortical thickness in very low birth weight born adolescents predicts altered neural specialization of visual semantic category related neural networks.

    PubMed

    Klaver, Peter; Latal, Beatrice; Martin, Ernst

    2015-01-01

    Very low birth weight (VLBW), prematurely born infants are at high risk of developing visual perceptual and learning deficits, as well as widespread functional and structural brain abnormalities, during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight of VLBW adolescents correlated with larger thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, larger thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher order visual semantic networks, particularly in the dorsal stream. The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Study on active lap tool influence function in grinding 1.8 m primary mirror.

    PubMed

    Haitao, Liu; Zhige, Zeng; Fan, Wu; Bin, Fan; Yongjian, Wan

    2013-11-01

    We present a theoretical modeling method to predict the ring tool influence function (TIF) based on the computer-controlled active lap process. The gap on the lap-grinding layer is considered, and its influence on the ring TIF is also analyzed. The relationship between the shape of the ring TIF and the lap-workpiece rotation speed ratio is discussed in this paper. The recipe for calculating dwell time for axisymmetric fabrication is discussed. The grinding process of a 1.8 m primary mirror is improved based on these results. The grinding is accomplished after 30 grinding cycles, and the surface shape error is reduced from 82 μm PV / 16.4 μm RMS to 13.5 μm PV / 2.5 μm RMS.

  13. Predictive and Experimental Approaches for Elucidating Protein–Protein Interactions and Quaternary Structures

    PubMed Central

    Nealon, John Oliver; Philomina, Limcy Seby

    2017-01-01

    The elucidation of protein–protein interactions is vital for determining the function and action of quaternary protein structures. Here, we discuss the difficulty and importance of establishing protein quaternary structure and review in vitro and in silico methods for doing so. Determining the interacting partner proteins of predicted protein structures is very time-consuming when using in vitro methods; this can be somewhat alleviated by the use of predictive methods. However, developing reliably accurate predictive tools has proved to be difficult. We review the current state of the art in predictive protein interaction software and discuss the problem of scoring and therefore ranking predictions. Current community-based predictive exercises are discussed in relation to the growth of protein interaction prediction as an area within these exercises. We suggest that a fusion of experimental and predictive methods, making use of sparse experimental data to determine higher-resolution predicted protein interactions, is necessary to drive development forward. PMID:29206185

  14. The effects of shared information on semantic calculations in the gene ontology.

    PubMed

    Bible, Paul W; Sun, Hong-Wei; Morasso, Maria I; Loganantharaj, Rasiah; Wei, Lai

    2017-01-01

    The structured vocabulary that describes gene function, the gene ontology (GO), serves as a powerful tool in biological research. One application of GO in computational biology calculates semantic similarity between two concepts to make inferences about the functional similarity of genes. A class of term similarity algorithms explicitly calculates the shared information (SI) between concepts then substitutes this calculation into traditional term similarity measures such as Resnik, Lin, and Jiang-Conrath. Alternative SI approaches, when combined with ontology choice and term similarity type, lead to many gene-to-gene similarity measures. No thorough investigation has been made into the behavior, complexity, and performance of semantic methods derived from distinct SI approaches. We apply bootstrapping to compare the generalized performance of 57 gene-to-gene semantic measures across six benchmarks. Considering the number of measures, we additionally evaluate whether these methods can be leveraged through ensemble machine learning to improve prediction performance. Results showed that the choice of ontology type most strongly influenced performance across all evaluations. Combining measures into an ensemble classifier reduces cross-validation error beyond any individual measure for protein interaction prediction. This improvement resulted from information gained through the combination of ontology types as ensemble methods within each GO type offered no improvement. These results demonstrate that multiple SI measures can be leveraged for machine learning tasks such as automated gene function prediction by incorporating methods from across the ontologies. To facilitate future research in this area, we developed the GO Graph Tool Kit (GGTK), an open source C++ library with Python interface (github.com/paulbible/ggtk).
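
    To make the shared-information idea concrete, the sketch below computes Resnik and Lin term similarities on a toy GO-like DAG, taking the shared information of two terms to be the information content of their most informative common ancestor. The ontology, annotation counts and function names are illustrative assumptions and do not reproduce the GGTK implementation.

      # Toy shared-information (SI) calculation feeding Resnik and Lin similarity.
      import math

      parents = {                # child -> parents in a tiny GO-like DAG
          "root": set(),
          "A": {"root"}, "B": {"root"},
          "A1": {"A"}, "A2": {"A"}, "AB": {"A", "B"},
      }
      annotations = {"root": 100, "A": 60, "B": 50, "A1": 20, "A2": 15, "AB": 10}

      def ancestors(term):
          """All ancestors of a term, including the term itself."""
          seen, stack = {term}, [term]
          while stack:
              for p in parents[stack.pop()]:
                  if p not in seen:
                      seen.add(p)
                      stack.append(p)
          return seen

      def ic(term):
          """Information content: -log of the term's annotation frequency."""
          return -math.log(annotations[term] / annotations["root"])

      def resnik(t1, t2):
          """SI as the information content of the most informative common ancestor."""
          return max(ic(t) for t in ancestors(t1) & ancestors(t2))

      def lin(t1, t2):
          denom = ic(t1) + ic(t2)
          return 0.0 if denom == 0 else 2 * resnik(t1, t2) / denom

      print(resnik("A1", "AB"), lin("A1", "AB"))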

  15. Identification of widespread adenosine nucleotide binding in Mycobacterium tuberculosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ansong, Charles; Ortega, Corrie; Payne, Samuel H.

    The annotation of protein function is almost completely performed by in silico approaches. However, computational prediction of protein function is frequently incomplete and error prone. In Mycobacterium tuberculosis (Mtb), ~25% of all genes have no predicted function and are annotated as hypothetical proteins. This lack of functional information severely limits our understanding of Mtb pathogenicity. Current tools for experimental functional annotation are limited and often do not scale to entire protein families. Here, we report a generally applicable chemical biology platform to functionally annotate bacterial proteins by combining activity-based protein profiling (ABPP) and quantitative LC-MS-based proteomics. As an example of this approach for high-throughput protein functional validation and discovery, we experimentally annotate the families of ATP-binding proteins in Mtb. Our data experimentally validate prior in silico predictions of >250 ATPases and adenosine nucleotide-binding proteins, and reveal 73 hypothetical proteins as novel ATP-binding proteins. We identify adenosine cofactor interactions with many hypothetical proteins containing a diversity of unrelated sequences, providing a new and expanded view of adenosine nucleotide binding in Mtb. Furthermore, many of these hypothetical proteins are both unique to Mycobacteria and essential for infection, suggesting specialized functions in mycobacterial physiology and pathogenicity. Thus, we provide a generally applicable approach for high throughput protein function discovery and validation, and highlight several ways in which application of activity-based proteomics data can improve the quality of functional annotations to facilitate novel biological insights.

  16. Optimization of infobutton design and Implementation: A systematic review.

    PubMed

    Teixeira, Miguel; Cook, David A; Heale, Bret S E; Del Fiol, Guilherme

    2017-10-01

    Infobuttons are clinical decision tools embedded in the electronic health record that attempt to link clinical data with context-sensitive knowledge resources. We systematically reviewed technical approaches that contribute to improved infobutton design, implementation and functionality. We searched databases including MEDLINE, EMBASE, and the Cochrane Library database from inception to March 1, 2016 for studies describing the use of infobuttons. We selected for full review comparative studies, usability studies, and qualitative studies examining infobutton design and implementation. We abstracted usability measures such as user satisfaction, impact, and efficiency, as well as prediction accuracy of infobutton content retrieval algorithms and infobutton adoption/interoperability. We found 82 original research studies on infobuttons. Twelve studies met criteria for detailed abstraction. These studies investigated infobutton interoperability (1 study); tools to help tailor infobutton functionality (1 study); interventions to improve user experience (7 studies); and interventions to improve content retrieval by improving prediction of relevant knowledge resources and information needs (3 studies). In-depth interviews with implementers showed the Health Level Seven (HL7) Infobutton standard to be simple and easy to implement. A usability study demonstrated the feasibility of a tool to help medical librarians tailor infobutton functionality. User experience studies showed that access to resources with which users are familiar increased user satisfaction ratings, and that links to specific subsections of drug monographs increased information-seeking efficiency. However, none of the user experience improvements led to increased usage uptake. Recommender systems based on machine learning algorithms outperformed hand-crafted rules in the prediction of relevant resources and clinicians' information needs in a laboratory setting, but no studies were found using these techniques in clinical settings. Improved content indexing in one study led to improved content retrieval across three health care organizations. Best practice technical approaches to ensure optimal infobutton functionality, design and implementation remain understudied. The HL7 Infobutton standard has supported wide adoption of infobutton functionality among clinical information systems and knowledge resources. Limited evidence supports infobutton enhancements such as links to specific subtopics, configuration of optimal resources for specific tasks and users, and improved indexing and content coverage. Further research is needed to investigate user experience improvements to increase infobutton use and effectiveness. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. CRIMEtoYHU: a new web tool to develop yeast-based functional assays for characterizing cancer-associated missense variants.

    PubMed

    Mercatanti, Alberto; Lodovichi, Samuele; Cervelli, Tiziana; Galli, Alvaro

    2017-12-01

    Evaluation of the functional impact of cancer-associated missense variants is more difficult than for protein-truncating mutations, and consequently standard guidelines for the interpretation of sequence variants have recently been proposed. A number of algorithms and software products were developed to predict the impact of cancer-associated missense mutations on protein structure and function. Importantly, direct assessment of the variants with high-throughput functional assays in simple genetic systems can help speed up the functional evaluation of newly identified cancer-associated variants. We developed the web tool CRIMEtoYHU (CTY) to help geneticists in the evaluation of the functional impact of cancer-associated missense variants. Humans and the yeast Saccharomyces cerevisiae share thousands of protein-coding genes although they have diverged for a billion years. Therefore, yeast humanization can be helpful in deciphering the functional consequences of human genetic variants found in cancer and give information on the pathogenicity of missense variants. To humanize specific positions within yeast genes, human and yeast genes have to share functional homology. If a mutation in a specific residue is associated with a particular phenotype in humans, a similar substitution in the yeast counterpart may reveal its effect at the organism level. CTY simultaneously finds yeast homologous genes, identifies the corresponding variants and determines the transferability of human variants to yeast counterparts by assigning a reliability score (RS) that may be predictive for the validity of a functional assay. CTY analyzes newly identified mutations or retrieves mutations reported in the COSMIC database, provides information about the functional conservation between yeast and human and shows the mutation distribution in human genes. CTY also analyzes newly found mutations and aborts when no yeast homologue is found. Then, on the basis of the protein domain localization and functional conservation between yeast and human, the selected variants are ranked by the RS. The RS is assigned by an algorithm that computes functional data, type of mutation, chemistry of amino acid substitution and the degree of mutation transferability between the human and yeast proteins. Mutations giving a positive RS are highly transferable to yeast and, therefore, yeast functional assays will be more predictable. To validate the web application, we have analyzed 8078 cancer-associated variants located in 31 genes that have a yeast homologue. More than 50% of variants are transferable to yeast. Incidentally, 88% of all transferable mutations have a reliability score >0. Moreover, we used CTY to analyze 72 functionally validated missense variants located in yeast genes at positions corresponding to the human cancer-associated variants. All these variants gave a positive RS. To further validate CTY, we analyzed 3949 protein variants (with positive RS) by the predictive algorithm PROVEAN. This analysis shows that yeast-based functional assays will be more predictable for the variants with positive RS. We believe that CTY could be an important resource for the cancer research community by providing information concerning the functional impact of specific mutations, as well as for the design of functional assays useful for decision support in precision medicine. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  19. IntNetDB v1.0: an integrated protein-protein interaction network database generated by a probabilistic model

    PubMed Central

    Xia, Kai; Dong, Dong; Han, Jing-Dong J

    2006-01-01

    Background Although protein-protein interaction (PPI) networks have been explored by various experimental methods, the maps so built are still limited in coverage and accuracy. To further expand the PPI network and to extract more accurate information from existing maps, studies have been carried out to integrate various types of functional relationship data. A frequently updated database of computationally analyzed potential PPIs to provide biological researchers with rapid and easy access to analyze original data as a biological network is still lacking. Results By applying a probabilistic model, we integrated 27 heterogeneous genomic, proteomic and functional annotation datasets to predict PPI networks in human. In addition to previously studied data types, we show that phenotypic distances and genetic interactions can also be integrated to predict PPIs. We further built an easy-to-use, updatable integrated PPI database, the Integrated Network Database (IntNetDB) online, to provide automatic prediction and visualization of PPI network among genes of interest. The networks can be visualized in SVG (Scalable Vector Graphics) format for zooming in or out. IntNetDB also provides a tool to extract topologically highly connected network neighborhoods from a specific network for further exploration and research. Using the MCODE (Molecular Complex Detections) algorithm, 190 such neighborhoods were detected among all the predicted interactions. The predicted PPIs can also be mapped to worm, fly and mouse interologs. Conclusion IntNetDB includes 180,010 predicted protein-protein interactions among 9,901 human proteins and represents a useful resource for the research community. Our study has increased prediction coverage by five-fold. IntNetDB also provides easy-to-use network visualization and analysis tools that allow biological researchers unfamiliar with computational biology to access and analyze data over the internet. The web interface of IntNetDB is freely accessible at . Visualization requires Mozilla version 1.8 (or higher) or Internet Explorer with installation of SVGviewer. PMID:17112386
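
    A common way to integrate heterogeneous evidence of this kind is a naive Bayes (likelihood-ratio) model: each dataset contributes a likelihood ratio, and the ratios multiply under an independence assumption. The sketch below illustrates that calculation with invented prior odds and likelihood ratios; it is not IntNetDB's actual model or parameter values.

      # Naive Bayes integration of evidence for one candidate protein pair.
      prior_odds = 1 / 600           # assumed prior odds that a random pair interacts

      evidence_lr = {                # per-dataset likelihood ratios (illustrative)
          "coexpression": 4.0,
          "shared_phenotype": 2.5,
          "genetic_interaction": 6.0,
          "shared_GO_annotation": 1.8,
      }

      posterior_odds = prior_odds
      for lr in evidence_lr.values():
          posterior_odds *= lr       # independence assumption: likelihood ratios multiply

      posterior_prob = posterior_odds / (1 + posterior_odds)
      print(f"combined LR = {posterior_odds / prior_odds:.1f}, "
            f"posterior P(interaction) = {posterior_prob:.3f}")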

  20. Modeling of Principal Flank Wear: An Empirical Approach Combining the Effect of Tool, Environment and Workpiece Hardness

    NASA Astrophysics Data System (ADS)

    Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan

    2016-10-01

    Hard turning is increasingly employed in machining to replace time-consuming conventional turning followed by grinding. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most of them were developed for a particular work-tool-environment combination. No aggregate model had been developed that can be used to predict the amount of principal flank wear for a specific machining time. An empirical model of principal flank wear (VB) has been developed for different workpiece hardnesses (HRC40, HRC48 and HRC56) while turning with coated carbide inserts of different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other developed models, this model includes the use of dummy variables along with the base empirical equation to capture the effect of any changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated by fitting the Exponential Associate Function to the experimental results. The coefficient of each dummy variable reflects the shift of the response from one set of machining conditions to another and is determined by simple linear regression. The independent cutting parameters (cutting speed, feed rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel with coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predicted results exhibit good agreement with the experimental data and the average percentage error is <10 %, this model can be used to predict the principal flank wear for the stated conditions.
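
    The abstract does not reproduce the fitted equation itself, so the sketch below only illustrates the general shape of such a model: an exponential-association wear curve over machining time plus a dummy-variable shift for a change of machining condition, fitted to synthetic data. The functional form, coefficients and data are assumptions for illustration, not the paper's equation.

      # VB(t) = a * (1 - exp(-t / b)) + d * D, where D = 0 for the baseline
      # condition and D = 1 for a second condition (dummy variable).
      import numpy as np
      from scipy.optimize import curve_fit

      def wear_model(X, a, b, d):
          t, D = X
          return a * (1.0 - np.exp(-t / b)) + d * D

      t = np.tile(np.array([5, 10, 15, 20, 25, 30], dtype=float), 2)   # minutes
      D = np.repeat([0.0, 1.0], 6)                                     # condition dummy
      vb = wear_model((t, D), 220.0, 12.0, -35.0)                      # synthetic "truth"
      vb += np.random.default_rng(1).normal(scale=5.0, size=vb.size)   # measurement noise

      (a, b, d), _ = curve_fit(wear_model, (t, D), vb, p0=(200.0, 10.0, 0.0))
      print(f"a = {a:.1f} um, b = {b:.1f} min, dummy shift d = {d:.1f} um")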

  1. Body posture differentially impacts on visual attention towards tool, graspable, and non-graspable objects.

    PubMed

    Ambrosini, Ettore; Costantini, Marcello

    2017-02-01

    Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to action afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by 2 widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, the restriction of the participants' action possibility led to a significant reduction of this effect and significantly improved the model prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information to the generation of priority maps of fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. PEA: an integrated R toolkit for plant epitranscriptome analysis.

    PubMed

    Zhai, Jingjing; Song, Jie; Cheng, Qian; Tang, Yunjia; Ma, Chuang

    2018-05-29

    The epitranscriptome, also known as chemical modifications of RNA (CMRs), is a newly discovered layer of gene regulation, the biological importance of which emerged through analysis of only a small fraction of CMRs detected by high-throughput sequencing technologies. Understanding of the epitranscriptome is hampered by the absence of computational tools for the systematic analysis of epitranscriptome sequencing data. In addition, no tools have yet been designed for accurate prediction of CMRs in plants, or to extend epitranscriptome analysis from a fraction of the transcriptome to its entirety. Here, we introduce PEA, an integrated R toolkit to facilitate the analysis of plant epitranscriptome data. The PEA toolkit contains a comprehensive collection of functions required for read mapping, CMR calling, motif scanning and discovery, and gene functional enrichment analysis. PEA also takes advantage of machine learning technologies for transcriptome-scale CMR prediction, with high prediction accuracy, using the Positive Samples Only Learning algorithm, which addresses the two-class classification problem by using only positive samples (CMRs), in the absence of negative samples (non-CMRs). Hence PEA is a versatile epitranscriptome analysis pipeline covering CMR calling, prediction, and annotation, and we describe its application to predict N6-methyladenosine (m6A) modifications in Arabidopsis thaliana. Experimental results demonstrate that the toolkit achieved 71.6% sensitivity and 73.7% specificity, which is superior to existing m6A predictors. PEA is potentially broadly applicable to the in-depth study of epitranscriptomics. PEA Docker image is available at https://hub.docker.com/r/malab/pea, source codes and user manual are available at https://github.com/cma2015/PEA. chuangma2006@gmail.com. Supplementary data are available at Bioinformatics online.

  3. Analysis of Cysteine Redox Post-Translational Modifications in Cell Biology and Drug Pharmacology.

    PubMed

    Wani, Revati; Murray, Brion W

    2017-01-01

    Reversible cysteine oxidation is an emerging class of protein post-translational modification (PTM) that regulates catalytic activity, modulates conformation, impacts protein-protein interactions, and affects subcellular trafficking of numerous proteins. Redox PTMs encompass a broad array of cysteine oxidation reactions with different half-lives, topographies, and reactivities such as S-glutathionylation and sulfoxidation. Recent studies from our group underscore the lesser known effect of redox protein modifications on drug binding. To date, biological studies to understand mechanistic and functional aspects of redox regulation are technically challenging. A prominent issue is the lack of tools for labeling proteins oxidized to select chemotype/oxidant species in cells. Predictive computational tools and curated databases of oxidized proteins are facilitating structural and functional insights into regulation of the network of oxidized proteins or redox proteome. In this chapter, we discuss analytical platforms for studying protein oxidation, suggest computational tools currently available in the field to determine redox sensitive proteins, and begin to illuminate roles of cysteine redox PTMs in drug pharmacology.

  4. The multi-copy simultaneous search methodology: a fundamental tool for structure-based drug design.

    PubMed

    Schubert, Christian R; Stultz, Collin M

    2009-08-01

    Fragment-based ligand design approaches, such as the multi-copy simultaneous search (MCSS) methodology, have proven to be useful tools in the search for novel therapeutic compounds that bind pre-specified targets of known structure. MCSS offers a variety of advantages over more traditional high-throughput screening methods, and has been applied successfully to challenging targets. The methodology is quite general and can be used to construct functionality maps for proteins, DNA, and RNA. In this review, we describe the main aspects of the MCSS method and outline the general use of the methodology as a fundamental tool to guide the design of de novo lead compounds. We focus our discussion on the evaluation of MCSS results and the incorporation of protein flexibility into the methodology. In addition, we demonstrate on several specific examples how the information arising from the MCSS functionality maps has been successfully used to predict ligand binding to protein targets and RNA.

  5. Introducing the Forensic Research/Reference on Genetics knowledge base, FROG-kb.

    PubMed

    Rajeevan, Haseena; Soundararajan, Usha; Pakstis, Andrew J; Kidd, Kenneth K

    2012-09-01

    Online tools and databases based on multi-allelic short tandem repeat polymorphisms (STRPs) are actively used in forensic teaching, research, and investigations. The Fst value of each CODIS marker tends to be low across the populations of the world, and most populations typically have all the common STRP alleles present, diminishing the ability of these systems to discriminate ethnicity. Recently, considerable research has been conducted on single nucleotide polymorphisms (SNPs) as candidates for human identification and description. However, online tools and databases that can be used for forensic research and investigation are limited. The back end DBMS (Database Management System) for FROG-kb is Oracle version 10. The front end is implemented with specific code using technologies such as Java, Java Servlet, JSP, JQuery, and GoogleCharts. We present an open access web application, FROG-kb (Forensic Research/Reference on Genetics-knowledge base, http://frog.med.yale.edu), that is useful for teaching and research relevant to forensics and can serve as a tool facilitating forensic practice. The underlying data for FROG-kb are provided by the already extensively used and referenced ALlele FREquency Database, ALFRED (http://alfred.med.yale.edu). In addition to displaying data in an organized manner, computational tools that use the underlying allele frequencies with user-provided data are implemented in FROG-kb. These tools are organized by the different published SNP/marker panels available. The web tool currently implements general functions for two types of SNP panels, individual identification and ancestry inference, as well as a prediction function specific to a phenotype-informative panel for eye color. The current online version of FROG-kb already provides new and useful functionality. We expect FROG-kb to grow and expand in capabilities and welcome input from the forensic community in identifying datasets and functionalities that will be most helpful and useful. Thus, the structure and functionality of FROG-kb will be revised in an ongoing process of improvement. This paper describes the state as of early June 2012.
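
    As an illustration of the kind of calculation behind an ancestry-inference panel function (a sketch only, with invented allele frequencies and genotypes rather than ALFRED data or FROG-kb code), a SNP genotype profile can be scored against candidate populations by summing log genotype probabilities under Hardy-Weinberg proportions:

      # Log-likelihood of a genotype profile under each population's allele frequencies.
      import math

      freqs = {   # reference-allele frequencies per population (illustrative values)
          "pop_A": {"rs0001": 0.85, "rs0002": 0.10, "rs0003": 0.55},
          "pop_B": {"rs0001": 0.30, "rs0002": 0.70, "rs0003": 0.40},
      }
      genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}  # copies of the reference allele

      def genotype_prob(p, copies):
          """Hardy-Weinberg genotype probability for reference-allele frequency p."""
          return {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) * (1 - p)}[copies]

      for pop, f in freqs.items():
          loglik = sum(math.log(genotype_prob(f[snp], g)) for snp, g in genotype.items())
          print(pop, round(loglik, 3))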

  6. Comparison of Performance Predictions for New Low-Thrust Trajectory Tools

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Kos, Larry; Hopkins, Randall; Crane, Tracie

    2006-01-01

    Several low-thrust trajectory optimization tools have been developed over the last 3½ years by the Low Thrust Trajectory Tools development team. This toolset includes both low-to-medium fidelity and high fidelity tools, which allow the analyst to quickly research a wide mission trade space and perform advanced mission design. These tools were tested using a set of reference trajectories that exercised each tool's unique capabilities. This paper compares the performance predictions of the various tools against several of the reference trajectories. The intent is to verify agreement between the high fidelity tools and to quantify the performance prediction differences between tools of different fidelity levels.

  7. The basis function approach for modeling autocorrelation in ecological data

    USGS Publications Warehouse

    Hefley, Trevor J.; Broms, Kristin M.; Brost, Brian M.; Buderman, Frances E.; Kay, Shannon L.; Scharf, Henry; Tipton, John; Williams, Perry J.; Hooten, Mevin B.

    2017-01-01

    Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data.
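
    A minimal sketch of the idea, under assumed choices (Gaussian radial basis functions on a time axis and synthetic data), is given below: the basis expansion is appended to an ordinary regression design matrix, letting the model absorb smooth, temporally autocorrelated structure that a simple trend misses.

      # Basis-function regression for temporal autocorrelation (illustrative only).
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(0, 10, 120)
      y = 1.0 + 0.3 * t + np.sin(t) + rng.normal(scale=0.2, size=t.size)

      centers = np.linspace(0, 10, 8)                     # Gaussian basis centers
      width = 1.0
      B = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

      X0 = np.column_stack([np.ones_like(t), t])          # intercept + linear trend
      b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
      X = np.column_stack([X0, B])                        # trend + basis expansion
      b, *_ = np.linalg.lstsq(X, y, rcond=None)

      print("residual SD, trend only:", round(np.std(y - X0 @ b0), 3))
      print("residual SD, with basis:", round(np.std(y - X @ b), 3))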

  8. Identification of Functional Candidates amongst Hypothetical Proteins of Treponema pallidum ssp. pallidum

    PubMed Central

    Naqvi, Ahmad Abu Turab; Shahbaaz, Mohd; Ahmad, Faizan; Hassan, Md. Imtaiyaz

    2015-01-01

    Syphilis is a globally occurring venereal disease whose infection is propagated through sexual contact. The causative agent of syphilis, Treponema pallidum ssp. pallidum, a Gram-negative spirochaete, is an obligate human parasite. The genome of the T. pallidum ssp. pallidum SS14 strain (RefSeq NC_010741.1) encodes 1,027 proteins, of which 444 are annotated as hypothetical proteins (HPs), i.e., proteins of unknown function. Here, we performed functional annotation of the HPs of T. pallidum ssp. pallidum using various databases, domain architecture predictors, protein function annotators and clustering tools. We analyzed the sequences of the 444 HPs of T. pallidum ssp. pallidum and subsequently predicted the function of 207 HPs with a high level of confidence. The functions of the remaining 237 HPs are predicted with less accuracy. We found various enzymes, transporters and binding proteins in the annotated group of HPs that may be possible molecular targets facilitating the survival of the pathogen. Our comprehensive analysis helps in understanding the mechanism of pathogenesis and may point to many novel potential therapeutic interventions. PMID:25894582

  9. Plane-Wave Implementation and Performance of à-la-Carte Coulomb-Attenuated Exchange-Correlation Functionals for Predicting Optical Excitation Energies in Some Notorious Cases.

    PubMed

    Bircher, Martin P; Rothlisberger, Ursula

    2018-06-12

    Linear-response time-dependent density functional theory (LR-TD-DFT) has become a valuable tool in the calculation of excited states of molecules of various sizes. However, standard generalized-gradient approximation and hybrid exchange-correlation (xc) functionals often fail to correctly predict charge-transfer (CT) excitations with low orbital overlap, thus limiting the scope of the method. The Coulomb-attenuation method (CAM) in the form of the CAM-B3LYP functional has been shown to reliably remedy this problem in many CT systems, making accurate predictions possible. However, in spite of a rather consistent performance across different orbital overlap regimes, some pitfalls remain. Here, we present a fully flexible and adaptable implementation of the CAM for Γ-point calculations within the plane-wave pseudopotential molecular dynamics package CPMD and explore how customized xc functionals can improve the optical spectra of some notorious cases. We find that results obtained using plane waves agree well with those from all-electron calculations employing atom-centered bases, and that it is possible to construct a new Coulomb-attenuated xc functional based on simple considerations. We show that such a functional is able to outperform CAM-B3LYP in some cases, while retaining similar accuracy in systems where CAM-B3LYP performs well.
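
    For reference, the Coulomb attenuation referred to here is the standard error-function partition of the two-electron operator, written below in LaTeX notation; the parameter values in the comments are the usual CAM-B3LYP defaults, and a fully flexible implementation allows them to be varied.

      % Error-function partition of the Coulomb operator used in the CAM scheme:
      \frac{1}{r_{12}} =
        \frac{1 - \left[\alpha + \beta\,\mathrm{erf}(\mu r_{12})\right]}{r_{12}}
        + \frac{\alpha + \beta\,\mathrm{erf}(\mu r_{12})}{r_{12}}
      % The first (short-range) term is treated with semilocal DFT exchange, the second
      % (long-range) term with exact exchange. CAM-B3LYP uses alpha = 0.19, beta = 0.46
      % and mu = 0.33 bohr^{-1}.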

  10. Predictive models in cancer management: A guide for clinicians.

    PubMed

    Kazem, Mohammed Ali

    2017-04-01

    Predictive tools in cancer management are used to predict different outcomes, including survival probability or risk of recurrence. The uptake of these tools by clinicians involved in cancer management has not been as common as that of other clinical tools, which may be due to the complexity of some of these tools or a lack of understanding of how they can aid decision-making in particular clinical situations. The aim of this article is to improve clinicians' knowledge and understanding of predictive tools used in cancer management, including how they are built, how they can be applied to medical practice, and what their limitations may be. A literature review was conducted to investigate the role of predictive tools in cancer management. All predictive models share similar characteristics, but depending on the type of tool, its ability to predict an outcome will differ. Each type has its own pros and cons, and its generalisability will depend on the cohort used to build the tool. These factors will affect the clinician's decision whether to apply the model to their cohort or not. Before a model is used in clinical practice, it is important to appreciate how the model is constructed, what its use may add over and above traditional decision-making tools, and what problems or limitations may be associated with it. Understanding all the above is an important step for any clinician who wants to decide whether or not to use predictive tools in their practice. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  11. Pediatric Eating Assessment Tool-10 as an indicator to predict aspiration in children with esophageal atresia.

    PubMed

    Soyer, Tutku; Yalcin, Sule; Arslan, Selen Serel; Demir, Numan; Tanyel, Feridun Cahit

    2017-10-01

    Airway aspiration is a common problem in children with esophageal atresia (EA). The Pediatric Eating Assessment Tool-10 (pEAT-10) is a self-administered questionnaire to evaluate dysphagia symptoms in children. A prospective study was performed to evaluate the validity of pEAT-10 for predicting aspiration in children with EA. Patients with EA were evaluated for age, sex, type of atresia, presence of associated anomalies, type of esophageal repair, time of definitive treatment, and the beginning of oral feeding. The penetration-aspiration score (PAS) was evaluated with videofluoroscopy (VFS), and parents were surveyed with the pEAT-10, dysphagia score (DS) and functional oral intake scale (FOIS). PAS scores greater than 7 were considered to indicate a risk of aspiration. EAT-10 values greater than 3 were considered abnormal. Higher DS scores indicate dysphagia, whereas higher FOIS scores indicate better feeding abilities. Forty patients were included. Children with PAS greater than 7 constituted the PAS+ group, and those with scores less than 7 constituted the PAS- group. Demographic features and results of surgical treatments showed no difference between groups (p>0.05). The median PAS, pEAT-10 and DS scores were significantly higher in the PAS+ group than in the PAS- group (p<0.05). The sensitivity and specificity of pEAT-10 to predict aspiration were 88% and 77%, and the positive and negative predictive values were 22% and 11%, respectively. Type-C cases had better pEAT-10 and FOIS scores than type-A cases, and both scores were statistically more reliable in primary repair than delayed repair (p<0.05). Among the postoperative complications, only leakage had an impact on DS, pEAT-10, PAS and FOIS scores (p<0.05). The pEAT-10 is a valid, simple and reliable tool to predict aspiration in children. Patients with higher pEAT-10 scores should undergo detailed evaluation of deglutitive functions and assessment of aspiration risk to develop safer feeding strategies. Level II (Development of diagnostic criteria in a consecutive series of patients and a universally applied "gold standard"). Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Patient-Reported Outcomes After Radiation Therapy in Men With Prostate Cancer: A Systematic Review of Prognostic Tool Accuracy and Validity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Callaghan, Michael E., E-mail: elspeth.raymond@health.sa.gov.au; Freemasons Foundation Centre for Men's Health, University of Adelaide; Urology Unit, Repatriation General Hospital, SA Health, Flinders Centre for Innovation in Cancer

    Purpose: To identify, through a systematic review, all validated tools used for the prediction of patient-reported outcome measures (PROMs) in patients being treated with radiation therapy for prostate cancer, and provide a comparative summary of accuracy and generalizability. Methods and Materials: PubMed and EMBASE were searched from July 2007. Title/abstract screening, full text review, and critical appraisal were undertaken by 2 reviewers, whereas data extraction was performed by a single reviewer. Eligible articles had to provide a summary measure of accuracy and undertake internal or external validation. Tools were recommended for clinical implementation if they had been externally validated and found to have accuracy ≥70%. Results: The search strategy identified 3839 potential studies, of which 236 progressed to full text review and 22 were included. From these studies, 50 tools predicted gastrointestinal/rectal symptoms, 29 tools predicted genitourinary symptoms, 4 tools predicted erectile dysfunction, and no tools predicted quality of life. For patients treated with external beam radiation therapy, 3 tools could be recommended for the prediction of rectal toxicity, gastrointestinal toxicity, and erectile dysfunction. For patients treated with brachytherapy, 2 tools could be recommended for the prediction of urinary retention and erectile dysfunction. Conclusions: A large number of tools for the prediction of PROMs in prostate cancer patients treated with radiation therapy have been developed. Only a small minority are accurate and have been shown to be generalizable through external validation. This review provides an accessible catalogue of tools that are ready for clinical implementation as well as which should be prioritized for validation.

  13. [Bioinformatics analysis of mosquito densovirus nonstructural protein NS1].

    PubMed

    Dong, Yun-qiao; Ma, Wen-li; Gu, Jin-bao; Zheng, Wen-ling

    2009-12-01

    To analyze and predict the structure and function of mosquito densovirus (MDV) nonstructural protein 1 (NS1). Different bioinformatics software tools (the ExPASy ProtParam tool, ClustalX 1.83, BioEdit, MEGA 3.1, ScanProsite, and MotifScan) were used to comparatively analyze and predict the physicochemical parameters, homology, evolutionary relationships, secondary structure and main functional motifs of NS1. The MDV NS1 protein is an unstable hydrophilic protein whose amino acid sequence is highly conserved, with a relatively close evolutionary distance to infectious hypodermal and hematopoietic necrosis virus (IHHNV). MDV NS1 has a specific domain of the superfamily 3 helicase of small DNA viruses. This domain contains the NTP-binding region with a metal ion-dependent ATPase activity. A rolling-circle replication (RCR) initiation domain was found near the N terminus of this protein. This protein has the biological function of a single-stranded DNA nicking enzyme. The bioinformatics prediction results suggest that the MDV NS1 protein plays a key role in viral replication, packaging, and other stages of the viral life cycle.

  14. News from the protein mutability landscape.

    PubMed

    Hecht, Maximilian; Bromberg, Yana; Rost, Burkhard

    2013-11-01

    Some mutations of protein residues matter more than others, and these are often conserved evolutionarily. The explosion of deep sequencing and genotyping increasingly requires the distinction between effect and neutral variants. The simplest approach predicts all mutations of conserved residues to have an effect; however, this works poorly, at best. Many computational tools that are optimized to predict the impact of point mutations provide more detail. Here, we expand the perspective from the view of single variants to the level of sketching the entire mutability landscape. This landscape is defined by the impact of substituting every residue at each position in a protein by each of the 19 non-native amino acids. We review some of the powerful conclusions about protein function, stability and their robustness to mutation that can be drawn from such an analysis. Large-scale experimental and computational mutagenesis experiments are increasingly furthering our understanding of protein function and of the genotype-phenotype associations. We also discuss how these can be used to improve predictions of protein function and pathogenicity of missense variants. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Protein loop modeling using a new hybrid energy function and its application to modeling in inaccurate structural environments.

    PubMed

    Park, Hahnbeom; Lee, Gyu Rie; Heo, Lim; Seok, Chaok

    2014-01-01

    Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.

  16. Longitudinal Aerodynamic Modeling of the Adaptive Compliant Trailing Edge Flaps on a GIII Airplane and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.

    2016-01-01

    A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.

  17. Prediction of cloud condensation nuclei activity for organic compounds using functional group contribution methods

    DOE PAGES

    Petters, M. D.; Kreidenweis, S. M.; Ziemann, P. J.

    2016-01-19

    A wealth of recent laboratory and field experiments demonstrate that organic aerosol composition evolves with time in the atmosphere, leading to changes in the influence of the organic fraction on cloud condensation nuclei (CCN) spectra. There is a need for tools that can realistically represent the evolution of CCN activity to better predict indirect effects of organic aerosol on clouds and climate. This work describes a model to predict the CCN activity of organic compounds from functional group composition. Following previous methods in the literature, we test the ability of semi-empirical group contribution methods in Köhler theory to predict the effective hygroscopicity parameter, kappa. However, in our approach we also account for liquid–liquid phase boundaries to simulate phase-limited activation behavior. Model evaluation against a selected database of published laboratory measurements demonstrates that kappa can be predicted within a factor of 2. Simulation of homologous series is used to identify the relative effectiveness of different functional groups in increasing the CCN activity of weakly functionalized organic compounds. Hydroxyl, carboxyl, aldehyde, hydroperoxide, carbonyl, and ether moieties promote CCN activity while methylene and nitrate moieties inhibit CCN activity. Furthermore, the model can be incorporated into scale-bridging test beds such as the Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) to evaluate the evolution of kappa for a complex mix of organic compounds and to develop suitable parameterizations of CCN evolution for larger-scale models.
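
    The hygroscopicity parameter kappa predicted here is defined through the single-parameter Koehler relation of Petters and Kreidenweis, reproduced below in LaTeX notation as background for the group-contribution predictions (the standard form, not a result of this work).

      % kappa-Koehler relation: saturation ratio over a solution droplet of diameter D
      % grown from a dry particle of diameter D_d with hygroscopicity kappa.
      S(D) = \frac{D^{3} - D_{d}^{3}}{D^{3} - D_{d}^{3}\,(1 - \kappa)}
             \exp\!\left(\frac{4\,\sigma_{s/a}\,M_{w}}{R\,T\,\rho_{w}\,D}\right)
      % sigma_{s/a}: surface tension of the solution/air interface; M_w, rho_w: molar mass
      % and density of water; R: gas constant; T: temperature. The critical supersaturation
      % for CCN activation corresponds to the maximum of S(D).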

  18. DWARF – a data warehouse system for analyzing protein families

    PubMed Central

    Fischer, Markus; Thai, Quan K; Grieb, Melanie; Pleiss, Jürgen

    2006-01-01

    Background The emerging field of integrative bioinformatics provides the tools to organize and systematically analyze vast amounts of highly diverse biological data and thus allows a novel understanding of complex biological systems to be gained. The data warehouse DWARF applies integrative bioinformatics approaches to the analysis of large protein families. Description The data warehouse system DWARF integrates data on sequence, structure, and functional annotation for protein fold families. The underlying relational data model consists of three major sections representing entities related to the protein (biochemical function, source organism, classification to homologous families and superfamilies), the protein sequence (position-specific annotation, mutant information), and the protein structure (secondary structure information, superimposed tertiary structure). Tools for extracting, transforming and loading data from publicly available resources (ExPDB, GenBank, DSSP) are provided to populate the database. The data can be accessed by an interface for searching and browsing, and by analysis tools that operate on annotation, sequence, or structure. We applied DWARF to the family of α/β-hydrolases to host the Lipase Engineering database. Release 2.3 contains 6138 sequences and 167 experimentally determined protein structures, which are assigned to 37 superfamilies and 103 homologous families. Conclusion DWARF has been designed for constructing databases of large structurally related protein families and for evaluating their sequence-structure-function relationships by a systematic analysis of sequence, structure and functional annotation. It has been applied to predict biochemical properties from sequence, and serves as a valuable tool for protein engineering. PMID:17094801

  19. HPA axis hyperactivity as suicide predictor in elderly mood disorder inpatients.

    PubMed

    Jokinen, Jussi; Nordström, Peter

    2008-11-01

    Dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis function is associated with suicidal behaviour and age-associated alterations in HPA axis functioning may render elderly individuals more susceptible to HPA dysregulation related to mood disorders. Research on HPA axis function in suicide prediction in elderly mood disorder patients is sparse. The study sample consisted of 99 depressed elderly inpatients 65 years of age or older admitted to the department of Psychiatry at the Karolinska University Hospital between 1980 and 2000. The hypothesis was that elderly mood disorder inpatients who fail to suppress cortisol in the dexamethasone suppression test (DST) are at higher risk of suicide. The DST non-suppression distinguished between suicides and survivors in elderly depressed inpatients and the suicide attempt at the index episode was a strong predictor for suicide. Additionally, the DST non-suppression showed higher specificity and predictive value in the suicide attempter group. Due to age-associated alterations in HPA axis functioning, the optimal cut-off for DST non-suppression in suicide prediction may be higher in elderly mood disorder inpatients. These data demonstrate the importance of attempted suicide and DST non-suppression as predictors of suicide risk in late-life depression and suggest the use for neuroendocrine testing of HPA axis functioning as a complementary tool in suicide prevention.

  20. Executive Functions Predict the Success of Top-Soccer Players

    PubMed Central

    Vestberg, Torbjörn; Gustafson, Roland; Maurex, Liselotte; Ingvar, Martin; Petrovic, Predrag

    2012-01-01

    While the importance of physical abilities and motor coordination is non-contested in sport, more focus has recently been turned toward cognitive processes important for different sports. However, this line of studies has often investigated sport-specific cognitive traits, while few studies have focused on general cognitive traits. We explored if measures of general executive functions can predict the success of a soccer player. The present study used standardized neuropsychological assessment tools assessing players' general executive functions including on-line multi-processing such as creativity, response inhibition, and cognitive flexibility. In a first cross-sectional part of the study we compared the results between High Division players (HD), Lower Division players (LD) and a standardized norm group. The result shows that both HD and LD players had significantly better measures of executive functions in comparison to the norm group for both men and women. Moreover, the HD players outperformed the LD players in these tests. In the second prospective part of the study, a partial correlation test showed a significant correlation between the result from the executive test and the numbers of goals and assists the players had scored two seasons later. The results from this study strongly suggest that results in cognitive function tests predict the success of ball sport players. PMID:22496850

  1. Learning from data to design functional materials without inversion symmetry

    PubMed Central

    Balachandran, Prasanna V.; Young, Joshua; Lookman, Turab; Rondinelli, James M.

    2017-01-01

    Accelerating the search for functional materials is a challenging problem. Here we develop an informatics-guided ab initio approach to accelerate the design and discovery of noncentrosymmetric materials. The workflow integrates group theory, informatics and density-functional theory to uncover design guidelines for predicting noncentrosymmetric compounds, which we apply to layered Ruddlesden-Popper oxides. Group theory identifies how configurations of oxygen octahedral rotation patterns, ordered cation arrangements and their interplay break inversion symmetry, while informatics tools learn from available data to select candidate compositions that fulfil the group-theoretical postulates. Our key outcome is the identification of 242 compositions after screening ∼3,200 that show potential for noncentrosymmetric structures, a 25-fold increase in the projected number of known noncentrosymmetric Ruddlesden-Popper oxides. We validate our predictions for 19 compounds using phonon calculations, among which 17 have noncentrosymmetric ground states including two potential multiferroics. Our approach enables rational design of materials with targeted crystal symmetries and functionalities. PMID:28211456

  2. Microbial genome analysis: the COG approach.

    PubMed

    Galperin, Michael Y; Kristensen, David M; Makarova, Kira S; Wolf, Yuri I; Koonin, Eugene V

    2017-09-14

    For the past 20 years, the Clusters of Orthologous Genes (COG) database has been a popular tool for microbial genome annotation and comparative genomics. Initially created for the purpose of evolutionary classification of protein families, the COGs have been used, apart from straightforward functional annotation of sequenced genomes, for such tasks as (i) unification of genome annotation in groups of related organisms; (ii) identification of missing and/or undetected genes in complete microbial genomes; (iii) analysis of genomic neighborhoods, in many cases allowing prediction of novel functional systems; (iv) analysis of metabolic pathways and prediction of alternative forms of enzymes; (v) comparison of organisms by COG functional categories; and (vi) prioritization of targets for structural and functional characterization. Here we review the principles of the COG approach and discuss its key advantages and drawbacks in microbial genome analysis. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  3. Stochastic differential equations as a tool to regularize the parameter estimation problem for continuous time dynamical systems given discrete time measurements.

    PubMed

    Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats

    2014-05-01

    In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the predictions to stay close to the data even when the model parameters are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
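
    For readers who want to see the shape of this approach, the following is a minimal Python sketch under simplifying assumptions: the deterministic model is replaced by a state-space model with process noise, and parameters are scored by the Kalman-filter innovation likelihood instead of direct least squares. A scalar linear decay model is used so that a plain Kalman filter is exact; the record's actual method applies an extended Kalman filter with sensitivity equations to nonlinear models such as FitzHugh-Nagumo, which is not reproduced here.

```python
import numpy as np

# Toy linear analogue of the idea in this record: allow process noise and
# score parameters by the Kalman-filter innovation likelihood, which keeps
# the predicted state close to the data even when the rate parameter is off.
rng = np.random.default_rng(0)
dt, n = 0.1, 200
true_theta, q, r = 0.8, 0.05, 0.1            # decay rate, process var, obs var

# simulate x_{k+1} = x_k - theta*x_k*dt + process noise, y_k = x_k + obs noise
x, xs = 2.0, []
for _ in range(n):
    x = x - true_theta * x * dt + np.sqrt(q * dt) * rng.normal()
    xs.append(x)
y = np.array(xs) + np.sqrt(r) * rng.normal(size=n)

def kf_negloglik(theta):
    """Negative innovation log-likelihood of the discretised model."""
    a = 1.0 - theta * dt                     # linearised transition
    m, p, nll = 2.0, 1.0, 0.0
    for yk in y:
        m, p = a * m, a * a * p + q * dt     # predict
        s = p + r                            # innovation variance
        nll += 0.5 * (np.log(2 * np.pi * s) + (yk - m) ** 2 / s)
        g = p / s                            # Kalman gain, then update
        m, p = m + g * (yk - m), (1 - g) * p
    return nll

thetas = np.linspace(0.1, 2.0, 60)
best = thetas[np.argmin([kf_negloglik(t) for t in thetas])]
print(f"estimated decay rate ~ {best:.2f} (true {true_theta})")
```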

  4. Clinical utility of the AlphaFIM® instrument in stroke rehabilitation.

    PubMed

    Lo, Alexander; Tahair, Nicola; Sharp, Shelley; Bayley, Mark T

    2012-02-01

    The AlphaFIM instrument is an assessment tool designed to facilitate discharge planning of stroke patients from acute care by extrapolating overall functional status from performance on six key Functional Independence Measure (FIM) instrument items. To determine whether the acute care AlphaFIM rating is correlated with stroke rehabilitation outcomes. In this prospective observational study, data were analyzed from 891 patients referred for inpatient stroke rehabilitation through an Internet-based referral system. Simple linear and stepwise regression models determined correlations between rehabilitation-ready AlphaFIM rating and rehabilitation outcomes (admission and discharge FIM ratings, FIM gain, FIM efficiency, and length of stay). Covariates including demographic data, stroke characteristics, medical history, cognitive deficits, and activity tolerance were included in the stepwise regressions. The AlphaFIM instrument was significant in predicting admission and discharge FIM ratings at rehabilitation (adjusted R² 0.40 and 0.28, respectively; P < 0.0001) and was weakly correlated with FIM gain and length of stay (adjusted R² 0.04 and 0.09, respectively; P < 0.0001), but not FIM efficiency. AlphaFIM rating was inversely related to FIM gain. Age, bowel incontinence, left hemiparesis, and previous infarcts were negative predictors of discharge FIM rating on stepwise regression. Intact executive function and physical activity tolerance of 30 to 60 mins were predictors of FIM gain. The AlphaFIM instrument is a valuable tool for triaging stroke patients from acute care to rehabilitation and predicts functional status at discharge from rehabilitation. Patients with low AlphaFIM ratings have the potential to make significant functional gains and should not be denied admission to inpatient rehabilitation programs.

  5. Prefrontal Reactivity to Social Signals of Threat as a Predictor of Treatment Response in Anxious Youth

    PubMed Central

    Kujawa, Autumn; Swain, James E; Hanna, Gregory L; Koschmann, Elizabeth; Simpson, David; Connolly, Sucheta; Fitzgerald, Kate D; Monk, Christopher S; Phan, K Luan

    2016-01-01

    Neuroimaging has shown promise as a tool to predict likelihood of treatment response in adult anxiety disorders, with potential implications for clinical decision-making. Despite the relatively high prevalence and emergence of anxiety disorders in youth, very little work has evaluated neural predictors of response to treatment. The goal of the current study was to examine brain function during emotional face processing as a predictor of response to treatment in children and adolescents (age 7–19 years; N=41) with generalized, social, and/or separation anxiety disorder. Prior to beginning treatment with the selective serotonin reuptake inhibitor (SSRI) sertraline or cognitive behavior therapy (CBT), participants completed an emotional faces matching task during functional magnetic resonance imaging (fMRI). Whole brain responses to threatening (ie, angry and fearful) and happy faces were examined as predictors of change in anxiety severity following treatment. Greater activation in inferior and superior frontal gyri, including dorsolateral prefrontal cortex and ventrolateral prefrontal cortex, as well as precentral/postcentral gyri during processing of threatening faces predicted greater response to CBT and SSRI treatment. For processing of happy faces, activation in postcentral gyrus was a significant predictor of treatment response. Post-hoc analyses indicated that effects were not significantly moderated by type of treatment. Findings suggest that greater activation in prefrontal regions involved in appraising and regulating responses to social signals of threat predict better response to SSRI and CBT treatment in anxious youth and that neuroimaging may be a useful tool for predicting how youth will respond to treatment. PMID:26708107

  6. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, enabling both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap existing between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains internally close collaboration between experimental and computational research, enabling a permanent cycling of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods and a novel platform for protein function analysis, FuncNet.

  7. Priorities for future research into asthma diagnostic tools: A PAN-EU consensus exercise from the European asthma research innovation partnership (EARIP).

    PubMed

    Garcia-Marcos, L; Edwards, J; Kennington, E; Aurora, P; Baraldi, E; Carraro, S; Gappa, M; Louis, R; Moreno-Galdo, A; Peroni, D G; Pijnenburg, M; Priftis, K N; Sanchez-Solis, M; Schuster, A; Walker, S

    2018-02-01

    The diagnosis of asthma is currently based on clinical history, physical examination and lung function, and to date, there are no accurate objective tests either to confirm the diagnosis or to discriminate between different types of asthma. This consensus exercise reviews the state of the art in asthma diagnosis to identify opportunities for future investment based on the likelihood of their successful development, potential for widespread adoption and their perceived impact on asthma patients. Using a two-stage e-Delphi process and a summarizing workshop, a group of European asthma experts including health professionals, researchers, people with asthma and industry representatives ranked the potential impact of research investment in each technique or tool for asthma diagnosis and monitoring. After a systematic review of the literature, 21 statements were extracted and were subjected to the two-stage Delphi process. Eleven statements were scored 3 or more and were further discussed and ranked in a face-to-face workshop. The three most important diagnostic/predictive tools ranked were as follows: "New biological markers of asthma (eg genomics, proteomics and metabolomics) as a tool for diagnosis and/or monitoring," "Prediction of future asthma in preschool children with reasonable accuracy" and "Tools to measure volatile organic compounds (VOCs) in exhaled breath." © 2018 John Wiley & Sons Ltd.

  8. Prediction of Erectile Function Following Treatment for Prostate Cancer

    PubMed Central

    Alemozaffar, Mehrdad; Regan, Meredith M.; Cooperberg, Matthew R.; Wei, John T.; Michalski, Jeff M.; Sandler, Howard M.; Hembroff, Larry; Sadetsky, Natalia; Saigal, Christopher S.; Litwin, Mark S.; Klein, Eric; Kibel, Adam S.; Hamstra, Daniel A.; Pisters, Louis L.; Kuban, Deborah A.; Kaplan, Irving D.; Wood, David P.; Ciezki, Jay; Dunn, Rodney L.; Carroll, Peter R.; Sanda, Martin G.

    2013-01-01

    Context Sexual function is the health-related quality of life (HRQOL) domain most commonly impaired after prostate cancer treatment; however, validated tools to enable personalized prediction of erectile dysfunction after prostate cancer treatment are lacking. Objective To predict long-term erectile function following prostate cancer treatment based on individual patient and treatment characteristics. Design Pretreatment patient characteristics, sexual HRQOL, and treatment details measured in a longitudinal academic multicenter cohort (Prostate Cancer Outcomes and Satisfaction With Treatment Quality Assessment; enrolled from 2003 through 2006), were used to develop models predicting erectile function 2 years after treatment. A community-based cohort (community-based Cancer of the Prostate Strategic Urologic Research Endeavor [CaPSURE]; enrolled 1995 through 2007) externally validated model performance. Patients in US academic and community-based practices whose HRQOL was measured pretreatment (N = 1201) underwent follow-up after prostatectomy, external radiotherapy, or brachytherapy for prostate cancer. Sexual outcomes among men completing 2 years’ follow-up (n = 1027) were used to develop models predicting erectile function that were externally validated among 1913 patients in a community-based cohort. Main Outcome Measures Patient-reported functional erections suitable for intercourse 2 years following prostate cancer treatment. Results Two years after prostate cancer treatment, 368 (37% [95% CI, 34%–40%]) of all patients and 335 (48% [95% CI, 45%–52%]) of those with functional erections prior to treatment reported functional erections; 531 (53% [95% CI, 50%–56%]) of patients without penile prostheses reported use of medications or other devices for erectile dysfunction. Pretreatment sexual HRQOL score, age, serum prostate-specific antigen level, race/ethnicity, body mass index, and intended treatment details were associated with functional erections 2 years after treatment. Multivariable logistic regression models predicting erectile function estimated 2-year function probabilities from as low as 10% or less to as high as 70% or greater depending on the individual’s pretreatment patient characteristics and treatment details. The models performed well in predicting erections in external validation among CaPSURE cohort patients (areas under the receiver operating characteristic curve, 0.77 [95% CI, 0.74–0.80] for prostatectomy; 0.87 [95% CI, 0.80–0.94] for external radiotherapy; and 0.90 [95% CI, 0.85–0.95] for brachytherapy). Conclusion Stratification by pretreatment patient characteristics and treatment details enables prediction of erectile function 2 years after prostatectomy, external radiotherapy, or brachytherapy for prostate cancer. PMID:21934053
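
    As a rough illustration of the modelling pattern described in this record (not the published model or its coefficients), the sketch below fits a multivariable logistic regression of a binary 2-year outcome on a few pretreatment covariates in one synthetic cohort and reports the external-validation AUC in another. All variable names and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for pretreatment covariates; not the study's data.
rng = np.random.default_rng(1)

def make_cohort(n):
    X = np.column_stack([
        rng.normal(60, 25, n),      # pretreatment sexual HRQOL score (0-100)
        rng.normal(65, 8, n),       # age, years
        rng.normal(8, 4, n),        # PSA, ng/mL
        rng.normal(28, 4, n),       # body mass index
    ])
    logit = -4 + 0.05 * X[:, 0] - 0.04 * (X[:, 1] - 65) - 0.03 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y.astype(int)

X_dev, y_dev = make_cohort(1000)        # development cohort
X_val, y_val = make_cohort(1900)        # external validation cohort

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"external-validation AUC ~ {auc:.2f}")
```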

  9. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.

  10. Evolution of a detailed physiological model to simulate the gastrointestinal transit and absorption process in humans, part II: extension to describe performance of solid dosage forms.

    PubMed

    Thelen, Kirstin; Coboeken, Katrin; Willmann, Stefan; Dressman, Jennifer B; Lippert, Jörg

    2012-03-01

    The physiological absorption model presented in part I of this work is now extended to account for dosage-form-dependent gastrointestinal (GI) transit as well as disintegration and dissolution processes of various immediate-release and modified-release dosage forms. Empirical functions of the Weibull type were fitted to experimental in vitro dissolution profiles of solid dosage forms for eight test compounds (aciclovir, caffeine, cimetidine, diclofenac, furosemide, paracetamol, phenobarbital, and theophylline). The Weibull functions were then implemented into the model to predict mean plasma concentration-time profiles of the various dosage forms. On the basis of these dissolution functions, pharmacokinetics (PK) of six model drugs was predicted well. In the case of diclofenac, deviations between predicted and observed plasma concentrations were attributable to the large variability in gastric emptying time of the enteric-coated tablets. Likewise, oral PK of furosemide was found to be predominantly governed by the gastric emptying patterns. It is concluded that the revised model for GI transit and absorption was successfully integrated with dissolution functions of the Weibull type, enabling prediction of in vivo PK profiles from in vitro dissolution data. It facilitates a comparative analysis of the parameters contributing to oral drug absorption and is thus a powerful tool for formulation design. Copyright © 2011 Wiley Periodicals, Inc.
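
    The Weibull-type dissolution function referred to above can be written as F(t) = Fmax * (1 - exp(-(t/td)^b)). The short sketch below, using made-up in vitro dissolution data, shows how such a function might be fitted before being handed to an absorption model; the parameter names and values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull-type dissolution profile: fraction dissolved as a function of time.
def weibull_dissolution(t, fmax, td, b):
    return fmax * (1.0 - np.exp(-(t / td) ** b))

# invented in vitro dissolution data (minutes, fraction dissolved)
t_min = np.array([5, 10, 15, 30, 45, 60, 90, 120], dtype=float)
frac = np.array([0.12, 0.27, 0.41, 0.68, 0.82, 0.90, 0.96, 0.98])

(fmax, td, b), _ = curve_fit(weibull_dissolution, t_min, frac,
                             p0=[0.95, 30.0, 1.0], bounds=(0, [1.0, 300, 5]))
print(f"Fmax={fmax:.2f}, td={td:.1f} min, shape b={b:.2f}")
print("predicted fraction dissolved at 75 min:",
      round(weibull_dissolution(75, fmax, td, b), 2))
```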

  11. Predicting the velocity and azimuth of fragments generated by the range destruction or random failure of rocket casings and tankage

    NASA Technical Reports Server (NTRS)

    Eck, Marshall; Mukunda, Meera

    1988-01-01

    A calculational method is described which provides a powerful tool for predicting solid rocket motor (SRM) casing and liquid rocket tankage fragmentation response. The approach properly partitions the available impulse to each major system-mass component. It uses the Pisces code developed by Physics International to couple the forces generated by an Eulerian-modeled gas flow field to a Lagrangian-modeled fuel and casing system. The details of the predictive analytical modeling process and the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed. Methods for applying similar modeling techniques to liquid-tankage-overpressure failures are also discussed. Good agreement between predictions and observations is obtained for five specific events.

  12. Serum creatinine role in predicting outcome after cardiac surgery beyond acute kidney injury

    PubMed Central

    Najafi, Mahdi

    2014-01-01

    Serum creatinine is still the most important determinant in the assessment of perioperative renal function and in the prediction of adverse outcome in cardiac surgery. Many biomarkers have been studied to date; still, there is no surrogate for serum creatinine measurement in clinical practice because it is feasible and inexpensive. High levels of serum creatinine and its equivalents have been the most important preoperative risk factor for postoperative renal injury. Moreover, creatinine is a mainstay of risk prediction models, and risk factor reduction has enhanced its importance in outcome prediction. The future perspective is the development of new definitions and novel tools for the early diagnosis of acute kidney injury, largely based on serum creatinine and a panel of novel biomarkers. PMID:25276301

  13. Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions

    PubMed Central

    Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu

    2014-01-01

    Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapsed imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics. PMID:24769917

  14. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    PubMed

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

    Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological study; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As various information and data about lncRNAs have been preserved by previous studies, it is appealing to develop novel methods to identify lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under the curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting lncRNAs, and it is quite useful for current lncRNA study.
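
    The following is a hedged sketch of this general classification setup (an SVM separating coding transcripts from lncRNAs using numeric features), with synthetic feature values standing in for lncRScan-SVM's actual GENCODE-derived feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef, roc_auc_score

# Invented features loosely inspired by structure/sequence properties
# (longest ORF length, transcript length, exon count); not the real data.
rng = np.random.default_rng(2)
n = 600
orf_len  = np.concatenate([rng.normal(900, 250, n), rng.normal(250, 120, n)])
tx_len   = np.concatenate([rng.normal(2500, 800, n), rng.normal(1500, 700, n)])
exon_cnt = np.concatenate([rng.poisson(9, n), rng.poisson(3, n)])
X = np.column_stack([orf_len, tx_len, exon_cnt])
y = np.array([1] * n + [0] * n)          # 1 = coding transcript, 0 = lncRNA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, prob), 3),
      "MCC:", round(matthews_corrcoef(y_te, clf.predict(X_te)), 3))
```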

  15. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated using a Navier-Stokes flow solver and previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting mass flow and current to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data is used to assess the predictive capability of the ADSI code.
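
    A minimal sketch of the database-interpolation idea, assuming a small invented grid of heating rates over mass flow rate and arc current; the real ADSI databases, formats, and best-practice CFD solutions are not reproduced.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grid of pre-computed stagnation-point heating rates indexed by
# arc-jet mass flow rate and arc current (values are invented).
mass_flow = np.array([0.1, 0.2, 0.4, 0.8])            # kg/s
arc_current = np.array([1000., 2000., 4000., 6000.])  # A

q_dot = np.array([[ 50.,  90., 160., 220.],           # W/cm^2
                  [ 70., 120., 210., 290.],
                  [ 95., 160., 280., 380.],
                  [130., 210., 360., 480.]])

interp = RegularGridInterpolator((mass_flow, arc_current), q_dot)
print("q_dot at 0.3 kg/s, 3500 A ~",
      round(interp([[0.3, 3500.]])[0], 1), "W/cm^2")

# The inverse problem (find mass flow and current giving a target q_dot)
# can be posed as root finding or optimization over the same surface.
```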

  16. Automatic assessment of functional health decline in older adults based on smart home data.

    PubMed

    Alberdi Aramendi, Ane; Weakley, Alyssa; Aztiria Goenaga, Asier; Schmitter-Edgecombe, Maureen; Cook, Diane J

    2018-05-01

    In the context of an aging population, tools to help the elderly live independently must be developed. The goal of this paper is to evaluate the possibility of using unobtrusively collected, activity-aware smart home behavioral data to automatically detect one of the most common consequences of aging: functional health decline. After gathering the longitudinal smart home data of 29 older adults for an average of >2 years, we automatically labeled the data with corresponding activity classes and extracted time-series statistics containing 10 behavioral features. Using these data, we created regression models to predict absolute and standardized functional health scores, as well as classification models to detect reliable absolute change and positive and negative fluctuations in everyday functioning. Functional health was assessed every six months by means of the Instrumental Activities of Daily Living-Compensation (IADL-C) scale. Results show that the total IADL-C score and subscores can be predicted by means of activity-aware smart home data, as can a reliable change in these scores. Positive and negative fluctuations in everyday functioning are harder to detect using in-home behavioral data, yet changes in social skills have been shown to be predictable. Future work must focus on improving the sensitivity of the presented models and performing an in-depth feature selection to improve overall accuracy. Copyright © 2018 Elsevier Inc. All rights reserved.
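
    As a rough sketch of this modelling pattern (not the study's feature extraction or models), the code below regresses a synthetic IADL-C-like score on a few invented behavioral features and reports cross-validated R².

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Invented behavioral features for six-month observation windows; the real
# study derives its features from activity-labelled smart-home sensor data.
rng = np.random.default_rng(3)
n = 120
X = np.column_stack([
    rng.normal(7.0, 1.0, n),              # mean sleep duration (h)
    rng.normal(3.0, 1.2, n),              # cooking activities per day
    rng.normal(1.5, 0.8, n),              # outings per day
    rng.normal(45.0, 15.0, n),            # variability of inactive time (min)
])
iadl = 60 + 2.5 * X[:, 1] + 3.0 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(0, 3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, iadl, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(r2.mean(), 2))
```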

  17. Predicting Ga and Cu Profiles in Co-Evaporated Cu(In,Ga)Se2 Using Modified Diffusion Equations and a Spreadsheet

    DOE PAGES

    Repins, Ingrid L.; Harvey, Steve; Bowers, Karen; ...

    2017-05-15

    Cu(In,Ga)Se2 (CIGS) photovoltaic absorbers frequently develop Ga gradients during growth. These gradients vary as a function of growth recipe, and are important to device performance. Prediction of Ga profiles using classic diffusion equations is not possible because In and Ga atoms occupy the same lattice sites and thus diffuse interdependently, and there is not yet a detailed experimental knowledge of the chemical potential as a function of composition that describes this interaction. Here, we show how diffusion equations can be modified to account for site sharing between In and Ga atoms. The analysis has been implemented in an Excel spreadsheet and outputs predicted Cu, In, and Ga profiles for entered deposition recipes. A single set of diffusion coefficients and activation energies is chosen, such that simulated elemental profiles track with published data and those from this study. Extent and limits of agreement between elemental profiles predicted from the growth recipes and the spreadsheet tool are demonstrated.

  18. Patients with mild Alzheimer's disease produced shorter outgoing saccades when reading sentences.

    PubMed

    Fernández, Gerardo; Schumacher, Marcela; Castro, Liliana; Orozco, David; Agamennoni, Osvaldo

    2015-09-30

    In the present work we analyzed forward saccades of thirty-five elderly subjects (Controls) and of thirty-five patients with mild Alzheimer's disease (AD) during the reading of regular and high-predictability sentences. While they read, their eye movements were recorded. Forward saccade amplitudes as a function of word predictability were clearly longer in Controls. Our results suggest that Controls might use stored word information to enhance their reading performance. Further, cloze predictability increased outgoing saccade amplitudes, and this increase was stronger in high-predictability sentences. In contrast, patients with mild AD showed reduced forward saccades even at early stages of the disease. This reduction might reveal impairments in brain areas such as those corresponding to working memory, memory retrieval, and semantic memory functions that are already present at early stages of AD. Our findings might be relevant for expanding the options for early detection and monitoring in the early stages of AD. Furthermore, eye movements during reading could provide a new tool for measuring a drug's impact on patients' behavior. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Predicting Ga and Cu Profiles in Co-Evaporated Cu(In,Ga)Se2 Using Modified Diffusion Equations and a Spreadsheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Repins, Ingrid L.; Harvey, Steve; Bowers, Karen

    Cu(In,Ga)Se2 (CIGS) photovoltaic absorbers frequently develop Ga gradients during growth. These gradients vary as a function of growth recipe, and are important to device performance. Prediction of Ga profiles using classic diffusion equations is not possible because In and Ga atoms occupy the same lattice sites and thus diffuse interdependently, and there is not yet a detailed experimental knowledge of the chemical potential as a function of composition that describes this interaction. Here, we show how diffusion equations can be modified to account for site sharing between In and Ga atoms. The analysis has been implemented in an Excel spreadsheet and outputs predicted Cu, In, and Ga profiles for entered deposition recipes. A single set of diffusion coefficients and activation energies is chosen, such that simulated elemental profiles track with published data and those from this study. Extent and limits of agreement between elemental profiles predicted from the growth recipes and the spreadsheet tool are demonstrated.

  20. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
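
    The workflow of emulating an expensive model with a Gaussian process and propagating parameter uncertainty through the emulator can be sketched as follows; the "model" here is a cheap stand-in function, not a nuclear density functional, and the parameter posterior is assumed rather than computed by Bayesian calibration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def expensive_model(theta):                # stand-in for a costly calculation
    return np.sin(3 * theta) + 0.5 * theta ** 2

# evaluate the model at a modest number of design points
theta_design = np.linspace(-1, 1, 12).reshape(-1, 1)
y_design = expensive_model(theta_design).ravel()

# train a Gaussian-process emulator on those runs
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True).fit(theta_design, y_design)

# assumed posterior over the parameter (e.g. from a prior calibration step)
theta_post = rng.normal(0.2, 0.15, size=2000).reshape(-1, 1)
pred = gp.predict(theta_post)              # propagate uncertainty cheaply
print(f"propagated prediction: {pred.mean():.3f} +/- {pred.std():.3f}")
```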

  1. Classroom Preschool Science Learning: The Learner, Instructional Tools, and Peer-Learning Assignments

    NASA Astrophysics Data System (ADS)

    Reuter, Jamie M.

    Recent decades have seen an increased focus on improving early science education. Goals include helping young children learn about pertinent concepts in science, and fostering early scientific reasoning and inquiry skills (e.g., NRC 2007, 2012, 2015). However, there is still much to learn about what constitutes appropriate frameworks that blend science education with developmentally appropriate learning environments. An important goal for the construction of early science education is a better understanding of appropriate learning experiences and expectations for preschool children. This dissertation examines some of these concerns by focusing on three dimensions of science learning in the preschool classroom: (1) the learner; (2) instructional tools and pedagogy; and (3) the social context of learning with peers. In terms of the learner, the dissertation examines some dimensions of preschool children's scientific reasoning skills in the context of potentially relevant, developing general reasoning abilities. As young children undergo rapid cognitive changes during the preschool years, it is important to explore how these may influence scientific thinking. Two features of cognitive functioning have been carefully studied: (1) the demonstration of an epistemic awareness through an emerging theory of mind, and (2) the rapid improvement in executive functioning capacity. Both continue to develop through childhood and adolescence, but changes in early childhood are especially striking and have been neglected as regards their potential role in scientific thinking. The question is whether such skills relate to young children's capacity for scientific thinking. Another goal was to determine whether simple physics diagrams serve as effective instructional tools in supporting preschool children's scientific thinking. Specifically, in activities involving predicting and checking in scientific contexts, the question is whether such diagrams facilitate children's ability to accurately recall initial predictions, as well as to discriminate between the outcome of a scientific manipulation and their original predictions (i.e., to determine whether one's predictions were confirmed). Finally, this dissertation also explores the social context of learning science with peers in the preschool classroom. Due to little prior research in this area, it is currently unclear whether and how preschool children may benefit from working with peers on science activities in the classroom. This work aims to examine preschoolers' collaboration on a science learning activity, as well as the developmental function of such collaborative skills over the preschool years.

  2. Decision-making tools in prostate cancer: from risk grouping to nomograms.

    PubMed

    Fontanella, Paolo; Benecchi, Luigi; Grasso, Angelica; Patel, Vipul; Albala, David; Abbou, Claude; Porpiglia, Francesco; Sandri, Marco; Rocco, Bernardo; Bianchi, Giampaolo

    2017-12-01

    Prostate cancer (PCa) is the most common solid neoplasm and the second leading cause of cancer death in men. After the Partin tables were developed, a number of predictive and prognostic tools became available for risk stratification. These tools have allowed the urologist to better characterize this disease and have led to more confident treatment decisions for patients. The purpose of this study is to critically review the decision-making tools currently available to the urologist, from the moment when PCa is first diagnosed until patients experience metastatic progression and death. A systematic and critical analysis through Medline, EMBASE, Scopus and Web of Science databases was carried out in February 2016 as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The search was conducted using the following key words: "prostate cancer," "prediction tools," "nomograms." Seventy-two studies were identified in the literature search. We summarized the results into six sections: Tools for prediction of life expectancy (before treatment), Tools for prediction of pathological stage (before treatment), Tools for prediction of survival and cancer-specific mortality (before/after treatment), Tools for prediction of biochemical recurrence (before/after treatment), Tools for prediction of metastatic progression (after treatment) and, in the last section, biomarkers and genomics. The management of PCa patients requires a tailored approach to deliver a truly personalized treatment. The currently available tools are of great help to the urologist in the decision-making process. These tests perform very well in high-grade and low-grade disease, while for intermediate-grade disease further research is needed. Newly discovered markers, genomic tests, and advances in imaging acquisition through mpMRI will help in instilling confidence that the appropriate treatments are being offered to patients with prostate cancer.

  3. Functional insights from proteome-wide structural modeling of Treponema pallidum subspecies pallidum, the causative agent of syphilis.

    PubMed

    Houston, Simon; Lithgow, Karen Vivien; Osbak, Kara Krista; Kenyon, Chris Richard; Cameron, Caroline E

    2018-05-16

    Syphilis continues to be a major global health threat with 11 million new infections each year, and a global burden of 36 million cases. The causative agent of syphilis, Treponema pallidum subspecies pallidum, is a highly virulent bacterium, however the molecular mechanisms underlying T. pallidum pathogenesis remain to be definitively identified. This is due to the fact that T. pallidum is currently uncultivatable, inherently fragile and thus difficult to work with, and phylogenetically distinct with no conventional virulence factor homologs found in other pathogens. In fact, approximately 30% of its predicted protein-coding genes have no known orthologs or assigned functions. Here we employed a structural bioinformatics approach using Phyre2-based tertiary structure modeling to improve our understanding of T. pallidum protein function on a proteome-wide scale. Phyre2-based tertiary structure modeling generated high-confidence predictions for 80% of the T. pallidum proteome (780/978 predicted proteins). Tertiary structure modeling also inferred the same function as primary structure-based annotations from genome sequencing pipelines for 525/605 proteins (87%), which represents 54% (525/978) of all T. pallidum proteins. Of the 175 T. pallidum proteins modeled with high confidence that were not assigned functions in the previously annotated published proteome, 167 (95%) were able to be assigned predicted functions. Twenty-one of the 175 hypothetical proteins modeled with high confidence were also predicted to exhibit significant structural similarity with proteins experimentally confirmed to be required for virulence in other pathogens. Phyre2-based structural modeling is a powerful bioinformatics tool that has provided insight into the potential structure and function of the majority of T. pallidum proteins and helped validate the primary structure-based annotation of more than 50% of all T. pallidum proteins with high confidence. This work represents the first T. pallidum proteome-wide structural modeling study and is one of few studies to apply this approach for the functional annotation of a whole proteome.

  4. Design Optimization Tool for Synthetic Jet Actuators Using Lumped Element Modeling

    NASA Technical Reports Server (NTRS)

    Gallas, Quentin; Sheplak, Mark; Cattafesta, Louis N., III; Gorton, Susan A. (Technical Monitor)

    2005-01-01

    The performance specifications of any actuator are quantified in terms of an exhaustive list of parameters such as bandwidth, output control authority, etc. Flow-control applications benefit from a known actuator frequency response function that relates the input voltage to the output property of interest (e.g., maximum velocity, volumetric flow rate, momentum flux, etc.). Clearly, the required performance metrics are application specific, and methods are needed to achieve the optimal design of these devices. Design and optimization studies have been conducted for piezoelectric cantilever-type flow control actuators, but the modeling issues are simpler compared to synthetic jets. Here, lumped element modeling (LEM) is combined with equivalent circuit representations to estimate the nonlinear dynamic response of a synthetic jet as a function of device dimensions, material properties, and external flow conditions. These models provide reasonable agreement between predicted and measured frequency response functions and thus are suitable for use as design tools. In this work, we have developed a Matlab-based design optimization tool for piezoelectric synthetic jet actuators based on the lumped element models mentioned above. Significant improvements were achieved by optimizing the piezoceramic diaphragm dimensions. Synthetic-jet actuators were fabricated and benchtop tested to fully document their behavior and validate a companion optimization effort. It is hoped that the tool developed from this investigation will assist in the design and deployment of these actuators.

  5. Atmospheric Delay Reduction Using KARAT for GPS Analysis and Implications for VLBI

    NASA Technical Reports Server (NTRS)

    Ichikawa, Ryuichi; Hobiger, Thomas; Koyama, Yasuhiro; Kondo, Tetsuro

    2010-01-01

    We have been developing a state-of-the-art tool to estimate atmospheric path delays by raytracing through mesoscale analysis (MANAL) data, which is operationally used for numerical weather prediction by the Japan Meteorological Agency (JMA). The tools, which we have named KAshima RAytracing Tools (KARAT), are capable of calculating total slant delays and ray-bending angles considering real atmospheric phenomena. KARAT can estimate atmospheric slant delays by an analytical 2-D ray-propagation model by Thayer and by a 3-D Eikonal solver. We compared PPP solutions using KARAT with those using the Global Mapping Function (GMF) and Vienna Mapping Function 1 (VMF1) for GPS sites of GEONET (GPS Earth Observation Network System), operated by the Geographical Survey Institute (GSI). In our comparison, 57 stations of GEONET during the year 2008 were processed. The KARAT solutions are slightly better than the solutions using VMF1 and GMF with a linear gradient model for horizontal and height positions. Our results imply that KARAT is a useful tool for efficient reduction of atmospheric path delays in radio-based space geodetic techniques such as GNSS and VLBI.

  6. Predicting Gene Structure Changes Resulting from Genetic Variants via Exon Definition Features.

    PubMed

    Majoros, William H; Holt, Carson; Campbell, Michael S; Ware, Doreen; Yandell, Mark; Reddy, Timothy E

    2018-04-25

    Genetic variation that disrupts gene function by altering gene splicing between individuals can substantially influence traits and disease. In those cases, accurately predicting the effects of genetic variation on splicing can be highly valuable for investigating the mechanisms underlying those traits and diseases. While methods have been developed to generate high quality computational predictions of gene structures in reference genomes, the same methods perform poorly when used to predict the potentially deleterious effects of genetic changes that alter gene splicing between individuals. Underlying that discrepancy in predictive ability are the common assumptions by reference gene finding algorithms that genes are conserved, well-formed, and produce functional proteins. We describe a probabilistic approach for predicting recent changes to gene structure that may or may not conserve function. The model is applicable to both coding and noncoding genes, and can be trained on existing gene annotations without requiring curated examples of aberrant splicing. We apply this model to the problem of predicting altered splicing patterns in the genomes of individual humans, and we demonstrate that performing gene-structure prediction without relying on conserved coding features is feasible. The model predicts an unexpected abundance of variants that create de novo splice sites, an observation supported by both simulations and empirical data from RNA-seq experiments. While these de novo splice variants are commonly misinterpreted by other tools as coding or noncoding variants of little or no effect, we find that in some cases they can have large effects on splicing activity and protein products, and we propose that they may commonly act as cryptic factors in disease. The software is available from geneprediction.org/SGRF. bmajoros@duke.edu. Supplementary information is available at Bioinformatics online.

  7. Modeling the Endogenous Sunlight Inactivation Rates of Laboratory Strain and Wastewater E. coli and Enterococci Using Biological Weighting Functions.

    PubMed

    Silverman, Andrea I; Nelson, Kara L

    2016-11-15

    Models that predict sunlight inactivation rates of bacteria are valuable tools for predicting the fate of pathogens in recreational waters and designing natural wastewater treatment systems to meet disinfection goals. We developed biological weighting function (BWF)-based numerical models to estimate the endogenous sunlight inactivation rates of E. coli and enterococci. BWF-based models allow the prediction of inactivation rates under a range of environmental conditions that shift the magnitude or spectral distribution of sunlight irradiance (e.g., different times, latitudes, water absorbances, depth). Separate models were developed for laboratory strain bacteria cultured in the laboratory and indigenous organisms concentrated directly from wastewater. Wastewater bacteria were found to be 5-7 times less susceptible to full-spectrum simulated sunlight than the laboratory bacteria, highlighting the importance of conducting experiments with bacteria sourced directly from wastewater. The inactivation rate models fit experimental data well and were successful in predicting the inactivation rates of wastewater E. coli and enterococci measured in clear marine water by researchers from a different laboratory. Additional research is recommended to develop strategies to account for the effects of elevated water pH on predicted inactivation rates.
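
    A biological-weighting-function prediction reduces to integrating a wavelength-dependent sensitivity function against the spectral irradiance. The sketch below does this with invented placeholder spectra; the fitted E. coli and enterococci weights from the study are not reproduced.

```python
import numpy as np

# Invented placeholder spectra for illustration only.
wavelength = np.arange(280, 501, 5)                    # nm, UVB through visible

# assumed weighting function: strong UVB sensitivity decaying with wavelength
weight = 0.05 * np.exp(-0.06 * (wavelength - 280))     # m^2 W^-1 h^-1 per nm

# assumed surface irradiance spectrum rising out of the UV cutoff
irradiance = np.clip(0.02 * (wavelength - 295), 0, None)   # W m^-2 nm^-1

# rectangle-rule integration over the 5 nm bins gives a first-order rate
k_inact = np.sum(weight * irradiance) * 5.0            # h^-1
print(f"predicted inactivation rate coefficient ~ {k_inact:.2f} per hour")

# Water-column attenuation or a different solar spectrum simply rescales the
# irradiance term before the same integration.
```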

  8. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determination of a suitable optimization method and of the necessary optimization duration becomes critical when a high number of combinations of adjustable parameters must be evaluated or when dynamic models are large. This task is complex due to the variety of optimization methods, software tools and nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum or values close to it, as well as the computational time necessary to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available on www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
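
    The convergence-curve analysis that such a tool performs can be sketched as follows: repeat a stochastic optimization several times, record the best-so-far objective after each evaluation, normalize the curves, and compare methods by how quickly and repeatably they close the gap. The two toy optimizers below only stand in for the methods being compared; they are not part of ConvAn.

```python
import numpy as np

rng = np.random.default_rng(5)

def rosenbrock(x):                         # toy objective with a narrow valley
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def random_search(budget):
    best, curve = np.inf, []
    for _ in range(budget):
        best = min(best, rosenbrock(rng.uniform(-2, 2, 2)))
        curve.append(best)
    return np.array(curve)

def local_mutation_search(budget, step=0.3):
    x = rng.uniform(-2, 2, 2)
    best, curve = rosenbrock(x), []
    for _ in range(budget):
        cand = x + rng.normal(0, step, 2)
        f = rosenbrock(cand)
        if f < best:
            best, x = f, cand
        curve.append(best)
    return np.array(curve)

def normalized_mean_curve(method, repeats=20, budget=500):
    curves = np.array([method(budget) for _ in range(repeats)])
    lo, hi = curves.min(), curves[:, 0].max()
    return (curves.mean(axis=0) - lo) / (hi - lo)   # 0 = best seen, 1 = start

for name, method in [("random search", random_search),
                     ("local mutation", local_mutation_search)]:
    curve = normalized_mean_curve(method)
    half = np.argmax(curve <= 0.5)                  # evaluations to halve the gap
    print(f"{name:>15}: {half} evaluations to reach 50% of the gap")
```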

  9. Implementation of structure-mapping inference by event-file binding and action planning: a model of tool-improvisation analogies.

    PubMed

    Fields, Chris

    2011-03-01

    Structure-mapping inferences are generally regarded as dependent upon relational concepts that are understood and expressible in language by subjects capable of analogical reasoning. However, tool-improvisation inferences are executed by members of a variety of non-human primate and other species. Tool improvisation requires correctly inferring the motion and force-transfer affordances of an object; hence tool improvisation requires structure mapping driven by relational properties. Observational and experimental evidence can be interpreted to indicate that structure-mapping analogies in tool improvisation are implemented by multi-step manipulation of event files by binding and action-planning mechanisms that act in a language-independent manner. A functional model of language-independent event-file manipulations that implement structure mapping in the tool-improvisation domain is developed. This model provides a mechanism by which motion and force representations commonly employed in tool-improvisation structure mappings may be sufficiently reinforced to be available to inwardly directed attention and hence conceptualization. Predictions and potential experimental tests of this model are outlined.

  10. Clinical prediction models for mortality and functional outcome following ischemic stroke: A systematic review and meta-analysis

    PubMed Central

    Crayton, Elise; Wolfe, Charles; Douiri, Abdel

    2018-01-01

    Objective We aim to identify and critically appraise clinical prediction models of mortality and function following ischaemic stroke. Methods Electronic databases, reference lists and citations were searched from inception to September 2015. Studies were selected for inclusion according to pre-specified criteria and critically appraised by independent, blinded reviewers. The discrimination of the prediction models was measured by the area under the receiver operating characteristic curve (c-statistic) in random-effects meta-analysis. Heterogeneity was measured using I2. Appropriate appraisal tools and reporting guidelines were used in this review. Results 31395 references were screened, of which 109 articles were included in the review. These articles described 66 different predictive risk models. Appraisal identified poor methodological quality and a high risk of bias for most models. However, all models precede the development of reporting guidelines for prediction modelling studies. Generalisability of models could be improved; less than half of the included models have been externally validated (n = 27/66). 152 predictors of mortality and 192 predictors of functional outcome were identified. No studies assessing the ability to improve patient outcome (model impact studies) were identified. Conclusions Further external validation and model impact studies are required to confirm the utility of existing models in supporting decision-making. Existing models have much potential. Those wishing to predict stroke outcome are advised to build on previous work, updating and adapting validated models to their specific contexts as opposed to designing new ones. PMID:29377923
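
    For readers unfamiliar with the pooling step, the sketch below shows a DerSimonian-Laird random-effects combination of c-statistics with an I² heterogeneity estimate, using invented study values rather than those from the review.

```python
import numpy as np

# Invented c-statistics and standard errors from hypothetical validation studies.
c_stat = np.array([0.78, 0.72, 0.81, 0.69, 0.75])
se     = np.array([0.03, 0.04, 0.02, 0.05, 0.03])

w_fixed = 1 / se**2                                      # inverse-variance weights
mu_fixed = np.sum(w_fixed * c_stat) / np.sum(w_fixed)

q = np.sum(w_fixed * (c_stat - mu_fixed) ** 2)           # Cochran's Q
df = len(c_stat) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                            # between-study variance
i2 = max(0.0, (q - df) / q) * 100                        # heterogeneity, %

w_re = 1 / (se**2 + tau2)                                # random-effects weights
mu_re = np.sum(w_re * c_stat) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled c-statistic {mu_re:.3f} "
      f"(95% CI {mu_re - 1.96*se_re:.3f} to {mu_re + 1.96*se_re:.3f}), "
      f"I^2 = {i2:.0f}%")
```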

  11. Microbial Communities and Their Predicted Metabolic Functions in Growth Laminae of a Unique Large Conical Mat from Lake Untersee, East Antarctica

    PubMed Central

    Koo, Hyunmin; Mojib, Nazia; Hakim, Joseph A.; Hawes, Ian; Tanabe, Yukiko; Andersen, Dale T.; Bej, Asim K.

    2017-01-01

    In this study, we report the distribution of microbial taxa and their predicted metabolic functions observed in the top (U1), middle (U2), and inner (U3) decadal growth laminae of a unique large conical microbial mat from perennially ice-covered Lake Untersee of East Antarctica, using NextGen sequencing of the 16S rRNA gene and bioinformatics tools. The results showed that the U1 lamina was dominated by cyanobacteria, specifically Phormidium sp., Leptolyngbya sp., and Pseudanabaena sp. The U2 and U3 laminae had high abundances of Actinobacteria, Verrucomicrobia, Proteobacteria, and Bacteroidetes. Closely related taxa within each abundant bacterial taxon found in each lamina were further differentiated at the highest taxonomic resolution using the oligotyping method. PICRUSt analysis, which determines predicted KEGG functional categories from the gene contents and abundances among microbial communities, revealed a high number of sequences belonging to carbon fixation, energy metabolism, cyanophycin, chlorophyll, and photosynthesis proteins in the U1 lamina. The functional predictions of the microbial communities in U2 and U3 represented signal transduction, membrane transport, zinc transport and amino acid-, carbohydrate-, and arsenic- metabolisms. The Nearest Sequenced Taxon Index (NSTI) values processed through PICRUSt were 0.10, 0.13, and 0.11 for U1, U2, and U3 laminae, respectively. These values indicated a close correspondence with the reference microbial genome database, implying high confidence in the predicted metabolic functions of the microbial communities in each lamina. The distribution of microbial taxa observed in each lamina and their predicted metabolic functions provides additional insight into the complex microbial ecosystem at Lake Untersee, and lays the foundation for studies that will enhance our understanding of the mechanisms responsible for the formation of these unique mat structures and their evolutionary significance. PMID:28824553

  12. Microbial Communities and Their Predicted Metabolic Functions in Growth Laminae of a Unique Large Conical Mat from Lake Untersee, East Antarctica.

    PubMed

    Koo, Hyunmin; Mojib, Nazia; Hakim, Joseph A; Hawes, Ian; Tanabe, Yukiko; Andersen, Dale T; Bej, Asim K

    2017-01-01

    In this study, we report the distribution of microbial taxa and their predicted metabolic functions observed in the top (U1), middle (U2), and inner (U3) decadal growth laminae of a unique large conical microbial mat from perennially ice-covered Lake Untersee of East Antarctica, using NextGen sequencing of the 16S rRNA gene and bioinformatics tools. The results showed that the U1 lamina was dominated by cyanobacteria, specifically Phormidium sp., Leptolyngbya sp., and Pseudanabaena sp. The U2 and U3 laminae had high abundances of Actinobacteria, Verrucomicrobia, Proteobacteria, and Bacteroidetes. Closely related taxa within each abundant bacterial taxon found in each lamina were further differentiated at the highest taxonomic resolution using the oligotyping method. PICRUSt analysis, which determines predicted KEGG functional categories from the gene contents and abundances among microbial communities, revealed a high number of sequences belonging to carbon fixation, energy metabolism, cyanophycin, chlorophyll, and photosynthesis proteins in the U1 lamina. The functional predictions of the microbial communities in U2 and U3 represented signal transduction, membrane transport, zinc transport and amino acid-, carbohydrate-, and arsenic- metabolisms. The Nearest Sequenced Taxon Index (NSTI) values processed through PICRUSt were 0.10, 0.13, and 0.11 for U1, U2, and U3 laminae, respectively. These values indicated a close correspondence with the reference microbial genome database, implying high confidence in the predicted metabolic functions of the microbial communities in each lamina. The distribution of microbial taxa observed in each lamina and their predicted metabolic functions provides additional insight into the complex microbial ecosystem at Lake Untersee, and lays the foundation for studies that will enhance our understanding of the mechanisms responsible for the formation of these unique mat structures and their evolutionary significance.

  13. Algorithm for predicting death among older adults in the home care setting: study protocol for the Risk Evaluation for Support: Predictions for Elder-life in the Community Tool (RESPECT)

    PubMed Central

    Manuel, Douglas G; Taljaard, Monica; Chalifoux, Mathieu; Bennett, Carol; Costa, Andrew P; Bronskill, Susan; Kobewka, Daniel; Tanuseputro, Peter

    2016-01-01

    Introduction Older adults living in the community often have multiple, chronic conditions and functional impairments. A challenge for healthcare providers working in the community is the lack of a predictive tool that can be applied to the broad spectrum of mortality risks observed and may be used to inform care planning. Objective To predict survival time for older adults in the home care setting. The final mortality risk algorithm will be implemented as a web-based calculator that can be used by older adults needing care and by their caregivers. Design Open cohort study using the Resident Assessment Instrument for Home Care (RAI-HC) data in Ontario, Canada, from 1 January 2007 to 31 December 2013. Participants The derivation cohort will consist of ∼437 000 older adults who had an RAI-HC assessment between 1 January 2007 and 31 December 2012. A split sample validation cohort will include ∼122 000 older adults with an RAI-HC assessment between 1 January and 31 December 2013. Main outcome measures Predicted survival from the time of an RAI-HC assessment. All deaths (n≈245 000) will be ascertained through linkage to a population-based registry that is maintained by the Ministry of Health in Ontario. Statistical analysis Proportional hazards regression will be estimated after assessment of assumptions. Predictors will include sociodemographic factors, social support, health conditions, functional status, cognition, symptoms of decline and prior healthcare use. Model performance will be evaluated for 6-month and 12-month predicted risks, including measures of calibration (eg, calibration plots) and discrimination (eg, c-statistics). The final algorithm will use combined development and validation data. Ethics and dissemination Research ethics approval has been granted by the Sunnybrook Health Sciences Centre Review Board. Findings will be disseminated through presentations at conferences and in peer-reviewed journals. Trial registration number NCT02779309, Pre-results. PMID:27909039
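
    A hedged sketch of this analysis pattern, using the third-party lifelines library and synthetic stand-ins for the RAI-HC predictors: a Cox proportional-hazards model of survival time from assessment is fitted on a derivation-style cohort and summarised by its concordance index. None of the variables or effect sizes below come from the protocol.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 2000
age = rng.normal(82, 7, n)                         # years
adl_impair = rng.integers(0, 7, n)                 # functional impairment count
hospitalized = rng.integers(0, 2, n)               # prior-year hospitalization

# synthetic survival times with risk increasing in each predictor
hazard = np.exp(0.04 * (age - 82) + 0.25 * adl_impair + 0.5 * hospitalized)
time = rng.exponential(36 / hazard)                # months from assessment
event = (time < 24).astype(int)                    # death observed within window
time = np.minimum(time, 24)                        # administrative censoring

df = pd.DataFrame({"time": time, "event": event, "age": age,
                   "adl_impair": adl_impair, "hospitalized": hospitalized})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
print("concordance index:", round(cph.concordance_index_, 3))
```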

  14. Predictive Mining of Time Series Data

    NASA Astrophysics Data System (ADS)

    Java, A.; Perlman, E. S.

    2002-05-01

    All-sky monitors are a relatively new development in astronomy, and their data represent a largely untapped resource. Proper utilization of this resource could lead to important discoveries not only in the physics of variable objects, but in how one observes such objects. We discuss the development of a Java toolbox for astronomical time series data. Rather than using methods conventional in astronomy (e.g., power spectrum and cross-correlation analysis) we employ rule discovery techniques commonly used in analyzing stock-market data. By clustering patterns found within the data, rule discovery allows one to build predictive models, allowing one to forecast when a given event might occur or whether the occurrence of one event will trigger a second. We have tested the toolbox and accompanying display tool on datasets (representing several classes of objects) from the RXTE All Sky Monitor. We use these datasets to illustrate the methods and functionality of the toolbox. We have found predictive patterns in several ASM datasets. We also discuss problems faced in the development process, particularly the difficulties of dealing with discretized and irregularly sampled data. A possible application would be in scheduling target of opportunity observations where the astronomer wants to observe an object when a certain event or series of events occurs. By combining such a toolbox with an automatic, Java query tool which regularly gathers data on objects of interest, the astronomer or telescope operator could use the real-time datastream to efficiently predict the occurrence of (for example) a flare or other event. By combining the toolbox with dynamic time warping data-mining tools, one could predict events which may happen on variable time scales.
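
    A toy illustration of the rule-discovery idea (not the authors' Java toolbox; written in Python with synthetic data): discretize a light curve into symbols, then estimate the confidence of rules of the form "event A is followed by event B within w time steps".

        # Toy rule discovery on a discretized time series. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        flux = rng.normal(1.0, 0.1, 500)
        flux[100:105] += 2.0   # inject a flare that precedes an elevated state
        flux[105:160] += 0.5
        flux[300:305] += 2.0
        flux[305:330] += 0.5

        # Discretize the flux into symbols by simple thresholding
        symbols = np.where(flux > 2.0, "FLARE", np.where(flux > 1.3, "HIGH", "QUIET"))

        def rule_confidence(symbols, antecedent, consequent, window):
            """P(consequent occurs within `window` steps | antecedent occurred)."""
            hits, total = 0, 0
            for i in np.where(symbols == antecedent)[0]:
                total += 1
                if consequent in symbols[i + 1 : i + 1 + window]:
                    hits += 1
            return hits / total if total else float("nan")

        print("conf(FLARE -> HIGH within 10):", rule_confidence(symbols, "FLARE", "HIGH", 10))
        print("conf(QUIET -> FLARE within 10):", rule_confidence(symbols, "QUIET", "FLARE", 10))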

  15. Defining sarcopenia in terms of incident adverse outcomes.

    PubMed

    Woo, Jean; Leung, Jason; Morley, J E

    2015-03-01

    The objectives of this study were to compare the performance of different diagnoses of sarcopenia using European Working Group on Sarcopenia in Older People, International Working Group on Sarcopenia, and the US Foundation for the National Institutes of Health (FNIH) criteria, and the screening tool SARC-F, against the Asian Working Group for Sarcopenia consensus panel definitions, in predicting physical limitation, slow walking speed, and repeated chair stand performance, days of hospital stay and mortality at follow-up. Longitudinal study. Community survey in Hong Kong. Participants were 4000 men and women 65 years and older living in the community. Information was collected from questionnaires covering activities of daily living, physical functioning limitations, and the constituent questions of SARC-F, together with body mass index (BMI), grip strength (GS), walking speed, and appendicular muscle mass (ASM). FNIH, consensus panel definitions, and the screening tool SARC-F all have similar AUC values in predicting incident physical limitation and physical performance measures at 4 years, walking speed at 7 years, days of hospital stay at 7 years, and mortality at 10 years. None of the definitions predicted an increase in physical limitation at 4 years or mortality at 10 years in women, and none predicted all the adverse outcomes. The highest AUC values were observed for walking speed at 4 and 7 years. When applied to a Chinese elderly population, criteria used for diagnosis of sarcopenia derived from European, Asian, and international consensus panels, from US cutoff values defined from incident physical limitation, and the SARC-F screening tool, all have similar performance in predicting incident physical limitation and mortality. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  16. G23D: Online tool for mapping and visualization of genomic variants on 3D protein structures.

    PubMed

    Solomon, Oz; Kunik, Vered; Simon, Amos; Kol, Nitzan; Barel, Ortal; Lev, Atar; Amariglio, Ninette; Somech, Raz; Rechavi, Gidi; Eyal, Eran

    2016-08-26

    Evaluation of the possible implications of genomic variants is an increasingly important task in the current high throughput sequencing era. Structural information however is still not routinely exploited during this evaluation process. The main reasons can be attributed to the partial structural coverage of the human proteome and the lack of tools which conveniently convert genomic positions, which are the frequent output of genomic pipelines, to proteins and structure coordinates. We present G23D, a tool for conversion of human genomic coordinates to protein coordinates and protein structures. G23D allows mapping of genomic positions/variants on evolutionary related (and not only identical) protein three dimensional (3D) structures as well as on theoretical models. By doing so it significantly extends the space of variants for which structural insight is feasible. To facilitate interpretation of the variant consequence, pathogenic variants, functional sites and polymorphism sites are displayed on protein sequence and structure diagrams alongside the input variants. G23D also provides modeling of the mutant structure, analysis of intra-protein contacts and instant access to functional predictions and predictions of thermo-stability changes. G23D is available at http://www.sheba-cancer.org.il/G23D . G23D extends the fraction of variants for which structural analysis is applicable and provides better and faster accessibility for structural data to biologists and geneticists who routinely work with genomic information.
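
    The coordinate conversion at the heart of such a tool can be illustrated with a simplified sketch (not G23D's actual code): mapping a genomic position inside a forward-strand coding sequence to a protein residue number, using invented exon coordinates.

        # Simplified illustration (not G23D itself) of genomic-to-protein coordinate
        # mapping for a forward-strand gene with known CDS exons. Coordinates invented.
        def genomic_to_protein(position, cds_exons):
            """cds_exons: list of (start, end) genomic intervals, 1-based inclusive,
            in translation order. Returns the 1-based protein residue number."""
            cds_offset = 0
            for start, end in cds_exons:
                if start <= position <= end:
                    cds_offset += position - start
                    return cds_offset // 3 + 1
                cds_offset += end - start + 1
            raise ValueError("position not within the coding sequence")

        # Example: two coding exons; a variant at genomic position 5012 falls in exon 2.
        exons = [(4000, 4120), (4990, 5300)]
        print(genomic_to_protein(5012, exons))  # residue index within the protein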

  17. The Enzyme Function Initiative†

    PubMed Central

    Gerlt, John A.; Allen, Karen N.; Almo, Steven C.; Armstrong, Richard N.; Babbitt, Patricia C.; Cronan, John E.; Dunaway-Mariano, Debra; Imker, Heidi J.; Jacobson, Matthew P.; Minor, Wladek; Poulter, C. Dale; Raushel, Frank M.; Sali, Andrej; Shoichet, Brian K.; Sweedler, Jonathan V.

    2011-01-01

    The Enzyme Function Initiative (EFI) was recently established to address the challenge of assigning reliable functions to enzymes discovered in bacterial genome projects; in this Current Topic we review the structure and operations of the EFI. The EFI includes the Superfamily/Genome, Protein, Structure, Computation, and Data/Dissemination Cores that provide the infrastructure for reliably predicting the in vitro functions of unknown enzymes. The initial targets for functional assignment are selected from five functionally diverse superfamilies (amidohydrolase, enolase, glutathione transferase, haloalkanoic acid dehalogenase, and isoprenoid synthase), with five superfamily-specific Bridging Projects experimentally testing the predicted in vitro enzymatic activities. The EFI also includes the Microbiology Core that evaluates the in vivo context of in vitro enzymatic functions and confirms the functional predictions of the EFI. The deliverables of the EFI to the scientific community include: 1) development of a large-scale, multidisciplinary sequence/structure-based strategy for functional assignment of unknown enzymes discovered in genome projects (target selection, protein production, structure determination, computation, experimental enzymology, microbiology, and structure-based annotation); 2) dissemination of the strategy to the community via publications, collaborations, workshops, and symposia; 3) computational and bioinformatic tools for using the strategy; 4) provision of experimental protocols and/or reagents for enzyme production and characterization; and 5) dissemination of data via the EFI’s website, enzymefunction.org. The realization of multidisciplinary strategies for functional assignment will begin to define the full metabolic diversity that exists in nature and will impact basic biochemical and evolutionary understanding, as well as a wide range of applications of central importance to industrial, medicinal and pharmaceutical efforts. PMID:21999478

  18. The Enzyme Function Initiative.

    PubMed

    Gerlt, John A; Allen, Karen N; Almo, Steven C; Armstrong, Richard N; Babbitt, Patricia C; Cronan, John E; Dunaway-Mariano, Debra; Imker, Heidi J; Jacobson, Matthew P; Minor, Wladek; Poulter, C Dale; Raushel, Frank M; Sali, Andrej; Shoichet, Brian K; Sweedler, Jonathan V

    2011-11-22

    The Enzyme Function Initiative (EFI) was recently established to address the challenge of assigning reliable functions to enzymes discovered in bacterial genome projects; in this Current Topic, we review the structure and operations of the EFI. The EFI includes the Superfamily/Genome, Protein, Structure, Computation, and Data/Dissemination Cores that provide the infrastructure for reliably predicting the in vitro functions of unknown enzymes. The initial targets for functional assignment are selected from five functionally diverse superfamilies (amidohydrolase, enolase, glutathione transferase, haloalkanoic acid dehalogenase, and isoprenoid synthase), with five superfamily specific Bridging Projects experimentally testing the predicted in vitro enzymatic activities. The EFI also includes the Microbiology Core that evaluates the in vivo context of in vitro enzymatic functions and confirms the functional predictions of the EFI. The deliverables of the EFI to the scientific community include (1) development of a large-scale, multidisciplinary sequence/structure-based strategy for functional assignment of unknown enzymes discovered in genome projects (target selection, protein production, structure determination, computation, experimental enzymology, microbiology, and structure-based annotation), (2) dissemination of the strategy to the community via publications, collaborations, workshops, and symposia, (3) computational and bioinformatic tools for using the strategy, (4) provision of experimental protocols and/or reagents for enzyme production and characterization, and (5) dissemination of data via the EFI's Website, http://enzymefunction.org. The realization of multidisciplinary strategies for functional assignment will begin to define the full metabolic diversity that exists in nature and will impact basic biochemical and evolutionary understanding, as well as a wide range of applications of central importance to industrial, medicinal, and pharmaceutical efforts. © 2011 American Chemical Society

  19. Stress Prediction System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA wanted to know how astronauts' bodies would react under various gravitational pulls and space suit weights. Under contract to NASA, the University of Michigan's Center for Ergonomics developed a model capable of predicting what type of stress and what degree of load a body could stand. The algorithm generated was commercialized with the ISTU (Isometric Strength Testing Unit) Functional Capacity Evaluation System, which simulates tasks such as lifting a heavy box or pushing a cart and evaluates the exertion expended. It also identifies the muscle group that limits the subject's performance. It is an effective tool for personnel evaluation, selection, and job redesign.

  20. The basis function approach for modeling autocorrelation in ecological data.

    PubMed

    Hefley, Trevor J; Broms, Kristin M; Brost, Brian M; Buderman, Frances E; Kay, Shannon L; Scharf, Henry R; Tipton, John R; Williams, Perry J; Hooten, Mevin B

    2017-03-01

    Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data. © 2016 by the Ecological Society of America.
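
    A small worked example of the basis-function idea, using synthetic data and a Gaussian radial basis (one of many possible choices): augmenting an ordinary regression with smooth functions of time absorbs temporal autocorrelation that would otherwise remain in the residuals.

        # Synthetic example: compare residual autocorrelation with and without basis functions.
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 200)
        covariate = rng.normal(size=200)
        trend = np.sin(t)                              # unmodeled smooth process -> autocorrelation
        y = 2.0 * covariate + trend + rng.normal(0, 0.3, 200)

        # Gaussian radial basis functions centered along the time axis
        centers = np.linspace(0, 10, 8)
        basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 1.0) ** 2)

        X_plain = np.column_stack([np.ones_like(t), covariate])
        X_basis = np.column_stack([X_plain, basis])

        beta_plain, *_ = np.linalg.lstsq(X_plain, y, rcond=None)
        beta_basis, *_ = np.linalg.lstsq(X_basis, y, rcond=None)

        resid_plain = y - X_plain @ beta_plain
        resid_basis = y - X_basis @ beta_basis

        def lag1_autocorr(r):
            return np.corrcoef(r[:-1], r[1:])[0, 1]

        print("covariate effect (plain, with basis):", beta_plain[1], beta_basis[1])
        print("lag-1 residual autocorrelation:", lag1_autocorr(resid_plain), lag1_autocorr(resid_basis))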

  1. Predicting functional divergence in protein evolution by site-specific rate shifts

    NASA Technical Reports Server (NTRS)

    Gaucher, Eric A.; Gu, Xun; Miyamoto, Michael M.; Benner, Steven A.

    2002-01-01

    Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs.

  2. Consortium for health and military performance and American College of Sports Medicine Summit: utility of functional movement assessment in identifying musculoskeletal injury risk.

    PubMed

    Teyhen, Deydre; Bergeron, Michael F; Deuster, Patricia; Baumgartner, Neal; Beutler, Anthony I; de la Motte, Sarah J; Jones, Bruce H; Lisman, Peter; Padua, Darin A; Pendergrass, Timothy L; Pyne, Scott W; Schoomaker, Eric; Sell, Timothy C; O'Connor, Francis

    2014-01-01

    Prevention of musculoskeletal injuries (MSKI) is critical in both civilian and military populations to enhance physical performance, optimize health, and minimize health care expenses. Developing a more unified approach through addressing identified movement impairments could result in improved dynamic balance, trunk stability, and functional movement quality while potentially minimizing the risk of incurring such injuries. Although the evidence supporting the utility of injury prediction and return-to-activity readiness screening tools is encouraging, considerable additional research is needed regarding improving sensitivity, specificity, and outcomes, and especially the implementation challenges and barriers in a military setting. If selected current functional movement assessments can be administered in an efficient and cost-effective manner, utilization of the existing tools may be a beneficial first step in decreasing the burden of MSKI, with a subsequent focus on secondary and tertiary prevention via further assessments on those with prior injury history.

  3. A simplified method of evaluating the stress wave environment of internal equipment

    NASA Technical Reports Server (NTRS)

    Colton, J. D.; Desmond, T. P.

    1979-01-01

    A simplified method called the transfer function technique (TFT) was devised for evaluating the stress wave environment in a structure containing internal equipment. The TFT consists of following the initial in-plane stress wave that propagates through a structure subjected to a dynamic load and characterizing how the wave is altered as it is transmitted through intersections of structural members. As a basis for evaluating the TFT, impact experiments and detailed stress wave analyses were performed for structures with two, three, or more members. Transfer functions that relate the wave transmitted through an intersection to the incident wave were deduced from the predicted wave response. By sequentially applying these transfer functions to a structure with several intersections, it was found that the environment produced by the initial stress wave propagating through the structure can be approximated well. The TFT can be used as a design tool or as an analytical tool to determine whether a more detailed wave analysis is warranted.
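
    The bookkeeping behind the TFT can be sketched in a few lines (coefficients invented): the stress transmitted to internal equipment is approximated by applying each intersection's transfer coefficient, in sequence, to the initial incident wave.

        # Schematic of the sequential application of transfer coefficients (values invented).
        from functools import reduce

        incident_stress = 100.0                  # peak stress of the initial wave, arbitrary units
        transmission_coeffs = [0.7, 0.55, 0.8]   # one factor per intersection along the path

        def transmit(stress, coeffs):
            """Apply each intersection's transfer coefficient in order."""
            return reduce(lambda s, c: s * c, coeffs, stress)

        print(transmit(incident_stress, transmission_coeffs))  # ~30.8 at the equipment location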

  4. The Evolutionary Ecology of Plant Disease: A Phylogenetic Perspective.

    PubMed

    Gilbert, Gregory S; Parker, Ingrid M

    2016-08-04

    An explicit phylogenetic perspective provides useful tools for phytopathology and plant disease ecology because the traits of both plants and microbes are shaped by their evolutionary histories. We present brief primers on phylogenetic signal and the analytical tools of phylogenetic ecology. We review the literature and find abundant evidence of phylogenetic signal in pathogens and plants for most traits involved in disease interactions. Plant nonhost resistance mechanisms and pathogen housekeeping functions are conserved at deeper phylogenetic levels, whereas molecular traits associated with rapid coevolutionary dynamics are more labile at branch tips. Horizontal gene transfer disrupts the phylogenetic signal for some microbial traits. Emergent traits, such as host range and disease severity, show clear phylogenetic signals. Therefore pathogen spread and disease impact are influenced by the phylogenetic structure of host assemblages. Phylogenetically rare species escape disease pressure. Phylogenetic tools could be used to develop predictive tools for phytosanitary risk analysis and reduce disease pressure in multispecies cropping systems.

  5. Missense mutations of MLH1 and MSH2 genes detected in patients with gastrointestinal cancer are associated with exonic splicing enhancers and silencers

    PubMed Central

    ZHU, MING; CHEN, HUI-MEI; WANG, YA-PING

    2013-01-01

    The MLH1 and MSH2 genes in DNA mismatch repair are important in the pathogenesis of gastrointestinal cancer. Recent studies of normal and alternative splicing suggest that the deleterious effects of missense mutations may in fact be splicing-related when they are located in exonic splicing enhancers (ESEs) or exonic splicing silencers (ESSs). In this study, we used ESE-finder and FAS-ESS software to analyze the potential ESE/ESS motifs of the 114 missense mutations detected in the two genes in East Asian gastrointestinal cancer patients. In addition, we used the SIFT tool to functionally analyze these mutations. Across all the mutations, the number of ESE losses (68) was 51.1% higher than the number of ESE gains (45). However, the number of ESS gains (27) was 107.7% higher than the number of ESS losses (13). In total, 56 (49.1%) mutations possessed a potential exonic splicing regulator (ESR) error. Eighty-one mutations (71.1%) were predicted to be deleterious with a lower tolerance index as detected by the Sorting Intolerant from Tolerant (SIFT) tool. Among these, 38 (33.3%) mutations were predicted to be functionally deleterious and possess one potential ESR error, while 18 (15.8%) mutations were predicted to be functionally deleterious and exhibit two potential ESR errors. These may be more likely to affect exon splicing. Our results indicated that there is a strong correlation between missense mutations in MLH1 and MSH2 genes detected in East Asian gastrointestinal cancer patients and ESR motifs. In order to correctly understand the molecular nature of mutations, splicing patterns should be compared between wild-type and mutant samples. PMID:23760103

  6. High-throughput interpretation of gene structure changes in human and nonhuman resequencing data, using ACE

    PubMed Central

    Majoros, William H.; Campbell, Michael S.; Holt, Carson; DeNardo, Erin K.; Ware, Doreen; Allen, Andrew S.; Yandell, Mark; Reddy, Timothy E.

    2017-01-01

    Abstract Motivation: The accurate interpretation of genetic variants is critical for characterizing genotype–phenotype associations. Because the effects of genetic variants can depend strongly on their local genomic context, accurate genome annotations are essential. Furthermore, as some variants have the potential to disrupt or alter gene structure, variant interpretation efforts stand to gain from the use of individualized annotations that account for differences in gene structure between individuals or strains. Results: We describe a suite of software tools for identifying possible functional changes in gene structure that may result from sequence variants. ACE (‘Assessing Changes to Exons’) converts phased genotype calls to a collection of explicit haplotype sequences, maps transcript annotations onto them, detects gene-structure changes and their possible repercussions, and identifies several classes of possible loss of function. Novel transcripts predicted by ACE are commonly supported by spliced RNA-seq reads, and can be used to improve read alignment and transcript quantification when an individual-specific genome sequence is available. Using publicly available RNA-seq data, we show that ACE predictions confirm earlier results regarding the quantitative effects of nonsense-mediated decay, and we show that predicted loss-of-function events are highly concordant with patterns of intolerance to mutations across the human population. ACE can be readily applied to diverse species including animals and plants, making it a broadly useful tool for use in eukaryotic population-based resequencing projects, particularly for assessing the joint impact of all variants at a locus. Availability and Implementation: ACE is written in open-source C++ and Perl and is available from geneprediction.org/ACE Contact: myandell@genetics.utah.edu or tim.reddy@duke.edu Supplementary information: Supplementary information is available at Bioinformatics online. PMID:28011790

  7. High-throughput interpretation of gene structure changes in human and nonhuman resequencing data, using ACE.

    PubMed

    Majoros, William H; Campbell, Michael S; Holt, Carson; DeNardo, Erin K; Ware, Doreen; Allen, Andrew S; Yandell, Mark; Reddy, Timothy E

    2017-05-15

    The accurate interpretation of genetic variants is critical for characterizing genotype-phenotype associations. Because the effects of genetic variants can depend strongly on their local genomic context, accurate genome annotations are essential. Furthermore, as some variants have the potential to disrupt or alter gene structure, variant interpretation efforts stand to gain from the use of individualized annotations that account for differences in gene structure between individuals or strains. We describe a suite of software tools for identifying possible functional changes in gene structure that may result from sequence variants. ACE ('Assessing Changes to Exons') converts phased genotype calls to a collection of explicit haplotype sequences, maps transcript annotations onto them, detects gene-structure changes and their possible repercussions, and identifies several classes of possible loss of function. Novel transcripts predicted by ACE are commonly supported by spliced RNA-seq reads, and can be used to improve read alignment and transcript quantification when an individual-specific genome sequence is available. Using publicly available RNA-seq data, we show that ACE predictions confirm earlier results regarding the quantitative effects of nonsense-mediated decay, and we show that predicted loss-of-function events are highly concordant with patterns of intolerance to mutations across the human population. ACE can be readily applied to diverse species including animals and plants, making it a broadly useful tool for use in eukaryotic population-based resequencing projects, particularly for assessing the joint impact of all variants at a locus. ACE is written in open-source C++ and Perl and is available from geneprediction.org/ACE. myandell@genetics.utah.edu or tim.reddy@duke.edu. Supplementary information is available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  8. Automated benchmarking of peptide-MHC class I binding predictions.

    PubMed

    Trolle, Thomas; Metushi, Imir G; Greenbaum, Jason A; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2015-07-01

    Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto_bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto_bench/mhci/join. mniel@cbs.dtu.dk or bpeters@liai.org Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
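
    A minimal sketch of how a single benchmark dataset could be scored, assuming scikit-learn and invented measurements: compute each participating server's ROC AUC against experimentally determined binder/non-binder labels and rank the servers. The real benchmark aggregates many datasets and alleles.

        # Score one hypothetical benchmark dataset per server and rank by ROC AUC.
        from sklearn.metrics import roc_auc_score

        # 1 = measured binder, 0 = non-binder, for ten peptides of one MHC allele
        measured = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

        # Hypothetical prediction scores from two participating servers (higher = stronger binding)
        predictions = {
            "server_A": [0.9, 0.2, 0.7, 0.8, 0.3, 0.4, 0.6, 0.1, 0.5, 0.95],
            "server_B": [0.6, 0.5, 0.4, 0.7, 0.3, 0.6, 0.5, 0.2, 0.4, 0.8],
        }

        for name, scores in sorted(predictions.items(),
                                   key=lambda kv: roc_auc_score(measured, kv[1]),
                                   reverse=True):
            print(f"{name}: AUC = {roc_auc_score(measured, scores):.3f}")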

  9. Automated benchmarking of peptide-MHC class I binding predictions

    PubMed Central

    Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2015-01-01

    Motivation: Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. Results: The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Availability and implementation: Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto_bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto_bench/mhci/join. Contact: mniel@cbs.dtu.dk or bpeters@liai.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25717196

  10. Leuconostoc mesenteroides growth in food products: prediction and sensitivity analysis by adaptive-network-based fuzzy inference systems.

    PubMed

    Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien

    2013-01-01

    An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive mycology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.
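
    The six comparison indices named above have standard forms in predictive microbiology; the sketch below computes them for invented observed and predicted growth rates. The bias and accuracy factors follow the usual log-ratio definitions, and the "absolute fraction of variance" is taken here as the ordinary coefficient of determination, which may differ slightly from the paper's exact formula.

        # Compute the six comparison indices on invented observed vs. predicted growth rates.
        import numpy as np

        observed  = np.array([0.12, 0.25, 0.40, 0.55, 0.31, 0.18])
        predicted = np.array([0.10, 0.27, 0.38, 0.58, 0.29, 0.20])

        mape = 100 * np.mean(np.abs((observed - predicted) / observed))
        rmse = np.sqrt(np.mean((observed - predicted) ** 2))
        sep  = 100 * rmse / np.mean(observed)                         # standard error of prediction, %
        bf   = 10 ** np.mean(np.log10(predicted / observed))          # bias factor
        af   = 10 ** np.mean(np.abs(np.log10(predicted / observed)))  # accuracy factor
        r2   = 1 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

        for name, value in [("MAPE", mape), ("RMSE", rmse), ("SEP", sep),
                            ("Bf", bf), ("Af", af), ("R2", r2)]:
            print(f"{name}: {value:.3f}")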

  11. Leuconostoc Mesenteroides Growth in Food Products: Prediction and Sensitivity Analysis by Adaptive-Network-Based Fuzzy Inference Systems

    PubMed Central

    Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien

    2013-01-01

    Background An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. Methods The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. Conclusions The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive mycology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. PMID:23705023

  12. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  13. Protein asparagine deamidation prediction based on structures with machine learning methods.

    PubMed

    Jia, Lei; Sun, Yaxiong

    2017-01-01

    Chemical stability is a major concern in the development of protein therapeutics due to its impact on both efficacy and safety. Protein "hotspots" are amino acid residues that are subject to various chemical modifications, including deamidation, isomerization, glycosylation, oxidation, etc. A more accurate prediction method for potential hotspot residues would allow their elimination or reduction as early as possible in the drug discovery process. In this work, we focus on prediction models for asparagine (Asn) deamidation. The sequence-based prediction method simply flags the NG motif (asparagine followed by glycine) as liable to deamidation. It still dominates the deamidation evaluation process in most pharmaceutical settings because of its convenience. However, the simple sequence-based method is less accurate and often leads to over-engineering of a protein. We introduce structure-based prediction models by mining available experimental and structural data of deamidated proteins. Our training set contains 194 Asn residues from 25 proteins that all have available high-resolution crystal structures. Experimentally measured deamidation half-lives of Asn in penta-peptides as well as 3D structure-based properties, such as solvent exposure, crystallographic B-factors, local secondary structure and dihedral angles, were used to train prediction models with several machine learning algorithms. The prediction tools were cross-validated as well as tested with an external test data set. The random forest model had high enrichment in ranking deamidated residues higher than non-deamidated residues while effectively eliminating false positive predictions. It is possible that such quantitative protein structure-function relationship tools can also be applied to other protein hotspot predictions. In addition, we extensively discussed metrics being used to evaluate the performance of predicting unbalanced data sets such as the deamidation case.
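
    A compact sketch of the structure-based modelling approach, assuming scikit-learn and entirely invented descriptors and labels (not the paper's dataset): train a random forest on per-asparagine structural features and evaluate it with cross-validated ROC AUC.

        # Toy random-forest hotspot classifier on synthetic per-residue descriptors.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        n_residues = 200

        # Hypothetical descriptors: solvent exposure, B-factor, backbone dihedral, NG motif flag
        X = np.column_stack([
            rng.uniform(0, 1, n_residues),        # relative solvent accessibility
            rng.normal(30, 10, n_residues),       # crystallographic B-factor
            rng.uniform(-180, 180, n_residues),   # psi dihedral of the Asn
            rng.integers(0, 2, n_residues),       # 1 if followed by glycine (NG motif)
        ])
        # Synthetic labels loosely tied to exposure and the NG motif, for illustration only
        y = ((X[:, 0] > 0.6) & (X[:, 3] == 1)).astype(int)

        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
        scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print("cross-validated ROC AUC:", scores.mean().round(3))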

  14. State of Jet Noise Prediction-NASA Perspective

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2008-01-01

    This presentation covers work primarily done under the Airport Noise Technical Challenge portion of the Supersonics Project in the Fundamental Aeronautics Program. To provide motivation and context, the presentation starts with a brief overview of the Airport Noise Technical Challenge. It then covers the state of NASA's jet noise prediction tools in empirical, RANS-based, and time-resolved categories. The empirical tools require seconds to provide a prediction of noise spectral directivity with an accuracy of a few dB, but only for axisymmetric configurations. The RANS-based tools are able to discern the impact of three-dimensional features, but are currently deficient in predicting noise from heated and high-speed jets, and require hours to produce their predictions. The time-resolved codes are capable of predicting resonances and other time-dependent phenomena, but are very immature, requiring months to deliver predictions with as-yet unknown accuracy and dependability. In toto, however, when one considers the progress being made, it appears that aeroacoustic prediction tools are soon to approach the level of sophistication and accuracy of aerodynamic engineering tools.

  15. GIANT API: an application programming interface for functional genomics.

    PubMed

    Roberts, Andrew M; Wong, Aaron K; Fisk, Ian; Troyanskaya, Olga G

    2016-07-08

    GIANT API provides biomedical researchers programmatic access to tissue-specific and global networks in humans and model organisms, and associated tools, which includes functional re-prioritization of existing genome-wide association study (GWAS) data. Using tissue-specific interaction networks, researchers are able to predict relationships between genes specific to a tissue or cell lineage, identify the changing roles of genes across tissues and uncover disease-gene associations. Additionally, GIANT API enables computational tools like NetWAS, which leverages tissue-specific networks for re-prioritization of GWAS results. The web services covered by the API include 144 tissue-specific functional gene networks in human, global functional networks for human and six common model organisms and the NetWAS method. GIANT API conforms to the REST architecture, which makes it stateless, cacheable and highly scalable. It can be used by a diverse range of clients including web browsers, command terminals, programming languages and standalone apps for data analysis and visualization. The API is freely available for use at http://giant-api.princeton.edu. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
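
    Because the service is RESTful, it can be queried from any HTTP client. The sketch below uses Python's requests library against the published base URL; the endpoint path and parameters are hypothetical placeholders, so the real routes should be taken from the API documentation.

        # Illustration of calling a REST service such as GIANT API from Python. Only the
        # base URL comes from the abstract; the path and parameters below are hypothetical.
        import requests

        BASE_URL = "http://giant-api.princeton.edu"

        def query_network(base_url, path, params):
            """Issue a GET request and return parsed JSON, raising on HTTP errors."""
            response = requests.get(f"{base_url}/{path}", params=params, timeout=30)
            response.raise_for_status()
            return response.json()

        if __name__ == "__main__":
            # Hypothetical example: ask for tissue-specific edges around a gene of interest.
            result = query_network(BASE_URL, "example/network", {"gene": "BRCA1", "tissue": "breast"})
            print(result)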

  16. Predicting invasive species impacts: a community module functional response approach reveals context dependencies.

    PubMed

    Paterson, Rachel A; Dick, Jaimie T A; Pritchard, Daniel W; Ennis, Marilyn; Hatcher, Melanie J; Dunn, Alison M

    2015-03-01

    Predatory functional responses play integral roles in predator-prey dynamics, and their assessment promises greater understanding and prediction of the predatory impacts of invasive species. Other interspecific interactions, however, such as parasitism and higher-order predation, have the potential to modify predator-prey interactions and thus the predictive capability of the comparative functional response approach. We used a four-species community module (higher-order predator; focal native or invasive predators; parasites of focal predators; native prey) to compare the predatory functional responses of native Gammarus duebeni celticus and invasive Gammarus pulex amphipods towards three invertebrate prey species (Asellus aquaticus, Simulium spp., Baetis rhodani), thus, quantifying the context dependencies of parasitism and a higher-order fish predator on these functional responses. Our functional response experiments demonstrated that the invasive amphipod had a higher predatory impact (lower handling time) on two of three prey species, which reflects patterns of impact observed in the field. The community module also revealed that parasitism had context-dependent influences, for one prey species, with the potential to further reduce the predatory impact of the invasive amphipod or increase the predatory impact of the native amphipod in the presence of a higher-order fish predator. Partial consumption of prey was similar for both predators and occurred increasingly in the order A. aquaticus, Simulium spp. and B. rhodani. This was associated with increasing prey densities, but showed no context dependencies with parasitism or higher-order fish predator. This study supports the applicability of comparative functional responses as a tool to predict and assess invasive species impacts incorporating multiple context dependencies. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.
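
    Comparative functional response studies of this kind typically fit Holling's type II disc equation to prey-consumption data to estimate attack rate and handling time. The sketch below does this with scipy's curve_fit on invented data; experiments with prey depletion would normally use the Rogers random-predator equation instead.

        # Fit a Holling type II functional response (no prey depletion) to toy data.
        import numpy as np
        from scipy.optimize import curve_fit

        T = 1.0  # experiment duration (e.g. days)

        def holling_type_ii(N, a, h):
            """Expected number of prey eaten at initial density N."""
            return a * N * T / (1 + a * h * N)

        prey_density = np.array([2, 4, 8, 16, 32, 64], dtype=float)
        prey_eaten   = np.array([1.8, 3.2, 5.5, 8.0, 9.8, 11.0])  # hypothetical means

        (a_hat, h_hat), _ = curve_fit(holling_type_ii, prey_density, prey_eaten, p0=[1.0, 0.05])
        print(f"attack rate a = {a_hat:.2f}, handling time h = {h_hat:.3f}")
        print(f"maximum feeding rate 1/h = {1 / h_hat:.1f} prey per unit time")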

  17. The Association between Daytime Napping and Cognitive Functioning in Chronic Fatigue Syndrome

    PubMed Central

    Gotts, Zoe M.; Ellis, Jason G.; Deary, Vincent; Barclay, Nicola; Newton, Julia L.

    2015-01-01

    Objectives The precise relationship between sleep and physical and mental functioning in chronic fatigue syndrome (CFS) has not been examined directly, nor has the impact of daytime napping. This study aimed to examine self-reported sleep in patients with CFS and explore whether sleep quality and daytime napping, specific patient characteristics (gender, illness length) and levels of anxiety and depression, predicted daytime fatigue severity, levels of daytime sleepiness and cognitive functioning, all key dimensions of the illness experience. Methods 118 adults meeting the 1994 CDC case criteria for CFS completed a standardised sleep diary over 14 days. Momentary functional assessments of fatigue, sleepiness, cognition and mood were completed by patients as part of usual care. Levels of daytime functioning and disability were quantified using symptom assessment tools, measuring fatigue (Chalder Fatigue Scale), sleepiness (Epworth Sleepiness Scale), cognitive functioning (Trail Making Test, Cognitive Failures Questionnaire), and mood (Hospital Anxiety and Depression Scale). Results Hierarchical Regressions demonstrated that a shorter time since diagnosis, higher depression and longer wake time after sleep onset predicted 23.4% of the variance in fatigue severity (p <.001). Being male, higher depression and more afternoon naps predicted 25.6% of the variance in objective cognitive dysfunction (p <.001). Higher anxiety and depression and morning napping predicted 32.2% of the variance in subjective cognitive dysfunction (p <.001). When patients were classified into groups of mild and moderate sleepiness, those with longer daytime naps, those who mainly napped in the afternoon, and those with higher levels of anxiety, were more likely to be in the moderately sleepy group. Conclusions Napping, particularly in the afternoon is associated with poorer cognitive functioning and more daytime sleepiness in CFS. These findings have clinical implications for symptom management strategies. PMID:25575044

  18. The association between daytime napping and cognitive functioning in chronic fatigue syndrome.

    PubMed

    Gotts, Zoe M; Ellis, Jason G; Deary, Vincent; Barclay, Nicola; Newton, Julia L

    2015-01-01

    The precise relationship between sleep and physical and mental functioning in chronic fatigue syndrome (CFS) has not been examined directly, nor has the impact of daytime napping. This study aimed to examine self-reported sleep in patients with CFS and explore whether sleep quality and daytime napping, specific patient characteristics (gender, illness length) and levels of anxiety and depression, predicted daytime fatigue severity, levels of daytime sleepiness and cognitive functioning, all key dimensions of the illness experience. 118 adults meeting the 1994 CDC case criteria for CFS completed a standardised sleep diary over 14 days. Momentary functional assessments of fatigue, sleepiness, cognition and mood were completed by patients as part of usual care. Levels of daytime functioning and disability were quantified using symptom assessment tools, measuring fatigue (Chalder Fatigue Scale), sleepiness (Epworth Sleepiness Scale), cognitive functioning (Trail Making Test, Cognitive Failures Questionnaire), and mood (Hospital Anxiety and Depression Scale). Hierarchical Regressions demonstrated that a shorter time since diagnosis, higher depression and longer wake time after sleep onset predicted 23.4% of the variance in fatigue severity (p <.001). Being male, higher depression and more afternoon naps predicted 25.6% of the variance in objective cognitive dysfunction (p <.001). Higher anxiety and depression and morning napping predicted 32.2% of the variance in subjective cognitive dysfunction (p <.001). When patients were classified into groups of mild and moderate sleepiness, those with longer daytime naps, those who mainly napped in the afternoon, and those with higher levels of anxiety, were more likely to be in the moderately sleepy group. Napping, particularly in the afternoon is associated with poorer cognitive functioning and more daytime sleepiness in CFS. These findings have clinical implications for symptom management strategies.
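
    A minimal illustration of the hierarchical (blockwise) regression approach, with invented variables and data rather than the study's dataset: predictor blocks are entered in sequence and the incremental variance explained by each block is reported.

        # Blockwise OLS with incremental R-squared, using statsmodels on synthetic data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 118
        illness_duration = rng.uniform(1, 20, n)
        depression       = rng.normal(8, 4, n)
        afternoon_naps   = rng.poisson(1.5, n)
        fatigue = 0.3 * depression + 0.8 * afternoon_naps + rng.normal(0, 2, n)

        blocks = {
            "demographics/illness": np.column_stack([illness_duration]),
            "mood":                 np.column_stack([depression]),
            "sleep/napping":        np.column_stack([afternoon_naps]),
        }

        X = np.ones((n, 1))           # intercept-only baseline
        previous_r2 = 0.0
        for name, block in blocks.items():
            X = np.column_stack([X, block])
            r2 = sm.OLS(fatigue, X).fit().rsquared
            print(f"after adding {name}: R2 = {r2:.3f} (delta = {r2 - previous_r2:.3f})")
            previous_r2 = r2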

  19. Genetic interaction networks: better understand to better predict

    PubMed Central

    Boucher, Benjamin; Jenna, Sarah

    2013-01-01

    A genetic interaction (GI) between two genes generally indicates that the phenotype of a double mutant differs from what is expected from each individual mutant. In the last decade, genome scale studies of quantitative GIs were completed using mainly synthetic genetic array technology and RNA interference in yeast and Caenorhabditis elegans. These studies raised questions regarding the functional interpretation of GIs, the relationship of genetic and molecular interaction networks, the usefulness of GI networks to infer gene function and co-functionality, the evolutionary conservation of GI, etc. While GIs have been used for decades to dissect signaling pathways in genetic models, their functional interpretations are still not trivial. The existence of a GI between two genes does not necessarily imply that these two genes code for interacting proteins or that the two genes are even expressed in the same cell. In fact, a GI only implies that the two genes share a functional relationship. These two genes may be involved in the same biological process or pathway; or they may also be involved in compensatory pathways with unrelated apparent function. Considering the powerful opportunity to better understand gene function, genetic relationship, robustness and evolution, provided by a genome-wide mapping of GIs, several in silico approaches have been employed to predict GIs in unicellular and multicellular organisms. Most of these methods used weighted data integration. In this article, we will review the latest knowledge acquired on GI networks in metazoans by looking more closely into their relationship with pathways, biological processes and molecular complexes but also into their modularity and organization. We will also review the different in silico methods developed to predict GIs and will discuss how the knowledge acquired on GI networks can be used to design predictive tools with higher performances. PMID:24381582

  20. The short Synacthen (corticotropin) test can be used to predict recovery of hypothalamo-pituitary-adrenal axis function.

    PubMed

    Pofi, Riccardo; Feliciano, Chona; Sbardella, Emilia; Argese, Nicola; Woods, Conor P; Grossman, Ashley B; Jafar-Mohammadi, Bahram; Gleeson, Helena; Lenzi, Andrea; Isidori, Andrea M; Tomlinson, Jeremy W

    2018-05-25

    The 250 μg Short Synacthen (corticotropin) Test (SST) is the most commonly used tool to assess hypothalamo-pituitary-adrenal (HPA) axis function. There are many potentially reversible causes of adrenal insufficiency (AI), but currently no data to guide clinicians as to the frequency of repeat testing or likelihood of HPA axis recovery. To use the SST results to predict recovery of adrenal function. A retrospective analysis of data from 1912 SSTs. 776 patients with reversible causes of AI were identified who had at least two SSTs performed. A subgroup analysis was performed on individuals previously treated with suppressive doses of glucocorticoids (n=110). Recovery of HPA axis function. SST 30-minute cortisol levels above or below 350 nmol/L (12.7 μg/dL) best predicted HPA axis recovery (AUC ROC=0.85; median recovery time 334 vs. 1368 days, p=8.5×10⁻¹³): 99% of patients with a 30-minute cortisol >350 nmol/L recovered adrenal function within 4 years, compared with 49% of those with cortisol levels <350 nmol/L. In patients exposed to suppressive doses of glucocorticoids, delta cortisol (30-minute minus basal) was the best predictor of recovery (AUC ROC=0.77; median recovery time 262 vs. 974 days, p=7.0×10⁻⁶). No patient with a delta cortisol <100 nmol/L (3.6 μg/dL) and a subsequent random cortisol <200 nmol/L (7.3 μg/dL) measured approximately 1 year later recovered HPA axis function. Cortisol levels across an SST can be used to predict recovery of AI and may guide the frequency of repeat testing and inform both clinicians and patients as to the likelihood of restoration of HPA axis function.
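
    A cutoff such as the 350 nmol/L threshold is the kind of value that can be derived from ROC analysis. The sketch below, with invented cortisol values and recovery labels, selects a threshold by the Youden index using scikit-learn.

        # Choose a decision threshold from an ROC curve via the Youden index (toy data).
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # 30-minute SST cortisol (nmol/L) and whether adrenal function later recovered
        cortisol_30min = np.array([480, 150, 395, 520, 210, 330, 610, 120, 370, 290, 455, 180])
        recovered      = np.array([1,   0,   1,   1,   0,   0,   1,   0,   1,   0,   1,   0])

        fpr, tpr, thresholds = roc_curve(recovered, cortisol_30min)
        youden = tpr - fpr                    # sensitivity + specificity - 1
        best = np.argmax(youden)

        print(f"AUC = {roc_auc_score(recovered, cortisol_30min):.2f}")
        print(f"best cutoff by Youden index: {thresholds[best]:.0f} nmol/L "
              f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")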

  1. Computational Identification and Functional Predictions of Long Noncoding RNA in Zea mays

    PubMed Central

    Boerner, Susan; McGinnis, Karen M.

    2012-01-01

    Background Computational analysis of cDNA sequences from multiple organisms suggests that a large portion of transcribed DNA does not code for a functional protein. In mammals, noncoding transcription is abundant, and often results in functional RNA molecules that do not appear to encode proteins. Many long noncoding RNAs (lncRNAs) appear to have epigenetic regulatory function in humans, including HOTAIR and XIST. While epigenetic gene regulation is clearly an essential mechanism in plants, relatively little is known about the presence or function of lncRNAs in plants. Methodology/Principal Findings To explore the connection between lncRNA and epigenetic regulation of gene expression in plants, a computational pipeline using the programming language Python has been developed and applied to maize full-length cDNA sequences to identify, classify, and localize potential lncRNAs. The pipeline was used in parallel with an SVM-based ncRNA identification tool so as to identify the maximal number of ncRNAs in the dataset. Although the available library of sequences was small and potentially biased toward protein coding transcripts, 15% of the sequences were predicted to be noncoding. Approximately 60% of these sequences appear to act as precursors for small RNA molecules and may function to regulate gene expression via a small RNA dependent mechanism. ncRNAs were predicted to originate from both genic and intergenic loci. Of the lncRNAs that originated from genic loci, ∼20% were antisense to the host gene loci. Conclusions/Significance Consistent with similar studies in other organisms, noncoding transcription appears to be widespread in the maize genome. Computational predictions indicate that maize lncRNAs may function to regulate expression of other genes through multiple RNA mediated mechanisms. PMID:22916204
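
    One simple component of this kind of classification pipeline is an open-reading-frame filter: transcripts with no long ORF in any forward frame become candidate noncoding RNAs. The sketch below shows only that step; the study's pipeline combined several criteria and an SVM classifier.

        # ORF-length filter for flagging candidate noncoding transcripts (sketch only).
        STOPS = {"TAA", "TAG", "TGA"}

        def longest_orf(seq):
            """Length in codons of the longest ATG-initiated, stop-terminated ORF
            in the three forward reading frames."""
            seq = seq.upper()
            best = 0
            for frame in range(3):
                codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
                start = None
                for j, codon in enumerate(codons):
                    if start is None and codon == "ATG":
                        start = j
                    elif start is not None and codon in STOPS:
                        best = max(best, j - start)
                        start = None
            return best

        def is_candidate_noncoding(seq, min_codons=100):
            return longest_orf(seq) < min_codons

        # Toy transcript: mostly repetitive sequence with no long ORF
        print(is_candidate_noncoding("ATGCGC" + "CA" * 200 + "TAA"))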

  2. Novel method for comparing coverage by future methods of ballistic facial protection.

    PubMed

    Breeze, J; Allanson-Bailey, L C; Hepper, A E; Lewis, E A

    2015-01-01

    The wearing of eye protection by United Kingdom soldiers in Afghanistan has reduced the morbidity caused by explosive fragments. However, the rest of the face remains uncovered because there is a lack of evidence to substantiate the procurement of methods to protect it. Using a new computerised tool, we entered details of the entry sites of surface wounds caused by explosive fragments in all UK soldiers who were injured in the face between 1 January 2010 and 31 December 2011. We compared clinical and predicted immediate and long-term outcomes (as defined by the Abbreviated Injury Score (AIS) and the Functional Capacity Index (pFCI), respectively). We also used the tool to predict how additional protection in the form of a visor and mandible guard would affect outcomes. A soldier wearing eye protection was 9 times (1.03/0.12) less likely to sustain an eye injury than one without. However, 38% of soldiers in this series were not wearing eye protection at the time of injury. There was no significant difference between the AIS and pFCI scores predicted by the tool and those found clinically. There is limited evidence to support the use of a mandible guard; its greatest asset is better protection of the nose, but a visor would be expected to reduce long-term morbidity more than eye protection alone, and we recommend future trials to assess its acceptability to users. We think that use of this novel tool can help in the selection of future methods of ballistic facial protection. Copyright © 2014. Published by Elsevier Ltd.

  3. Inter-kingdom prediction certainty evaluation of protein subcellular localization tools: microbial pathogenesis approach for deciphering host microbe interaction.

    PubMed

    Khan, Abdul Arif; Khan, Zakir; Kalam, Mohd Abul; Khan, Azmat Ali

    2018-01-01

    Microbial pathogenesis involves several aspects of host-pathogen interactions, including microbial proteins targeting host subcellular compartments and subsequent effects on host physiology. Such studies are supported by experimental data, but recent detection of bacterial protein localization through computational eukaryotic subcellular protein targeting prediction tools has also come into practice. We evaluated the inter-kingdom prediction certainty of these tools. The bacterial proteins experimentally known to target host subcellular compartments were predicted with eukaryotic subcellular targeting prediction tools, and prediction certainty was assessed. The results indicate that these tools alone are not sufficient for inter-kingdom protein targeting prediction. The correct prediction of a pathogen's protein subcellular targeting depends on several factors, including the presence of a localization signal, transmembrane domains, and molecular weight, in addition to the approach used for subcellular targeting prediction. Detection of protein targeting to the endomembrane system is comparatively difficult, as proteins in this location are channeled to different compartments. In addition, the high specificity of the training data sets also lowers inter-kingdom prediction accuracy. Current data can help to suggest a strategy for correct prediction of bacterial protein subcellular localization in the host cell. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Murine Hyperglycemic Vasculopathy and Cardiomyopathy: Whole-Genome Gene Expression Analysis Predicts Cellular Targets and Regulatory Networks Influenced by Mannose Binding Lectin

    PubMed Central

    Zou, Chenhui; La Bonte, Laura R.; Pavlov, Vasile I.; Stahl, Gregory L.

    2012-01-01

    Hyperglycemia, in the absence of type 1 or 2 diabetes, is an independent risk factor for cardiovascular disease. We have previously demonstrated a central role for mannose binding lectin (MBL)-mediated cardiac dysfunction in acute hyperglycemic mice. In this study, we applied whole-genome microarray data analysis to investigate MBL’s role in systematic gene expression changes. The data predict possible intracellular events taking place in multiple cellular compartments such as enhanced insulin signaling pathway sensitivity, promoted mitochondrial respiratory function, improved cellular energy expenditure and protein quality control, improved cytoskeleton structure, and facilitated intracellular trafficking, all of which may contribute to the organismal health of MBL null mice against acute hyperglycemia. Our data show a tight association between gene expression profile and tissue function which might be a very useful tool in predicting cellular targets and regulatory networks connected with in vivo observations, providing clues for further mechanistic studies. PMID:22375142

  5. Computational Analysis of Uncharacterized Proteins of Environmental Bacterial Genome

    NASA Astrophysics Data System (ADS)

    Coxe, K. J.; Kumar, M.

    2017-12-01

    Betaproteobacteria strain CB is a gram-negative bacterium in the phylum Proteobacteria and is found naturally in soil and water. In this complex environment, bacteria play a key role in efficiently eliminating organic material and other pollutants from wastewater. To investigate the process of pollutant removal from wastewater using bacteria, it is important to characterize the proteins encoded by the bacterial genome. Our study combines a number of bioinformatics tools to predict the function of unassigned proteins in the bacterial genome. The genome of Betaproteobacteria strain CB contains 2,112 proteins, of which the functions of 508 are unknown; these are termed uncharacterized proteins (UPs). The localization of the UPs within the cell was determined and the structure of 38 UPs was accurately predicted. These UPs were predicted to belong to various classes of proteins such as enzymes, transporters, binding proteins, signal peptides, transmembrane proteins and other proteins. The outcome of this work will help to better understand wastewater treatment mechanisms.

  6. Estimating the average length of hospitalization due to pneumonia: a fuzzy approach.

    PubMed

    Nascimento, L F C; Rizol, P M S R; Peneluppi, A P

    2014-08-29

    Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized the length of hospitalization due to pneumonia may be dependent on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean time of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization in lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
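    For readers unfamiliar with the fuzzification step such a model relies on, the Python sketch below shows a triangular membership function and the degrees of membership of one input value in a 'low' and a 'high' set; the pollutant variable, breakpoints and units are hypothetical illustrations, not the published model, which combines four inputs with two memberships each in 16 rules.

      def trimf(x, a, b, c):
          """Triangular membership function with feet a, c and peak b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      # Hypothetical 'low'/'high' membership pair for one input (e.g. PM10 in ug/m3)
      pm10 = 42.0
      mu_low = trimf(pm10, 0.0, 10.0, 60.0)    # degree of membership in 'low'
      mu_high = trimf(pm10, 30.0, 90.0, 150.0)  # degree of membership in 'high'
      print(mu_low, mu_high)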

  7. Estimating the average length of hospitalization due to pneumonia: a fuzzy approach.

    PubMed

    Nascimento, L F C; Rizol, P M S R; Peneluppi, A P

    2014-11-01

    Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized the length of hospitalization due to pneumonia may be dependent on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean time of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization in lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.

  8. Structure and Stability of Molecular Crystals with Many-Body Dispersion-Inclusive Density Functional Tight Binding.

    PubMed

    Mortazavi, Majid; Brandenburg, Jan Gerit; Maurer, Reinhard J; Tkatchenko, Alexandre

    2018-01-18

    Accurate prediction of structure and stability of molecular crystals is crucial in materials science and requires reliable modeling of long-range dispersion interactions. Semiempirical electronic structure methods are computationally more efficient than their ab initio counterparts, allowing structure sampling with significant speedups. We combine the Tkatchenko-Scheffler van der Waals method (TS) and the many-body dispersion method (MBD) with third-order density functional tight-binding (DFTB3) via a charge population-based method. We find an overall good performance for the X23 benchmark database of molecular crystals, despite an underestimation of crystal volume that can be traced to the DFTB parametrization. We achieve accurate lattice energy predictions with DFT+MBD energetics on top of vdW-inclusive DFTB3 structures, resulting in a speedup of up to 3000 times compared with a full DFT treatment. This suggests that vdW-inclusive DFTB3 can serve as a viable structural prescreening tool in crystal structure prediction.

  9. Thermodynamic database for proteins: features and applications.

    PubMed

    Gromiha, M Michael; Sarai, Akinori

    2010-01-01

    We have developed a thermodynamic database for proteins and mutants, ProTherm, which is a collection of a large number of thermodynamic data on protein stability along with sequence and structure information, experimental methods and conditions, and literature information. This is a valuable resource for understanding/predicting the stability of proteins, and it is accessible at http://www.gibk26.bse.kyutech.ac.jp/jouhou/Protherm/protherm.html . ProTherm has several features including various search, display, and sorting options and visualization tools. We have analyzed the data in ProTherm to examine the relationship among thermodynamics, structure, and function of proteins. We describe the progress on the development of methods for understanding/predicting protein stability, such as (i) the relationship between the stability of protein mutants and amino acid properties, (ii) the average assignment method, (iii) empirical energy functions, (iv) torsion, distance, and contact potentials, and (v) machine learning techniques. A list of online resources for predicting protein stability has also been provided.

  10. Family-Based Benchmarking of Copy Number Variation Detection Software.

    PubMed

    Nutsua, Marcel Elie; Fischer, Annegret; Nebel, Almut; Hofmann, Sylvia; Schreiber, Stefan; Krawczak, Michael; Nothnagel, Michael

    2015-01-01

    The analysis of structural variants, in particular of copy-number variations (CNVs), has proven valuable in unraveling the genetic basis of human diseases. Hence, a large number of algorithms have been developed for the detection of CNVs in SNP array signal intensity data. Using the European and African HapMap trio data, we undertook a comparative evaluation of six commonly used CNV detection software tools, namely Affymetrix Power Tools (APT), QuantiSNP, PennCNV, GLAD, R-gada and VEGA, and assessed their level of pair-wise prediction concordance. The tool-specific CNV prediction accuracy was assessed in silico by way of intra-familial validation. Software tools differed greatly in terms of the number and length of the CNVs predicted as well as the number of markers included in a CNV. All software tools predicted substantially more deletions than duplications. Intra-familial validation revealed consistently low levels of prediction accuracy as measured by the proportion of validated CNVs (34-60%). Moreover, up to 20% of apparent family-based validations were found to be due to chance alone. Software using Hidden Markov models (HMM) showed a trend to predict fewer CNVs than segmentation-based algorithms albeit with greater validity. PennCNV yielded the highest prediction accuracy (60.9%). Finally, the pairwise concordance of CNV prediction was found to vary widely with the software tools involved. We recommend HMM-based software, in particular PennCNV, rather than segmentation-based algorithms when validity is the primary concern of CNV detection. QuantiSNP may be used as an additional tool to detect sets of CNVs not detectable by the other tools. Our study also reemphasizes the need for laboratory-based validation, such as qPCR, of CNVs predicted in silico.
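    As an illustration of how pair-wise prediction concordance between two tools' CNV calls can be quantified, the Python sketch below applies a reciprocal-overlap criterion to (chromosome, start, end) intervals; the 50% overlap threshold and the interval representation are assumptions for illustration, not the study's exact procedure.

      def overlap(a, b):
          """Length of overlap between two (chrom, start, end) intervals."""
          if a[0] != b[0]:
              return 0
          return max(0, min(a[2], b[2]) - max(a[1], b[1]))

      def concordant(a, b, min_frac=0.5):
          """Reciprocal-overlap criterion: both CNVs share >= min_frac of their length."""
          ov = overlap(a, b)
          return ov >= min_frac * (a[2] - a[1]) and ov >= min_frac * (b[2] - b[1])

      def pairwise_concordance(calls_tool1, calls_tool2):
          """Fraction of tool-1 calls matched by at least one tool-2 call."""
          hits = sum(any(concordant(a, b) for b in calls_tool2) for a in calls_tool1)
          return hits / len(calls_tool1) if calls_tool1 else float("nan")

      # Example: CNV calls as (chromosome, start, end)
      t1 = [("chr1", 1000, 5000), ("chr2", 200, 900)]
      t2 = [("chr1", 1200, 5200)]
      print(pairwise_concordance(t1, t2))  # 0.5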

  11. Insights into multimodal imaging classification of ADHD

    PubMed Central

    Colby, John B.; Rudie, Jeffrey D.; Brown, Jesse A.; Douglas, Pamela K.; Cohen, Mark S.; Shehzad, Zarrar

    2012-01-01

    Attention deficit hyperactivity disorder (ADHD) currently is diagnosed in children by clinicians via subjective ADHD-specific behavioral instruments and by reports from the parents and teachers. Considering its high prevalence and large economic and societal costs, a quantitative tool that aids in diagnosis by characterizing underlying neurobiology would be extremely valuable. This provided motivation for the ADHD-200 machine learning (ML) competition, a multisite collaborative effort to investigate imaging classifiers for ADHD. Here we present our ML approach, which used structural and functional magnetic resonance imaging data, combined with demographic information, to predict diagnostic status of individuals with ADHD from typically developing (TD) children across eight different research sites. Structural features included quantitative metrics from 113 cortical and non-cortical regions. Functional features included Pearson correlation functional connectivity matrices, nodal and global graph theoretical measures, nodal power spectra, voxelwise global connectivity, and voxelwise regional homogeneity. We performed feature ranking for each site and modality using the multiple support vector machine recursive feature elimination (SVM-RFE) algorithm, and feature subset selection by optimizing the expected generalization performance of a radial basis function kernel SVM (RBF-SVM) trained across a range of the top features. Site-specific RBF-SVMs using these optimal feature sets from each imaging modality were used to predict the class labels of an independent hold-out test set. A voting approach was used to combine these multiple predictions and assign final class labels. With this methodology we were able to predict diagnosis of ADHD with 55% accuracy (versus a 39% chance level in this sample), 33% sensitivity, and 80% specificity. This approach also allowed us to evaluate predictive structural and functional features giving insight into abnormal brain circuitry in ADHD. PMID:22912605
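    The feature-ranking and classification pipeline outlined above (SVM-RFE followed by an RBF-kernel SVM) can be sketched with scikit-learn as below; the synthetic data, hyperparameters and simple cross-validation are placeholders for the site-wise, hold-out procedure used in the study.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Synthetic stand-in for imaging + demographic features (not the ADHD-200 data)
      X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                                 random_state=0)

      # Rank features with SVM-RFE (a linear kernel exposes coef_ for elimination)
      ranker = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=50, step=0.1)
      ranker.fit(X, y)
      X_top = X[:, ranker.support_]

      # Train an RBF-kernel SVM on the retained features and estimate accuracy.
      # In practice the feature selection should be nested inside cross-validation
      # to avoid optimistic bias; this sketch omits that step for brevity.
      rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale")
      print(cross_val_score(rbf_svm, X_top, y, cv=5).mean())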

  12. Using the underlying biological organization of the Mycobacterium tuberculosis functional network for protein function prediction.

    PubMed

    Mazandu, Gaston K; Mulder, Nicola J

    2012-07-01

    Despite ever-increasing amounts of sequence and functional genomics data, there is still a deficiency of functional annotation for many newly sequenced proteins. For Mycobacterium tuberculosis (MTB), more than half of its genome is still uncharacterized, which hampers the search for new drug targets within the bacterial pathogen and limits our understanding of its pathogenicity. As for many other genomes, the annotations of proteins in the MTB proteome were generally inferred from sequence homology, which is effective but has limits to its applicability. We have carried out large-scale biological data integration to produce an MTB protein functional interaction network. Protein functional relationships were extracted from the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database, and additional functional interactions from microarray, sequence and protein signature data. The confidence level of protein relationships in the additional functional interaction data was evaluated using a dynamic data-driven scoring system. This functional network has been used to predict functions of uncharacterized proteins using Gene Ontology (GO) terms, with the semantic similarity between these terms measured using a state-of-the-art GO similarity metric. To achieve a better trade-off between quality, genomic coverage and scalability, this prediction is done by observing the key principles driving the biological organization of the functional network. This study yields a newly functionally characterized MTB strain CDC1551 proteome, consisting of 3804 and 3698 proteins out of 4195 with annotations in terms of the biological process and molecular function ontologies, respectively. These data can contribute to research into the development of effective anti-tubercular drugs with novel biological mechanisms of action. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Modelling proteins' hidden conformations to predict antibiotic resistance

    NASA Astrophysics Data System (ADS)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.

  14. Geary autocorrelation and DCCA coefficient: Application to predict apoptosis protein subcellular localization via PSSM

    NASA Astrophysics Data System (ADS)

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2017-02-01

    Apoptosis is a fundamental process controlling normal tissue homeostasis by regulating a balance between cell proliferation and death. Predicting the subcellular location of apoptosis proteins is very helpful for understanding the mechanism of programmed cell death. Prediction of apoptosis protein subcellular location is still a challenging and complicated task, and existing methods are mainly based on protein primary sequences. In this paper, we propose a new position-specific scoring matrix (PSSM)-based model using the Geary autocorrelation function and the detrended cross-correlation coefficient (DCCA coefficient). A 270-dimensional (270D) feature vector is constructed on three widely used datasets, ZD98, ZW225 and CL317, and a support vector machine is adopted as the classifier. The overall prediction accuracies are significantly improved, as assessed by rigorous jackknife tests. The results show that our model offers a reliable and effective PSSM-based tool for prediction of apoptosis protein subcellular localization.
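    For reference, the Geary autocorrelation descriptor used for such PSSM-derived features can be sketched as below for a single numeric profile along the sequence; the lag range and normalization follow the standard sequence-autocorrelation definition and are assumptions about this paper's exact formulation of the 270D vector.

      import numpy as np

      def geary_autocorrelation(values, max_lag=10):
          """Geary autocorrelation descriptors C(d), d = 1..max_lag, for one
          numeric profile along the sequence (e.g., one PSSM column)."""
          x = np.asarray(values, dtype=float)
          n = len(x)
          denom = np.sum((x - x.mean()) ** 2) / (n - 1)
          feats = []
          for d in range(1, max_lag + 1):
              num = np.sum((x[:n - d] - x[d:]) ** 2) / (2 * (n - d))
              feats.append(num / denom)
          return feats

      # Example: a toy PSSM column for a 30-residue protein
      rng = np.random.default_rng(0)
      column = rng.normal(size=30)
      print(geary_autocorrelation(column, max_lag=5))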

  15. Modelling proteins’ hidden conformations to predict antibiotic resistance

    PubMed Central

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-01-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM’s specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models’ prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258

  16. LOCALIZER: subcellular localization prediction of both plant and effector proteins in the plant cell

    PubMed Central

    Sperschneider, Jana; Catanzariti, Ann-Maree; DeBoer, Kathleen; Petre, Benjamin; Gardiner, Donald M.; Singh, Karam B.; Dodds, Peter N.; Taylor, Jennifer M.

    2017-01-01

    Pathogens secrete effector proteins and many operate inside plant cells to enable infection. Some effectors have been found to enter subcellular compartments by mimicking host targeting sequences. Although many computational methods exist to predict plant protein subcellular localization, they perform poorly for effectors. We introduce LOCALIZER for predicting plant and effector protein localization to chloroplasts, mitochondria, and nuclei. LOCALIZER shows greater prediction accuracy for chloroplast and mitochondrial targeting compared to other methods for 652 plant proteins. For 107 eukaryotic effectors, LOCALIZER outperforms other methods and predicts a previously unrecognized chloroplast transit peptide for the ToxA effector, which we show translocates into tobacco chloroplasts. Secretome-wide predictions and confocal microscopy reveal that rust fungi might have evolved multiple effectors that target chloroplasts or nuclei. LOCALIZER is the first method for predicting effector localisation in plants and is a valuable tool for prioritizing effector candidates for functional investigations. LOCALIZER is available at http://localizer.csiro.au/. PMID:28300209

  17. Great interactions: How binding incorrect partners can teach us about protein recognition and function.

    PubMed

    Vamparys, Lydie; Laurent, Benoist; Carbone, Alessandra; Sacquin-Mora, Sophie

    2016-10-01

    Protein-protein interactions play a key part in most biological processes and understanding their mechanism is a fundamental problem leading to numerous practical applications. The prediction of protein binding sites in particular is of paramount importance since proteins now represent a major class of therapeutic targets. Amongst other methods, docking simulations between two proteins known to interact can be a useful tool for the prediction of likely binding patches on a protein surface. From the analysis of the protein interfaces generated by a massive cross-docking experiment using the 168 proteins of the Docking Benchmark 2.0, where all possible protein pairs, and not only experimental ones, have been docked together, we show that it is also possible to predict a protein's binding residues without having any prior knowledge regarding its potential interaction partners. Evaluating the performance of cross-docking predictions using the area under the specificity-sensitivity ROC curve (AUC) leads to an AUC value of 0.77 for the complete benchmark (compared to the 0.5 AUC value obtained for random predictions). Furthermore, a new clustering analysis performed on the binding patches that are scattered on the protein surface shows that their distribution and growth depend on the protein's functional group. Finally, in several cases, the binding-site predictions resulting from the cross-docking simulations lead to the identification of an alternate interface, which corresponds to the interaction with a biomolecular partner that is not included in the original benchmark. Proteins 2016; 84:1408-1421. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc.
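    An AUC such as the reported 0.77 is obtained by scoring every surface residue and comparing the scores against the experimentally known interface labels; a minimal sketch with scikit-learn (toy labels and scores, not the benchmark data) is:

      from sklearn.metrics import roc_auc_score

      # Per-residue ground truth (1 = experimentally observed interface residue)
      # and cross-docking-derived propensity scores; the values are illustrative.
      labels = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
      scores = [0.9, 0.2, 0.4, 0.7, 0.35, 0.1, 0.3, 0.5, 0.8, 0.2]

      print(roc_auc_score(labels, scores))  # area under the ROC curve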

  18. Novel prediction model of renal function after nephrectomy from automated renal volumetry with preoperative multidetector computed tomography (MDCT).

    PubMed

    Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo

    2015-10-01

    A predictive model of postoperative renal function may impact the planning of nephrectomy. We aimed to develop a novel predictive model combining clinical indices with computed volumetry of the preserved renal cortex volume (RCV) on multidetector computed tomography (MDCT), and to prospectively validate the model's performance. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, including a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis, and the novel model for predicting postoperative eGFR was developed with a backward elimination method. The predictive model was externally validated and its performance was compared with that of previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration (p < 0.01). The variables significantly correlated with %eGFR alteration were %RCV preservation (r = 0.58, p < 0.01) and %RPV preservation (r = 0.54, p < 0.01). We developed our regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). A strong correlation was seen between postoperative eGFR and the model's estimate (r = 0.83; p < 0.001). In the external validation cohort (n = 21), our model outperformed previously reported models. Combining MDCT renal volumetry and clinical indices might yield an important tool for predicting postoperative renal function.
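    The reported regression equation can be expressed directly as a function; the coefficients below are taken verbatim from the abstract, while the units and the assumption that %RCV preservation is entered as a fraction between 0 and 1 are interpretations.

      def predicted_postop_egfr(age, bsa, preop_egfr, rcv_preservation):
          """Postoperative eGFR from the reported regression model.

          age              : years
          bsa              : body surface area, m^2 (unit assumed)
          preop_egfr       : preoperative eGFR, mL/min/1.73 m^2
          rcv_preservation : preserved renal cortex volume fraction, 0-1 (scale assumed)
          """
          return (57.87 - 0.55 * age - 15.01 * bsa
                  + 0.30 * preop_egfr + 52.92 * rcv_preservation)

      # Illustrative patient
      print(predicted_postop_egfr(age=62, bsa=1.7, preop_egfr=75, rcv_preservation=0.55))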

  19. RNA-SSPT: RNA Secondary Structure Prediction Tools.

    PubMed

    Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; Din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad

    2013-01-01

    The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR to determine RNA secondary structure are expensive and difficult, whereas computational RNA secondary structure prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction of a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. The present work requires only the energetically most favorable secondary structure, and a modification of the algorithm is also available that produces base pairs so as to lower the total free energy of the secondary structure. For visualization of RNA secondary structure, NAVIEW, written in C, was adapted in C# to meet the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional edition. The accuracy of RNA-SSPT is tested in terms of sensitivity and positive predictive value. It is a tool that serves both secondary structure prediction and secondary structure visualization purposes.
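    For orientation, the base-pair-maximization recursion of the Nussinov algorithm that the tool implements can be sketched as below; this is a minimal illustration with a minimum loop length, not the RNA-SSPT code, and it omits the traceback and the free-energy modification mentioned above.

      def nussinov(seq, min_loop=3):
          """Maximum base-pairing count via the Nussinov dynamic program.
          Returns the DP table; a traceback (omitted) recovers the structure."""
          pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(seq)
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):          # subsequence length minus one
              for i in range(n - span):
                  j = i + span
                  best = dp[i][j - 1]                  # case 1: j unpaired
                  for k in range(i, j - min_loop):     # case 2: j paired with k
                      if (seq[k], seq[j]) in pairs:
                          left = dp[i][k - 1] if k > i else 0
                          best = max(best, left + 1 + dp[k + 1][j - 1])
                  dp[i][j] = best
          return dp

      dp = nussinov("GGGAAAUCC")
      print(dp[0][-1])  # maximum number of base pairs (3 for this toy sequence)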

  20. RNA-SSPT: RNA Secondary Structure Prediction Tools

    PubMed Central

    Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad

    2013-01-01

    The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR to determine RNA secondary structure are expensive and difficult, whereas computational RNA secondary structure prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction of a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. The present work requires only the energetically most favorable secondary structure, and a modification of the algorithm is also available that produces base pairs so as to lower the total free energy of the secondary structure. For visualization of RNA secondary structure, NAVIEW, written in C, was adapted in C# to meet the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional edition. The accuracy of RNA-SSPT is tested in terms of sensitivity and positive predictive value. It is a tool that serves both secondary structure prediction and secondary structure visualization purposes. PMID:24250115

  1. Category-selective attention modulates unconscious processes in the middle occipital gyrus.

    PubMed

    Tu, Shen; Qiu, Jiang; Martens, Ulla; Zhang, Qinglin

    2013-06-01

    Many studies have revealed top-down modulation (spatial attention, attentional load, etc.) of unconscious processing. However, there is little research on how category-selective attention modulates unconscious processing. In the present study, using functional magnetic resonance imaging (fMRI), the results showed that category-selective attention modulated unconscious face/tool processing in the middle occipital gyrus (MOG). Interestingly, the MOG effects were of opposite direction for face and tool processing. During unconscious face processing, activation in MOG decreased under face-selective attention compared with tool-selective attention. This result was in line with the predictive coding theory. During unconscious tool processing, however, activation in MOG increased under tool-selective attention compared with face-selective attention. The different effects might be ascribed to an interaction between top-down category-selective processes and bottom-up processes at the partial awareness level, as proposed by Kouider, De Gardelle, Sackur, and Dupoux (2010). Specifically, we propose an "excessive activation" hypothesis. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SIEBER, CHRISTIAN

    Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone, including three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.

  3. Habitat Modeling and Preferences of Marine Mammals as Function of Oceanographic Characteristics: Development of Predictive Tools for Assessing the Risks and the Impacts Due to Sound Emissions

    DTIC Science & Technology

    2011-09-30

    Arianna Azzellino, Polytechnic University of Milan, Piazza Leonardo da Vinci 32, 20133 Milano, Italy

  4. Which neuromuscular or cognitive test is the optimal screening tool to predict falls in frail community-dwelling older people?

    PubMed

    Shimada, Hiroyuki; Suzukawa, Megumi; Tiedemann, Anne; Kobayashi, Kumiko; Yoshida, Hideyo; Suzuki, Takao

    2009-01-01

    The use of falls risk screening tools may aid in targeting fall prevention interventions in older individuals most likely to benefit. To determine the optimal physical or cognitive test to screen for falls risk in frail older people. This prospective cohort study involved recruitment from 213 day-care centers in Japan. The feasibility study included 3,340 ambulatory individuals aged 65 years or older enrolled in the Tsukui Ordered Useful Care for Health (TOUCH) program. The external validation study included a subsample of 455 individuals who completed all tests. Physical tests included grip strength (GS), chair stand test (CST), one-leg standing test (OLS), functional reach test (FRT), tandem walking test (TWT), 6-meter walking speed at a comfortable pace (CWS) and at maximum pace (MWS), and timed up-and-go test (TUG). The mental status questionnaire (MSQ) was used to measure cognitive function. The incidence of falls during 1 year was investigated by self-report or an interview with the participant's family and care staff. The most practicable tests were the GS and MSQ, which could be administered to more than 90% of the participants regardless of the activities of daily living status. The FRT and TWT had lower feasibility than other lower limb function tests. During the 1-year retrospective analysis of falls, 99 (21.8%) of the 455 validation study participants had fallen at least once. Fallers showed significantly poorer performance than non-fallers in the OLS (p = 0.003), TWT (p = 0.001), CWS (p = 0.013), MWS (p = 0.007), and TUG (p = 0.011). The OLS, CWS, and MWS remained significantly associated with falls when performance cut-points were determined. Logistic regression analysis revealed that the TWT was a significant and independent, yet weak predictor of falls. A weighting system which considered feasibility and validity scored the CWS (at a cut-point of 0.7 m/s) as the best test to predict risk of falls. Clinical tests of neuromuscular function can predict risk of falls in frail older people. When feasibility and validity were considered, the CWS was the best test for use as a screening tool in frail older people, however, these preliminary results require confirmation in further research. Copyright 2009 S. Karger AG, Basel.

  5. A statistical model to predict one-year risk of death in patients with cystic fibrosis.

    PubMed

    Aaron, Shawn D; Stephenson, Anne L; Cameron, Donald W; Whitmore, George A

    2015-11-01

    We constructed a statistical model to assess the risk of death for cystic fibrosis (CF) patients between scheduled annual clinical visits. Our model includes a CF health index that shows the influence of risk factors on CF chronic health and on the severity and frequency of CF exacerbations. Our study used Canadian CF registry data for 3,794 CF patients born after 1970. Data up to 2010 were analyzed, yielding 44,390 annual visit records. Our stochastic process model postulates that CF health between annual clinical visits is a superposition of chronic disease progression and an exacerbation shock stream. Death occurs when an exacerbation carries CF health across a critical threshold. The data constitute censored survival data, and hence, threshold regression was used to connect CF death to study covariates. Maximum likelihood estimates were used to determine which clinical covariates were included within the regression functions for both CF chronic health and CF exacerbations. Lung function, Pseudomonas aeruginosa infection, CF-related diabetes, weight deficiency, pancreatic insufficiency, and the deltaF508 homozygous mutation were significantly associated with CF chronic health status. Lung function, age, gender, age at CF diagnosis, P aeruginosa infection, body mass index <18.5, number of previous hospitalizations for CF exacerbations in the preceding year, and decline in forced expiratory volume in 1 second in the preceding year were significantly associated with CF exacerbations. When combined in one summative model, the regression functions for CF chronic health and CF exacerbation risk provided a simple clinical scoring tool for assessing 1-year risk of death for an individual CF patient. Goodness-of-fit tests of the model showed very encouraging results. We confirmed predictive validity of the model by comparing actual and estimated deaths in repeated hold-out samples from the data set and showed excellent agreement between estimated and actual mortality. Our threshold regression model incorporates a composite CF chronic health status index and an exacerbation risk index to produce an accurate clinical scoring tool for prediction of 1-year survival of CF patients. Our tool can be used by clinicians to decide on optimal timing for lung transplant referral. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Visual Predictive Check in Models with Time-Varying Input Function.

    PubMed

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that are able to evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user to check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a distance term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with each individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
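    A minimal sketch of the matching step described, in which each simulated parameter vector is associated with the individual whose estimated parameters are nearest under a normalized Euclidean distance so that that individual's measured input function is reused, is given below; the function name and the scaling by per-parameter standard deviation are assumptions.

      import numpy as np

      def nearest_individual(simulated_params, individual_params):
          """Index of the individual whose estimated parameter vector is closest
          to a simulated parameter vector under a normalized Euclidean distance
          (each parameter scaled by its across-individual standard deviation)."""
          P = np.asarray(individual_params, dtype=float)   # (n_individuals, n_params)
          s = P.std(axis=0)
          s[s == 0] = 1.0                                  # guard against constant parameters
          d = np.sqrt((((P - simulated_params) / s) ** 2).sum(axis=1))
          return int(np.argmin(d))

      # A simulated (CL, V) pair matched against three individuals' estimates
      individuals = [(1.2, 10.0), (2.5, 18.0), (0.8, 7.5)]
      idx = nearest_individual(np.array([2.3, 17.0]), individuals)
      print(idx)  # -> 1: reuse that individual's measured input function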

  7. Turbulence-driven Coronal Heating and Improvements to Empirical Forecasting of the Solar Wind

    NASA Astrophysics Data System (ADS)

    Woolsey, Lauren N.; Cranmer, Steven R.

    2014-06-01

    Forecasting models of the solar wind often rely on simple parameterizations of the magnetic field that ignore the effects of the full magnetic field geometry. In this paper, we present the results of two solar wind prediction models that consider the full magnetic field profile and include the effects of Alfvén waves on coronal heating and wind acceleration. The one-dimensional magnetohydrodynamic code ZEPHYR self-consistently finds solar wind solutions without the need for empirical heating functions. Another one-dimensional code, introduced in this paper (the Efficient Modified-Parker-Equation-Solving Tool, TEMPEST), can act as a smaller, stand-alone code for use in forecasting pipelines. TEMPEST is written in Python and will become a publicly available library of functions that is easy to adapt and expand. We discuss important relations between the magnetic field profile and properties of the solar wind that can be used to independently validate prediction models. ZEPHYR provides the foundation and calibration for TEMPEST, and ultimately we will use these models to predict observations and explain space weather created by the bulk solar wind. With both models we are able to reproduce the general anticorrelation seen in comparisons of observed wind speed at 1 AU and the flux tube expansion factor. There is significantly less spread when comparing the results of the two models with each other than when comparing ZEPHYR with a traditional flux tube expansion relation. We suggest that the new code, TEMPEST, will become a valuable tool in the forecasting of space weather.

  8. Usefulness of the rivermead postconcussion symptoms questionnaire and the trail-making test for outcome prediction in patients with mild traumatic brain injury.

    PubMed

    de Guise, Elaine; Bélanger, Sara; Tinawi, Simon; Anderson, Kirsten; LeBlanc, Joanne; Lamoureux, Julie; Audrit, Hélène; Feyz, Mitra

    2016-01-01

    The aim of the study was to determine whether the Rivermead Postconcussion Symptoms Questionnaire (RPQ) is a better tool for outcome prediction than an objective neuropsychological assessment following mild traumatic brain injury (mTBI). The study included 47 patients with mTBI referred to an outpatient rehabilitation clinic. The RPQ and a brief neuropsychological battery were administered in the first few days following the trauma. The outcome measure used was the Mayo-Portland Adaptability Inventory-4 (MPAI-4), which was completed within the first 3 months. The only variable associated with results on the MPAI-4 was the RPQ score (p < .001). The predictive outcome model including age, education, and the results of the Trail-Making Test Parts A and B (TMT) had a pseudo-R(2) of .02. When the RPQ score was added, the pseudo-R(2) climbed to .19. This model indicates that the usefulness of the RPQ score and the TMT in predicting moderate-to-severe limitations, while controlling for confounders, is substantial, as suggested by a significant increase in the model chi-square value, delta (1 df) = 6.517, p < .001. The RPQ and the TMT provide clinicians with a brief and reliable tool for predicting functional outcome and can help target the need for further intervention and rehabilitation following mTBI.

  9. Computational Fluid Dynamic Investigation of Loss Mechanisms in a Pulse-Tube Refrigerator

    NASA Astrophysics Data System (ADS)

    Martin, K.; Esguerra, J.; Dodson, C.; Razani, A.

    2015-12-01

    In predicting Pulse-Tube Cryocooler (PTC) performance, One-Dimensional (1-D) PTR design and analysis tools such as Gedeon Associates SAGE® typically include models for performance degradation due to thermodynamically irreversible processes. SAGE®, in particular, accounts for convective loss, turbulent conductive loss and numerical diffusion “loss” via correlation functions based on analysis and empirical testing. In this study, we compare CFD and SAGE® estimates of PTR refrigeration performance for four distinct pulse-tube lengths. Performance predictions from PTR CFD models are compared to SAGE® predictions for all four cases. Then, to further demonstrate the benefits of higher-fidelity and multidimensional CFD simulation, the PTR loss mechanisms are characterized in terms of their spatial and temporal locations.

  10. Investigation of type-I interferon dysregulation by arenaviruses : a multidisciplinary approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozina, Carol L.; Moorman, Matthew Wallace; Branda, Catherine

    2011-09-01

    This report provides a detailed overview of the work performed for project number 130781, 'A Systems Biology Approach to Understanding Viral Hemorrhagic Fever Pathogenesis.' We report progress in five key areas: single cell isolation devices and control systems, fluorescent cytokine and transcription factor reporters, on-chip viral infection assays, molecular virology analysis of Arenavirus nucleoprotein structure-function, and development of computational tools to predict virus-host protein interactions. Although a great deal of work remains from that begun here, we have developed several novel single cell analysis tools and knowledge of Arenavirus biology that will facilitate and inform future publications and funding proposals.

  11. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in vitro-in vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where the input represents drug release in vitro and the weighting/response represents any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
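    The convolution step can also be reproduced numerically outside Excel; the sketch below convolves a hypothetical first-order in vitro release rate with a hypothetical unit-impulse weighting function to predict the in vivo response, and deconvolution would invert this relationship.

      import numpy as np

      dt = 0.5                                   # sampling interval, h
      t = np.arange(0, 24, dt)

      # Hypothetical in vitro input rate (first-order release) and unit-impulse
      # weighting function (one-compartment disposition); both are illustrative.
      input_rate = 0.4 * np.exp(-0.4 * t)        # fraction released per hour
      weighting = np.exp(-0.2 * t)               # unit-impulse response

      # Discrete convolution approximates the integral of input * weighting
      response = np.convolve(input_rate, weighting)[:len(t)] * dt
      print(response[:5])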

  12. PTMScout, a Web Resource for Analysis of High Throughput Post-translational Proteomics Studies*

    PubMed Central

    Naegle, Kristen M.; Gymrek, Melissa; Joughin, Brian A.; Wagner, Joel P.; Welsch, Roy E.; Yaffe, Michael B.; Lauffenburger, Douglas A.; White, Forest M.

    2010-01-01

    The rate of discovery of post-translational modification (PTM) sites is increasing rapidly and is significantly outpacing our biological understanding of the function and regulation of those modifications. To help meet this challenge, we have created PTMScout, a web-based interface for viewing, manipulating, and analyzing high throughput experimental measurements of PTMs in an effort to facilitate biological understanding of protein modifications in signaling networks. PTMScout is constructed around a custom database of PTM experiments and contains information from external protein and post-translational resources, including gene ontology annotations, Pfam domains, and Scansite predictions of kinase and phosphopeptide binding domain interactions. PTMScout functionality comprises data set comparison tools, data set summary views, and tools for protein assignments of peptides identified by mass spectrometry. Analysis tools in PTMScout focus on informed subset selection via common criteria and on automated hypothesis generation through subset labeling derived from identification of statistically significant enrichment of other annotations in the experiment. Subset selection can be applied through the PTMScout flexible query interface available for quantitative data measurements and data annotations as well as an interface for importing data set groupings by external means, such as unsupervised learning. We exemplify the various functions of PTMScout in application to data sets that contain relative quantitative measurements as well as data sets lacking quantitative measurements, producing a set of interesting biological hypotheses. PTMScout is designed to be a widely accessible tool, enabling generation of multiple types of biological hypotheses from high throughput PTM experiments and advancing functional assignment of novel PTM sites. PTMScout is available at http://ptmscout.mit.edu. PMID:20631208
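    The "statistically significant enrichment" step can be illustrated with a hypergeometric test as below; PTMScout's actual statistic and any multiple-testing correction may differ, so the choice of test and the example counts are assumptions.

      from scipy.stats import hypergeom

      def enrichment_pvalue(n_background, n_annotated, n_subset, n_subset_annotated):
          """P(observing >= n_subset_annotated annotated items in the subset)
          under a hypergeometric null (sampling without replacement)."""
          return hypergeom.sf(n_subset_annotated - 1, n_background, n_annotated, n_subset)

      # Example: 5000 phosphosites in the experiment, 300 carry a given annotation,
      # and a selected subset of 120 sites contains 25 of them.
      print(enrichment_pvalue(5000, 300, 120, 25))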

  13. Evaluating the Addition of a Dinoflagellate Phytoplankton Functional Type Using Radiance Anomalies for Monterey Bay, CA

    NASA Astrophysics Data System (ADS)

    Houskeeper, H. F.; Kudela, R. M.

    2016-12-01

    Ocean color sensors have enabled daily, global monitoring of phytoplankton productivity in the world's oceans. However, to observe key structures such as food webs, or to identify regime shifts of dominant species, tools capable of distinguishing between phytoplankton functional types using satellite remote sensing reflectance are necessary. One such tool developed by Alvain et al. (2005), PHYSAT, successfully linked four phytoplankton functional types to chlorophyll-normalized remote sensing spectra, or radiance anomalies, in case-1 waters. Yet this tool was unable to characterize dinoflagellates because of their ubiquitous background presence in the open ocean. We employ a radiance anomaly technique based on PHYSAT to target phytoplankton functional types in Monterey Bay, a region where dinoflagellate populations are larger and more variable than in open ocean waters, and thus where they may be viable targets for satellite remote sensing characterization. We compare with an existing Santa Cruz Wharf photo-pigment time series spanning from 2006 to the present to regionally ground-truth the method's predictions, and we assess its accuracy in characterizing dinoflagellates, a phytoplankton group that impacts the region's fish stocks and water quality. For example, an increase in dinoflagellate abundance beginning in 2005 led to declines in commercially important fish stocks that persisted throughout the following year. Certain species of dinoflagellates in Monterey Bay are also responsible for some of the harmful algal bloom events that negatively impact the shellfish industry. Moving toward better tools to characterize phytoplankton blooms is important for understanding ecosystem shifts, as well as protecting human health in the surrounding areas.

  14. Proteome-wide search for functional motifs altered in tumors: Prediction of nuclear export signals inactivated by cancer-related mutations

    PubMed Central

    Prieto, Gorka; Fullaondo, Asier; Rodríguez, Jose A.

    2016-01-01

    Large-scale sequencing projects are uncovering a growing number of missense mutations in human tumors. Understanding the phenotypic consequences of these alterations represents a formidable challenge. In silico prediction of functionally relevant amino acid motifs disrupted by cancer mutations could provide insight into the potential impact of a mutation, and guide functional tests. We have previously described Wregex, a tool for the identification of potential functional motifs, such as nuclear export signals (NESs), in proteins. Here, we present an improved version that allows motif prediction to be combined with data from large repositories, such as the Catalogue of Somatic Mutations in Cancer (COSMIC), and to be applied to a whole proteome scale. As an example, we have searched the human proteome for candidate NES motifs that could be altered by cancer-related mutations included in the COSMIC database. A subset of the candidate NESs identified was experimentally tested using an in vivo nuclear export assay. A significant proportion of the selected motifs exhibited nuclear export activity, which was abrogated by the COSMIC mutations. In addition, our search identified a cancer mutation that inactivates the NES of the human deubiquitinase USP21, and leads to the aberrant accumulation of this protein in the nucleus. PMID:27174732
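    Motif scanning of the kind Wregex performs can be illustrated with an ordinary regular expression; the pattern below is the widely cited leucine-rich NES consensus (hydrophobic residues spaced as Phi-X2,3-Phi-X2,3-Phi-X-Phi, with Phi in {L, I, V, F, M}) and is an assumption for illustration, not the expression used in the study.

      import re

      # Classical leucine-rich NES consensus (an assumption, not Wregex's pattern)
      NES = re.compile(r"[LIVFM].{2,3}[LIVFM].{2,3}[LIVFM].[LIVFM]")

      def find_candidate_nes(protein_seq):
          """Return (start, end, match) for every non-overlapping consensus hit."""
          return [(m.start(), m.end(), m.group()) for m in NES.finditer(protein_seq)]

      seq = "MSQLDLNKLQELAAKLDIDLNELSVR"
      print(find_candidate_nes(seq))  # one candidate motif in this toy sequence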

  15. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
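    The parametric GEV baseline that the nonparametric estimators are compared against can be sketched as follows: fit a GEV to block maxima and read off a return level as a high quantile. The ozone values below are synthetic, and scipy's parameterization of the shape parameter is used.

      from scipy.stats import genextreme

      # Synthetic block maxima standing in for station-level maximum ozone (ug/m3)
      block_maxima = genextreme.rvs(-0.1, loc=90, scale=15, size=40, random_state=1)

      # Fit the GEV and compute the T-year return level (quantile 1 - 1/T)
      c, loc, scale = genextreme.fit(block_maxima)
      T = 10
      return_level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
      print(c, loc, scale, return_level)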

  16. Active edge control in the precessions polishing process for manufacturing large mirror segments

    NASA Astrophysics Data System (ADS)

    Li, Hongyu; Zhang, Wei; Walker, David; Yu, Gouyo

    2014-09-01

    The segmentation of the primary mirror is the only promising solution for building the next generation of ground telescopes. However, manufacturing segmented mirrors presents its own challenges. The edge mis-figure impacts directly on the telescope's scientific output, and the 'edge effect' largely dominates the achievable polishing precision. Therefore, edge control is regarded as one of the most difficult technical issues in segment production and needs to be addressed urgently. This paper reports an active edge control technique for mirror segment fabrication using the Precessions polishing technique. The strategy requires that a large spot be selected on the bulk area for fast polishing, while a small spot is used for edge figuring. This can be performed by tool lift and by optimizing the dwell time to compensate for non-uniform material removal at the edge zone, which requires accurate and stable edge tool influence functions. To obtain the full tool influence function at the edge, we demonstrated in previous work a novel hybrid-measurement method which uses both simultaneous phase interferometry and profilometry. In this paper, the edge effect under bonnet tool polishing is investigated. The pressure distribution is analyzed by means of finite element analysis (FEA), and, according to the Preston equation, the shape of the edge tool influence functions is predicted. With this help, the multiple process parameters at the edge zone are optimized. This is demonstrated on a 200 mm cross-corners hexagonal part with a PV of less than 200 nm over the entire surface.
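    The Preston relation invoked above, removal proportional to local pressure times relative speed (dz = k_p * P * V * t), can be sketched as a removal-map calculation; the pressure and velocity fields below are placeholders for the FEA output and bonnet kinematics, and the Preston coefficient is an arbitrary illustrative value.

      import numpy as np

      def preston_removal(pressure, velocity, dwell_time, k_p=1.0e-13):
          """Material removal depth map from Preston's equation:
          dz = k_p * P * V * t (per-point pressure, relative speed, dwell)."""
          return k_p * pressure * velocity * dwell_time

      # Placeholder fields across the tool contact spot (real values come from FEA
      # and the bonnet kinematics); units: Pa, m/s, s, with k_p in m^2/N.
      P = np.full((64, 64), 2.0e4)
      V = np.full((64, 64), 0.5)
      removal = preston_removal(P, V, dwell_time=10.0)
      print(removal.max())   # removal depth in metres (about 10 nm here)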

  17. Capture, Learning, and Classification of Upper Extremity Movement Primitives in Healthy Controls and Stroke Patients

    PubMed Central

    Guerra, Jorge; Uddin, Jasim; Nilsen, Dawn; Mclnerney, James; Fadoo, Ammarah; Omofuma, Isirame B.; Hughes, Shatif; Agrawal, Sunil; Allen, Peter; Schambra, Heidi M.

    2017-01-01

    There currently exist no practical tools to identify functional movements in the upper extremities (UEs). This absence has limited the precise therapeutic dosing of patients recovering from stroke. In this proof-of-principle study, we aimed to develop an accurate approach for classifying UE functional movement primitives, which comprise functional movements. Data were generated from inertial measurement units (IMUs) placed on upper body segments of older healthy individuals and chronic stroke patients. Subjects performed activities commonly trained during rehabilitation after stroke. Data processing involved the use of a sliding window to obtain statistical descriptors, and resulting features were processed by a Hidden Markov Model (HMM). The likelihoods of the states, resulting from the HMM, were segmented by a second sliding window and their averages were calculated. The final predictions were mapped to human functional movement primitives using a Logistic Regression algorithm. Algorithm performance was assessed with a leave-one-out analysis, which determined its sensitivity, specificity, and positive and negative predictive values for all classified primitives. In healthy control and stroke participants, our approach identified functional movement primitives embedded in training activities with, on average, 80% precision. This approach may support functional movement dosing in stroke rehabilitation. PMID:28813877
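    The sliding-window descriptor stage that precedes the HMM can be sketched as below for a single IMU channel; the window length, stride and descriptor set are assumptions, not the study's exact settings.

      import numpy as np

      def window_features(signal, win=50, stride=25):
          """Statistical descriptors (mean, std, min, max) per sliding window
          for one IMU channel; rows are windows, columns are descriptors."""
          feats = []
          for start in range(0, len(signal) - win + 1, stride):
              w = signal[start:start + win]
              feats.append([w.mean(), w.std(), w.min(), w.max()])
          return np.array(feats)

      # One synthetic accelerometer channel sampled at roughly 50 Hz
      rng = np.random.default_rng(0)
      accel = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.normal(size=1000)
      print(window_features(accel).shape)   # (number of windows, 4 descriptors)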

  18. miRNAtools: Advanced Training Using the miRNA Web of Knowledge.

    PubMed

    Stępień, Ewa Ł; Costa, Marina C; Enguita, Francisco J

    2018-02-16

    Micro-RNAs (miRNAs) are small non-coding RNAs that act as negative regulators of the genomic output. Their intrinsic importance within cell biology and human disease is well known. Their mechanism of action, based on base pairing with their cognate targets, has helped the development not only of many computer applications for the prediction of miRNA target recognition but also of specific applications for functional assessment and analysis. Learning about miRNA function requires practical training in the use of specific computer and web-based applications that are complementary to wet-lab studies. In order to guide the learning process about miRNAs, we have created miRNAtools (http://mirnatools.eu), a web repository of miRNA tools and tutorials. This article compiles tools with which miRNAs and their regulatory action can be analyzed and which serve to collect and organize information dispersed on the web. The miRNAtools website contains a collection of tutorials that can be used by students and tutors engaged in advanced training courses. The tutorials engage in analyses of the functions of selected miRNAs, starting with their nomenclature and genomic localization and finishing with their involvement in specific cellular functions.

  19. Understanding Interrater Reliability and Validity of Risk Assessment Tools Used to Predict Adverse Clinical Events.

    PubMed

    Siedlecki, Sandra L; Albert, Nancy M

    This article describes how to assess the interrater reliability and validity of risk assessment tools, using easy-to-follow formulas, and provides calculations that demonstrate the principles discussed. Clinical nurse specialists should be able to identify risk assessment tools that provide high-quality interrater reliability and the highest validity for predicting true events of importance to clinical settings. Making best-practice recommendations for assessment tool use is critical to high-quality patient care and safe practices that impact patient outcomes and nursing resources. Optimal risk assessment tool selection requires knowledge about interrater reliability and tool validity. The clinical nurse specialist will understand the reliability and validity issues associated with risk assessment tools and be able to evaluate tools using basic calculations. Risk assessment tools are developed to objectively predict quality and safety events and ultimately reduce the risk of event occurrence through preventive interventions. To ensure high-quality tool use, clinical nurse specialists must critically assess tool properties. The better the tool's ability to predict adverse events, the more likely that event risk is mediated. Interrater reliability and validity assessment is a relatively easy skill to master and will result in better decisions when selecting or making recommendations for risk assessment tool use.
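    The calculations referred to can be sketched as follows: Cohen's kappa from a 2x2 agreement table for interrater reliability, and sensitivity, specificity and predictive values for predictive validity. The counts in the example are invented for illustration.

      def cohen_kappa(a, b, c, d):
          """Cohen's kappa from a 2x2 agreement table:
          a = both raters 'at risk', d = both 'not at risk', b and c = disagreements."""
          n = a + b + c + d
          p_observed = (a + d) / n
          p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
          return (p_observed - p_expected) / (1 - p_expected)

      def predictive_validity(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV for tool-flagged vs. actual events."""
          return {"sensitivity": tp / (tp + fn),
                  "specificity": tn / (tn + fp),
                  "ppv": tp / (tp + fp),
                  "npv": tn / (tn + fn)}

      # Invented counts for illustration only
      print(round(cohen_kappa(40, 10, 5, 45), 3))          # kappa = 0.7
      print(predictive_validity(tp=30, fp=20, fn=10, tn=140))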

  20. miRToolsGallery: a tag-based and rankable microRNA bioinformatics resources database portal

    PubMed Central

    Chen, Liang; Heikkinen, Liisa; Wang, ChangLiang; Yang, Yang; Knott, K Emily

    2018-01-01

    Hundreds of bioinformatics tools have been developed for MicroRNA (miRNA) investigations including those used for identification, target prediction, structure and expression profile analysis. However, finding the correct tool for a specific application requires the tedious and laborious process of locating, downloading, testing and validating the appropriate tool from a group of nearly a thousand. In order to facilitate this process, we developed a novel database portal named miRToolsGallery. We constructed the portal by manually curating > 950 miRNA analysis tools and resources. In the portal, a query to locate the appropriate tool is expedited by being searchable, filterable and rankable. The ranking feature is vital to quickly identify and prioritize the more useful from the obscure tools. Tools are ranked via different criteria including the PageRank algorithm, date of publication, number of citations, average of votes and number of publications. miRToolsGallery provides links and data for the comprehensive collection of currently available miRNA tools with a ranking function which can be adjusted using different criteria according to specific requirements. Database URL: http://www.mirtoolsgallery.org PMID:29688355
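    Of the ranking criteria listed, the PageRank component can be sketched with a small power-iteration implementation; the toy link graph among four tools below is a stand-in for the portal's real graph, and the damping factor is the conventional 0.85.

      import numpy as np

      def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
          """PageRank by power iteration on a 0/1 adjacency matrix
          (adj[i, j] = 1 means item i links to / cites item j)."""
          n = adj.shape[0]
          out_deg = adj.sum(axis=1)
          # Dangling nodes (no outgoing links) distribute their rank uniformly
          M = np.where(out_deg[:, None] > 0,
                       adj / np.maximum(out_deg, 1)[:, None],
                       1.0 / n)
          r = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              r_new = (1 - damping) / n + damping * M.T @ r
              if np.abs(r_new - r).sum() < tol:
                  break
              r = r_new
          return r

      # Toy link graph among four tools (a stand-in for the portal's real graph)
      adj = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 0],
                      [0, 0, 1, 0]], dtype=float)
      print(pagerank(adj))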

  1. BALLIST: A computer program to empirically predict the bumper thickness required to prevent perforation of the Space Station by orbital debris

    NASA Technical Reports Server (NTRS)

    Rule, William Keith

    1991-01-01

    A computer program called BALLIST that is intended to be a design tool for engineers is described. BALLIST empirically predicts the bumper thickness required to prevent perforation of the Space Station pressure wall by a projectile (such as orbital debris) as a function of the projectile's velocity. 'Ballistic' limit curves (bumper thickness vs. projectile velocity) are calculated and are displayed on the screen as well as being stored in an ASCII file. A Whipple style of spacecraft wall configuration is assumed. The predictions are based on a database of impact test results. NASA/Marshall Space Flight Center currently has the capability to generate such test results. Numerical simulation results of impact conditions that cannot be tested (high velocities or large particles) can also be used for predictions.

  2. An automated benchmarking platform for MHC class II binding prediction methods.

    PubMed

    Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten

    2018-05-01

    Computational methods for the prediction of peptide-MHC binding have become an integral and essential component for candidate selection in experimental T cell epitope discovery studies. The sheer amount of published prediction methods-and often discordant reports on their performance-poses a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal to provide an unbiased, transparent evaluation of the state-of-the-art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join with a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/. mniel@bioinformatics.dtu.dk. Supplementary data are available at Bioinformatics online.
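
    The core weekly benchmarking step amounts to scoring each tool's predictions against newly measured binding data with a common metric such as the area under the ROC curve. The sketch below uses simulated measurements and invented tool names; it is not the IEDB platform's code.

        # Simulated benchmark: compare tools by ROC AUC on newly released binding data.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        is_binder = rng.integers(0, 2, size=200)                 # new measurements (1 = binder)
        predictions = {
            "tool_x": is_binder * 0.6 + rng.random(200) * 0.4,   # informative predictor
            "tool_y": rng.random(200),                           # uninformative predictor
        }
        for name, scores in predictions.items():
            print(name, round(roc_auc_score(is_binder, scores), 3))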

  3. Experimental Assessment of Splicing Variants Using Expression Minigenes and Comparison with In Silico Predictions

    PubMed Central

    Sharma, Neeraj; Sosnay, Patrick R.; Ramalho, Anabela S.; Douville, Christopher; Franca, Arianna; Gottschalk, Laura B.; Park, Jeenah; Lee, Melissa; Vecchio-Pagan, Briana; Raraigh, Karen S.; Amaral, Margarida D.; Karchin, Rachel; Cutting, Garry R.

    2015-01-01

    Assessment of the functional consequences of variants near splice sites is a major challenge in the diagnostic laboratory. To address this issue, we created expression minigenes (EMGs) to determine the RNA and protein products generated by splice site variants (n = 10) implicated in cystic fibrosis (CF). Experimental results were compared with the splicing predictions of eight in silico tools. EMGs containing the full-length Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) coding sequence and flanking intron sequences generated wild-type transcript and fully processed protein in Human Embryonic Kidney (HEK293) and CF bronchial epithelial (CFBE41o-) cells. Quantification of variant induced aberrant mRNA isoforms was concordant using fragment analysis and pyrosequencing. The splicing patterns of c.1585−1G>A and c.2657+5G>A were comparable to those reported in primary cells from individuals bearing these variants. Bioinformatics predictions were consistent with experimental results for 9/10 variants (MES), 8/10 variants (NNSplice), and 7/10 variants (SSAT and Sroogle). Programs that estimate the consequences of mis-splicing predicted 11/16 (HSF and ASSEDA) and 10/16 (Fsplice and SplicePort) experimentally observed mRNA isoforms. EMGs provide a robust experimental approach for clinical interpretation of splice site variants and refinement of in silico tools. PMID:25066652

  4. New support vector machine-based method for microRNA target prediction.

    PubMed

    Li, L; Gao, Q; Mao, X; Cao, Y

    2014-06-09

    MicroRNA (miRNA) plays important roles in cell differentiation, proliferation, growth, mobility, and apoptosis. An accurate list of precise target genes is necessary in order to fully understand the importance of miRNAs in animal development and disease. Several computational methods have been proposed for miRNA target-gene identification. However, these methods still have limitations with respect to their sensitivity and accuracy. Thus, we developed a new miRNA target-prediction method based on the support vector machine (SVM) model. The model encodes information from two binding sites (primary and secondary) and uses a radial basis function kernel as the similarity measure over the SVM features. The features are categorized as structural, thermodynamic, and sequence-conservation features. Using high-confidence datasets selected from public miRNA target databases, we obtained a human miRNA target SVM classifier model with high performance and provided an efficient tool for human miRNA target gene identification. Experiments have shown that our method is a reliable tool for miRNA target-gene prediction, and a successful application of an SVM classifier. Compared with other methods, the method proposed here improves the sensitivity and accuracy of miRNA target prediction. Its performance can be further improved by providing more training examples.
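
    The classifier described above can be approximated conceptually with a radial basis function (RBF) kernel SVM over per-site features. The sketch below uses synthetic features and labels (stand-ins for structural, thermodynamic and conservation descriptors) and is not the authors' trained model.

        # RBF-kernel SVM over synthetic miRNA:target features; illustrative only.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 300
        # columns: seed-pairing score, duplex free energy, conservation score (all synthetic)
        X = rng.normal(size=(n, 3))
        y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        print(round(cross_val_score(model, X, y, cv=5).mean(), 3))   # estimated accuracy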

  5. Crysalis: an integrated server for computational analysis and design of protein crystallization.

    PubMed

    Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I; Lin, Donghai; Song, Jiangning

    2016-02-24

    The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/.
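
    The design mode described in point (2) can be illustrated with a toy scan: score every single-point mutant of a sequence with a (pretend) crystallization-propensity regressor and keep the top-ranked substitutions. The features, training data and model below are placeholders, not Crysalis itself.

        # Toy single-point mutation scan with an SVR propensity model; everything is synthetic.
        import numpy as np
        from sklearn.svm import SVR

        AA = "ACDEFGHIKLMNPQRSTVWY"
        kd = {  # Kyte-Doolittle hydropathy, used here only to build simple toy features
            'A': 1.8, 'C': 2.5, 'D': -3.5, 'E': -3.5, 'F': 2.8, 'G': -0.4, 'H': -3.2,
            'I': 4.5, 'K': -3.9, 'L': 3.8, 'M': 1.9, 'N': -3.5, 'P': -1.6, 'Q': -3.5,
            'R': -4.5, 'S': -0.8, 'T': -0.7, 'V': 4.2, 'W': -0.9, 'Y': -1.3}

        def features(seq):
            h = [kd[a] for a in seq]
            return [np.mean(h), np.std(h), seq.count('K') + seq.count('E')]

        rng = np.random.default_rng(2)
        train = ["".join(rng.choice(list(AA), 30)) for _ in range(200)]
        y = [features(s)[2] - 0.1 * features(s)[0] + rng.normal(scale=0.3) for s in train]
        model = SVR(kernel="rbf").fit([features(s) for s in train], y)   # pretend propensity model

        target = "MKTAYIAKQRQISFVKSHFSRQLEERLGLI"                          # hypothetical 30-residue target
        scans = [(model.predict([features(target[:i] + a + target[i+1:])])[0], i, a)
                 for i in range(len(target)) for a in AA if a != target[i]]
        print(sorted(scans, reverse=True)[:3])   # top-scoring hypothetical point mutants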

  6. Crysalis: an integrated server for computational analysis and design of protein crystallization

    PubMed Central

    Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I.; Lin, Donghai; Song, Jiangning

    2016-01-01

    The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/. PMID:26906024

  7. A quick reality check for microRNA target prediction.

    PubMed

    Kast, Juergen

    2011-04-01

    The regulation of protein abundance by microRNA (miRNA)-mediated repression of mRNA translation is a rapidly growing area of interest in biochemical research. In animal cells, the miRNA seed sequence does not perfectly match that of the mRNA it targets, resulting in a large number of possible miRNA targets and varied extents of repression. Several software tools are available for the prediction of miRNA targets, yet the overlap between them is limited. Jovanovic et al. have developed and applied a targeted, quantitative approach to validate predicted miRNA target proteins. Using a proteome database, they have set up and tested selected reaction monitoring assays for approximately 20% of more than 800 predicted let-7 targets, as well as control genes in Caenorhabditis elegans. Their results demonstrate that such assays can be developed quickly and with relative ease, and applied in a high-throughput setup to verify known and identify novel miRNA targets. They also show, however, that the choice of the biological system and material has a noticeable influence on the frequency, extent and direction of the observed changes. Nonetheless, selected reaction monitoring assays, such as those developed by Jovanovic et al., represent an attractive new tool in the study of miRNA function at the organism level.

  8. Computational Modeling in Liver Surgery

    PubMed Central

    Christ, Bruno; Dahmen, Uta; Herrmann, Karl-Heinz; König, Matthias; Reichenbach, Jürgen R.; Ricken, Tim; Schleicher, Jana; Ole Schwen, Lars; Vlaic, Sebastian; Waschinsky, Navina

    2017-01-01

    The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery. PMID:29249974

  9. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367

  10. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    PubMed

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

    To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make a better prediction of probability of point mutations. We postulate that the statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine probability of point mutation. By keeping this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme for which experimental results are already reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy of wildtype and various point mutations reveals the probability of mutations at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and molecular dynamics simulations of functionally important folds could help us to predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
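
    The underlying idea of a pairwise contact potential can be shown compactly: contact energies are taken as negative log-odds of observed versus expected contact frequencies, and a candidate point mutation is scored by the change in summed contact energy at that site. The contact counts below are invented for illustration; they are not derived from TIM-barrel structures.

        # Toy pairwise contact potential and mutation score; counts are hypothetical.
        import math
        from collections import Counter

        contact_counts = Counter({("L", "V"): 120, ("L", "I"): 90, ("D", "K"): 60,
                                  ("D", "V"): 10, ("K", "V"): 15, ("L", "K"): 20})
        for (a, b), c in list(contact_counts.items()):   # symmetrise the pair counts
            contact_counts[(b, a)] = c

        total = sum(contact_counts.values())
        marginal = Counter()
        for (a, b), c in contact_counts.items():
            marginal[a] += c

        def energy(a, b):
            observed = contact_counts.get((a, b), 0.5)               # pseudo-count for unseen pairs
            expected = (marginal[a] / total) * (marginal[b] / total) * total
            return -math.log(observed / expected)

        def mutation_score(neighbours, wild, mutant):
            """Change in summed contact energy when `wild` is replaced by `mutant`."""
            return sum(energy(mutant, n) - energy(wild, n) for n in neighbours)

        print(round(mutation_score(["V", "I"], wild="L", mutant="D"), 2))  # positive = less favourable (toy)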

  11. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines-of-evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  12. Optical Modeling Activities for NASA's James Webb Space Telescope (JWST). 4; Overview and Introduction of Matlab Based Toolkits used to Interface with Optical Design Software

    NASA Technical Reports Server (NTRS)

    Howard, Joseph

    2007-01-01

    This is part four of a series on the ongoing optical modeling activities for James Webb Space Telescope (JWST). The first two discussed modeling JWST on-orbit performance using wavefront sensitivities to predict line of sight motion induced blur, and stability during thermal transients. The third investigates the aberrations resulting from alignment and figure compensation of the controllable degrees of freedom (primary and secondary mirrors), which may be encountered during ground alignment and on-orbit commissioning of the observatory. The work here introduces some of the math software tools used to perform the work of the previous three papers of this series. NASA has recently approved these in-house tools for public release as open source, so this presentation also serves as a quick tutorial on their use. The tools are collections of functions written in Matlab, which interface with optical design software (CodeV, OSLO, and Zemax) using either COM or DDE communication protocol. The functions are discussed, and examples are given.

  13. Optical modeling activities for NASA's James Webb Space Telescope (JWST): IV. Overview and introduction of MATLAB based toolkits used to interface with optical design software

    NASA Astrophysics Data System (ADS)

    Howard, Joseph M.

    2007-09-01

    This paper is part four of a series on the ongoing optical modeling activities for the James Webb Space Telescope (JWST). The first two papers discussed modeling JWST on-orbit performance using wavefront sensitivities to predict line of sight motion induced blur, and stability during thermal transients. The third paper investigates the aberrations resulting from alignment and figure compensation of the controllable degrees of freedom (primary and secondary mirrors), which may be encountered during ground alignment and on-orbit commissioning of the observatory. The work here introduces some of the math software tools used to perform the work of the previous three papers of this series. NASA has recently approved these in-house tools for public release as open source, so this presentation also serves as a quick tutorial on their use. The tools are collections of functions written for use in MATLAB to interface with optical design software (CODE V, OSLO, and ZEMAX) using either COM or DDE communication protocol. The functions are discussed, and examples are given.

  14. Updating Risk Prediction Tools: A Case Study in Prostate Cancer

    PubMed Central

    Ankerst, Donna P.; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J.; Feng, Ziding; Sanda, Martin G.; Partin, Alan W.; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M.

    2013-01-01

    Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that, as cancer research moves forward and new biomarkers and risk factors are discovered, there is a need to update the risk algorithms to include them. Typically, the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in an external study to the original study used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [−2]proPSA measured on an external case control study performed in Texas, U.S. Recent state-of-the-art methods in validation of risk prediction tools and evaluation of the improvement of updated to original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. PMID:22095849
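
    The Bayes-rule update described above amounts to multiplying the prior odds from the original risk calculator by a likelihood ratio derived from the new markers in the external study. A minimal numerical sketch (all numbers hypothetical) follows.

        # Posterior risk = original tool's risk combined with new-marker evidence via Bayes rule.
        # The prior risk, coefficients and marker values below are hypothetical placeholders.
        import math

        def update_risk(prior_risk, log_likelihood_ratio):
            prior_odds = prior_risk / (1.0 - prior_risk)
            posterior_odds = prior_odds * math.exp(log_likelihood_ratio)
            return posterior_odds / (1.0 + posterior_odds)

        prior_risk = 0.20                       # e.g. risk from the original calculator
        log_lr = 0.8 * (-1.1) + 1.2 * 0.9       # toy coefficients x standardized new-marker values
        print(round(update_risk(prior_risk, log_lr), 3))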

  15. Updating risk prediction tools: a case study in prostate cancer.

    PubMed

    Ankerst, Donna P; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J; Feng, Ziding; Sanda, Martin G; Partin, Alan W; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M

    2012-01-01

    Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that, as cancer research moves forward and new biomarkers and risk factors are discovered, there is a need to update the risk algorithms to include them. Typically, the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in an external study to the original study used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [-2]proPSA measured on an external case-control study performed in Texas, U.S. Recent state-of-the-art methods in validation of risk prediction tools and evaluation of the improvement of updated to original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. CAE Based Die Face Engineering Development to Contribute to the Revitalization of the Tool & Die Industry

    NASA Astrophysics Data System (ADS)

    Tang, Arthur; Lee, Wing C.; St. Pierre, Shawn; He, Jeanne; Liu, Kesu; Chen, Chin C.

    2005-08-01

    Over the past two decades, Computer Aided Engineering (CAE) tools have emerged as some of the most important engineering tools in various industries, due to their flexibility and accuracy in prediction. Nowadays, CAE tools are widely used in the sheet metal forming industry to predict the forming feasibility of a wide variety of complex components, ranging from aerospace and automotive components to household products. As the demand for CAE based formability accelerates, the need for a robust and streamlined die face engineering tool becomes more crucial, especially in the early stage when the tooling layout is not available, but a product design decision must be made. The ability to generate blank, binder and addendum surfaces with an appropriate layout of Drawbead, Punch Opening Line and Trim Line is among the primary features and functions of a CAE based die face engineering tool. Once the die face layout is ready, a formability study should be followed to verify that the die face layout is adequate to produce a formable part. If successful, the established die face surface should be exported back to the CAD/CAM environment to speed up the tooling and manufacturing design process with confidence that this particular part is formable with this given die face. With a CAE tool as described above, the tool & die industry will be greatly impacted as the processes will enable the bypass of hardware try-out and shorten the overall vehicle production timing. The trend has shown that OEMs and first tiers will source to low cost producers in the world, which will have a negative impact on the traditional tool & die makers in the developed countries. CAE based tools as described should be adopted, along with many other solutions, in order to maintain efficiency of producing high quality product and meeting time-to-market requirements. This paper will describe how a CAE based die face engineering (DFE) tool could be further developed to enable the traditional tool & die makers to meet the challenge ahead.

  17. The Efficacy of Violence Prediction: A Meta-Analytic Comparison of Nine Risk Assessment Tools

    ERIC Educational Resources Information Center

    Yang, Min; Wong, Stephen C. P.; Coid, Jeremy

    2010-01-01

    Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their…

  18. In silico site-directed mutagenesis informs species-specific predictions of chemical susceptibility derived from the Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool

    EPA Science Inventory

    The Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool was developed to address needs for rapid, cost effective methods of species extrapolation of chemical susceptibility. Specifically, the SeqAPASS tool compares the primary sequence (Level 1), functiona...

  19. Evaluation of whole genome sequencing and software tools for drug susceptibility testing of Mycobacterium tuberculosis.

    PubMed

    van Beek, J; Haanperä, M; Smit, P W; Mentula, S; Soini, H

    2018-04-11

    Culture-based assays are currently the reference standard for drug susceptibility testing for Mycobacterium tuberculosis. They provide good sensitivity and specificity but are time consuming. The objective of this study was to evaluate whether whole genome sequencing (WGS), combined with software tools for data analysis, can replace routine culture-based assays for drug susceptibility testing of M. tuberculosis. M. tuberculosis cultures sent to the Finnish mycobacterial reference laboratory in 2014 (n = 211) were phenotypically tested by Mycobacteria Growth Indicator Tube (MGIT) for first-line drug susceptibilities. WGS was performed for all isolates using the Illumina MiSeq system, and data were analysed using five software tools (PhyResSE, Mykrobe Predictor, TB Profiler, TGS-TB and KvarQ). Diagnostic time and reagent costs were estimated for both methods. The sensitivity of the five software tools to predict any resistance among strains was almost identical, ranging from 74% to 80%, and specificity was more than 95% for all software tools except for TGS-TB. The sensitivity and specificity to predict resistance to individual drugs varied considerably among the software tools. Reagent costs for MGIT and WGS were €26 and €143 per isolate respectively. Turnaround time for MGIT was 19 days (range 10-50 days) for first-line drugs, and turnaround time for WGS was estimated to be 5 days (range 3-7 days). WGS could be used as a prescreening assay for drug susceptibility testing with confirmation of resistant strains by MGIT. The functionality and ease of use of the software tools need to be improved. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  20. Clinical application of the Melbourne risk prediction tool in a high-risk upper abdominal surgical population: an observational cohort study.

    PubMed

    Parry, S; Denehy, L; Berney, S; Browning, L

    2014-03-01

    (1) To determine the ability of the Melbourne risk prediction tool to predict a pulmonary complication as defined by the Melbourne Group Scale in a medically defined high-risk upper abdominal surgery population during the postoperative period; (2) to identify the incidence of postoperative pulmonary complications; and (3) to examine the risk factors for postoperative pulmonary complications in this high-risk population. Observational cohort study. Tertiary Australian referral centre. 50 individuals who underwent medically defined high-risk upper abdominal surgery. Presence of postoperative pulmonary complications was screened daily for seven days using the Melbourne Group Scale (Version 2). Postoperative pulmonary risk prediction was calculated according to the Melbourne risk prediction tool. (1) Melbourne risk prediction tool; and (2) the incidence of postoperative pulmonary complications. Sixty-six percent (33/50) underwent hepatobiliary or upper gastrointestinal surgery. Mean (SD) anaesthetic duration was 377.8 (165.5) minutes. The risk prediction tool classified 84% (42/50) as high risk. Overall postoperative pulmonary complication incidence was 42% (21/50). The tool was 91% sensitive and 21% specific with a 50% chance of correct classification. This is the first study to externally validate the Melbourne risk prediction tool in an independent medically defined high-risk population. A higher incidence of postoperative pulmonary complications was observed than previously reported. Results demonstrated poor validity of the tool in a population already defined medically as high risk and when applied postoperatively. This observational study has identified several important points to consider in future trials. Copyright © 2013 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  1. High-Speed Edge Trimming of CFRP and Online Monitoring of Performance of Router Tools Using Acoustic Emission

    PubMed Central

    Prakash, Rangasamy; Krishnaraj, Vijayan; Zitoune, Redouane; Sheikh-Ahmad, Jamal

    2016-01-01

    Carbon fiber reinforced polymers (CFRPs) have found wide-ranging applications in numerous industrial fields such as aerospace, automotive, and shipping industries due to their excellent mechanical properties that lead to enhanced functional performance. In this paper, an experimental study on edge trimming of CFRP was done with various cutting conditions and different tool geometries such as helical-, fluted-, and burr-type tools. The investigation involves the measurement of cutting forces for the different machining conditions and its effect on the surface quality of the trimmed edges. The modern cutting tools (router tools or burr tools) selected for machining CFRPs have complex geometries in cutting edges and surfaces, and therefore a traditional method of direct tool wear evaluation is not applicable. Acoustic emission (AE) sensing was employed for on-line monitoring of the performance of router tools to determine the relationship between AE signal and length of machining for different tool geometries. The investigation showed that the router tool with a flat cutting edge has better performance by generating lower cutting force and better surface finish with no delamination on trimmed edges. The mathematical modeling for the prediction of cutting forces was also done using an Artificial Neural Network and Regression Analysis. PMID:28773919

  2. Revisiting the Holy Grail: using plant functional traits to understand ecological processes.

    PubMed

    Funk, Jennifer L; Larson, Julie E; Ames, Gregory M; Butterfield, Bradley J; Cavender-Bares, Jeannine; Firn, Jennifer; Laughlin, Daniel C; Sutton-Grier, Ariana E; Williams, Laura; Wright, Justin

    2017-05-01

    One of ecology's grand challenges is developing general rules to explain and predict highly complex systems. Understanding and predicting ecological processes from species' traits has been considered a 'Holy Grail' in ecology. Plant functional traits are increasingly being used to develop mechanistic models that can predict how ecological communities will respond to abiotic and biotic perturbations and how species will affect ecosystem function and services in a rapidly changing world; however, significant challenges remain. In this review, we highlight recent work and outstanding questions in three areas: (i) selecting relevant traits; (ii) describing intraspecific trait variation and incorporating this variation into models; and (iii) scaling trait data to community- and ecosystem-level processes. Over the past decade, there have been significant advances in the characterization of plant strategies based on traits and trait relationships, and the integration of traits into multivariate indices and models of community and ecosystem function. However, the utility of trait-based approaches in ecology will benefit from efforts that demonstrate how these traits and indices influence organismal, community, and ecosystem processes across vegetation types, which may be achieved through meta-analysis and enhancement of trait databases. Additionally, intraspecific trait variation and species interactions need to be incorporated into predictive models using tools such as Bayesian hierarchical modelling. Finally, existing models linking traits to community and ecosystem processes need to be empirically tested for their applicability to be realized. © 2016 Cambridge Philosophical Society.

  3. Transferability of species distribution models: a functional habitat approach for two regionally threatened butterflies.

    PubMed

    Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans

    2007-02-01

    Numerous models for predicting species distribution have been developed for conservation purposes. Most of them make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable--although to different degrees--among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources could transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.

  4. Introducing the Forensic Research/Reference on Genetics knowledge base, FROG-kb

    PubMed Central

    2012-01-01

    Background: Online tools and databases based on multi-allelic short tandem repeat polymorphisms (STRPs) are actively used in forensic teaching, research, and investigations. The Fst value of each CODIS marker tends to be low across the populations of the world and most populations typically have all the common STRP alleles present, diminishing the ability of these systems to discriminate ethnicity. Recently, considerable research has been conducted on single nucleotide polymorphisms (SNPs) to be considered for human identification and description. However, online tools and databases that can be used for forensic research and investigation are limited. Methods: The back end DBMS (Database Management System) for FROG-kb is Oracle version 10. The front end is implemented with specific code using technologies such as Java, Java Servlet, JSP, JQuery, and GoogleCharts. Results: We present an open access web application, FROG-kb (Forensic Research/Reference on Genetics-knowledge base, http://frog.med.yale.edu), that is useful for teaching and research relevant to forensics and can serve as a tool facilitating forensic practice. The underlying data for FROG-kb are provided by the already extensively used and referenced ALlele FREquency Database, ALFRED (http://alfred.med.yale.edu). In addition to displaying data in an organized manner, computational tools that use the underlying allele frequencies with user-provided data are implemented in FROG-kb. These tools are organized by the different published SNP/marker panels available. This web tool currently implements general functions for two types of SNP panels, individual identification and ancestry inference, and a prediction function specific to a phenotype-informative panel for eye color. Conclusion: The current online version of FROG-kb already provides new and useful functionality. We expect FROG-kb to grow and expand in capabilities and welcome input from the forensic community in identifying datasets and functionalities that will be most helpful and useful. Thus, the structure and functionality of FROG-kb will be revised in an ongoing process of improvement. This paper describes the state as of early June 2012. PMID:22938150

  5. modPDZpep: a web resource for structure based analysis of human PDZ-mediated interaction networks.

    PubMed

    Sain, Neetu; Mohanty, Debasisa

    2016-09-21

    PDZ domains recognize short sequence stretches usually present at the C-terminus of their interaction partners. Because of the involvement of PDZ domains in many important biological processes, several attempts have been made for developing bioinformatics tools for genome-wide identification of PDZ interaction networks. Currently available tools for prediction of interaction partners of PDZ domains utilize a machine learning approach. Since they have been trained using experimental substrate specificity data for specific PDZ families, their applicability is limited to PDZ families closely related to the training set. These tools also do not allow analysis of PDZ-peptide interaction interfaces. We have used a structure based approach to develop modPDZpep, a program to predict the interaction partners of human PDZ domains and analyze structural details of PDZ interaction interfaces. modPDZpep predicts interaction partners by using structural models of PDZ-peptide complexes and evaluating binding energy scores using residue based statistical pair potentials. Since it does not require training using experimental data on peptide binding affinity, it can predict substrates for diverse PDZ families. Because of the use of a simple scoring function for binding energy, it is also fast enough for genome scale structure based analysis of PDZ interaction networks. Benchmarking using artificial as well as real negative datasets indicates good predictive power with ROC-AUC values in the range of 0.7 to 0.9 for a large number of human PDZ domains. Another novel feature of modPDZpep is its ability to map novel PDZ mediated interactions in human protein-protein interaction networks, either by utilizing available experimental phage display data or by structure based predictions. In summary, we have developed modPDZpep, a web-server for structure based analysis of human PDZ domains. It is freely available at http://www.nii.ac.in/modPDZpep.html or http://202.54.226.235/modPDZpep.html . This article was reviewed by Michael Gromiha and Zoltán Gáspári.

  6. Discriminating the reaction types of plant type III polyketide synthases

    PubMed Central

    Shimizu, Yugo; Ogata, Hiroyuki; Goto, Susumu

    2017-01-01

    Motivation: Functional prediction of paralogs is challenging in bioinformatics because of rapid functional diversification after gene duplication events combined with parallel acquisitions of similar functions by different paralogs. Plant type III polyketide synthases (PKSs), producing various secondary metabolites, represent a paralogous family that has undergone gene duplication and functional alteration. Currently, there is no computational method available for the functional prediction of type III PKSs. Results: We developed a plant type III PKS reaction predictor, pPAP, based on the recently proposed classification of type III PKSs. pPAP combines two kinds of similarity measures: one calculated by profile hidden Markov models (pHMMs) built from functionally and structurally important partial sequence regions, and the other based on mutual information between residue positions. pPAP targets PKSs acting on ring-type starter substrates, and classifies their functions into four reaction types. The pHMM approach discriminated two reaction types with high accuracy (97.5%, 39/40), but its accuracy decreased when discriminating three reaction types (87.8%, 43/49). When combined with a correlation-based approach, all 49 PKSs were correctly discriminated, and pPAP was still highly accurate (91.4%, 64/70) even after adding other reaction types. These results suggest pPAP, which is based on linear discriminant analyses of similarity measures, is effective for plant type III PKS function prediction. Availability and Implementation: pPAP is freely available at ftp://ftp.genome.jp/pub/tools/ppap/ Contact: goto@kuicr.kyoto-u.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28334262

  7. Self-rated health is associated with subsequent functional decline among older adults in Japan.

    PubMed

    Hirosaki, Mayumi; Okumiya, Kiyohito; Wada, Taizo; Ishine, Masayuki; Sakamoto, Ryota; Ishimoto, Yasuko; Kasahara, Yoriko; Kimura, Yumi; Fukutomi, Eriko; Chen, Wen Ling; Nakatsuka, Masahiro; Fujisawa, Michiko; Otsuka, Kuniaki; Matsubayashi, Kozo

    2017-09-01

    Previous studies have reported that self-rated health (SRH) predicts subsequent mortality. However, less is known about the association between SRH and functional ability. The aim of this study was to examine whether SRH predicts decline in basic activities of daily living (ADL), even after adjustment for depression, among community-dwelling older adults in Japan. A three-year prospective cohort study was conducted among 654 residents aged 65 years and older without disability in performing basic ADL at baseline. SRH was assessed using a visual analogue scale (range: 0-100), and dichotomized into low and high groups. Information on functional ability, sociodemographic factors, depressive symptoms, and medical conditions was obtained using a self-administered questionnaire. Logistic regression analysis was used to examine the association between baseline SRH and functional decline three years later. One hundred and eight (16.5%) participants reported a decline in basic ADL at the three-year follow-up. Multiple logistic regression analysis showed that the low SRH group had a higher risk for functional decline compared to the high SRH group, even after controlling for potential confounding factors (odds ratio (OR) = 2.4; 95% confidence interval (CI) = 1.3-4.4). Furthermore, a 10-point difference in SRH score was associated with subsequent functional decline (OR = 1.37; 95% CI = 1.16-1.61). SRH was an independent predictor of functional decline. SRH could be a simple assessment tool for predicting the loss or maintenance of functional ability in community-dwelling older adults. Positive self-evaluation might be useful to maintain an active lifestyle and stay healthy.
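
    The core analysis can be illustrated with a short sketch on simulated data (not the cohort data): a logistic regression of decline on dichotomized SRH, with the odds ratio recovered as the exponentiated coefficient.

        # Logistic regression of ADL decline on low vs high SRH; simulated data, true OR ~ 2.5.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 650
        low_srh = rng.integers(0, 2, size=n)                    # 1 = low self-rated health
        p_decline = 1 / (1 + np.exp(-(-1.8 + 0.9 * low_srh)))   # logistic model used to simulate
        decline = rng.random(n) < p_decline

        model = LogisticRegression(C=1e6).fit(low_srh.reshape(-1, 1), decline)
        print("odds ratio:", round(float(np.exp(model.coef_[0, 0])), 2))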

  8. An interpretation of photometric parameters of bright desert regions of Mars and their dependence on wave length

    NASA Technical Reports Server (NTRS)

    Weaver, W. R.; Meador, W. E.

    1977-01-01

    Photometric data from the bright desert areas of Mars were used to determine the dependence of the three photometric parameters of the photometric function on wavelength and to provide qualitative predictions about the physical properties of the surface. Knowledge of the parameters allowed the brightness of these areas of Mars to be determined for any scattering geometry in the wavelength range of 0.45 to 0.70 micron. The changes that occur in the photometric parameters due to changes in wavelength were shown to be consistent with their physical interpretations, and the predictions of surface properties were shown to be consistent with conditions expected to exist in these regions of Mars. The photometric function was shown to have potential as a diagnostic tool for the qualitative determination of surface properties, and the consistency of the behavior of the photometric parameters was considered to be support for the validity of the photometric function.

  9. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.

  10. Comparative genomics of metabolic capacities of regulons controlled by cis-regulatory RNA motifs in bacteria.

    PubMed

    Sun, Eric I; Leyn, Semen A; Kazanov, Marat D; Saier, Milton H; Novichkov, Pavel S; Rodionov, Dmitry A

    2013-09-02

    In silico comparative genomics approaches have been efficiently used for functional prediction and reconstruction of metabolic and regulatory networks. Riboswitches are metabolite-sensing structures often found in bacterial mRNA leaders controlling gene expression on transcriptional or translational levels. An increasing number of riboswitches and other cis-regulatory RNAs have been recently classified into numerous RNA families in the Rfam database. High conservation of these RNA motifs provides a unique advantage for their genomic identification and comparative analysis. A comparative genomics approach implemented in the RegPredict tool was used for reconstruction and functional annotation of regulons controlled by RNAs from 43 Rfam families in diverse taxonomic groups of Bacteria. The inferred regulons include ~5200 cis-regulatory RNAs and more than 12000 target genes in 255 microbial genomes. All predicted RNA-regulated genes were classified into specific and overall functional categories. Analysis of taxonomic distribution of these categories allowed us to establish major functional preferences for each analyzed cis-regulatory RNA motif family. Overall, most RNA motif regulons showed predictable functional content in accordance with their experimentally established effector ligands. Our results suggest that some RNA motifs (including thiamin pyrophosphate and cobalamin riboswitches that control the cofactor metabolism) are widespread and likely originated from the last common ancestor of all bacteria. However, many more analyzed RNA motifs are restricted to a narrow taxonomic group of bacteria and likely represent more recent evolutionary innovations. The reconstructed regulatory networks for major known RNA motifs substantially expand the existing knowledge of transcriptional regulation in bacteria. The inferred regulons can be used for genetic experiments, functional annotations of genes, metabolic reconstruction and evolutionary analysis. The obtained genome-wide collection of reference RNA motif regulons is available in the RegPrecise database (http://regprecise.lbl.gov/).

  11. Phase-Amplitude Response Functions for Transient-State Stimuli

    PubMed Central

    2013-01-01

    The phase response curve (PRC) is a powerful tool to study the effect of a perturbation on the phase of an oscillator, assuming that all the dynamics can be explained by the phase variable. However, factors like the rate of convergence to the oscillator, strong forcing or high stimulation frequency may invalidate the above assumption and raise the question of how the phase varies away from the attractor. The concept of isochrons turns out to be crucial to answer this question; from it, we have built up Phase Response Functions (PRF) and, in the present paper, we complete the extension of advancement functions to the transient states by defining the Amplitude Response Function (ARF) to control changes in the transversal variables. Based on the knowledge of both the PRF and the ARF, we study the case of a pulse-train stimulus, and compare the predictions given by the PRC-approach (a 1D map) to those given by the PRF-ARF-approach (a 2D map); we observe differences up to two orders of magnitude in favor of the 2D predictions, especially when the stimulation frequency is high or the strength of the stimulus is large. We also explore the role of hyperbolicity of the limit cycle as well as geometric aspects of the isochrons. Summing up, we aim at enlightening the contribution of transient effects in predicting the phase response and showing the limits of the phase reduction approach to avoid falling into wrong predictions in synchronization problems. List of abbreviations: PRC, phase response curve (phase resetting curve); PRF, phase response function; ARF, amplitude response function. PMID:23945295
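
    The difference between the two descriptions can be reproduced on a toy oscillator whose isochrons are known in closed form; the sketch below compares a phase-only (PRC-style) 1D map with a full phase-plus-amplitude 2D iteration under a pulse train. The model, parameters and pulse protocol are illustrative choices, not the system analysed in the paper.

        # Toy lambda-omega oscillator: dr/dt = kappa*r*(1-r), dtheta/dt = 1 + gamma*(1-r).
        # Its asymptotic phase is theta - (gamma/kappa)*ln(r), so off-cycle (transient)
        # effects on the phase can be tracked exactly and compared with the 1D reduction.
        import numpy as np
        from scipy.integrate import solve_ivp

        kappa, gamma = 0.5, 1.0          # slow amplitude relaxation, amplitude-dependent frequency
        A, T, n_pulses = 0.8, 1.0, 15    # pulse amplitude, interpulse interval, number of pulses

        def rhs(t, s):
            r, th = s
            return [kappa * r * (1 - r), 1 + gamma * (1 - r)]

        def asymptotic_phase(r, th):
            return (th - (gamma / kappa) * np.log(r)) % (2 * np.pi)

        def kick(r, th):                 # instantaneous perturbation: x -> x + A
            x, y = r * np.cos(th) + A, r * np.sin(th)
            return np.hypot(x, y), np.arctan2(y, x)

        # Full 2D (phase + amplitude) iteration.
        r, th, phases_2d = 1.0, 0.3, []
        for _ in range(n_pulses):
            r, th = kick(r, th)
            phases_2d.append(asymptotic_phase(r, th))
            sol = solve_ivp(rhs, (0, T), [r, th], rtol=1e-9, atol=1e-9)
            r, th = sol.y[0, -1], sol.y[1, -1]

        # 1D PRC approximation: the state is assumed to sit on the limit cycle (r = 1).
        phi, phases_1d = 0.3, []
        for _ in range(n_pulses):
            phi = asymptotic_phase(*kick(1.0, phi))      # phase transition curve
            phases_1d.append(phi)
            phi = (phi + T) % (2 * np.pi)                # free phase advance between pulses

        gap = (np.array(phases_2d) - np.array(phases_1d) + np.pi) % (2 * np.pi) - np.pi
        print(np.round(gap, 3))                          # mismatch grows along the pulse train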

  12. Three-dimensional prediction of soil physical, chemical, and hydrological properties in a forested catchment of the Santa Catalina CZO

    NASA Astrophysics Data System (ADS)

    Shepard, C.; Holleran, M.; Lybrand, R. A.; Rasmussen, C.

    2014-12-01

    Understanding critical zone evolution and function requires an accurate assessment of local soil properties. Two-dimensional (2D) digital soil mapping provides a general assessment of soil characteristics across a sampled landscape, but lacks the ability to predict soil properties with depth. The utilization of mass-preserving spline functions enable the extrapolation of soil properties with depth, extending predictive functions to three-dimensions (3D). The present study was completed in the Marshall Gulch (MG) catchment, located in the Santa Catalina Mountains, 30 km northwest of Tucson, Arizona, as part of the Santa Catalina-Jemez Mountains Critical Zone Observatory. Twenty-four soil pits were excavated and described following standard procedures. Mass-preserving splines were used to extrapolate mass carbon (kg C m-2); percent clay, silt, and sand (%); sodium mass flux (kg Na m-2); and pH for 24 sampled soil pits in 1-cm depth increments. Saturated volumetric water content (θs) and volumetric water content at 10 kPa (θ10) were predicted using ROSETTA and established empirical relationships. The described profiles were all sampled to differing depths; to compensate for the unevenness of the profile descriptions, the soil depths were standardized from 0.0 to 1.0 and then split into five equal standard depth sections. A logit-transformation was used to normalize the target variables. Step-wise regressions were calculated using available environmental covariates to predict the properties of each variable across the catchment in each depth section, and interpolated model residuals added back to the predicted layers to generate the final soil maps. Logit-transformed R2 for the predictive functions varied widely, ranging from 0.20 to 0.79, with logit-transformed RMSE ranging from 0.15 to 2.77. The MG catchment was further classified into clusters with similar properties based on the environmental covariates, and representative depth functions for each target variable in each cluster calculated. Mass-preserving splines combined with stepwise regressions are an effective tool for predicting soil physical, chemical, and hydrological properties with depth, enhancing our understanding of the critical zone.
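
    One of the per-depth-section modelling steps can be sketched simply: logit-transform a bounded soil property, regress it on terrain covariates, and back-transform the prediction. The covariates, coefficients and data below are synthetic, and the sketch omits the mass-preserving spline and residual-interpolation steps of the full workflow.

        # Logit-scale regression of a bounded soil property on toy terrain covariates.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        def logit(p):  return np.log(p / (1 - p))
        def expit(z):  return 1 / (1 + np.exp(-z))

        rng = np.random.default_rng(4)
        n = 24                                           # e.g. one observation per sampled pit
        elevation = rng.normal(2400, 150, n)             # synthetic covariates
        curvature = rng.normal(0, 1, n)
        clay_frac = np.clip(0.15 + 0.0001 * (elevation - 2400) + 0.03 * curvature
                            + rng.normal(0, 0.02, n), 0.02, 0.6)

        model = LinearRegression().fit(np.column_stack([elevation, curvature]), logit(clay_frac))
        new_site = np.array([[2500.0, 0.5]])
        print("predicted clay fraction:", round(float(expit(model.predict(new_site))[0]), 3))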

  13. A community resource benchmarking predictions of peptide binding to MHC-I molecules.

    PubMed

    Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielsen, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro

    2006-06-09

    Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing entire pathogen proteomes to be rapidly scanned for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions, mainly because of its ability to generalize even from a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are less familiar with than their own. The overall goal of this effort is to provide a transparent prediction evaluation, allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
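
    As a hedged illustration of the matrix-based approach mentioned above, the sketch below scores 9-mer peptides against a position-specific scoring matrix and scans a protein sequence for candidate binders. The matrix values are randomly generated for the example and do not come from the benchmark dataset or from any real allele.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
PEPTIDE_LEN = 9

# Hypothetical position-specific scoring matrix (PSSM): one score per amino acid (rows)
# per peptide position (columns); real matrices would be trained on binding data.
rng = np.random.default_rng(42)
pssm = rng.normal(0.0, 1.0, size=(len(AMINO_ACIDS), PEPTIDE_LEN))

def score_peptide(peptide: str) -> float:
    """Sum the per-position scores; higher means predicted stronger binding."""
    assert len(peptide) == PEPTIDE_LEN
    return float(sum(pssm[AA_INDEX[aa], pos] for pos, aa in enumerate(peptide)))

def scan_proteome(sequence: str):
    """Slide a 9-mer window over a protein sequence and rank candidate binders."""
    peptides = [sequence[i:i + PEPTIDE_LEN]
                for i in range(len(sequence) - PEPTIDE_LEN + 1)]
    return sorted(((score_peptide(p), p) for p in peptides), reverse=True)

# Example protein fragment (any sequence of standard amino acids works here).
protein = "MSLLTEVETYVLSIIPSGPLKAEIAQRLEDVFAGKNTDLEVLMEWLKTRPILSPLTKGILGFVFTLTVPSER"
for score, pep in scan_proteome(protein)[:3]:
    print(f"{pep}\t{score:+.2f}")
```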

  14. Gbm.auto: A software tool to simplify spatial modelling and Marine Protected Area planning

    PubMed Central

    Officer, Rick; Clarke, Maurice; Reid, David G.; Brophy, Deirdre

    2017-01-01

    Boosted Regression Trees: excellent for data-poor spatial management but hard to use. Marine resource managers and scientists often advocate spatial approaches to manage data-poor species. Existing spatial prediction and management techniques are either insufficiently robust, struggle with sparse input data, or make suboptimal use of multiple explanatory variables. Boosted Regression Trees feature excellent performance and are well suited to modelling the distribution of data-limited species, but are extremely complicated and time-consuming to learn and use, hindering access for a wide potential user base and therefore limiting uptake and usage.

    BRTs automated and simplified for accessible general use with a rich feature set. We have built a software suite in R which integrates pre-existing functions with new tailor-made functions to automate the processing and predictive mapping of species abundance data: by automating and greatly simplifying Boosted Regression Tree spatial modelling, the gbm.auto R package suite makes this powerful statistical modelling technique more accessible to potential users in the ecological and modelling communities. The package and its documentation allow the user to generate maps of predicted abundance, visualise the representativeness of those abundance maps, and plot the relative influence of explanatory variables and their relationship to the response variables. Databases of the processed model objects and a report explaining all the steps taken within the model are also generated. The package includes a previously unavailable Decision Support Tool which combines estimated escapement biomass (the percentage of an exploited population which must be retained each year to conserve it) with the predicted abundance maps to generate maps showing the location and size of habitat that should be protected to conserve the target stocks (candidate MPAs), based on stakeholder priorities such as the minimisation of fishing effort displacement.

    Gbm.auto for management in various settings. By bridging the gap between advanced statistical methods for species distribution modelling and conservation science, management and policy, these tools can allow improved spatial abundance predictions, and therefore better management, decision-making, and conservation. Although this package was built to support spatial management of a data-limited marine elasmobranch fishery, it should be equally applicable to spatial abundance modelling, area protection, and stakeholder engagement in various scenarios. PMID:29216310
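
    gbm.auto itself is an R package; purely as a hedged illustration of the underlying technique, the sketch below uses Python's scikit-learn as a stand-in to fit a boosted regression tree model to hypothetical point-abundance data with spatial covariates, predict abundance over a grid, and report relative covariate influence. The column names, parameter values, and workflow are invented for the example and do not reflect gbm.auto's interface or defaults.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Hypothetical survey data: abundance at sampled stations plus environmental covariates.
n = 200
survey = pd.DataFrame({
    "lon":   rng.uniform(-11.0, -6.0, n),
    "lat":   rng.uniform(51.0, 55.0, n),
    "depth": rng.uniform(20, 250, n),
    "temp":  rng.uniform(8, 14, n),
})
survey["abundance"] = (np.exp(-((survey["depth"] - 120) / 60) ** 2)
                       * (survey["temp"] - 7) + rng.gamma(1.0, 0.3, n))

features = ["lon", "lat", "depth", "temp"]
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.75, random_state=0)
brt.fit(survey[features], survey["abundance"])

# Predict over a regular grid to produce a simple abundance surface (fixed depth/temp here).
grid = pd.DataFrame([(lon, lat, 120.0, 11.0)
                     for lon in np.linspace(-11, -6, 25)
                     for lat in np.linspace(51, 55, 25)], columns=features)
grid["pred_abundance"] = brt.predict(grid[features])

# Relative influence of each covariate (analogous to BRT variable-influence plots).
for name, imp in zip(features, brt.feature_importances_):
    print(f"{name:>6}: {imp:.2f}")
```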

  15. Functional assessment in vocational rehabilitation: a systematic approach to diagnosis and goal setting.

    PubMed

    Crewe, N M; Athelstan, G T

    1981-07-01

    The Functional Assessment Inventory (FAI) has been developed for diagnostic use in vocational rehabilitation. This study involved field testing and initial validation of the Inventory as a diagnostic tool. Thirty vocational rehabilitation counselors administered the Inventory to 351 clients. Factor analysis identified eight scales: Cognitive Function, Motor Function, Personality and Behavior, Vocational Qualifications, Medical Condition, Vision, Hearing, and Economic Disincentives. Content and concurrent validity of the Inventory were assessed by comparing the scores of clients grouped by medical diagnosis and by relating scores to counselors' judgments of severity of disability and employability. Clients with various primary disabilities appeared to differ from one another on the factor scales and on individual items in predictable ways. Total Functional Limitations scores were highly correlated with counselors' ratings of severity of disability and employability.
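
    As a rough illustration of the factor-analytic validation step described above, the sketch below extracts factors from simulated item responses and correlates a total score with an external severity rating. The item structure, sample, and all values are fabricated and bear no relation to the FAI data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)

# Simulated responses: 351 respondents x 30 items driven by a few latent abilities.
n_resp, n_items, n_latent = 351, 30, 8
latent = rng.normal(size=(n_resp, n_latent))
loadings = rng.normal(scale=0.8, size=(n_latent, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_resp, n_items))

# Extract eight factors, mirroring the number of scales reported in the abstract.
fa = FactorAnalysis(n_components=8, random_state=0)
factor_scores = fa.fit_transform(items)

# Relate a simple total-limitation score to an external (simulated) severity rating.
total_score = items.sum(axis=1)
severity_rating = 0.6 * total_score + rng.normal(scale=total_score.std(), size=n_resp)
r = np.corrcoef(total_score, severity_rating)[0, 1]
print(f"Correlation of total score with severity rating: r = {r:.2f}")
```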

  16. Validation and Use of a Predictive Modeling Tool: Employing Scientific Findings to Improve Responsible Conduct of Research Education.

    PubMed

    Mulhearn, Tyler J; Watts, Logan L; Todd, E Michelle; Medeiros, Kelsey E; Connelly, Shane; Mumford, Michael D

    2017-01-01

    Although recent evidence suggests that ethics education can be effective, the nature and effectiveness of specific training programs vary considerably. Building on a recent path modeling effort, the present study developed and validated a predictive modeling tool for responsible conduct of research education. The predictive modeling tool allows users to enter ratings for a given ethics training program and receive instantaneous evaluative information for course refinement. Validation work suggests the tool's predicted outcomes correlate strongly (r = 0.46) with objective course outcomes. Implications for training program development and refinement are discussed.

  17. Association of pain ratings with the prediction of early physical recovery after general and orthopaedic surgery-A quantitative study with repeated measures.

    PubMed

    Eriksson, Kerstin; Wikström, Lotta; Fridlund, Bengt; Årestedt, Kristofer; Broström, Anders

    2017-11-01

    To compare different levels of self-rated pain and determine whether they predict anticipated early physical recovery in patients undergoing general and orthopaedic surgery. Previous research has indicated that average self-rated pain reflects patients' ability to recover the same day. However, there is a knowledge gap about the feasibility of using average pain ratings to predict patients' physical recovery for the next day. Descriptive, quantitative repeated measures. General and orthopaedic inpatients (n = 479) completed a questionnaire (October 2012-January 2015) about pain and recovery. Average pain intensity at rest and during activity was based on the Numeric Rating Scale and divided into three levels (0-3, 4-6, 7-10). Three out of five dimensions from the tool "Postoperative Recovery Profile" were used. Because few patients suffered severe pain, general and orthopaedic patients were analysed together. Binary logistic regression analysis showed that average pain intensity on postoperative day 1 significantly predicted the impact on recovery on day 2, with the exceptions of nausea, gastrointestinal function and bladder function for pain at rest, and of nausea, appetite changes and bladder function for pain during activity. High pain ratings (NRS 7-10) proved to be a better predictor of day-2 recovery than moderate ratings (NRS 4-6), as they significantly predicted more recovery items. Pain intensity reflected general and orthopaedic patients' physical recovery on postoperative day 1 and predicted recovery for day 2. By monitoring patients' pain and its impact on recovery, patients' need for support becomes visible, which is valuable during hospital stays. © 2017 John Wiley & Sons Ltd.
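
    For readers unfamiliar with the analysis, the sketch below fits a binary logistic regression relating categorized NRS pain levels on day 1 to a dichotomized day-2 recovery item. The data, category coding, and outcome are simulated placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated inpatients: day-1 average pain on the NRS, grouped into three levels.
n = 479
nrs = rng.integers(0, 11, n)
pain_level = pd.cut(nrs, bins=[-1, 3, 6, 10], labels=["0-3", "4-6", "7-10"])

# Simulated dichotomous day-2 outcome (1 = impaired), more likely with higher day-1 pain.
p_impaired = {"0-3": 0.2, "4-6": 0.4, "7-10": 0.6}
impaired = rng.binomial(1, [p_impaired[level] for level in pain_level])

# Dummy-code pain level (reference category: NRS 0-3) and fit the logistic model.
X = pd.get_dummies(pain_level, drop_first=True).astype(float)
X = sm.add_constant(X)
model = sm.Logit(impaired, X).fit(disp=False)
print(model.summary2().tables[1][["Coef.", "P>|z|"]])
```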

  18. Estimation of Moisture Content of Forest Canopy and Floor from SAR Data Part I: Volume Scattering Case

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.; Saatchi, S.

    1996-01-01

    To understand and predict the functioning of forest biomes, their interaction with the atmosphere, and their growth rates, knowledge of the moisture content of the canopy and the floor soil is essential. Synthetic aperture radar on airborne and spaceborne platforms has proven to be a flexible tool for measuring the electromagnetic backscattering properties of vegetation, which are related to its moisture content.

  19. Are Delinquents Different? Predictive Patterns for Low, Mid and High Delinquency Levels in a General Youth Sample via the HEW Youth Development Model's Impact Scales.

    ERIC Educational Resources Information Center

    Truckenmiller, James L.

    The Health, Education and Welfare (HEW) Office of Youth Development's National Strategy for Youth Development model was promoted as a community-based planning and procedural tool for enhancing positive youth development and reducing delinquency. To test the applicability of the model as a function of delinquency level, the program's Impact Scales…

  20. Transducer Analysis and ATILA++ Model Development

    DTIC Science & Technology

    2016-10-10

    ...the ATILA finite element software package. This will greatly enhance the state-of-the-art in transducer performance prediction and provide a tool... IMPACT/APPLICATIONS: This work is helping to enable the expansion of the functionality of the ATILA++ finite element... (Grant N00014-13-1-0196; authors: Richard J. Meyer, Jr. and Douglas C. Markley)
