Science.gov

Sample records for target prediction tool

  1. Common features of microRNA target prediction tools

    PubMed Central

    Peterson, Sarah M.; Thompson, Jeffrey A.; Ufkin, Melanie L.; Sathyanarayana, Pradeep; Liaw, Lucy; Congdon, Clare Bates

    2014-01-01

    The human genome encodes for over 1800 microRNAs (miRNAs), which are short non-coding RNA molecules that function to regulate gene expression post-transcriptionally. Due to the potential for one miRNA to target multiple gene transcripts, miRNAs are recognized as a major mechanism to regulate gene expression and mRNA translation. Computational prediction of miRNA targets is a critical initial step in identifying miRNA:mRNA target interactions for experimental validation. The available tools for miRNA target prediction encompass a range of different computational approaches, from the modeling of physical interactions to the incorporation of machine learning. This review provides an overview of the major computational approaches to miRNA target prediction. Our discussion highlights three tools for their ease of use, reliance on relatively updated versions of miRBase, and range of capabilities, and these are DIANA-microT-CDS, miRanda-mirSVR, and TargetScan. In comparison across all miRNA target prediction tools, four main aspects of the miRNA:mRNA target interaction emerge as common features on which most target prediction is based: seed match, conservation, free energy, and site accessibility. This review explains these features and identifies how they are incorporated into currently available target prediction tools. MiRNA target prediction is a dynamic field with increasing attention on development of new analysis tools. This review attempts to provide a comprehensive assessment of these tools in a manner that is accessible across disciplines. Understanding the basis of these prediction methodologies will aid in user selection of the appropriate tools and interpretation of the tool output. PMID:24600468
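
    Of the four features above, the seed match is the simplest to compute from sequence alone. The sketch below scans a 3'UTR for a 7mer site complementary to miRNA positions 2-8 (the 7mer-m8 site type); it is a minimal illustration under that definition, not the scoring scheme of DIANA-microT-CDS, miRanda-mirSVR, or TargetScan, and the example UTR fragment is invented.

```python
# Minimal sketch of a 7mer-m8 seed-match scan (miRNA positions 2-8 paired to the 3'UTR).
# Not the scoring used by any of the reviewed tools; the UTR fragment below is invented.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}  # RNA base pairing

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based start positions in `utr` that pair with miRNA positions 2-8."""
    seed = mirna[1:8]                                              # positions 2-8 (1-based)
    site = "".join(COMPLEMENT[b] for b in reversed(seed))          # reverse complement, 5'->3'
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

if __name__ == "__main__":
    let7a = "UGAGGUAGUAGGUUGUAUAGUU"          # let-7a mature sequence
    utr = "AAACUAUACAACCUACUACCUCAGGG"        # invented 3'UTR fragment containing a site
    print(seed_match_sites(let7a, utr))       # -> [15]
```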

  2. Common features of microRNA target prediction tools.

    PubMed

    Peterson, Sarah M; Thompson, Jeffrey A; Ufkin, Melanie L; Sathyanarayana, Pradeep; Liaw, Lucy; Congdon, Clare Bates

    2014-01-01

    The human genome encodes for over 1800 microRNAs (miRNAs), which are short non-coding RNA molecules that function to regulate gene expression post-transcriptionally. Due to the potential for one miRNA to target multiple gene transcripts, miRNAs are recognized as a major mechanism to regulate gene expression and mRNA translation. Computational prediction of miRNA targets is a critical initial step in identifying miRNA:mRNA target interactions for experimental validation. The available tools for miRNA target prediction encompass a range of different computational approaches, from the modeling of physical interactions to the incorporation of machine learning. This review provides an overview of the major computational approaches to miRNA target prediction. Our discussion highlights three tools for their ease of use, reliance on relatively updated versions of miRBase, and range of capabilities, and these are DIANA-microT-CDS, miRanda-mirSVR, and TargetScan. In comparison across all miRNA target prediction tools, four main aspects of the miRNA:mRNA target interaction emerge as common features on which most target prediction is based: seed match, conservation, free energy, and site accessibility. This review explains these features and identifies how they are incorporated into currently available target prediction tools. MiRNA target prediction is a dynamic field with increasing attention on development of new analysis tools. This review attempts to provide a comprehensive assessment of these tools in a manner that is accessible across disciplines. Understanding the basis of these prediction methodologies will aid in user selection of the appropriate tools and interpretation of the tool output.

  3. Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.

    PubMed

    El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C

    2015-04-01

    This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed in a number of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of risk scores at which the algorithm would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. Numbers of previous emergency admissions and admission medicines together with age-adjusted co-morbidity and diuretic receipt formed a 12-month post-discharge and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index to predict mortality. Increased numbers of clinical pharmacy staff at ward level were correlated with a reduction in risk-adjusted mortality index (RAMI). The algorithms created were valid in predicting risk of in-hospital and post-discharge mortality and risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.
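
    The abstract's step of turning fitted coefficients into a per-patient risk probability and applying a cut-off can be illustrated in a few lines, assuming a logistic model; the coefficient names, values, patient data, and 0.3 threshold below are invented placeholders, not the study's algorithm.

```python
import math

# Sketch: convert a fitted risk algorithm's coefficients into a per-patient probability
# and apply a chosen cut-off. Coefficients, predictors and the 0.3 threshold are invented;
# a logistic link is assumed for illustration.
coefficients = {"intercept": -2.0, "prev_emergency_admissions": 0.35,
                "admission_medicines": 0.08, "age_adjusted_comorbidity": 0.25, "diuretic": 0.6}

def risk_probability(patient: dict) -> float:
    z = coefficients["intercept"] + sum(coefficients[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))      # logistic link

patient = {"prev_emergency_admissions": 2, "admission_medicines": 9,
           "age_adjusted_comorbidity": 3, "diuretic": 1}
p = risk_probability(patient)
print(round(p, 3), "high risk" if p >= 0.3 else "low risk")   # cut-off is illustrative
```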

  4. CCTop: An Intuitive, Flexible and Reliable CRISPR/Cas9 Target Prediction Tool

    PubMed Central

    del Sol Keyer, Maria; Wittbrodt, Joachim; Mateo, Juan L.

    2015-01-01

    Engineering of the CRISPR/Cas9 system has opened a plethora of new opportunities for site-directed mutagenesis and targeted genome modification. Fundamental to this is a stretch of twenty nucleotides at the 5’ end of a guide RNA that provides specificity to the bound Cas9 endonuclease. Since a sequence of twenty nucleotides can occur multiple times in a given genome and some mismatches seem to be accepted by the CRISPR/Cas9 complex, an efficient and reliable in silico selection and evaluation of the targeting site is a key prerequisite for experimental success. Here we present the CRISPR/Cas9 target online predictor (CCTop, http://crispr.cos.uni-heidelberg.de) to overcome limitations of already available tools. CCTop provides an intuitive user interface with reasonable default parameters that can easily be tuned by the user. From a given query sequence, CCTop identifies and ranks all candidate sgRNA target sites according to their off-target quality and displays full documentation. CCTop was experimentally validated for gene inactivation, non-homologous end-joining as well as homology directed repair. Thus, CCTop provides the bench biologist with a tool for the rapid and efficient identification of high quality target sites. PMID:25909470
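
    As an illustration of the enumeration step such a tool performs, the sketch below lists candidate SpCas9 target sites (a 20-nt protospacer followed by an NGG PAM) on the forward strand of a query sequence. Reverse-strand search and the off-target ranking that CCTop adds are omitted, and the query sequence is invented.

```python
import re

# Minimal sketch: enumerate 20-nt SpCas9 protospacers followed by an NGG PAM on the
# forward strand. CCTop additionally searches the reverse strand and ranks candidates
# by off-target quality, which is not reproduced here; the query sequence is invented.
PAM_SITE = re.compile(r"(?=([ACGT]{20})[ACGT]GG)")   # lookahead allows overlapping sites

def candidate_sites(query: str):
    """Yield (start, protospacer) for every forward-strand site with an NGG PAM."""
    for m in PAM_SITE.finditer(query.upper()):
        yield m.start(), m.group(1)

if __name__ == "__main__":
    query = "ATGCGTACGTTAGCATCGATCGTACGTAGCTAGCGGATCCGTACGATCGTAGCTAGG"
    for start, spacer in candidate_sites(query):
        print(start, spacer)
```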

  5. New target prediction and visualization tools incorporating open source molecular fingerprints for TB Mobile 2.0

    PubMed Central

    2014-01-01

    Background We recently developed a freely available mobile app (TB Mobile) for both iOS and Android platforms that displays Mycobacterium tuberculosis (Mtb) active molecule structures and their targets with links to associated data. The app was developed to make target information available to as large an audience as possible. Results We now report a major update of the iOS version of the app. This includes enhancements that use an implementation of ECFP_6 fingerprints that we have made open source. Using these fingerprints, the user can propose compounds with possible anti-TB activity, and view the compounds within a cluster landscape. Proposed compounds can also be compared to existing target data, using a naïve Bayesian scoring system to rank probable targets. We have curated an additional 60 new compounds and their targets for Mtb and added these to the original set of 745 compounds. We have also curated 20 further compounds (many without targets in TB Mobile) to evaluate this version of the app with 805 compounds and associated targets. Conclusions TB Mobile can now manage a small collection of compounds that can be imported from external sources, or exported by various means such as email or app-to-app inter-process communication. This means that TB Mobile can be used as a node within a growing ecosystem of mobile apps for cheminformatics. It can also cluster compounds and use internal algorithms to help identify potential targets based on molecular similarity. TB Mobile represents a valuable dataset, data-visualization aid and target prediction tool. PMID:25302078

  6. New target prediction and visualization tools incorporating open source molecular fingerprints for TB Mobile 2.0.

    PubMed

    Clark, Alex M; Sarker, Malabika; Ekins, Sean

    2014-01-01

    We recently developed a freely available mobile app (TB Mobile) for both iOS and Android platforms that displays Mycobacterium tuberculosis (Mtb) active molecule structures and their targets with links to associated data. The app was developed to make target information available to as large an audience as possible. We now report a major update of the iOS version of the app. This includes enhancements that use an implementation of ECFP_6 fingerprints that we have made open source. Using these fingerprints, the user can propose compounds with possible anti-TB activity, and view the compounds within a cluster landscape. Proposed compounds can also be compared to existing target data, using a naïve Bayesian scoring system to rank probable targets. We have curated an additional 60 new compounds and their targets for Mtb and added these to the original set of 745 compounds. We have also curated 20 further compounds (many without targets in TB Mobile) to evaluate this version of the app with 805 compounds and associated targets. TB Mobile can now manage a small collection of compounds that can be imported from external sources, or exported by various means such as email or app-to-app inter-process communication. This means that TB Mobile can be used as a node within a growing ecosystem of mobile apps for cheminformatics. It can also cluster compounds and use internal algorithms to help identify potential targets based on molecular similarity. TB Mobile represents a valuable dataset, data-visualization aid and target prediction tool.
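
    The fingerprint-plus-Bayesian workflow can be approximated with open tools: RDKit's radius-3 Morgan fingerprints are the usual open-source analogue of ECFP_6, and a Bernoulli naive Bayes classifier over the fingerprint bits gives a simple target-ranking score. This is a sketch of the general approach, not the TB Mobile implementation; the SMILES strings and target labels are invented.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.naive_bayes import BernoulliNB

# Sketch: radius-3 Morgan fingerprints (a common open-source analogue of ECFP_6) plus a
# Bernoulli naive Bayes model to rank probable targets. Not the TB Mobile code; the
# SMILES strings and target labels below are invented placeholders.

def fingerprint(smiles: str, n_bits: int = 2048) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=n_bits)
    return np.array(list(fp), dtype=np.uint8)

train_smiles = ["CCO", "CCN", "c1ccccc1O", "c1ccccc1N"]       # invented training compounds
train_targets = ["InhA", "InhA", "KatG", "KatG"]              # invented target labels

X = np.vstack([fingerprint(s) for s in train_smiles])
model = BernoulliNB().fit(X, train_targets)

query = fingerprint("c1ccccc1OC")                              # invented query compound
ranked = sorted(zip(model.classes_, model.predict_proba([query])[0]),
                key=lambda t: t[1], reverse=True)
print(ranked)                                                  # targets ranked by probability
```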

  7. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, results of various methods tend to be inconsistent and usually lead to more false positives. To address these issues, we developed an integrated model for identifying plant miRNA–target interactions. Results Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed the previously existing methods. The results were validated using degradome-sequencing-supported Arabidopsis thaliana miRNA-target interactions. The proposed model constructed on Arabidopsis thaliana was run over Oryza sativa and Vitis vinifera to demonstrate that our model is effective for other plant species. Conclusions The integrated model of online predictors and a local PCA-SVM classifier yielded credible, high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153
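
    The classifier stage described above can be sketched with scikit-learn: features for each candidate miRNA-target pair (for example, scores from several online tools plus duplex features) pass through PCA and then an SVM. The feature matrix below is random placeholder data with arbitrary dimensions, not the study's Arabidopsis training set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch of a PCA-SVM classifier for candidate miRNA-target pairs. The feature matrix
# is random placeholder data, not the study's Arabidopsis training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # 200 candidate pairs, 12 features each
y = rng.integers(0, 2, size=200)          # 1 = true interaction, 0 = not

clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))           # posterior probabilities for three candidate pairs
```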

  8. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  9. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  10. Target-D: a stratified individually randomized controlled trial of the diamond clinical prediction tool to triage and target treatment for depressive symptoms in general practice: study protocol for a randomized controlled trial.

    PubMed

    Gunn, Jane; Wachtler, Caroline; Fletcher, Susan; Davidson, Sandra; Mihalopoulos, Cathrine; Palmer, Victoria; Hegarty, Kelsey; Coe, Amy; Murray, Elizabeth; Dowrick, Christopher; Andrews, Gavin; Chondros, Patty

    2017-07-20

    Depression is a highly prevalent and costly disorder. Effective treatments are available but are not always delivered to the right person at the right time, with both under- and over-treatment a problem. Up to half the patients presenting to general practice report symptoms of depression, but general practitioners have no systematic way of efficiently identifying level of need and allocating treatment accordingly. Therefore, our team developed a new clinical prediction tool (CPT) to assist with this task. The CPT predicts depressive symptom severity in three months' time and based on these scores classifies individuals into three groups (minimal/mild, moderate, severe), then provides a matched treatment recommendation. This study aims to test whether using the CPT reduces depressive symptoms at three months compared with usual care. The Target-D study is an individually randomized controlled trial. Participants will be 1320 general practice patients with depressive symptoms who will be approached in the practice waiting room by a research assistant and invited to complete eligibility screening on an iPad. Eligible patients will provide informed consent and complete the CPT on a purpose-built website. A computer-generated allocation sequence stratified by practice and depressive symptom severity group will randomly assign participants to intervention (treatment recommendation matched to predicted depressive symptom severity group) or comparison (usual care plus Target-D attention control) arms. Follow-up assessments will be completed online at three and 12 months. The primary outcome is depressive symptom severity at three months. Secondary outcomes include anxiety, mental health self-efficacy, quality of life, and cost-effectiveness. Intention-to-treat analyses will test for differences in outcome means between study arms overall and by depressive symptom severity group. To our knowledge, this is the first depressive symptom stratification tool designed for

  11. Tools for in silico target fishing.

    PubMed

    Cereto-Massagué, Adrià; Ojeda, María José; Valls, Cristina; Mulero, Miquel; Pujadas, Gerard; Garcia-Vallve, Santiago

    2015-01-01

    Computational target fishing methods are designed to identify the most probable target of a query molecule. This process may allow the prediction of the bioactivity of a compound, the identification of the mode of action of known drugs, the detection of drug polypharmacology, drug repositioning or the prediction of the adverse effects of a compound. The large amount of information regarding the bioactivity of thousands of small molecules now allows the development of these types of methods. In recent years, we have witnessed the emergence of many methods for in silico target fishing. Most of these methods are based on the similarity principle, i.e., that similar molecules might bind to the same targets and have similar bioactivities. However, the difficult validation of target fishing methods hinders comparisons of the performance of each method. In this review, we describe the different methods developed for target prediction, the bioactivity databases most frequently used by these methods, and the publicly available programs and servers that enable non-specialist users to obtain these types of predictions. It is expected that target prediction will have a large impact on drug development and on the functional food industry. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Enhancing emotional-based target prediction

    NASA Astrophysics Data System (ADS)

    Gosnell, Michael; Woodley, Robert

    2008-04-01

    This work extends existing agent-based target movement prediction to include key ideas of behavioral inertia, steady states, and catastrophic change from existing psychological, sociological, and mathematical work. Existing target prediction work inherently assumes a single steady state for target behavior, and attempts to classify behavior based on a single emotional state set. The enhanced, emotional-based target prediction maintains up to three distinct steady states, or typical behaviors, based on a target's operating conditions and observed behaviors. Each steady state has an associated behavioral inertia, similar to the standard deviation of behaviors within that state. The enhanced prediction framework also allows steady state transitions through catastrophic change and individual steady states could be used in an offline analysis with additional modeling efforts to better predict anticipated target reactions.

  13. TAPIR, a web server for the prediction of plant microRNA targets, including target mimics.

    PubMed

    Bonnet, Eric; He, Ying; Billiau, Kenny; Van de Peer, Yves

    2010-06-15

    We present a new web server called TAPIR, designed for the prediction of plant microRNA targets. The server offers the possibility to search for plant miRNA targets using a fast and a precise algorithm. The precise option is much slower but guarantees to find less perfectly paired miRNA-target duplexes. Furthermore, the precise option allows the prediction of target mimics, which are characterized by a miRNA-target duplex having a large loop, making them undetectable by traditional tools. The TAPIR web server can be accessed at: http://bioinformatics.psb.ugent.be/webtools/tapir. Supplementary data are available at Bioinformatics online.

  14. Towards a generalized energy prediction model for machine tools

    PubMed Central

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan

    2017-01-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687

  15. Towards a generalized energy prediction model for machine tools.

    PubMed

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
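
    A minimal version of the Gaussian-process step can be written with scikit-learn; the choice of process parameters, kernel, and data below are placeholders, not the Mori Seiki NVD1500 dataset or the paper's generalized model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sketch: GP regression of energy consumption against process parameters
# (e.g. feed rate, spindle speed, depth of cut). Synthetic placeholder data,
# not measurements from the machine tool used in the paper.
rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 3))                                  # 60 operations, 3 parameters
energy = 2.0 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.5 * X[:, 2] + rng.normal(0, 0.05, 60)

kernel = 1.0 * RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, energy)

X_new = rng.uniform(size=(5, 3))
mean, std = gpr.predict(X_new, return_std=True)                # prediction with uncertainty
print(np.c_[mean, mean - 1.96 * std, mean + 1.96 * std])       # ~95% intervals
```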

  16. A study of the 200-metre fast walk test as a possible new assessment tool to predict maximal heart rate and define target heart rate for exercise training of coronary heart disease patients.

    PubMed

    Casillas, Jean-Marie; Joussain, Charles; Gremeaux, Vincent; Hannequin, Armelle; Rapin, Amandine; Laurent, Yves; Benaïm, Charles

    2015-02-01

    To develop a new predictive model of maximal heart rate based on two walking tests at different speeds (comfortable and brisk walking) as an alternative to a cardiopulmonary exercise test during cardiac rehabilitation. Evaluation of a clinical assessment tool. A Cardiac Rehabilitation Department in France. A total of 148 patients (133 men), mean age of 59 ± 9 years, at the end of an outpatient cardiac rehabilitation programme. Patients successively performed a 6-minute walk test, a 200 m fast-walk test (200mFWT), and a cardiopulmonary exercise test, with measure of heart rate at the end of each test. An all-possible regression procedure was used to determine the best predictive regression models of maximal heart rate. The best model was compared with the Fox equation in terms of predictive error of maximal heart rate using the paired t-test. Results of the two walking tests correlated significantly with maximal heart rate determined during the cardiopulmonary exercise test, whereas anthropometric parameters and resting heart rate did not. The simplified predictive model with the most acceptable mean error was: maximal heart rate = 130 - 0.6 × age + 0.3 × HR200mFWT (R² = 0.24). This model was superior to the Fox formula (R² = 0.138). The relationship between training target heart rate calculated from measured reserve heart rate and that established using this predictive model was statistically significant (r = 0.528, p < 10⁻⁶). A formula combining heart rate measured during a safe simple fast walk test and age is more efficient than an equation only including age to predict maximal heart rate and training target heart rate. © The Author(s) 2014.
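
    As a worked example of the reported model (maximal heart rate = 130 - 0.6 × age + 0.3 × HR200mFWT), the snippet below computes a predicted maximal heart rate and a reserve-heart-rate (Karvonen-style) training range; the patient values and the 60-80% intensity band are illustrative choices, not study data.

```python
# Worked example of the simplified model from the abstract:
#   HRmax = 130 - 0.6 * age + 0.3 * HR_200mFWT   (R^2 = 0.24)
# The patient values and the 60-80% reserve band are illustrative, not study data.

def predicted_hr_max(age: float, hr_200m_fwt: float) -> float:
    return 130 - 0.6 * age + 0.3 * hr_200m_fwt

def karvonen_target_range(hr_max: float, hr_rest: float, low=0.6, high=0.8):
    reserve = hr_max - hr_rest
    return hr_rest + low * reserve, hr_rest + high * reserve

hr_max = predicted_hr_max(age=60, hr_200m_fwt=120)        # 130 - 36 + 36 = 130 bpm
print(hr_max, karvonen_target_range(hr_max, hr_rest=70))  # -> 130.0 (106.0, 118.0)
```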

  17. Drug-Target Interactions: Prediction Methods and Applications.

    PubMed

    Anusuya, Shanmugam; Kesherwani, Manish; Priya, K Vishnu; Vimala, Antonydhason; Shanmugam, Gnanendra; Velmurugan, Devadasan; Gromiha, M Michael

    2018-01-01

    Identifying the interactions between drugs and target proteins is a key step in drug discovery. This not only aids to understand the disease mechanism, but also helps to identify unexpected therapeutic activity or adverse side effects of drugs. Hence, drug-target interaction prediction becomes an essential tool in the field of drug repurposing. The availability of heterogeneous biological data on known drug-target interactions enabled many researchers to develop various computational methods to decipher unknown drug-target interactions. This review provides an overview on these computational methods for predicting drug-target interactions along with available webservers and databases for drug-target interactions. Further, the applicability of drug-target interactions in various diseases for identifying lead compounds has been outlined. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. Predicting oligonucleotide affinity to nucleic acid targets.

    PubMed Central

    Mathews, D H; Burkard, M E; Freier, S M; Wyatt, J R; Turner, D H

    1999-01-01

    A computer program, OligoWalk, is reported that predicts the equilibrium affinity of complementary DNA or RNA oligonucleotides to an RNA target. This program considers the predicted stability of the oligonucleotide-target helix and the competition with predicted secondary structure of both the target and the oligonucleotide. Both unimolecular and bimolecular oligonucleotide self structure are considered with a user-defined concentration. The application of OligoWalk is illustrated with three comparisons to experimental results drawn from the literature. PMID:10580474
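
    The energetic bookkeeping behind such a prediction can be written down compactly: a net binding free energy is formed from the duplex free energy penalized by the cost of disrupting pre-existing structure in the target and in the oligonucleotide, then converted to an equilibrium constant. The decomposition and the numbers below are illustrative placeholders, not OligoWalk's parameters or output.

```python
import math

# Illustrative free-energy bookkeeping for oligo-target binding at 37 degC.
# The duplex free energy is penalized by the cost of opening pre-existing structure in
# the target and in the oligo, then converted to an equilibrium constant. The values and
# the simple three-term decomposition are placeholders, not OligoWalk output.
R = 1.987e-3      # kcal / (mol * K)
T = 310.15        # 37 degC in kelvin

dG_duplex = -18.0          # oligo-target helix (kcal/mol), placeholder
dG_break_target = 4.5      # cost of opening local target structure, placeholder
dG_break_oligo = 2.0       # cost of opening oligo self-structure, placeholder

dG_net = dG_duplex + dG_break_target + dG_break_oligo      # less favourable after penalties
K_eq = math.exp(-dG_net / (R * T))
print(f"net dG = {dG_net:.1f} kcal/mol, Keq = {K_eq:.2e}")
```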

  19. Behavior Prediction Tools Strengthen Nanoelectronics

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Several years ago, NASA started making plans to send robots to explore the deep, dark craters on the Moon. As part of these plans, NASA needed modeling tools to help engineer unique electronics to withstand extremely cold temperatures. According to Jonathan Pellish, a flight systems test engineer at Goddard Space Flight Center, "An instrument sitting in a shadowed crater on one of the Moon's poles would hover around 43 K", that is, 43 kelvin, equivalent to -382 F. Such frigid temperatures are one of the main factors that make the extreme space environments encountered on the Moon and elsewhere so extreme. Radiation is another main concern. "Radiation is always present in the space environment," says Pellish. "Small to moderate solar energetic particle events happen regularly and extreme events happen less than a handful of times throughout the 7 active years of the 11-year solar cycle." Radiation can corrupt data, propagate to other systems, require component power cycling, and cause a host of other harmful effects. In order to explore places like the Moon, Jupiter, Saturn, Venus, and Mars, NASA must use electronic communication devices like transmitters and receivers and data collection devices like infrared cameras that can resist the effects of extreme temperature and radiation; otherwise, the electronics would not be reliable for the duration of the mission.

  20. Predictive Data Tools Find Uses in Schools

    ERIC Educational Resources Information Center

    Sparks, Sarah D.

    2011-01-01

    The use of analytic tools to predict student performance is exploding in higher education, and experts say the tools show even more promise for K-12 schools, in everything from teacher placement to dropout prevention. Use of such statistical techniques is hindered in precollegiate schools, however, by a lack of researchers trained to help…

  1. A quick reality check for microRNA target prediction.

    PubMed

    Kast, Juergen

    2011-04-01

    The regulation of protein abundance by microRNA (miRNA)-mediated repression of mRNA translation is a rapidly growing area of interest in biochemical research. In animal cells, the miRNA seed sequence does not perfectly match that of the mRNA it targets, resulting in a large number of possible miRNA targets and varied extents of repression. Several software tools are available for the prediction of miRNA targets, yet the overlap between them is limited. Jovanovic et al. have developed and applied a targeted, quantitative approach to validate predicted miRNA target proteins. Using a proteome database, they have set up and tested selected reaction monitoring assays for approximately 20% of more than 800 predicted let-7 targets, as well as control genes in Caenorhabditis elegans. Their results demonstrate that such assays can be developed quickly and with relative ease, and applied in a high-throughput setup to verify known and identify novel miRNA targets. They also show, however, that the choice of the biological system and material has a noticeable influence on the frequency, extent and direction of the observed changes. Nonetheless, selected reaction monitoring assays, such as those developed by Jovanovic et al., represent an attractive new tool in the study of miRNA function at the organism level.

  2. Predicting drug-target interactions using restricted Boltzmann machines.

    PubMed

    Wang, Yuhao; Zeng, Jianyang

    2013-07-01

    In silico prediction of drug-target interactions plays an important role toward identifying and developing new uses of existing or abandoned drugs. Network-based approaches have recently become a popular tool for discovering new drug-target interactions (DTIs). Unfortunately, most of these network-based approaches can only predict binary interactions between drugs and targets, and information about different types of interactions has not been well exploited for DTI prediction in previous studies. On the other hand, incorporating additional information about drug-target relationships or drug modes of action can improve prediction of DTIs. Furthermore, the predicted types of DTIs can broaden our understanding about the molecular basis of drug action. We propose a first machine learning approach to integrate multiple types of DTIs and predict unknown drug-target relationships or drug modes of action. We cast the new DTI prediction problem into a two-layer graphical model, called restricted Boltzmann machine, and apply a practical learning algorithm to train our model and make predictions. Tests on two public databases show that our restricted Boltzmann machine model can effectively capture the latent features of a DTI network and achieve excellent performance on predicting different types of DTIs, with the area under precision-recall curve up to 89.6. In addition, we demonstrate that integrating multiple types of DTIs can significantly outperform other predictions either by simply mixing multiple types of interactions without distinction or using only a single interaction type. Further tests show that our approach can infer a high fraction of novel DTIs that has been validated by known experiments in the literature or other databases. These results indicate that our approach can have highly practical relevance to DTI prediction and drug repositioning, and hence advance the drug discovery process. Software and datasets are available on request. Supplementary data are
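
    A stripped-down, single-interaction-type version of the idea can be sketched with scikit-learn's BernoulliRBM: learn hidden features of a binary drug-target matrix, then score unobserved pairs from the reconstructed visible probabilities. The multi-type interaction layer that the paper adds is not reproduced here, and the interaction matrix is random placeholder data.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Sketch: a single-type restricted Boltzmann machine over a binary drug-target matrix.
# Each row is a drug, each column a target (1 = known interaction). The paper's model
# additionally distinguishes interaction types, which is omitted here; data are random.
rng = np.random.default_rng(2)
X = (rng.random((100, 40)) < 0.08).astype(float)       # 100 drugs x 40 targets, sparse

rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=50, random_state=0)
H = rbm.fit_transform(X)                                # hidden-unit activations per drug

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Reconstructed probability of each visible unit, used to score unobserved pairs.
V_prob = sigmoid(H @ rbm.components_ + rbm.intercept_visible_)
novel_scores = np.where(X == 0, V_prob, -np.inf)        # mask out known interactions
drug, target = np.unravel_index(np.argmax(novel_scores), novel_scores.shape)
print(f"top candidate: drug {drug} - target {target}, score {V_prob[drug, target]:.3f}")
```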

  3. Gaussian process regression for tool wear prediction

    NASA Astrophysics Data System (ADS)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

    To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear prediction and the corresponding confidence interval are both provided by the GPR model. GPR also performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively within the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, compressing and smoothing the confidence interval, which is conducive to monitoring the tool wear accurately. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately using the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
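
    The pipeline (kernel PCA as a nonlinear feature-fusion step feeding a GP regressor that returns a confidence interval) can be sketched with scikit-learn; a plain RBF kernel stands in for the paper's KPCA_IRBF construction, and the sensor features and wear values are synthetic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sketch of the pipeline: nonlinear feature fusion with kernel PCA, then GP regression of
# flank wear width with a confidence interval. A plain RBF kernel stands in for the paper's
# KPCA_IRBF construction, and the sensor features / wear values are synthetic.
rng = np.random.default_rng(3)
features = rng.normal(size=(80, 10))                    # 80 cuts x 10 sensor features
wear = 0.1 + 0.05 * np.abs(features[:, 0]) + rng.normal(0, 0.005, 80)   # flank wear (mm)

kpca = KernelPCA(n_components=6, kernel="rbf", gamma=0.1)
Z = kpca.fit_transform(features)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(Z, wear)
mean, std = gpr.predict(kpca.transform(features[:3]), return_std=True)
print(np.c_[mean, mean - 1.96 * std, mean + 1.96 * std])    # wear estimates with ~95% band
```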

  4. Orbiter Boundary Layer Transition Prediction Tool Enhancements

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; King, Rudolph A.; Kegerise, Michael A.; Wood, William A.; McGinley, Catherine B.; Berger, Karen T.; Anderson, Brian P.

    2010-01-01

    Updates to an analytic tool developed for Shuttle support to predict the onset of boundary layer transition resulting from thermal protection system damage or repair are presented. The boundary layer transition tool is part of a suite of tools that analyze the local aerothermodynamic environment to enable informed disposition of damage for making recommendations to fly as is or to repair. Using mission specific trajectory information and details of each damage site or repair, the expected time (and thus Mach number) of transition onset is predicted to help define proper environments for use in subsequent thermal and stress analysis of the thermal protection system and structure. The boundary layer transition criteria utilized within the tool were updated based on new local boundary layer properties obtained from high fidelity computational solutions. Also, new ground-based measurements were obtained to allow for a wider parametric variation with both protuberances and cavities and then the resulting correlations were calibrated against updated flight data. The end result is to provide correlations that allow increased confidence with the resulting transition predictions. Recently, a new approach was adopted to remove conservatism in terms of sustained turbulence along the wing leading edge. Finally, some of the newer flight data are also discussed in terms of how these results reflect back on the updated correlations.

  5. GAPIT: genome association and prediction integrated tool.

    PubMed

    Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu

    2012-09-15

    Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.

  6. Texture metric that predicts target detection performance

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.

    2015-12-01

    Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset are presented. Comparison is also made between four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
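
    The GLCE statistics build on the standard gray-level co-occurrence matrix; for reference, the sketch below computes a plain GLCM and its energy statistic for one image patch. It is not the paper's GLCE clutter metric, and the patch is random placeholder data.

```python
import numpy as np

# Reference sketch: a gray-level co-occurrence matrix (horizontal neighbours, distance 1)
# and its energy statistic for one image patch. The paper's GLCE clutter metric is built on
# co-occurrence *error* statistics, which this plain GLCM energy does not reproduce; the
# patch below is random placeholder data.
def glcm(patch: np.ndarray, levels: int = 8) -> np.ndarray:
    q = (patch.astype(float) / patch.max() * (levels - 1)).astype(int)   # quantize gray levels
    C = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):                # horizontal pairs
        C[i, j] += 1
    C += C.T                                                             # make symmetric
    return C / C.sum()

rng = np.random.default_rng(4)
patch = rng.integers(0, 256, size=(32, 32))
P = glcm(patch)
energy = np.sum(P ** 2)          # GLCM "energy" (angular second moment)
print(energy)
```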

  7. TargetSpy: a supervised machine learning approach for microRNA target prediction.

    PubMed

    Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij

    2010-05-28

    , suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org.

  8. TargetSpy: a supervised machine learning approach for microRNA target prediction

    PubMed Central

    2010-01-01

    in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. PMID:20509939

  9. Web tools for predictive toxicology model building.

    PubMed

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry have accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. These web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases poses new challenges for the management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, unique feature sets, friendly user interfaces, programmatic access for advanced users, platform independence, reproducibility of results, curation and crowdsourcing utilities, collaborative sharing and secure access.

  10. UniDrug-target: a computational tool to identify unique drug targets in pathogenic bacteria.

    PubMed

    Chanumolu, Sree Krishna; Rout, Chittaranjan; Chauhan, Rajinder S

    2012-01-01

    Targeting conserved bacterial proteins with antibacterial medications has resulted both in the development of resistant strains and in changes to human health through the destruction of beneficial microbes, which eventually become breeding grounds for the evolution of resistance. Despite the availability of more than 800 genome sequences, 430 pathways, 4743 enzymes, 9257 metabolic reactions and three-dimensional (3D) protein structures in bacteria, no pathogen-specific computational drug target identification tool has been developed. A web server, UniDrug-Target, which combines bacterial biological information and computational methods to stringently identify pathogen-specific proteins as drug targets, has been designed. Besides predicting pathogen-specific protein essentiality, chokepoint property, etc., three new algorithms were developed and implemented using protein sequences, domains, structures, and metabolic reactions for construction of partial metabolic networks (PMNs), determination of conservation in critical residues, and variation analysis of residues forming similar cavities in protein sequences. First, PMNs are constructed to determine the extent of disturbance in metabolite production when a protein is targeted as a drug target. Second, conservation of pathogen-specific proteins' critical residues involved in cavity formation and biological function is determined at the domain level using low-matching sequences. Last, variation analysis of residues forming similar cavities in protein sequences from pathogenic versus non-pathogenic bacteria and humans is performed. The server is capable of predicting drug targets for any sequenced pathogenic bacterium with FASTA sequences and annotated information. The utility of the UniDrug-Target server was demonstrated for Mycobacterium tuberculosis (H37Rv). UniDrug-Target identified 265 mycobacterial pathogen-specific proteins, including 17 essential proteins which can be potential drug targets. UniDrug-Target is expected to accelerate

  11. Predicting new molecular targets for known drugs

    PubMed Central

    Keiser, Michael J.; Setola, Vincent; Irwin, John J.; Laggner, Christian; Abbas, Atheir; Hufeisen, Sandra J.; Jensen, Niels H.; Kuijer, Michael B.; Matos, Roberto C.; Tran, Thuy B.; Whaley, Ryan; Glennon, Richard A.; Hert, Jérôme; Thomas, Kelan L.H.; Edwards, Douglas D.; Shoichet, Brian K.; Roth, Bryan L.

    2009-01-01

    Whereas drugs are intended to be selective, at least some bind to several physiologic targets, explaining both side effects and efficacy. As many drug-target combinations exist, it would be useful to explore possible interactions computationally. Here, we compared 3,665 FDA-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-HT transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug-target associations were confirmed, five of which were potent (< 100 nM). The physiological relevance of one such, the drug DMT on serotonergic receptors, was confirmed in a knock-out mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs. PMID:19881490

  12. A new methodology for predictive tool wear

    NASA Astrophysics Data System (ADS)

    Kim, Won-Sik

    turned with various cutting conditions and the results were compared with the proposed analytical wear models. The crater surfaces after machining were carefully studied to shed light on the physics behind crater wear. In addition, the abrasive wear mechanism plays a major role in the development of crater wear. Laser shock processing (LSP) was applied to locally relieve the deleterious tensile residual stresses on the crater surface of a coated tool and thus to improve the hardness of the coating. This thesis shows that LSP has indeed improved the wear resistance of CVD-coated alumina tool inserts, which carry residual stress due to the high processing temperature. LSP uses a very short laser pulse with high energy density, which induces high-pressure stress wave propagation. The residual stresses are relieved by the shock waves incident on the coating surface. Residual stress levels of LSP-treated CVD alumina-coated carbide inserts were evaluated by X-ray diffractometry. Based on these results, LSP parameters such as the number of laser pulses and the laser energy density can be controlled to reduce residual stress. Crater wear measurements show that wear resistance increases for LSP-treated tool inserts. Because the hardness data are used to predict the wear, the improvement in hardness and wear resistance shows that the mechanism of crater wear also involves abrasive wear.

  13. MESSI: metabolic engineering target selection and best strain identification tool.

    PubMed

    Kang, Kang; Li, Jun; Lim, Boon Leong; Panagiotou, Gianni

    2015-01-01

    Metabolic engineering and synthetic biology are synergistically related fields for manipulating target pathways and designing microorganisms that can act as chemical factories. Saccharomyces cerevisiae's ideal bioprocessing traits make yeast a very attractive chemical factory for production of fuels, pharmaceuticals, nutraceuticals as well as a wide range of chemicals. However, future attempts of engineering S. cerevisiae's metabolism using synthetic biology need to move towards more integrative models that incorporate the high connectivity of metabolic pathways and regulatory processes and the interactions in genetic elements across those pathways and processes. To contribute in this direction, we have developed Metabolic Engineering target Selection and best Strain Identification tool (MESSI), a web server for predicting efficient chassis and regulatory components for yeast bio-based production. The server provides an integrative platform for users to analyse ready-to-use public high-throughput metabolomic data, which are transformed to metabolic pathway activities for identifying the most efficient S. cerevisiae strain for the production of a compound of interest. As input MESSI accepts metabolite KEGG IDs or pathway names. MESSI outputs a ranked list of S. cerevisiae strains based on aggregation algorithms. Furthermore, through a genome-wide association study of the metabolic pathway activities with the strains' natural variation, MESSI prioritizes genes and small variants as potential regulatory points and promising metabolic engineering targets. Users can choose various parameters in the whole process such as (i) weight and expectation of each metabolic pathway activity in the final ranking of the strains, (ii) Weighted AddScore Fuse or Weighted Borda Fuse aggregation algorithm, (iii) type of variants to be included, (iv) variant sets in different biological levels. Database URL: http://sbb.hku.hk/MESSI/. © The Author(s) 2015. Published by Oxford University
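
    The strain-ranking step is a rank-aggregation problem; a weighted Borda fuse over per-pathway activity rankings can be sketched in a few lines. The strain names, rankings, and pathway weights below are invented, and this is only the generic aggregation rule, not the MESSI server code.

```python
# Sketch of a weighted Borda fuse: each pathway ranks the strains by activity, each rank is
# converted to a Borda score (higher = better), and scores are combined with user-chosen
# pathway weights. Strains, rankings and weights are invented placeholders.
from collections import defaultdict

pathway_rankings = {                         # best-to-worst strain per pathway activity
    "glycolysis": ["S288C", "CEN.PK", "Sigma1278b"],
    "ergosterol biosynthesis": ["CEN.PK", "Sigma1278b", "S288C"],
}
pathway_weights = {"glycolysis": 2.0, "ergosterol biosynthesis": 1.5}

scores = defaultdict(float)
for pathway, ranking in pathway_rankings.items():
    n = len(ranking)
    for position, strain in enumerate(ranking):
        scores[strain] += pathway_weights[pathway] * (n - position)   # Borda points

for strain, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(strain, score)
```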

  14. Multiplex primer prediction software for divergent targets

    PubMed Central

    Gardner, Shea N.; Hiddessen, Amy L.; Williams, Peter L.; Hara, Christine; Wagner, Mark C.; Colston, Bill W.

    2009-01-01

    We describe a Multiplex Primer Prediction (MPP) algorithm to build multiplex compatible primer sets to amplify all members of large, diverse and unalignable sets of target sequences. The MPP algorithm is scalable to larger target sets than other available software, and it does not require a multiple sequence alignment. We applied it to questions in viral detection, and demonstrated that there are no universally conserved priming sequences among viruses and that it could require an unfeasibly large number of primers (∼3700 18-mers or ∼2000 10-mers) to generate amplicons from all sequenced viruses. We then designed primer sets separately for each viral family, and for several diverse species such as foot-and-mouth disease virus (FMDV), hemagglutinin (HA) and neuraminidase (NA) segments of influenza A virus, Norwalk virus, and HIV-1. We empirically demonstrated the application of the software with a multiplex set of 16 short (10 nt) primers designed to amplify the Poxviridae family to produce a specific amplicon from vaccinia virus. PMID:19759213
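
    The core combinatorial problem (choosing a small primer set such that every target sequence contains at least one primer) can be framed as set cover; the greedy sketch below illustrates that framing only. It is not the published MPP algorithm, which additionally handles multiplex compatibility, primer properties, and amplicon constraints; the target sequences are invented.

```python
# Greedy set-cover sketch for the core primer-selection problem: pick candidate k-mers until
# every target sequence contains at least one chosen primer. This is only the combinatorial
# framing, not the published MPP algorithm; target sequences below are invented.

def kmers(seq: str, k: int) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def greedy_primers(targets: list[str], k: int = 10) -> list[str]:
    uncovered = set(range(len(targets)))
    target_kmers = [kmers(t, k) for t in targets]
    candidates = set().union(*target_kmers)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda p: sum(p in target_kmers[i] for i in uncovered))
        chosen.append(best)
        uncovered -= {i for i in uncovered if best in target_kmers[i]}
    return chosen

targets = ["ATGCGTACGATCGATCGTAGCTAG", "TTGCGTACGATCAAGGCTTACGAT", "CCATCGATCGTAGCTAGGATCCAA"]
print(greedy_primers(targets, k=10))
```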

  15. Development of an attrition risk prediction tool.

    PubMed

    Fowler, John; Norrie, Peter

    To review lecturers' and students' perceptions of the factors that may lead to attrition from pre-registration nursing and midwifery programmes and to identify ways to reduce the impact of such factors on the student's experience. Comparable attrition rates for nursing and midwifery students across various universities are difficult to monitor accurately; however, estimates that there is approximately a 25% national attrition rate are not uncommon. The financial and human implications of this are significant and worthy of investigation. A study was carried out in one medium-sized UK school of nursing and midwifery, aimed at identifying perceived factors associated with attrition and retention. Thirty-five lecturers were interviewed individually; 605 students completed a questionnaire, and of these, 10 were individually interviewed. Attrition data kept by the student service department were reviewed. Data were collected over an 18-month period in 2007-2008. Regression analysis of the student data identified eight significant predictors. Four of these were 'positive' factors in that they aided student retention and four were 'negative' in that they were associated with students' thoughts of resigning. Student attrition and retention is multifactorial, and, as such, needs to be managed holistically. One aspect of this management could be an attrition risk prediction tool.

  16. A critical assessment of topologically associating domain prediction tools

    PubMed Central

    Dali, Rola

    2017-01-01

    Topologically associating domains (TADs) have been proposed to be the basic unit of chromosome folding and have been shown to play key roles in genome organization and gene regulation. Several different tools are available for TAD prediction, but their properties have never been thoroughly assessed. In this manuscript, we compare the output of seven different TAD prediction tools on two published Hi-C data sets. TAD predictions varied greatly between tools in number, size distribution and other biological properties. Assessed against a manual annotation of TADs, individual TAD boundary predictions were found to be quite reliable, but their assembly into complete TAD structures was much less so. In addition, many tools were sensitive to sequencing depth and resolution of the interaction frequency matrix. This manuscript provides users and designers of TAD prediction tools with information that will help guide the choice of tools and the interpretation of their predictions. PMID:28334773

  17. A thermal sensation prediction tool for use by the profession

    SciT

    Fountain, M.E.; Huizenga, C.

    1997-12-31

    As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.

  18. NIH tools facilitate matching cancer drugs with gene targets

    Cancer.gov

    A new study details how a suite of web-based tools provides the research community with greatly improved capacity to compare data derived from large collections of genomic information against thousands of drugs. By comparing drugs and genetic targets, re

  19. Molecular Targeted Viral Nanoparticles as Tools for Imaging Cancer

    PubMed Central

    Cho, C.F.; Sourabh, S.; Simpson, E.J.; Steinmetz, N.F.; Luyt, L.G.; Lewis, J.D.

    2015-01-01

    Viral nanoparticles (VNPs) are a novel class of bionanomaterials that harness the natural biocompatibility of viruses for the development of therapeutics, vaccines, and imaging tools. The plant virus, cowpea mosaic virus (CPMV), has been successfully engineered to create novel cancer-targeted imaging agents by incorporating fluorescent dyes, polyethylene glycol (PEG) polymers, and targeting moieties. Using straightforward conjugation strategies, VNPs with high selectivity for cancer-specific molecular targets can be synthesized for in vivo imaging of tumors. Here we describe the synthesis and purification of CPMV-based VNPs, the functionalization of these VNPs using click chemistry, and their use for imaging xenograft tumors in animal models. VNPs decorated with fluorescent dyes, PEG, and targeting ligands can be synthesized in one day, and imaging studies can be performed over hours, days, or weeks, depending on the application. PMID:24243252

  20. Aptamers as tools for target prioritization and lead identification.

    PubMed

    Burgstaller, Petra; Girod, Anne; Blind, Michael

    2002-12-15

    The increasing number of potential drug target candidates has driven the development of novel technologies designed to identify functionally important targets and enhance the subsequent lead discovery process. Highly specific synthetic nucleic acid ligands--also known as aptamers--offer a new exciting route in the drug discovery process by linking target validation directly with HTS. Recently, aptamers have proven to be valuable tools for modulating the function of endogenous cellular proteins in their natural environment. A set of technologies has been developed to use these sophisticated ligands for the validation of potential drug targets in disease models. Moreover, aptamers that are specific antagonists of protein function can act as substitute interaction partners in HTS assays to facilitate the identification of small-molecule lead compounds.

  1. Uncertainty Prediction in Passive Target Motion Analysis

    DTIC Science & Technology

    2016-05-12

    A fundamental property of bearings-only target motion analysis (TMA) is that the bearing B to the target results ... the measurements used to estimate them are often non-linear. This is true for the bearing observation, B = tan⁻¹(x(t)/y(t)) (3). ... The Parameter Evaluation Plot (PEP) is one example of such a grid-based approach. U.S. Patent No. 7,020,046 discloses one version of this method and is
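
    To illustrate the grid-based evaluation idea mentioned in the record (not the patented PEP method itself), the sketch below scores candidate target positions for a bearings-only scenario by the sum of squared bearing residuals; every geometry, noise level and grid range is an invented placeholder.

        import numpy as np

        def bearings(own_xy, tgt0, tgt_vel, times):
            """Predicted bearings (rad, from north) from ownship to a constant-velocity target."""
            tgt_xy = tgt0 + np.outer(times, tgt_vel)       # straight-line target track
            rel = tgt_xy - own_xy                          # relative position at each time
            return np.arctan2(rel[:, 0], rel[:, 1])

        times = np.arange(0.0, 600.0, 30.0)                # 10 min of data, 30 s spacing
        own_xy = np.column_stack([np.zeros_like(times), 5.0 * times])   # ownship heading north at 5 m/s
        true_tgt0, true_vel = np.array([8000.0, 2000.0]), np.array([-3.0, 1.0])
        measured = bearings(own_xy, true_tgt0, true_vel, times)
        measured += np.random.default_rng(0).normal(0.0, np.radians(0.5), measured.size)

        # Evaluate a coarse grid of candidate initial target positions (velocity held
        # fixed at the truth here only to keep the example two-dimensional).
        xs = np.linspace(4000.0, 12000.0, 41)
        ys = np.linspace(-2000.0, 6000.0, 41)
        cost = np.empty((ys.size, xs.size))
        for i, y in enumerate(ys):
            for j, x in enumerate(xs):
                pred = bearings(own_xy, np.array([x, y]), true_vel, times)
                resid = np.angle(np.exp(1j * (measured - pred)))   # wrap residuals to [-pi, pi]
                cost[i, j] = np.sum(resid ** 2)

        best = np.unravel_index(np.argmin(cost), cost.shape)
        print("best grid cell:", xs[best[1]], ys[best[0]])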

  2. Oligonucleotide Aptamers: New Tools for Targeted Cancer Therapy

    PubMed Central

    Sun, Hongguang; Zhu, Xun; Lu, Patrick Y; Rosato, Roberto R; Tan, Wen; Zu, Youli

    2014-01-01

    Aptamers are a class of small nucleic acid ligands that are composed of RNA or single-stranded DNA oligonucleotides and have high specificity and affinity for their targets. Similar to antibodies, aptamers interact with their targets by recognizing a specific three-dimensional structure and are thus termed “chemical antibodies.” In contrast to protein antibodies, aptamers offer unique chemical and biological characteristics based on their oligonucleotide properties. Hence, they are more suitable for the development of novel clinical applications. Aptamer technology has been widely investigated in various biomedical fields for biomarker discovery, in vitro diagnosis, in vivo imaging, and targeted therapy. This review will discuss the potential applications of aptamer technology as a new tool for targeted cancer therapy with emphasis on the development of aptamers that are able to specifically target cell surface biomarkers. Additionally, we will describe several approaches for the use of aptamers in targeted therapeutics, including aptamer-drug conjugation, aptamer-nanoparticle conjugation, aptamer-mediated targeted gene therapy, aptamer-mediated immunotherapy, and aptamer-mediated biotherapy. PMID:25093706

  3. Inter-kingdom prediction certainty evaluation of protein subcellular localization tools: microbial pathogenesis approach for deciphering host microbe interaction.

    PubMed

    Khan, Abdul Arif; Khan, Zakir; Kalam, Mohd Abul; Khan, Azmat Ali

    2018-01-01

    Microbial pathogenesis involves several aspects of host-pathogen interactions, including microbial proteins targeting host subcellular compartments and the subsequent effects on host physiology. Such studies are supported by experimental data, but detecting bacterial protein localization with computational eukaryotic subcellular protein targeting prediction tools has also recently come into practice. We evaluated the inter-kingdom prediction certainty of these tools. Bacterial proteins experimentally known to target host subcellular compartments were predicted with eukaryotic subcellular targeting prediction tools, and the prediction certainty was assessed. The results indicate that these tools alone are not sufficient for inter-kingdom protein targeting prediction. Correct prediction of a pathogen's protein subcellular targeting depends on several factors, including the presence of a localization signal, transmembrane domains and molecular weight, in addition to the prediction approach used. Detection of protein targeting in the endomembrane system is comparatively difficult, as proteins in this location are channeled to different compartments. In addition, the high specificity of the training data sets also lowers inter-kingdom prediction accuracy. The current data can help to suggest strategies for correct prediction of a bacterial protein's subcellular localization in the host cell. © The Author 2016. Published by Oxford University Press. All rights reserved.

  4. TARGET - TASK ANALYSIS REPORT GENERATION TOOL, VERSION 1.0

    NASA Technical Reports Server (NTRS)

    Ortiz, C. J.

    1994-01-01

    The Task Analysis Report Generation Tool, TARGET, is a graphical interface tool used to capture procedural knowledge and translate that knowledge into a hierarchical report. TARGET is based on VISTA, a knowledge acquisition tool developed by the Naval Systems Training Center. TARGET assists a programmer and/or task expert organize and understand the steps involved in accomplishing a task. The user can label individual steps in the task through a dialogue-box and get immediate graphical feedback for analysis. TARGET users can decompose tasks into basic action kernels or minimal steps to provide a clear picture of all basic actions needed to accomplish a job. This method allows the user to go back and critically examine the overall flow and makeup of the process. The user can switch between graphics (box flow diagrams) and text (task hierarchy) versions to more easily study the process being documented. As the practice of decomposition continues, tasks and their subtasks can be continually modified to more accurately reflect the user's procedures and rationale. This program is designed to help a programmer document an expert's task thus allowing the programmer to build an expert system which can help others perform the task. Flexibility is a key element of the system design and of the knowledge acquisition session. If the expert is not able to find time to work on the knowledge acquisition process with the program developer, the developer and subject matter expert may work in iterative sessions. TARGET is easy to use and is tailored to accommodate users ranging from the novice to the experienced expert systems builder. TARGET is written in C-language for IBM PC series and compatible computers running MS-DOS and Microsoft Windows version 3.0 or 3.1. No source code is supplied. The executable also requires 2Mb of RAM, a Microsoft compatible mouse, a VGA display and an 80286, 386 or 486 processor machine. The standard distribution medium for TARGET is one 5.25 inch 360K

  5. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the target's motion information is a prerequisite for target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, its dynamic parameters, such as inertia, angular momentum and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be obtained by substituting these identified parameters into the target's Euler equations. Accurate prediction requires precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and finds a global minimum of the objective function with the guidance of the objective function's gradient. The IPM search for the minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
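
    The two-step idea described above (rough estimate, then optimization against the observed motion) can be sketched with standard SciPy tools; this is not the paper's implementation, and the inertia ratios, initial rates and noise level below are illustrative values only. SciPy's 'trust-constr' optimizer stands in for the interior-point method.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        def euler_rates(t, w, k1, k2, k3):
            # Torque-free Euler equations written with inertia ratios
            # k1 = (I2 - I3)/I1, k2 = (I3 - I1)/I2, k3 = (I1 - I2)/I3.
            w1, w2, w3 = w
            return [k1 * w2 * w3, k2 * w3 * w1, k3 * w1 * w2]

        def predict(k, w0, t_obs):
            sol = solve_ivp(euler_rates, (t_obs[0], t_obs[-1]), w0,
                            t_eval=t_obs, args=tuple(k), rtol=1e-8)
            return sol.y.T

        # Synthetic "observations" generated from assumed true ratios.
        t_obs = np.linspace(0.0, 60.0, 121)
        w0 = np.array([0.05, 0.02, 0.1])              # rad/s
        k_true = np.array([0.4, -0.3, -0.1])
        w_obs = predict(k_true, w0, t_obs)
        w_obs += np.random.default_rng(1).normal(0.0, 1e-4, w_obs.shape)

        def objective(k):
            # difference between observed and predicted motion
            return np.sum((predict(k, w0, t_obs) - w_obs) ** 2)

        rough_guess = np.array([0.3, -0.2, -0.05])    # step 1: rough estimate
        result = minimize(objective, rough_guess, method="trust-constr")  # step 2: refine
        print("identified inertia ratios:", result.x)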

  6. Prophinder: a computational tool for prophage prediction in prokaryotic genomes.

    PubMed

    Lima-Mendez, Gipsi; Van Helden, Jacques; Toussaint, Ariane; Leplae, Raphaël

    2008-03-15

    Prophinder is a prophage prediction tool coupled with a prediction database, a web server and a web service. Predicted prophages will help to fill the gaps in the current sparse phage sequence space, which should cover an estimated 100 million species. Systematic and reliable predictions will enable further studies of the prophages' contribution to the bacteriophage gene pool and a better understanding of gene shuffling between prophages and phages infecting the same host. Software is available at http://aclame.ulb.ac.be/prophinder

  7. Updating Risk Prediction Tools: A Case Study in Prostate Cancer

    PubMed Central

    Ankerst, Donna P.; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J.; Feng, Ziding; Sanda, Martin G.; Partin, Alan W.; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M.

    2013-01-01

    Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that as cancer research moves forward and new biomarkers and risk factors are discovered, there is a need to update the risk algorithms to include them. Typically, the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in an external study to the original study used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [−2]proPSA measured on an external case-control study performed in Texas, U.S. Recent state-of-the-art methods in validation of risk prediction tools and evaluation of the improvement of updated over original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. PMID:22095849
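
    The Bayes-rule update described in this record amounts to combining a prior risk from the original calculator with a likelihood ratio for the new markers estimated in the external study; the short sketch below shows that arithmetic only (it is not the published calculator code, and the numbers are hypothetical).

        def updated_risk(prior_risk, likelihood_ratio):
            """Combine a prior probability with a marker likelihood ratio via Bayes rule."""
            prior_odds = prior_risk / (1.0 - prior_risk)
            posterior_odds = prior_odds * likelihood_ratio
            return posterior_odds / (1.0 + posterior_odds)

        # Example: the original calculator gives a 20% risk, and the new markers are
        # assumed to be 2.5 times more likely under cancer than under no cancer for
        # this patient (a made-up likelihood ratio).
        print(updated_risk(0.20, 2.5))   # -> about 0.385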

  8. MPFit: Computational Tool for Predicting Moonlighting Proteins.

    PubMed

    Khan, Ishita; McGraw, Joshua; Kihara, Daisuke

    2017-01-01

    An increasing number of proteins have been found that are capable of performing two or more distinct functions. These proteins, known as moonlighting proteins, have drawn much attention recently as they may play critical roles in disease pathways and development. However, because moonlighting proteins are often found serendipitously, our understanding of them is still quite limited. In order to lay the foundation for systematic moonlighting protein studies, we developed MPFit, a software package for predicting moonlighting proteins from their omics features, including protein-protein and gene interaction networks. Here, we describe and demonstrate the MPFit algorithm and the idea behind it, and provide instructions for using the software.

  9. Predicting performance with traffic analysis tools : final report.

    DOT National Transportation Integrated Search

    2008-03-01

    This document provides insights into the common pitfalls and challenges associated with use of traffic analysis tools for predicting future performance of a transportation facility. It provides five in-depth case studies that demonstrate common ways ...

  10. Predictive Technologies: Can Smart Tools Augment the Brain's Predictive Abilities?

    PubMed Central

    Pezzulo, Giovanni; D'Ausilio, Alessandro; Gaggioli, Andrea

    2016-01-01

    The ability of “looking into the future”—namely, the capacity of anticipating future states of the environment or of the body—represents a fundamental function of human (and animal) brains. A goalkeeper who tries to guess the ball's direction; a chess player who attempts to anticipate the opponent's next move; or a man-in-love who tries to calculate what are the chances of her saying yes—in all these cases, people are simulating possible future states of the world, in order to maximize the success of their decisions or actions. Research in neuroscience is showing that our ability to predict the behavior of physical or social phenomena is largely dependent on the brain's ability to integrate current and past information to generate (probabilistic) simulations of the future. But could predictive processing be augmented using advanced technologies? In this contribution, we discuss how computational technologies may be used to support, facilitate or enhance the prediction of future events, by considering exemplificative scenarios across different domains, from simpler sensorimotor decisions to more complex cognitive tasks. We also examine the key scientific and technical challenges that must be faced to turn this vision into reality. PMID:27199648

  11. Water Impact Prediction Tool for Recoverable Rockets

    NASA Technical Reports Server (NTRS)

    Rooker, William; Glaese, John; Clayton, Joe

    2011-01-01

    Reusing components from a rocket launch can be cost saving. NASA's space shuttle system has reusable components that return to the Earth and impact the ocean. A primary example is the Space Shuttle Solid Rocket Booster (SRB) that descends on parachutes to the Earth after separation and impacts the ocean. Water impact generates significant structural loads that can damage the booster, so it is important to study this event in detail in the design of the recovery system. Some recent examples of damage due to water impact include the Ares I-X First Stage deformation as seen in Figure 1 and the loss of the SpaceX Falcon 9 First Stage.To ensure that a component can be recovered or that the design of the recovery system is adequate, an adequate set of structural loads is necessary for use in failure assessments. However, this task is difficult since there are many conditions that affect how a component impacts the water and the resulting structural loading that a component sees. These conditions include the angle of impact with respect to the water, the horizontal and vertical velocities, the rotation rate, the wave height and speed, and many others. There have been attempts to simulate water impact. One approach is to analyze water impact using explicit finite element techniques such as those employed by the LS-Dyna tool [1]. Though very detailed, this approach is time consuming and would not be suitable for running Monte Carlo or optimization analyses. The purpose of this paper is to describe a multi-body simulation tool that runs quickly and that captures the environments a component might see. The simulation incorporates the air and water interaction with the component, the component dynamics (i.e. modes and mode shapes), any applicable parachutes and lines, the interaction of winds and gusts, and the wave height and speed. It is capable of quickly conducting Monte Carlo studies to better capture the environments and genetic algorithm optimizations to reproduce a

  12. Predicting Operator Execution Times Using CogTool

    NASA Technical Reports Server (NTRS)

    Santiago-Espada, Yamira; Latorella, Kara A.

    2013-01-01

    Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.

  13. Virtual Beach: Decision Support Tools for Beach Pathogen Prediction

    EPA Science Inventory

    The Virtual Beach Managers Tool (VB) is decision-making software developed to help local beach managers make decisions as to when beaches should be closed due to predicted high levels of waterborne pathogens. The tool is being developed under the umbrella of EPA's Advanced Monit...

  14. Deep-Learning-Based Drug-Target Interaction Prediction.

    PubMed

    Wen, Ming; Zhang, Zhimin; Niu, Shaoyu; Sha, Haozhi; Yang, Ruihan; Yun, Yonghuan; Lu, Hongmei

    2017-04-07

    Identifying interactions between known drugs and targets is a major challenge in drug repositioning. In silico prediction of drug-target interactions (DTIs) can speed up the expensive and time-consuming experimental work by providing the most potent DTIs. In silico prediction of DTIs can also provide insights into potential drug-drug interactions and promote the exploration of drug side effects. Traditionally, the performance of DTI prediction depends heavily on the descriptors used to represent the drugs and the target proteins. In this paper, to accurately predict new DTIs between approved drugs and targets without separating the targets into different classes, we developed a deep-learning-based algorithmic framework named DeepDTIs. It first abstracts representations from raw input descriptors using unsupervised pretraining and then applies known interaction label pairs to build a classification model. DeepDTIs was found to reach or outperform other state-of-the-art methods. DeepDTIs can be further used to predict whether a new drug targets some existing targets or whether a new target interacts with some existing drugs.
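
    As a structural illustration of the classification step described above (not the authors' DeepDTIs, which also performs unsupervised pretraining), the sketch below trains a feedforward classifier on concatenated drug and protein descriptor vectors; random numbers stand in for real descriptors and labels.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Each row is a drug descriptor vector concatenated with a protein descriptor
        # vector; the label is 1 for a known interaction and 0 otherwise.
        rng = np.random.default_rng(0)
        n_pairs, drug_dim, prot_dim = 2000, 256, 400
        X = rng.normal(size=(n_pairs, drug_dim + prot_dim))
        y = rng.integers(0, 2, size=n_pairs)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(512, 128), max_iter=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))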

  15. Examination of CRISPR/Cas9 design tools and the effect of target site accessibility on Cas9 activity.

    PubMed

    Lee, Ciaran M; Davis, Timothy H; Bao, Gang

    2018-04-01

    What is the topic of this review? In this review, we analyse the performance of recently described tools for CRISPR/Cas9 guide RNA design, in particular, design tools that predict CRISPR/Cas9 activity. What advances does it highlight? Recently, many tools designed to predict CRISPR/Cas9 activity have been reported. However, the majority of these tools lack experimental validation. Our analyses indicate that these tools have poor predictive power. Our preliminary results suggest that target site accessibility should be considered in order to develop better guide RNA design tools with improved predictive power. The recent adaptation of the clustered regulatory interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) system for targeted genome engineering has led to its widespread application in many fields worldwide. In order to gain a better understanding of the design rules of CRISPR/Cas9 systems, several groups have carried out large library-based screens leading to some insight into sequence preferences among highly active target sites. To facilitate CRISPR/Cas9 design, these studies have spawned a plethora of guide RNA (gRNA) design tools with algorithms based solely on direct or indirect sequence features. Here, we demonstrate that the predictive power of these tools is poor, suggesting that sequence features alone cannot accurately inform the cutting efficiency of a particular CRISPR/Cas9 gRNA design. Furthermore, we demonstrate that DNA target site accessibility influences the activity of CRISPR/Cas9. With further optimization, we hypothesize that it will be possible to increase the predictive power of gRNA design tools by including both sequence and target site accessibility metrics. © 2017 The Authors. Experimental Physiology © 2017 The Physiological Society.
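
    Predictive power of the kind assessed in this review is typically measured by rank-correlating each tool's predicted guide scores with measured editing activity; the sketch below shows that check in generic form (not the authors' analysis), with random placeholder data and hypothetical tool names.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        measured = rng.uniform(0.0, 1.0, 120)                      # measured editing activity per gRNA
        predicted = {"tool_A": rng.uniform(0.0, 1.0, 120),         # hypothetical design-tool scores
                     "tool_B": measured + rng.normal(0.0, 0.3, 120)}

        for name, scores in predicted.items():
            rho, p = spearmanr(scores, measured)
            print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3g})")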

  16. RNA-SSPT: RNA Secondary Structure Prediction Tools.

    PubMed

    Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; Din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad

    2013-01-01

    The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR for determining RNA secondary structure are expensive and difficult, whereas computational RNA secondary structure prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction from a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. In the current study, only the energetically most favorable secondary structure is required, and a modification of the algorithm is also available that produces base pairs to lower the total free energy of the secondary structure. For visualization of the RNA secondary structure, NAVIEW in C is used, modified in C# to suit the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional Edition. The accuracy of RNA-SSPT is tested in terms of Sensitivity and Positive Predictive Value. The tool serves both secondary structure prediction and secondary structure visualization purposes.

  17. RNA-SSPT: RNA Secondary Structure Prediction Tools

    PubMed Central

    Ahmad, Freed; Mahboob, Shahid; Gulzar, Tahsin; din, Salah U; Hanif, Tanzeela; Ahmad, Hifza; Afzal, Muhammad

    2013-01-01

    The prediction of RNA structure is useful for understanding evolution in both in silico and in vitro studies. Physical methods such as NMR for determining RNA secondary structure are expensive and difficult, whereas computational RNA secondary structure prediction is easier. Comparative sequence analysis provides the best solution, but secondary structure prediction from a single RNA sequence remains challenging. RNA-SSPT is a tool that computationally predicts the secondary structure of a single RNA sequence. Most RNA secondary structure prediction tools do not allow pseudoknots in the structure or are unable to locate them. The Nussinov dynamic programming algorithm has been implemented in RNA-SSPT. In the current study, only the energetically most favorable secondary structure is required, and a modification of the algorithm is also available that produces base pairs to lower the total free energy of the secondary structure. For visualization of the RNA secondary structure, NAVIEW in C is used, modified in C# to suit the tool's requirements. RNA-SSPT is built in C# using .NET 2.0 in Microsoft Visual Studio 2005 Professional Edition. The accuracy of RNA-SSPT is tested in terms of Sensitivity and Positive Predictive Value. The tool serves both secondary structure prediction and secondary structure visualization purposes. PMID:24250115
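
    The Nussinov dynamic programming algorithm named in these two records is the textbook base-pair maximization recursion; the sketch below is that textbook version (not the RNA-SSPT code), using a minimum hairpin loop of three unpaired bases.

        # N[i][j] holds the maximum number of base pairs in subsequence i..j;
        # a traceback recovers one optimal structure in dot-bracket notation.
        PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        MIN_LOOP = 3   # minimum number of unpaired bases enclosed by a pair

        def nussinov(seq):
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(MIN_LOOP + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = N[i + 1][j]                           # base i left unpaired
                    if (seq[i], seq[j]) in PAIRS:
                        best = max(best, N[i + 1][j - 1] + 1)    # i pairs with j
                    for k in range(i + MIN_LOOP + 1, j):         # i pairs with some k < j
                        if (seq[i], seq[k]) in PAIRS:
                            best = max(best, N[i + 1][k - 1] + 1 + N[k + 1][j])
                    N[i][j] = best
            structure = ["."] * n
            def traceback(i, j):
                if j - i <= MIN_LOOP:
                    return
                if N[i][j] == N[i + 1][j]:
                    traceback(i + 1, j)
                    return
                for k in range(i + MIN_LOOP + 1, j + 1):
                    if (seq[i], seq[k]) in PAIRS and \
                       N[i][j] == N[i + 1][k - 1] + 1 + (N[k + 1][j] if k + 1 <= j else 0):
                        structure[i], structure[k] = "(", ")"
                        traceback(i + 1, k - 1)
                        if k + 1 <= j:
                            traceback(k + 1, j)
                        return
            traceback(0, n - 1)
            return N[0][n - 1], "".join(structure)

        print(nussinov("GGGAAAUCC"))   # -> (3, '(((...)))')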

  18. In silico study of breast cancer associated gene 3 using LION Target Engine and other tools.

    PubMed

    León, Darryl A; Cànaves, Jaume M

    2003-12-01

    Sequence analysis of individual targets is an important step in annotation and validation. As a test case, we investigated human breast cancer associated gene 3 (BCA3) with LION Target Engine and with other bioinformatics tools. LION Target Engine confirmed that the BCA3 gene is located on 11p15.4 and that the two most likely splice variants (lacking exon 3 and exons 3 and 5, respectively) exist. Based on our manual curation of sequence data, it is proposed that an additional variant (missing only exon 5) published in a public sequence repository, is a prediction artifact. A significant number of new orthologs were also identified, and these were the basis for a high-quality protein secondary structure prediction. Moreover, our research confirmed several distinct functional domains as described in earlier reports. Sequence conservation from multiple sequence alignments, splice variant identification, secondary structure predictions, and predicted phosphorylation sites suggest that the removal of interaction sites through alternative splicing might play a modulatory role in BCA3. This in silico approach shows the depth and relevance of an analysis that can be accomplished by including a variety of publicly available tools with an integrated and customizable life science informatics platform.

  19. DeepMirTar: a deep-learning approach for predicting human miRNA targets.

    PubMed

    Wen, Ming; Cong, Peisheng; Zhang, Zhimin; Lu, Hongmei; Li, Tonghua

    2018-06-01

    MicroRNAs (miRNAs) are small noncoding RNAs that function in RNA silencing and post-transcriptional regulation of gene expression by targeting messenger RNAs (mRNAs). Because the underlying mechanisms associated with miRNA binding to mRNA are not fully understood, a major challenge of miRNA studies involves the identification of miRNA-target sites on mRNA. In silico prediction of miRNA-target sites can expedite costly and time-consuming experimental work by providing the most promising miRNA-target-site candidates. In this study, we report the design and implementation of DeepMirTar, a deep-learning-based approach for accurately predicting human miRNA targets at the site level. The predicted miRNA-target sites are those having a canonical or non-canonical seed, and features at three levels (high-level expert-designed, low-level expert-designed, and raw-data-level) were used to represent each miRNA-target site. Comparison with other state-of-the-art machine-learning methods and existing miRNA-target-prediction tools indicated that DeepMirTar improved overall predictive performance. DeepMirTar is freely available at https://github.com/Bjoux2/DeepMirTar_SdA. Contact: lith@tongji.edu.cn, hongmeilu@csu.edu.cn. Supplementary data are available at Bioinformatics online.

  20. Prediction methodologies for target scene generation in the aerothermal targets analysis program (ATAP)

    NASA Astrophysics Data System (ADS)

    Hudson, Douglas J.; Torres, Manuel; Dougherty, Catherine; Rajendran, Natesan; Thompson, Rhoe A.

    2003-09-01

    The Air Force Research Laboratory (AFRL) Aerothermal Targets Analysis Program (ATAP) is a user-friendly, engineering-level computational tool that features integrated aerodynamics, six-degree-of-freedom (6-DoF) trajectory/motion, convective and radiative heat transfer, and thermal/material response to provide an optimal blend of accuracy and speed for design and analysis applications. ATAP is sponsored by the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) facility at Eglin AFB, where it is used with the CHAMP (Composite Hardbody and Missile Plume) technique for rapid infrared (IR) signature and imagery predictions. ATAP capabilities include an integrated 1-D conduction model for up to 5 in-depth material layers (with options for gaps/voids with radiative heat transfer), fin modeling, several surface ablation modeling options, a materials library with over 250 materials, options for user-defined materials, selectable/definable atmosphere and earth models, multiple trajectory options, and an array of aerodynamic prediction methods. All major code modeling features have been validated with ground-test data from wind tunnels, shock tubes, and ballistics ranges, and flight-test data for both U.S. and foreign strategic and theater systems. Numerous applications include the design and analysis of interceptors, booster and shroud configurations, window environments, tactical missiles, and reentry vehicles.

  1. Predicting cancer prognosis using interactive online tools: A systematic review and implications for cancer care providers

    PubMed Central

    Rabin, Borsika A.; Gaglio, Bridget; Sanders, Tristan; Nekhlyudov, Larissa; Dearing, James W.; Bull, Sheana; Glasgow, Russell E.; Marcus, Alfred

    2013-01-01

    Cancer prognosis is of keen interest for cancer patients, their caregivers and providers. Prognostic tools have been developed to guide patient-physician communication and decision-making. Given the proliferation of prognostic tools, it is timely to review existing online cancer prognostic tools and discuss implications for their use in clinical settings. Using a systematic approach, we searched the Internet, Medline, and consulted with experts to identify existing online prognostic tools. Each was reviewed for content and format. Twenty-two prognostic tools addressing 89 different cancers were identified. Tools primarily focused on prostate (n=11), colorectal (n=10), breast (n=8), and melanoma (n=6), though at least one tool was identified for most malignancies. The input variables for the tools included cancer characteristics (n=22), patient characteristics (n=18), and comorbidities (n=9). Effect of therapy on prognosis was included in 15 tools. The most common predicted outcome was cancer specific survival/mortality (n=17). Only a few tools (n=4) suggested patients as potential target users. A comprehensive repository of online prognostic tools was created to understand the state-of-the-art in prognostic tool availability and characteristics. Use of these tools may support communication and understanding about cancer prognosis. Dissemination, testing, refinement of existing, and development of new tools under different conditions are needed. PMID:23956026

  2. MOST: most-similar ligand based approach to target prediction.

    PubMed

    Huang, Tao; Mi, Hong; Lin, Cheng-Yuan; Zhao, Ling; Zhong, Linda L D; Liu, Feng-Bin; Zhang, Ge; Lu, Ai-Ping; Bian, Zhao-Xiang

    2017-03-11

    Many computational approaches have been used for target prediction, including machine learning, reverse docking, bioactivity spectra analysis, and chemical similarity searching. Recent studies have suggested that chemical similarity searching may be driven by the most-similar ligand. However, the extent of bioactivity of the most-similar ligands has been oversimplified or even neglected in these studies, and this has impaired the prediction power. Here we propose the MOst-Similar ligand-based Target inference approach, namely MOST, which uses fingerprint similarity and the explicit bioactivity of the most-similar ligands to predict targets of the query compound. The performance of MOST was evaluated using combinations of different fingerprint schemes, machine learning methods, and bioactivity representations. In sevenfold cross-validation with a benchmark Ki dataset from CHEMBL release 19 containing 61,937 bioactivity data points for 173 human targets, MOST achieved high average prediction accuracy (0.95 for pKi ≥ 5, and 0.87 for pKi ≥ 6). The Morgan fingerprint was shown to be slightly better than FP2. Logistic Regression and Random Forest performed better than Naïve Bayes. In a temporal validation, the Ki dataset from CHEMBL19 was used to train models and predict the bioactivity of newly deposited ligands in CHEMBL20. MOST again performed with high accuracy (0.90 for pKi ≥ 5, and 0.76 for pKi ≥ 6) when Logistic Regression and the Morgan fingerprint were employed. Furthermore, the p values associated with explicit bioactivity were found to be a robust index for removing false-positive predictions. Implicit bioactivity did not offer this capability. Finally, p values generated with Logistic Regression, the Morgan fingerprint and explicit activity were integrated with a false discovery rate (FDR) control procedure to reduce false positives in the multiple-target prediction scenario, and the success of this strategy was demonstrated with a case of fluanisone
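
    The core most-similar-ligand idea can be sketched with RDKit Morgan fingerprints and Tanimoto similarity, as below; this is not the published MOST code, and the SMILES strings, target names and pKi values are invented placeholders.

        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem

        # Reference ligands with a known target and measured pKi (placeholders).
        reference = [
            ("CCOc1ccc2nc(S(N)(=O)=O)sc2c1", "CA2",   7.1),
            ("CC(=O)Nc1ccc(O)cc1",           "PTGS1", 5.2),
            ("Clc1ccccc1N1CCNCC1",           "DRD2",  6.4),
        ]

        def morgan(smiles):
            return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

        def predict_target(query_smiles, min_pki=5.0):
            qfp = morgan(query_smiles)
            scored = [(DataStructs.TanimotoSimilarity(qfp, morgan(s)), tgt, pki)
                      for s, tgt, pki in reference]
            sim, target, pki = max(scored)      # the most-similar ligand drives the call
            return (target if pki >= min_pki else None), sim

        print(predict_target("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin-like query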

  3. Predicting Drug-Target Interactions With Multi-Information Fusion.

    PubMed

    Peng, Lihong; Liao, Bo; Zhu, Wen; Li, Zejun; Li, Keqin

    2017-03-01

    Identifying potential associations between drugs and targets is a critical prerequisite for modern drug discovery and repurposing. However, predicting these associations is difficult because of the limitations of existing computational methods. Most models only consider chemical structures and protein sequences, and other models are oversimplified. Moreover, datasets used for analysis contain only true-positive interactions, and experimentally validated negative samples are unavailable. To overcome these limitations, we developed a semi-supervised based learning framework called NormMulInf through collaborative filtering theory by using labeled and unlabeled interaction information. The proposed method initially determines similarity measures, such as similarities among samples and local correlations among the labels of the samples, by integrating biological information. The similarity information is then integrated into a robust principal component analysis model, which is solved using augmented Lagrange multipliers. Experimental results on four classes of drug-target interaction networks suggest that the proposed approach can accurately classify and predict drug-target interactions. Part of the predicted interactions are reported in public databases. The proposed method can also predict possible targets for new drugs and can be used to determine whether atropine may interact with alpha1B- and beta1- adrenergic receptors. Furthermore, the developed technique identifies potential drugs for new targets and can be used to assess whether olanzapine and propiomazine may target 5HT2B. Finally, the proposed method can potentially address limitations on studies of multitarget drugs and multidrug targets.

  4. Motor cortex guides selection of predictable movement targets

    PubMed Central

    Woodgate, Philip J.W.; Strauss, Soeren; Sami, Saber A.; Heinke, Dietmar

    2016-01-01

    The present paper asks whether the motor cortex contributes to prediction-based guidance of target selection. This question was inspired by recent evidence suggesting (i) that recurrent connections from the motor system into the attentional system may extract movement-relevant perceptual information and (ii) that the motor cortex can not only generate predictions of the sensory consequences of movements but may also operate as a predictor of perceptual events in general. To test this idea, we employed a choice reaching task requiring participants to rapidly reach and touch a predictable or unpredictable colour target. Motor cortex activity was modulated via transcranial direct current stimulation (tDCS). In Experiment 1, target colour repetitions were predictable. Under these conditions, anodal tDCS facilitated selection relative to sham and cathodal tDCS. This improvement was apparent for trajectory curvature but not movement initiation. Conversely, where no colour predictability was embedded, reach performance was unaffected by tDCS. Finally, the results of a key-press experiment suggested that motor cortex involvement is restricted to tasks where the predictable target colour is movement-relevant. The outcomes are interpreted as evidence that the motor system contributes to the top-down guidance of selective attention to movement targets. PMID:25835319

  5. Updating risk prediction tools: a case study in prostate cancer.

    PubMed

    Ankerst, Donna P; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J; Feng, Ziding; Sanda, Martin G; Partin, Alan W; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M

    2012-01-01

    Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that as cancer research moves forward and new biomarkers and risk factors are discovered, there is a need to update the risk algorithms to include them. Typically, the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in an external study to the original study used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [-2]proPSA measured on an external case-control study performed in Texas, U.S. Recent state-of-the-art methods in validation of risk prediction tools and evaluation of the improvement of updated over original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering.

    PubMed

    Shi, Jian-Yu; Yiu, Siu-Ming; Li, Yiming; Leung, Henry C M; Chin, Francis Y L

    2015-07-15

    Predicting drug-target interaction using computational approaches is an important step in drug discovery and repositioning. To predict whether there will be an interaction between a drug and a target, most existing methods identify similar drugs and targets in the database. The prediction is then made based on the known interactions of these drugs and targets. This idea is promising. However, there are two shortcomings that have not yet been addressed appropriately. Firstly, most of the methods only use 2D chemical structures and protein sequences to measure the similarity of drugs and targets respectively. However, this information may not fully capture the characteristics determining whether a drug will interact with a target. Secondly, there are very few known interactions, i.e. many interactions are "missing" in the database. Existing approaches are biased towards known interactions and have no good solutions to handle possibly missing interactions which affect the accuracy of the prediction. In this paper, we enhance the similarity measures to include non-structural (and non-sequence-based) information and introduce the concept of a "super-target" to handle the problem of possibly missing interactions. Based on evaluations on real data, we show that our similarity measure is better than the existing measures and our approach is able to achieve higher accuracy than the two best existing algorithms, WNN-GIP and KBMF2K. Our approach is available at http://web.hku.hk/∼liym1018/projects/drug/drug.html or http://www.bmlnwpu.org/us/tools/PredictingDTI_S2/METHODS.html. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Quantitative self-assembly prediction yields targeted nanomedicines

    NASA Astrophysics Data System (ADS)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  8. A systematic review on popularity, application and characteristics of protein secondary structure prediction tools.

    PubMed

    Kashani-Amin, Elaheh; Tabatabaei-Malazy, Ozra; Sakhteman, Amirhossein; Larijani, Bagher; Ebrahim-Habibi, Azadeh

    2018-02-27

    Prediction of protein secondary structure is one of the major steps in the generation of homology models. These models provide structural information that is used to design suitable ligands for potential medicinal targets. However, selecting a proper tool among multiple secondary structure prediction (SSP) options is challenging. The current study is an insight into currently favored methods and tools within various contexts. A systematic review was performed to provide comprehensive access to recent (2013-2016) studies that used or recommended protein SSP tools. Three databases, Web of Science, PubMed and Scopus, were systematically searched, and 99 out of 209 studies were finally found eligible for data extraction. The four categories of application for the 59 retrieved SSP tools were: (I) prediction of structural features of a given sequence, (II) evaluation of a method, (III) providing input for a new SSP method and (IV) integrating an SSP tool as a component of a program. PSIPRED was found to be the most popular tool in all four categories. JPred and tools utilizing the PHD (Profile network from HeiDelberg) method occupied the second and third places in popularity in categories I and II. JPred was found only in the first two categories, while PHD was present in three. This study provides comprehensive insight into the recent usage of SSP tools, which could be helpful for selecting an appropriate tool. Copyright © Bentham Science Publishers.

  9. Prediction of intracellular exposure bridges the gap between target- and cell-based drug discovery

    PubMed Central

    Gordon, Laurie J.; Wayne, Gareth J.; Almqvist, Helena; Axelsson, Hanna; Seashore-Ludlow, Brinton; Treyer, Andrea; Lundbäck, Thomas; West, Andy; Hann, Michael M.; Artursson, Per

    2017-01-01

    Inadequate target exposure is a major cause of high attrition in drug discovery. Here, we show that a label-free method for quantifying the intracellular bioavailability (Fic) of drug molecules predicts drug access to intracellular targets and hence, pharmacological effect. We determined Fic in multiple cellular assays and cell types representing different targets from a number of therapeutic areas, including cancer, inflammation, and dementia. Both cytosolic targets and targets localized in subcellular compartments were investigated. Fic gives insights on membrane-permeable compounds in terms of cellular potency and intracellular target engagement, compared with biochemical potency measurements alone. Knowledge of the amount of drug that is locally available to bind intracellular targets provides a powerful tool for compound selection in early drug discovery. PMID:28701380

  10. Cardiovascular RNA interference therapy: the broadening tool and target spectrum.

    PubMed

    Poller, Wolfgang; Tank, Juliane; Skurk, Carsten; Gast, Martina

    2013-08-16

    Understanding of the roles of noncoding RNAs (ncRNAs) within complex organisms has fundamentally changed. It is increasingly possible to use ncRNAs as diagnostic and therapeutic tools in medicine. Regarding disease pathogenesis, it has become evident that confinement to the analysis of protein-coding regions of the human genome is insufficient because ncRNA variants have been associated with important human diseases. Thus, inclusion of noncoding genomic elements in pathogenetic studies and their consideration as therapeutic targets is warranted. We consider aspects of the evolutionary and discovery history of ncRNAs, as far as they are relevant for the identification and selection of ncRNAs with likely therapeutic potential. Novel therapeutic strategies are based on ncRNAs, and we discuss here RNA interference as a highly versatile tool for gene silencing. RNA interference-mediating RNAs are small, but only parts of a far larger spectrum encompassing ncRNAs up to many kilobasepairs in size. We discuss therapeutic options in cardiovascular medicine offered by ncRNAs and key issues to be solved before clinical translation. Convergence of multiple technical advances is highlighted as a prerequisite for the translational progress achieved in recent years. Regarding safety, we review properties of RNA therapeutics, which may immunologically distinguish them from their endogenous counterparts, all of which underwent sophisticated evolutionary adaptation to specific biological contexts. Although our understanding of the noncoding human genome is only fragmentary to date, it is already feasible to develop RNA interference against a rapidly broadening spectrum of therapeutic targets and to translate this to the clinical setting under certain restrictions.

  11. Rosetta Structure Prediction as a Tool for Solving Difficult Molecular Replacement Problems.

    PubMed

    DiMaio, Frank

    2017-01-01

    Molecular replacement (MR), a method for solving the crystallographic phase problem using phases derived from a model of the target structure, has proven extremely valuable, accounting for the vast majority of structures solved by X-ray crystallography. However, when the resolution of data is low, or the starting model is very dissimilar to the target protein, solving structures via molecular replacement may be very challenging. In recent years, protein structure prediction methodology has emerged as a powerful tool in model building and model refinement for difficult molecular replacement problems. This chapter describes some of the tools available in Rosetta for model building and model refinement specifically geared toward difficult molecular replacement cases.

  12. Flight Experiment Verification of Shuttle Boundary Layer Transition Prediction Tool

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Berger, Karen T.; Horvath, Thomas J.; Wood, William A.

    2016-01-01

    Boundary layer transition at hypersonic conditions is critical to the design of future high-speed aircraft and spacecraft. Accurate methods to predict transition would directly impact the aerothermodynamic environments used to size a hypersonic vehicle's thermal protection system. A transition prediction tool, based on wind tunnel derived discrete roughness correlations, was developed and implemented for the Space Shuttle return-to-flight program. This tool was also used to design a boundary layer transition flight experiment in order to assess correlation uncertainties, particularly with regard to high Mach-number transition and tunnel-to-flight scaling. A review is provided of the results obtained from the flight experiment in order to evaluate the transition prediction tool implemented for the Shuttle program.

  13. New support vector machine-based method for microRNA target prediction.

    PubMed

    Li, L; Gao, Q; Mao, X; Cao, Y

    2014-06-09

    MicroRNAs (miRNAs) play important roles in cell differentiation, proliferation, growth, mobility, and apoptosis. An accurate list of precise target genes is necessary in order to fully understand the importance of miRNAs in animal development and disease. Several computational methods have been proposed for miRNA target-gene identification. However, these methods still have limitations with respect to their sensitivity and accuracy. Thus, we developed a new miRNA target-prediction method based on the support vector machine (SVM) model. The model uses information from two binding sites (primary and secondary) with a radial basis function kernel as the similarity measure over the SVM features. The information is categorized into structural, thermodynamic, and sequence-conservation features. Using high-confidence datasets selected from public miRNA target databases, we obtained a human miRNA target SVM classifier model with high performance and provided an efficient tool for human miRNA target gene identification. Experiments have shown that our method is a reliable tool for miRNA target-gene prediction and a successful application of an SVM classifier. Compared with other methods, the method proposed here improves the sensitivity and accuracy of miRNA target prediction. Its performance can be further improved by providing more training examples.
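
    A generic RBF-kernel SVM of the kind this record describes can be set up in a few lines with scikit-learn; the sketch below is not the paper's model, and the random matrix stands in for per-site structural, thermodynamic and conservation features.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        X = rng.normal(size=(500, 12))       # 500 candidate sites, 12 placeholder features
        y = rng.integers(0, 2, size=500)     # 1 = validated target site, 0 = negative site

        model = make_pipeline(StandardScaler(),
                              SVC(kernel="rbf", C=1.0, gamma="scale"))
        print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())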

  14. Drug-target interaction prediction from PSSM based evolutionary information.

    PubMed

    Mousavian, Zaynab; Khakabimamaghani, Sahand; Kavousi, Kaveh; Masoudi-Nejad, Ali

    2016-01-01

    The labor-intensive and expensive experimental process of determining drug-target interactions has motivated many researchers to focus on in silico prediction, which provides helpful information to support the experimental interaction data. They have therefore proposed several computational approaches for discovering new drug-target interactions. A growing number of learning-based methods have been developed, which can be categorized into two main groups: similarity-based and feature-based. In this paper, we first use bi-gram features extracted from the Position Specific Scoring Matrix (PSSM) of proteins to predict drug-target interactions. Our results demonstrate the high-confidence prediction ability of the Bigram-PSSM model in terms of several performance indicators, specifically for enzymes and ion channels. Moreover, we investigate the impact of the negative selection strategy on prediction performance, which is not widely taken into account in other relevant studies. This is important, as the number of non-interacting drug-target pairs is usually extremely large in comparison with the number of interacting ones in existing drug-target interaction data. An interesting observation is that different levels of performance reduction are attained for the four datasets when we change the sampling method from random sampling to balanced sampling. Copyright © 2015 Elsevier Inc. All rights reserved.
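
    One common way to form bi-gram features from a PSSM (which may differ in detail from the paper's exact definition) is to sum products of consecutive PSSM rows, giving a 20 x 20 = 400-dimensional vector per protein; the sketch below uses a random matrix as a stand-in PSSM.

        import numpy as np

        def bigram_pssm(pssm):
            # For an L x 20 PSSM P, B[a, b] = sum_k P[k, a] * P[k+1, b],
            # flattened into a 400-dimensional feature vector.
            pssm = np.asarray(pssm, dtype=float)          # shape (L, 20)
            return (pssm[:-1].T @ pssm[1:]).ravel()       # shape (400,)

        toy_pssm = np.random.default_rng(0).normal(size=(50, 20))   # 50-residue toy protein
        features = bigram_pssm(toy_pssm)
        print(features.shape)    # (400,)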

  15. Macromolecular target prediction by self-organizing feature maps.

    PubMed

    Schneider, Gisbert; Schneider, Petra

    2017-03-01

    Rational drug discovery would greatly benefit from a more nuanced appreciation of the activity of pharmacologically active compounds against a diverse panel of macromolecular targets. Already, computational target-prediction models assist medicinal chemists in library screening, de novo molecular design, optimization of active chemical agents, drug re-purposing, in the spotting of potential undesired off-target activities, and in the 'de-orphaning' of phenotypic screening hits. The self-organizing map (SOM) algorithm has been employed successfully for these and other purposes. Areas covered: The authors recapitulate contemporary artificial neural network methods for macromolecular target prediction, and present the basic SOM algorithm at a conceptual level. Specifically, they highlight consensus target-scoring by the employment of multiple SOMs, and discuss the opportunities and limitations of this technique. Expert opinion: Self-organizing feature maps represent a straightforward approach to ligand clustering and classification. Some of the appeal lies in their conceptual simplicity and broad applicability domain. Despite known algorithmic shortcomings, this computational target prediction concept has been proven to work in prospective settings with high success rates. It represents a prototypic technique for future advances in the in silico identification of the modes of action and macromolecular targets of bioactive molecules.
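
    To make the SOM concept in this record concrete, the sketch below trains a minimal from-scratch self-organizing map on placeholder ligand descriptor vectors and then assigns a query compound to its best-matching unit; consensus scoring over multiple maps, as discussed above, would repeat this with different initializations. It is illustrative only, not the authors' software.

        import numpy as np

        rng = np.random.default_rng(0)

        def train_som(data, rows=6, cols=6, iters=2000, lr0=0.5, sigma0=3.0):
            dim = data.shape[1]
            weights = rng.normal(size=(rows, cols, dim))
            grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
            for t in range(iters):
                x = data[rng.integers(len(data))]
                # best-matching unit for the sampled ligand
                bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
                lr = lr0 * np.exp(-t / iters)
                sigma = sigma0 * np.exp(-t / iters)
                # Gaussian neighbourhood around the BMU on the 2-D grid
                dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
                h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
                weights += lr * h * (x - weights)
            return weights

        def best_matching_unit(weights, x):
            return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2])

        descriptors = rng.normal(size=(200, 32))   # 200 "ligands", 32 placeholder descriptors
        som = train_som(descriptors)
        print(best_matching_unit(som, descriptors[0]))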

  16. Prediction Of Abrasive And Diffusive Tool Wear Mechanisms In Machining

    NASA Astrophysics Data System (ADS)

    Rizzuti, S.; Umbrello, D.

    2011-01-01

    Tool wear prediction is regarded as a very important task for maximizing tool performance, minimizing cutting costs and improving workpiece quality in cutting. In this research work, an experimental campaign was carried out under varying cutting conditions with the aim of measuring both crater and flank tool wear during machining of AISI 1045 with an uncoated P40 carbide tool. In parallel, a FEM-based analysis was developed to study the tool wear mechanisms, also taking into account the influence of the cutting conditions and the temperature reached on the tool surfaces. The results show that, when the temperature of the tool rake surface is lower than the activation temperature of the diffusive phenomenon, the wear rate can be estimated by applying an abrasive model. In contrast, in the tool area where the temperature is higher than the diffusive activation temperature, the wear rate can be evaluated by applying a diffusive model. Finally, for temperatures in the range between these two regimes, a combined abrasive-diffusive wear model made it possible to correctly evaluate the tool wear phenomena.
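
    The temperature-dependent logic summarized above (abrasion below the activation temperature, diffusion above it, a blend in between) can be sketched as below; the coefficients, temperatures, Usui-style diffusive rate form and linear blending rule are hypothetical placeholders, not calibrated values from the paper.

        import math

        T_LOW, T_HIGH = 1000.0, 1150.0   # assumed transition band around the activation temperature (K)
        K_ABR = 1.2e-9                   # abrasive coefficient (hypothetical)
        A_DIFF, B_DIFF = 7.8e-2, 5.3e3   # diffusive (Arrhenius-type) constants (hypothetical)

        def abrasive(stress, velocity):
            return K_ABR * stress * velocity

        def diffusive(stress, velocity, T):
            return A_DIFF * stress * velocity * math.exp(-B_DIFF / T)

        def wear_rate(stress, velocity, T):
            """Wear rate (arbitrary units) at a point on the tool surface at temperature T (K)."""
            if T <= T_LOW:
                return abrasive(stress, velocity)
            if T >= T_HIGH:
                return diffusive(stress, velocity, T)
            w = (T - T_LOW) / (T_HIGH - T_LOW)   # linear blend across the transition band
            return (1.0 - w) * abrasive(stress, velocity) + w * diffusive(stress, velocity, T)

        for T in (900.0, 1075.0, 1250.0):
            print(T, wear_rate(8.0e8, 2.5, T))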

  17. FeNi nanotubes: perspective tool for targeted delivery

    NASA Astrophysics Data System (ADS)

    Kaniukov, Egor; Shumskaya, Alena; Yakimchuk, Dzmitry; Kozlovskiy, Artem; Korolkov, Ilya; Ibragimova, Milana; Zdorovets, Maxim; Kadyrzhanov, Kairat; Rusakov, Vyacheslav; Fadeev, Maxim; Lobko, Eugenia; Saunina, Kristina; Nikolaevich, Larisa

    2018-05-01

    Targeted delivery of drugs and proteins by a magnetic field is a promising method for treating cancer that reduces undesired systemic drug toxicity. In this method, the therapeutic agent is attached via functional-group linkers to a magnetic nanostructure and injected into the blood to be transported to the problem area. To provide a local drug treatment effect, the nanostructures are concentrated and fixed in the selected area by an external magnetic field (magnet). After the exposure, the carriers are removed from the circulatory system by the magnetic field. In this study, Fe20Ni80 nanotubes are considered as carriers for targeted delivery of drugs and proteins. A simple synthesis method is proposed to form these structures by electrodeposition in PET template pores, and their structural and magnetic properties are studied in detail. The nanotubes have polycrystalline walls that provide the mechanical strength of the carriers, and a magnetic anisotropy that allows the nanostructure movement to be controlled by an external magnetic field. Moreover, the potential advantages of magnetic nanotubes are discussed in comparison with other carrier types. The most significant of these are predictable behavior in a magnetic field due to the absence of a magnetic core, a low specific density that allows the tubes to float in biological media, and a large specific surface area that allows a larger number of payloads to be attached for targeted delivery. A method of coating the nanotube surfaces with PMMA is proposed to exclude possible negative effects of the carrier material and to form functional bonds for payload attachment. Cytotoxicity studies of coated and uncoated nanotubes are carried out to understand their influence on biological media.

  18. Predicting selective drug targets in cancer through metabolic networks

    PubMed Central

    Folger, Ori; Jerby, Livnat; Frezza, Christian; Gottlieb, Eyal; Ruppin, Eytan; Shlomi, Tomer

    2011-01-01

    The interest in studying metabolic alterations in cancer and their potential role as novel targets for therapy has been rejuvenated in recent years. Here, we report the development of the first genome-scale network model of cancer metabolism, validated by correctly identifying genes essential for cellular proliferation in cancer cell lines. The model predicts 52 cytostatic drug targets, of which 40% are targeted by known, approved or experimental anticancer drugs, and the rest are new. It further predicts combinations of synthetic lethal drug targets, whose synergy is validated using available drug efficacy and gene expression measurements across the NCI-60 cancer cell line collection. Finally, potential selective treatments for specific cancers that depend on cancer type-specific downregulation of gene expression and somatic mutations are compiled. PMID:21694718
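
    The essentiality screening described in this record can be approximated with the cobrapy toolkit by knocking out each gene in a genome-scale model and checking the predicted growth; the sketch below is not the authors' pipeline, "cancer_model.xml" is a hypothetical SBML file path, and the 10% growth threshold is arbitrary.

        import math
        import cobra

        model = cobra.io.read_sbml_model("cancer_model.xml")   # hypothetical genome-scale model
        wild_type_growth = model.slim_optimize()

        candidate_targets = []
        for gene in model.genes:
            with model:                      # changes inside the block are reverted on exit
                gene.knock_out()
                growth = model.slim_optimize()
            if math.isnan(growth) or growth < 0.1 * wild_type_growth:
                candidate_targets.append(gene.id)

        print(len(candidate_targets), "candidate cytostatic targets")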

  19. Benchmark data sets for structure-based computational target prediction.

    PubMed

    Schomburg, Karen T; Rarey, Matthias

    2014-08-25

    Structure-based computational target prediction methods identify potential targets for a bioactive compound. Methods based on protein-ligand docking still face many challenges, of which the greatest is probably the ranking of true targets in a large data set of protein structures. Currently, no standard data sets for evaluation exist, rendering comparison and demonstration of improvements of methods cumbersome. Therefore, we propose two data sets and evaluation strategies for a meaningful evaluation of new target prediction methods, i.e., a small data set consisting of three target classes for detailed proof-of-concept and selectivity studies and a large data set consisting of 7992 protein structures and 72 drug-like ligands allowing statistical evaluation with performance metrics on a drug-like chemical space. Both data sets are built from openly available resources, and any information needed to perform the described experiments is reported. We describe the composition of the data sets, the setup of screening experiments, and the evaluation strategy. Performance metrics capable of measuring the early recognition of enrichment, such as AUC, BEDROC, and NSLR, are proposed. We apply a sequence-based target prediction method to the large data set to analyze its content of nontrivial evaluation cases. The proposed data sets are used for method evaluation of our new inverse screening method iRAISE. The small data set reveals the method's capability and limitations to selectively distinguish between rather similar protein structures. The large data set simulates real target identification scenarios. iRAISE achieves excellent or good enrichment in 55% of cases, a median AUC of 0.67, RMSDs below 2.0 Å for 74% of cases, and was able to rank the first true target within the top 2% of the protein data set of about 8000 structures in 59 out of 72 cases.
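
    As an illustrative sketch of the early-recognition flavor of evaluation described above (not the iRAISE code; the ranked scores and labels below are synthetic, and BEDROC/NSLR are omitted for brevity), the following Python snippet computes ROC AUC and a top-2% enrichment factor for a ranked list of protein structures:

      # Sketch of early-recognition evaluation for ranked target predictions.
      # Hypothetical inputs: higher score = more likely to be a true target.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      def enrichment_factor(labels, scores, fraction=0.02):
          """Hit rate in the top `fraction` of the ranking over the overall hit rate."""
          labels = np.asarray(labels)
          order = np.argsort(scores)[::-1]              # best-scored structures first
          n_top = max(1, int(round(fraction * len(labels))))
          return labels[order[:n_top]].mean() / labels.mean()

      rng = np.random.default_rng(0)
      labels = (rng.random(8000) < 0.01).astype(int)    # ~1% true targets (toy data)
      scores = rng.random(8000) + 0.3 * labels          # toy scores favoring true targets

      print("AUC:   ", roc_auc_score(labels, scores))
      print("EF@2%: ", enrichment_factor(labels, scores, 0.02))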

  20. Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool

    EPA Science Inventory

    The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...

  1. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models.

    PubMed

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health. Analyzing the DTI profile of a drug helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. Therefore, it is of high importance to predict DTI profiles of drugs reliably and rapidly on a genome-wide scale. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models, and ensemble learning over these fingerprints was also provided to improve the prediction ability. When the user submits a molecule, the server predicts the activity of the molecule across 623 human proteins using the established high-quality SAR models, thus generating a DTI profile that can be used as a feature vector of chemicals for a wide range of applications. The 623 SAR models, one per human protein, were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiles to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug modes of action, which sufficiently demonstrated the wide application value of the DTI profiles. The TargetNet webserver is designed based on the Django framework in Python and is freely accessible at http://targetnet.scbdd.com.
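
    As a minimal sketch in the spirit of the multi-target SAR approach described above (assuming RDKit and scikit-learn; the SMILES strings, activity labels and single illustrative target are hypothetical, not TargetNet's actual models), one per-target Naïve Bayes model over molecular fingerprints could look like this:

      # Sketch: one per-target SAR model (Naive Bayes on Morgan fingerprints);
      # a full profiling service would repeat this for each protein target.
      import numpy as np
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem
      from sklearn.naive_bayes import BernoulliNB

      def fingerprint(smiles, n_bits=2048):
          mol = Chem.MolFromSmiles(smiles)
          fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
          arr = np.zeros((n_bits,), dtype=np.int8)
          DataStructs.ConvertToNumpyArray(fp, arr)      # bit vector -> numpy array
          return arr

      # Hypothetical training data for a single target: (SMILES, active?)
      train = [("CCO", 0), ("c1ccccc1O", 1), ("CC(=O)Oc1ccccc1C(=O)O", 1), ("CCCC", 0)]
      X = np.array([fingerprint(s) for s, _ in train])
      y = np.array([label for _, label in train])

      model = BernoulliNB().fit(X, y)                   # one SAR model for one target
      query = fingerprint("c1ccccc1N")                  # new molecule to profile
      print(model.predict_proba([query])[0, 1])         # predicted activity probability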

  2. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models

    NASA Astrophysics Data System (ADS)

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health. Analyzing the DTI profile of a drug helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. Therefore, it is of high importance to predict DTI profiles of drugs reliably and rapidly on a genome-wide scale. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models, and ensemble learning over these fingerprints was also provided to improve the prediction ability. When the user submits a molecule, the server predicts the activity of the molecule across 623 human proteins using the established high-quality SAR models, thus generating a DTI profile that can be used as a feature vector of chemicals for a wide range of applications. The 623 SAR models, one per human protein, were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiles to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug modes of action, which sufficiently demonstrated the wide application value of the DTI profiles. The TargetNet webserver is designed based on the Django framework in Python and is freely accessible at http://targetnet.scbdd.com.

  3. Drug-target interaction prediction: A Bayesian ranking approach.

    PubMed

    Peska, Ladislav; Buza, Krisztian; Koller, Júlia

    2017-12-01

    In silico prediction of drug-target interactions (DTI) could provide valuable information and speed up the process of drug repositioning - finding novel usage for existing drugs. In our work, we focus on machine learning algorithms supporting the drug-centric repositioning approach, which aims to find novel usage for existing or abandoned drugs. We aim at proposing a per-drug ranking-based method, which reflects the needs of drug-centric repositioning research better than conventional drug-target prediction approaches. We propose Bayesian Ranking Prediction of Drug-Target Interactions (BRDTI). The method is based on Bayesian Personalized Ranking matrix factorization (BPR), which has been shown to be an excellent approach for various preference learning tasks but has not previously been used for DTI prediction. In order to successfully deal with DTI challenges, we extended BPR by proposing: (i) the incorporation of target bias, (ii) a technique to handle new drugs and (iii) content alignment to take structural similarities of drugs and targets into account. Evaluation on five benchmark datasets shows that BRDTI outperforms several state-of-the-art approaches in terms of per-drug nDCG and AUC. BRDTI results w.r.t. nDCG are 0.929, 0.953, 0.948, 0.897 and 0.690 for G-Protein Coupled Receptors (GPCR), Ion Channels (IC), Nuclear Receptors (NR), Enzymes (E) and Kinase (K) datasets respectively. Additionally, BRDTI significantly outperformed other methods (BLM-NII, WNN-GIP, NetLapRLS and CMF) w.r.t. nDCG in 17 out of 20 cases. Furthermore, BRDTI was also shown to be able to predict novel drug-target interactions not contained in the original datasets. The average recall at top-10 predicted targets for each drug was 0.762, 0.560, 1.000 and 0.404 for GPCR, IC, NR, and E datasets respectively. Based on the evaluation, we can conclude that BRDTI is an appropriate choice for researchers looking for an in silico DTI prediction technique to be used in drug
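
    As a rough, self-contained sketch of the core Bayesian Personalized Ranking idea underlying methods like the one above (plain BPR on a toy interaction matrix; the target-bias, new-drug and content-alignment extensions from the paper are not implemented), consider:

      # Minimal BPR-style matrix factorization for drug-target interactions.
      # For each drug d, a known target t should score higher than a sampled
      # target j with no recorded interaction.
      import numpy as np

      rng = np.random.default_rng(0)
      n_drugs, n_targets, k = 50, 40, 8
      R = (rng.random((n_drugs, n_targets)) < 0.1).astype(int)   # toy interactions

      D = 0.1 * rng.standard_normal((n_drugs, k))     # drug latent factors
      T = 0.1 * rng.standard_normal((n_targets, k))   # target latent factors
      lr, reg = 0.05, 0.01

      for _ in range(20000):
          d = int(rng.integers(n_drugs))
          known = np.flatnonzero(R[d])
          if known.size == 0:
              continue
          t = int(rng.choice(known))                  # observed interaction
          j = int(rng.integers(n_targets))            # assumed non-interaction
          if R[d, j]:
              continue
          dd, tt, tj = D[d].copy(), T[t].copy(), T[j].copy()
          g = 1.0 / (1.0 + np.exp(dd @ (tt - tj)))    # gradient of -log sigmoid
          D[d] += lr * (g * (tt - tj) - reg * dd)
          T[t] += lr * (g * dd - reg * tt)
          T[j] += lr * (-g * dd - reg * tj)

      scores = D @ T.T                                # per-drug ranking over targets
      print(np.argsort(-scores[0])[:5])               # top-5 predicted targets, drug 0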

  4. HomoTarget: a new algorithm for prediction of microRNA targets in Homo sapiens.

    PubMed

    Ahmadi, Hamed; Ahmadi, Ali; Azimzadeh-Jamalkandi, Sadegh; Shoorehdeli, Mahdi Aliyari; Salehzadeh-Yazdi, Ali; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-02-01

    MiRNAs play an essential role in the networks of gene regulation by inhibiting the translation of target mRNAs. Several computational approaches have been proposed for the prediction of miRNA target genes. Reports reveal a large fraction of under-predicted or falsely predicted target genes. Thus, there is an imperative need to develop a computational method by which the target mRNAs of existing miRNAs can be correctly identified. In this study, a combined pattern recognition neural network (PRNN) and principal component analysis (PCA) architecture has been proposed in order to model the complicated relationship between miRNAs and their target mRNAs in humans. The results of several types of intelligent classifiers and our proposed model were compared, showing that our algorithm outperformed them with higher sensitivity and specificity. Using the recent release of the miRBase database to find potential targets of miRNAs, this model incorporated twelve structural, thermodynamic and positional features of miRNA:mRNA binding sites to select target candidates. Copyright © 2012 Elsevier Inc. All rights reserved.
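
    A hedged, minimal sketch of the kind of PCA-plus-neural-network pipeline described above (scikit-learn stand-ins; the twelve descriptors and labels here are simulated, and this is not the HomoTarget implementation):

      # Sketch: PCA feeding a small neural-network classifier for candidate
      # miRNA:mRNA sites described by structural/thermodynamic/positional features.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      X = rng.standard_normal((500, 12))              # 12 descriptors per candidate site
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # toy target / non-target labels

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=6),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                        random_state=0))
      print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())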

  5. Quantitative and Systems Pharmacology. 1. In Silico Prediction of Drug-Target Interactions of Natural Products Enables New Targeted Cancer Therapy.

    PubMed

    Fang, Jiansong; Wu, Zengrui; Cai, Chuipu; Wang, Qi; Tang, Yun; Cheng, Feixiong

    2017-11-27

    Natural products with diverse chemical scaffolds have been recognized as an invaluable source of compounds in drug discovery and development. However, systematic identification of drug targets for natural products at the human proteome level via various experimental assays is highly expensive and time-consuming. In this study, we proposed a systems pharmacology infrastructure to predict new drug targets and anticancer indications of natural products. Specifically, we reconstructed a global drug-target network with 7,314 interactions connecting 751 targets and 2,388 natural products and built predictive network models via a balanced substructure-drug-target network-based inference approach. A high area under the receiver operating characteristic curve of 0.96 was obtained for predicting new targets of natural products during cross-validation. The newly predicted targets of natural products (e.g., resveratrol, genistein, and kaempferol) with high scores were validated by various literature studies. We further built statistical network models for the identification of new anticancer indications of natural products through integration of both experimentally validated and computationally predicted drug-target interactions of natural products with known cancer proteins. We showed that the significantly predicted anticancer indications of multiple natural products (e.g., naringenin, disulfiram, and metformin) with new mechanisms of action were validated by various published experimental evidence. In summary, this study offers powerful computational systems pharmacology approaches and tools for the development of novel targeted cancer therapies by exploiting the polypharmacology of natural products.

  6. Imbalanced target prediction with pattern discovery on clinical data repositories.

    PubMed

    Chan, Tak-Ming; Li, Yuxi; Chiau, Choo-Chiap; Zhu, Jane; Jiang, Jie; Huo, Yong

    2017-04-20

    Clinical data repositories (CDR) have great potential to improve outcome prediction and risk modeling. However, most clinical studies require careful study design, dedicated data collection efforts, and sophisticated modeling techniques before a hypothesis can be tested. We aim to bridge this gap, so that clinical domain users can perform first-hand prediction on existing repository data without complicated handling and obtain insightful patterns of imbalanced targets before a formal study is conducted. We specifically target interpretability for domain users, so that the model can be conveniently explained and applied in clinical practice. We propose an interpretable pattern model that is tolerant to noise (missing values) in practice data. To address the challenge of imbalanced targets of interest in clinical research, e.g., deaths occurring in less than a few percent of cases, the geometric mean of sensitivity and specificity (G-mean) is employed as the optimization criterion, for which a simple but effective heuristic algorithm is developed. We compared pattern discovery to clinically interpretable methods on two retrospective clinical datasets, containing 14.9% deaths within 1 year in the thoracic dataset and 9.1% deaths in the cardiac dataset. In spite of the imbalance challenge evident with other methods, pattern discovery consistently shows competitive cross-validated prediction performance. Compared to logistic regression, Naïve Bayes, and decision trees, pattern discovery achieves statistically significantly (p-values < 0.01, Wilcoxon signed rank test) better averaged testing G-means and F1-scores (harmonic mean of precision and sensitivity). Without requiring sophisticated technical processing of data and tweaking, the prediction performance of pattern discovery is consistently comparable to the best achievable performance. Pattern discovery has demonstrated to be robust and valuable for target prediction on existing clinical data repositories with imbalance and
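
    The G-mean criterion mentioned above is simple to compute; a small hedged sketch (simulated predictions on an imbalanced outcome, roughly 10% positives as in the cardiac dataset) is:

      # Sketch: G-mean (geometric mean of sensitivity and specificity), the
      # optimization criterion used for the imbalanced target above.
      import numpy as np
      from sklearn.metrics import confusion_matrix

      def g_mean(y_true, y_pred):
          tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
          sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
          specificity = tn / (tn + fp) if (tn + fp) else 0.0
          return np.sqrt(sensitivity * specificity)

      y_true = np.array([0] * 90 + [1] * 10)          # ~10% positive class (e.g. deaths)
      y_pred = np.array([0] * 85 + [1] * 5 + [0] * 4 + [1] * 6)
      print(g_mean(y_true, y_pred))                   # ~0.75 for this toy prediction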

  7. Tool for Automated Retrieval of Generic Event Tracks (TARGET)

    NASA Technical Reports Server (NTRS)

    Clune, Thomas; Freeman, Shawn; Cruz, Carlos; Burns, Robert; Kuo, Kwo-Sen; Kouatchou, Jules

    2013-01-01

    Methods have been developed to identify and track tornado-producing mesoscale convective systems (MCSs) automatically over the continental United States, in order to facilitate systematic studies of these powerful and often destructive events. Several data sources were combined to ensure event identification accuracy. Records of watches and warnings issued by National Weather Service (NWS), and tornado locations and tracks from the Tornado History Project (THP) were used to locate MCSs in high-resolution precipitation observations and GOES infrared (11-micron) Rapid Scan Operation (RSO) imagery. Thresholds are then applied to the latter two data sets to define MCS events and track their developments. MCSs produce a broad range of severe convective weather events that are significantly affecting the living conditions of the populations exposed to them. Understanding how MCSs grow and develop could help scientists improve their weather prediction models, and also provide tools to decision-makers whose goals are to protect populations and their property. Associating storm cells across frames of remotely sensed images poses a difficult problem because storms evolve, split, and merge. Any storm-tracking method should include the following processes: storm identification, storm tracking, and quantification of storm intensity and activity. The spatiotemporal coordinates of the tracks will enable researchers to obtain other coincident observations to conduct more thorough studies of these events. In addition to their tracked locations, their areal extents, precipitation intensities, and accumulations all as functions of their evolutions in time were also obtained and recorded for these events. All parameters so derived can be catalogued into a moving object database (MODB) for custom queries. The purpose of this software is to provide a generalized, cross-platform, pluggable tool for identifying events within a set of scientific data based upon specified criteria with the

  8. Predicting targets of compounds against neurological diseases using cheminformatic methodology

    NASA Astrophysics Data System (ADS)

    Nikolic, Katarina; Mavridis, Lazaros; Bautista-Aguilera, Oscar M.; Marco-Contelles, José; Stark, Holger; do Carmo Carreiras, Maria; Rossi, Ilaria; Massarelli, Paola; Agbaba, Danica; Ramsay, Rona R.; Mitchell, John B. O.

    2015-02-01

    Recently developed multi-targeted ligands are novel drug candidates able to interact with monoamine oxidase A and B; acetylcholinesterase and butyrylcholinesterase; or with histamine N-methyltransferase and the histamine H3-receptor (H3R). These proteins are drug targets in the treatment of depression, Alzheimer's disease, obsessive disorders, and Parkinson's disease. A probabilistic method, the Parzen-Rosenblatt window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Molecular structures were represented based on the circular fingerprint methodology. The same approach was used to build a "predictor" model from the DrugBank dataset to determine the main pharmacological groups of the compound. The study of off-target interactions is now recognised as crucial to the understanding of both drug action and toxicology. Primary pharmaceutical targets and off-targets for the novel multi-target ligands were examined by use of the developed cheminformatic method. Several multi-target ligands were selected for further study, as compounds with possible additional beneficial pharmacological activities. The cheminformatic target identifications were in agreement with four 3D-QSAR (H3R/D1R/D2R/5-HT2aR) models and with in vitro assays for serotonin 5-HT1a and 5-HT2a receptor binding of the most promising ligand (71/MBA-VEG8).
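
    A minimal sketch of a Parzen-Rosenblatt window ("kernel density") target predictor in the spirit of the approach above (scikit-learn's KernelDensity on toy continuous descriptors rather than actual circular fingerprints or ChEMBL data; the target names are just labels):

      # Sketch: one density estimate per protein target; a query compound is
      # assigned the target whose estimated density at the query point is highest.
      import numpy as np
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(2)
      targets = {                                     # hypothetical descriptor sets
          "MAO-B": rng.standard_normal((30, 16)) + 1.0,
          "AChE":  rng.standard_normal((25, 16)) - 1.0,
          "H3R":   rng.standard_normal((20, 16)),
      }
      models = {name: KernelDensity(kernel="gaussian", bandwidth=0.8).fit(X)
                for name, X in targets.items()}

      query = rng.standard_normal((1, 16)) + 0.9      # descriptors of a new ligand
      log_density = {name: m.score_samples(query)[0] for name, m in models.items()}
      print(max(log_density, key=log_density.get))    # predicted primary target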

  9. Prediction Markets: Another Tool in the Intelligence Kitbag

    DTIC Science & Technology

    2007-02-20

    Excerpted fragments from this USAWC Strategy Research Project, "Prediction Markets: Another Tool in the Intelligence Kitbag" (by Colonel...), reference the eminent British anthropologist and statistician Francis Galton, who was passing by an English county fair when he noticed an advertisement for a... Cited sources include Francis Galton, Memories of My Life, and http://libnt4.lib.tcu.edu/staff/bellinger/essays/untruth.htm (accessed 14 December 2006).

  10. ReactPRED: a tool to predict and analyze biochemical reactions.

    PubMed

    Sivakumar, Tadi Venkata; Giri, Varun; Park, Jin Hwan; Kim, Tae Yong; Bhaduri, Anirban

    2016-11-15

    Biochemical pathways engineering is often used to synthesize or degrade target chemicals. In silico screening of the biochemical transformation space allows predicting feasible reactions constituting these pathways. Current enabling tools are customized to predict reactions based on pre-defined biochemical transformations or reaction rule sets. Reaction rule sets are usually curated manually and tailored to specific applications, and they are not exhaustive. In addition, current systems are incapable of regulating and refining data with an aim to tune specificity and sensitivity. A robust and flexible tool that allows automated reaction rule set creation, along with regulated pathway prediction and analysis, is therefore needed; ReactPRED aims to address this need. ReactPRED is an open-source, flexible and customizable tool enabling users to predict biochemical reactions and pathways. The tool allows automated reaction rule creation from a user-defined reaction set. Additionally, reaction rule degree and rule tolerance features allow refinement of predicted data. It is available as a flexible graphical user interface and a console application. ReactPRED is available at: https://sourceforge.net/projects/reactpred/. Contact: anirban.b@samsung.com or ty76.kim@samsung.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
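
    The basic operation behind rule-based reaction prediction (applying a reaction rule to a substrate to enumerate products) can be sketched with RDKit; the SMARTS rule and substrate below are hypothetical illustrations, not ReactPRED's rule format:

      # Sketch: apply a reaction rule (SMARTS/SMIRKS) to a substrate and
      # enumerate the predicted product structures.
      from rdkit import Chem
      from rdkit.Chem import AllChem

      # Hypothetical rule: oxidize a primary alcohol to an aldehyde.
      rule = AllChem.ReactionFromSmarts("[CH2:1][OH:2]>>[CH1:1]=[O:2]")
      substrate = Chem.MolFromSmiles("CCO")           # ethanol

      for (product,) in rule.RunReactants((substrate,)):
          Chem.SanitizeMol(product)
          print(Chem.MolToSmiles(product))            # expected: CC=O (acetaldehyde)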

  11. Hydrocode predictions of collisional outcomes: Effects of target size

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    Traditionally, laboratory impact experiments designed to simulate asteroid collisions have attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to using targets considerably smaller than the modelled objects. It is therefore necessary to develop some methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained with a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and the fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.

  12. Fuzzy regression modeling for tool performance prediction and degradation detection.

    PubMed

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study - namely to predict the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior as compared with conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.

  13. Popularity Prediction Tool for ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.

  14. Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.

    2008-01-01

    This effort provides an initial glimpse at NASA capabilities available in predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predict radiated sound pressure levels for full scale configurations at relevant frequencies. And, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next generation aircraft system noise prediction tools.

  15. Brainstorming: weighted voting prediction of inhibitors for protein targets.

    PubMed

    Plewczynski, Dariusz

    2011-09-01

    The "Brainstorming" approach presented in this paper is a weighted voting method that can improve the quality of predictions generated by several machine learning (ML) methods. First, an ensemble of heterogeneous ML algorithms is trained on available experimental data, then all solutions are gathered and a consensus is built between them. The final prediction is performed using a voting procedure, whereby the vote of each method is weighted according to a quality coefficient calculated using multivariable linear regression (MLR). The MLR optimization procedure is very fast, therefore no additional computational cost is introduced by using this jury approach. Here, brainstorming is applied to selecting actives from large collections of compounds relating to five diverse biological targets of medicinal interest, namely HIV-reverse transcriptase, cyclooxygenase-2, dihydrofolate reductase, estrogen receptor, and thrombin. The MDL Drug Data Report (MDDR) database was used for selecting known inhibitors for these protein targets, and experimental data was then used to train a set of machine learning methods. The benchmark dataset (available at http://bio.icm.edu.pl/∼darman/chemoinfo/benchmark.tar.gz ) can be used for further testing of various clustering and machine learning methods when predicting the biological activity of compounds. Depending on the protein target, the overall recall value is raised by at least 20% in comparison to any single machine learning method (including ensemble methods like random forest) and unweighted simple majority voting procedures.
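
    A compact, hedged sketch of this kind of weighted voting (a few heterogeneous scikit-learn classifiers whose votes are weighted by a multivariable linear regression; toy data, and for brevity the weights are fit and evaluated on the same split, which the original jury approach would not do):

      # Sketch: "brainstorming"-style weighted voting over heterogeneous learners.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LinearRegression, LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=20, weights=[0.8],
                                 random_state=0)
      X_tr, X_w, y_tr, y_w = train_test_split(X, y, test_size=0.5, random_state=0)

      members = [LogisticRegression(max_iter=1000),
                 RandomForestClassifier(n_estimators=100, random_state=0),
                 SVC(probability=True, random_state=0)]
      for m in members:
          m.fit(X_tr, y_tr)                           # train each ensemble member

      # Each member votes (probability of "active"); MLR supplies the vote weights.
      votes = np.column_stack([m.predict_proba(X_w)[:, 1] for m in members])
      consensus = LinearRegression().fit(votes, y_w).predict(votes)
      print("consensus accuracy:", ((consensus > 0.5) == y_w).mean())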

  16. TargetMiner: microRNA target prediction with systematic identification of tissue-specific negative examples.

    PubMed

    Bandyopadhyay, Sanghamitra; Mitra, Ramkrishna

    2009-10-15

    Prediction of microRNA (miRNA) target mRNAs using machine learning approaches is an important area of research. However, most of the methods suffer from either high false positive or high false negative rates. One reason for this is the marked deficiency of negative examples, or miRNA non-target pairs. Systematic identification of non-target mRNAs is still not addressed properly, and therefore, current machine learning approaches are compelled to rely on artificially generated negative examples for training. In this article, we have identified approximately 300 tissue-specific negative examples using a novel approach that involves expression profiling of both miRNAs and mRNAs, miRNA-mRNA structural interactions and seed-site conservation. The newly generated negative examples are validated with the pSILAC dataset, which confirms that the identified non-targets are indeed non-targets. These high-throughput tissue-specific negative examples and a set of experimentally verified positive examples are then used to build a system called TargetMiner, a support vector machine (SVM)-based classifier. In addition to assessing the prediction accuracy on cross-validation experiments, TargetMiner has been validated with a completely independent experimental test dataset. Our method outperforms 10 existing target prediction algorithms and provides a good balance between sensitivity and specificity that is not reflected in the existing methods. We achieve a significantly higher sensitivity and specificity of 69% and 67.8% based on a pool of 90 features, and 76.5% and 66.1% using a set of 30 selected features, on the completely independent test dataset. In order to establish the effectiveness of the systematically generated negative examples, the SVM is trained using a different set of negative data generated using the method in Yousef et al. A significantly higher false positive rate (70.6%) is observed when tested on the independent set, while all other factors are kept the

  17. Predicting Drug-Target Interactions Based on Small Positive Samples.

    PubMed

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    A basic task in drug discovery is to find new medication in the form of candidate compounds that act on a target protein. In other words, a drug has to interact with a target, and such drug-target interactions (DTI) are not expected to be random. Significant and interesting patterns are expected to be hidden in them. If these patterns can be discovered, new drugs are expected to be more easily discoverable. Currently, a number of computational methods have been proposed to predict DTIs based on their similarity. However, such an approach does not allow biochemical features to be directly considered. As a result, some methods have been proposed to try to discover patterns in physicochemical interactions. Since the number of potential negative DTIs is very high both in absolute terms and in comparison to that of the known ones, these methods are rather computationally expensive and they can only rely on subsets, rather than the full set, of negative DTIs for training and validation. As there is always a relatively high chance for negative DTIs to be falsely identified and as only a partial subset of such DTIs is considered, existing approaches can be further improved to better predict DTIs. In this paper, we present a novel approach, called ODT (one class drug target interaction prediction), for such purpose. One main task of ODT is to discover association patterns between interacting drugs and proteins from the chemical structure of the former and the protein sequence network of the latter. ODT does so in two phases. First, the DTI network is transformed to a representation by structural properties. Second, it applies a one-class classification algorithm to build a prediction model based only on known positive interactions. We compared the best AUROC scores of ODT with several state-of-the-art approaches on gold standard data. The prediction accuracy of ODT is superior in comparison with all the other methods on the GPCR dataset and the ion channel dataset. Performance
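
    A hedged, minimal sketch of the one-class idea (training only on features of known positive interactions, then flagging plausible candidates; the drug/protein feature vectors are simulated and this is not the ODT algorithm itself):

      # Sketch: one-class learning from positive drug-target interactions only.
      # Each known interaction is a (toy) feature vector combining drug and
      # protein descriptors; the model flags candidate pairs that look similar.
      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(3)
      positives = rng.standard_normal((200, 32)) + 2.0    # known DTI features (toy)
      model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positives)

      candidates = np.vstack([rng.standard_normal((3, 32)) + 2.0,   # DTI-like pairs
                              rng.standard_normal((3, 32)) - 2.0])  # unlikely pairs
      print(model.predict(candidates))                    # +1 = predicted interaction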

  18. Proteome-wide prediction of targets for aspirin: new insight into the molecular mechanism of aspirin

    PubMed Central

    Dai, Shao-Xing; Li, Wen-Xing

    2016-01-01

    Besides its anti-inflammatory, analgesic and anti-pyretic properties, aspirin is used for the prevention of cardiovascular disease and various types of cancer. The multiple activities of aspirin likely involve several molecular targets and pathways rather than a single target. Therefore, systematic identification of these targets of aspirin can help us understand the underlying mechanisms of these activities. In this study, we identified 23 putative targets of aspirin in the human proteome by using a binding-pocket similarity detection tool in combination with molecular docking, free energy calculations and pathway analysis. These targets have diverse folds and are derived from different protein families. However, they have similar aspirin-binding pockets. The binding free energy with aspirin for the newly identified targets is comparable to that for the primary targets. Pathway analysis revealed that the targets were enriched in several pathways such as vascular endothelial growth factor (VEGF) signaling, Fc epsilon RI signaling and arachidonic acid metabolism, which are strongly involved in inflammation, cardiovascular disease and cancer. Therefore, the predicted target profile of aspirin suggests a new explanation for the disease prevention ability of aspirin. Our findings provide new insight into aspirin and its efficacy in disease prevention from a systematic and global view. PMID:26989626

  19. Proteome-wide prediction of targets for aspirin: new insight into the molecular mechanism of aspirin.

    PubMed

    Dai, Shao-Xing; Li, Wen-Xing; Li, Gong-Hua; Huang, Jing-Fei

    2016-01-01

    Besides its anti-inflammatory, analgesic and anti-pyretic properties, aspirin is used for the prevention of cardiovascular disease and various types of cancer. The multiple activities of aspirin likely involve several molecular targets and pathways rather than a single target. Therefore, systematic identification of these targets of aspirin can help us understand the underlying mechanisms of these activities. In this study, we identified 23 putative targets of aspirin in the human proteome by using a binding-pocket similarity detection tool in combination with molecular docking, free energy calculations and pathway analysis. These targets have diverse folds and are derived from different protein families. However, they have similar aspirin-binding pockets. The binding free energy with aspirin for the newly identified targets is comparable to that for the primary targets. Pathway analysis revealed that the targets were enriched in several pathways such as vascular endothelial growth factor (VEGF) signaling, Fc epsilon RI signaling and arachidonic acid metabolism, which are strongly involved in inflammation, cardiovascular disease and cancer. Therefore, the predicted target profile of aspirin suggests a new explanation for the disease prevention ability of aspirin. Our findings provide new insight into aspirin and its efficacy in disease prevention from a systematic and global view.

  20. Tools for outcome prediction in patients with community acquired pneumonia.

    PubMed

    Khan, Faheem; Owens, Mark B; Restrepo, Marcos; Povoa, Pedro; Martin-Loeches, Ignacio

    2017-02-01

    Community-acquired pneumonia (CAP) is one of the most common causes of mortality world-wide. The mortality rate of patients with CAP is influenced by the severity of the disease, treatment failure and the requirement for hospitalization and/or intensive care unit (ICU) management, all of which may be predicted by biomarkers and clinical scoring systems. Areas covered: We review the recent literature examining the efficacy of established and newly-developed clinical scores, biological and inflammatory markers such as C-Reactive protein (CRP), procalcitonin (PCT) and Interleukin-6 (IL-6), whether used alone or in conjunction with clinical severity scores to assess the severity of CAP, predict treatment failure, guide acute in-hospital or ICU admission and predict mortality. Expert commentary: The early prediction of treatment failure using clinical scores and biomarkers plays a developing role in improving survival of patients with CAP by identifying high-risk patients requiring hospitalization or ICU admission; and may enable more efficient allocation of resources. However, it is likely that combinations of scoring systems and biomarkers will be of greater use than individual markers. Further larger studies are needed to corroborate the additive value of these markers to clinical prediction scores to provide a safer and more effective assessment tool for clinicians.

  1. Drinking Water Enforcement Response Policy and Enforcement Targeting Tool

    EPA Pesticide Factsheets

    This document contains a letter from Cynthia Giles to Regional Administrators about drinking water enforcement response policy with an attached document on Proposed Revision to Enforcement Response Policy for the PWSS Program and Enforcement Tool

  2. An Engineering Tool for the Prediction of Internal Dielectric Charging

    NASA Astrophysics Data System (ADS)

    Rodgers, D. J.; Ryden, K. A.; Wrenn, G. L.; Latham, P. M.; Sorensen, J.; Levy, L.

    1998-11-01

    A practical internal charging tool has been developed. It provides an easy-to-use means for satellite engineers to predict whether on-board dielectrics are vulnerable to electrostatic discharge in the outer radiation belt. The tool is designed to simulate irradiation of single-dielectric planar or cylindrical structures with or without shielding. Analytical equations are used to describe current deposition in the dielectric. This is fast and gives charging currents to sufficient accuracy given the uncertainties in other aspects of the problem - particularly material characteristics. Time-dependent internal electric fields are calculated, taking into account the effect on conductivity of electric field, dose rate and temperature. A worst-case model of electron fluxes in the outer belt has been created specifically for the internal charging problem and is built into the code. For output, the tool gives a YES or NO decision on the susceptibility of the structure to internal electrostatic breakdown and if necessary, calculates the required changes to bring the system below the breakdown threshold. A complementary programme of laboratory irradiations has been carried out to validate the tool. The results for Epoxy-fibreglass samples show that the code models electric field realistically for a wide variety of shields, dielectric thicknesses and electron spectra. Results for Teflon samples indicate that some further experimentation is required and the radiation-induced conductivity aspects of the code have not been validated.

  3. The development of a tool to predict team performance.

    PubMed

    Sinclair, M A; Siemieniuch, C E; Haslam, R A; Henshaw, M J D C; Evans, L

    2012-01-01

    The paper describes the development of a tool to predict quantitatively the success of a team when executing a process. The tool was developed for the UK defence industry, though it may be useful in other domains. It is expected to be used by systems engineers in initial stages of systems design, when concepts are still fluid, including the structure of the team(s) which are expected to be operators within the system. It enables answers to be calculated for questions such as "What happens if I reduce team size?" and "Can I reduce the qualifications necessary to execute this process and still achieve the required level of success?". The tool has undergone verification and validation; it predicts fairly well and shows promise. An unexpected finding is that the tool creates a good a priori argument for significant attention to Human Factors Integration in systems projects. The simulations show that if a systems project takes full account of human factors integration (selection, training, process design, interaction design, culture, etc.) then the likelihood of team success will be in excess of 0.95. As the project derogates from this state, the likelihood of team success will drop as low as 0.05. If the team has good internal communications and good individuals in key roles, the likelihood of success rises towards 0.25. Even with a team comprising the best individuals, p(success) will not be greater than 0.35. It is hoped that these results will be useful for human factors professionals involved in systems design. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Prediction of Drug-Target Interactions and Drug Repositioning via Network-Based Inference

    PubMed Central

    Jiang, Jing; Lu, Weiqiang; Li, Weihua; Liu, Guixia; Zhou, Weixing; Huang, Jin; Tang, Yun

    2012-01-01

    Drug-target interaction (DTI) is the basis of drug discovery and design. It is time consuming and costly to determine DTI experimentally. Hence, it is necessary to develop computational methods for the prediction of potential DTI. Based on complex network theory, three supervised inference methods were developed here to predict DTI and used for drug repositioning, namely drug-based similarity inference (DBSI), target-based similarity inference (TBSI) and network-based inference (NBI). Among them, NBI performed best on four benchmark data sets. Then a drug-target network was created with NBI based on 12,483 FDA-approved and experimental drug-target binary links, and some new DTIs were further predicted. In vitro assays confirmed that five old drugs, namely montelukast, diclofenac, simvastatin, ketoconazole, and itraconazole, showed polypharmacological features on estrogen receptors or dipeptidyl peptidase-IV with half maximal inhibitory or effective concentrations ranging from 0.2 to 10 µM. Moreover, simvastatin and ketoconazole showed potent antiproliferative activities on the human MDA-MB-231 breast cancer cell line in MTT assays. The results indicated that these methods could be powerful tools in the prediction of DTIs and drug repositioning. PMID:22589709
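
    As an illustrative sketch of the two-step resource-spreading idea behind network-based inference on a bipartite drug-target network (a generic NBI-style computation on a random toy adjacency matrix, not the authors' code or data):

      # Sketch: network-based inference (NBI). Resource placed on a drug's known
      # targets spreads to drugs sharing those targets and back to targets;
      # high scores on unlinked targets suggest candidate new interactions.
      import numpy as np

      rng = np.random.default_rng(4)
      A = (rng.random((30, 20)) < 0.15).astype(float)   # toy drug x target links

      k_drug = A.sum(axis=1, keepdims=True)             # drug degrees, shape (30, 1)
      k_target = A.sum(axis=0, keepdims=True)           # target degrees, shape (1, 20)
      k_drug[k_drug == 0] = 1.0
      k_target[k_target == 0] = 1.0

      # W[j, m]: fraction of target j's resource that ends up on target m.
      W = ((A / k_drug).T @ A) / k_target.T
      F = A @ W                                         # predicted drug-target scores

      drug = 0
      candidates = np.where(A[drug] == 0, F[drug], -np.inf)   # mask known links
      print(np.argsort(-candidates)[:5])                      # top-5 new candidates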

  5. Computational Prediction of Neutralization Epitopes Targeted by Human Anti-V3 HIV Monoclonal Antibodies

    PubMed Central

    Shmelkov, Evgeny; Krachmarov, Chavdar; Grigoryan, Arsen V.; Pinter, Abraham; Statnikov, Alexander; Cardozo, Timothy

    2014-01-01

    The extreme diversity of HIV-1 strains presents a formidable challenge for HIV-1 vaccine design. Although antibodies (Abs) can neutralize HIV-1 and potentially protect against infection, antibodies that target the immunogenic viral surface protein gp120 have widely variable and poorly predictable cross-strain reactivity. Here, we developed a novel computational approach, the Method of Dynamic Epitopes, for identification of neutralization epitopes targeted by anti-HIV-1 monoclonal antibodies (mAbs). Our data demonstrate that this approach, based purely on calculated energetics and 3D structural information, accurately predicts the presence of neutralization epitopes targeted by V3-specific mAbs 2219 and 447-52D in any HIV-1 strain. The method was used to calculate the range of conservation of these specific epitopes across all circulating HIV-1 viruses. Accurately identifying an Ab-targeted neutralization epitope in a virus by computational means enables easy prediction of the breadth of reactivity of specific mAbs across the diversity of thousands of different circulating HIV-1 variants and facilitates rational design and selection of immunogens mimicking specific mAb-targeted epitopes in a multivalent HIV-1 vaccine. The defined epitopes can also be used for the purpose of epitope-specific analyses of breakthrough sequences recorded in vaccine clinical trials. Thus, our study is a prototype for a valuable tool for rational HIV-1 vaccine design. PMID:24587168

  6. Predicting SPE Fluxes: Coupled Simulations and Analysis Tools

    NASA Astrophysics Data System (ADS)

    Gorby, M.; Schwadron, N.; Linker, J.; Caplan, R. M.; Wijaya, J.; Downs, C.; Lionello, R.

    2017-12-01

    Presented here is a nuts-and-bolts look at the coupled framework of Predictive Science Inc's Magnetohydrodynamics Around a Sphere (MAS) code and the Energetic Particle Radiation Environment Module (EPREM). MAS-simulated coronal mass ejection output from a variety of events can be selected as the MHD input to EPREM, and a variety of parameters can be set to run against: background seed particle spectra, mean free path, perpendicular diffusion efficiency, etc. A standard set of visualizations is produced, as well as a library of analysis tools for deeper inquiries. All steps will be covered end-to-end, as well as the framework's user interface and availability.

  7. Wind Prediction Accuracy for Air Traffic Management Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan

    2000-01-01

    The performance of Air Traffic Management and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also discovered that relatively large errors (e.g., greater than 20 knots), existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes), are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratories and the Forecast Systems Lab of the National Oceanographic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focussed on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state of the art systems, and development/validation of system enhancements to improve skill. A 13 month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MITLL based on the Integrated Terminal Wind System (ITWS). AW is run at a local facility (Center) level, and updates RUC predictions based on an

  8. Modified linear predictive coding approach for moving target tracking by Doppler radar

    NASA Astrophysics Data System (ADS)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which empirically decides the extension data length, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed with the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
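
    A hedged numpy sketch of the plain data-extension step that the modified approach builds on (classical LPC on a synthetic echo; the adaptive noise filtering and error-array correction described above are not implemented):

      # Sketch: extend a measured signal with linear predictive coding (LPC).
      import numpy as np

      def lpc_coefficients(x, order):
          """Solve the autocorrelation normal equations for LPC coefficients."""
          r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          return np.linalg.solve(R, r[1:order + 1])

      def lpc_extend(x, order, n_extra):
          """Predict n_extra future samples from the last `order` samples."""
          a = lpc_coefficients(x, order)
          y = list(x)
          for _ in range(n_extra):
              y.append(float(np.dot(a, y[-1:-order - 1:-1])))
          return np.array(y)

      t = np.arange(256)
      echo = (np.cos(2 * np.pi * 0.05 * t)
              + 0.05 * np.random.default_rng(5).standard_normal(t.size))
      extended = lpc_extend(echo, order=12, n_extra=64)   # extended data record
      print(extended.shape)                               # (320,)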

  9. Secure FAST: Security Enhancement in the NATO Time Sensitive Targeting Tool

    DTIC Science & Technology

    2010-11-01

    FAST is a collaboration tool designed to aid in the tracking and prosecuting of Time Sensitive Targets. The FAST tool provides user level authentication and authorisation in terms of security. It uses operating system level security but does not provide application level security for...

  10. Using social media as a tool to predict syphilis.

    PubMed

    Young, Sean D; Mercer, Neil; Weiss, Robert E; Torrone, Elizabeth A; Aral, Sevgi O

    2018-04-01

    Syphilis rates have been rapidly rising in the United States. New technologies, such as social media, might be used to anticipate and prevent the spread of disease. Because social media data collection is easy and inexpensive, integration of social media data into syphilis surveillance may be a cost-effective surveillance strategy, especially in low-resource regions. People are increasingly using social media to discuss health-related issues, such as sexual risk behaviors, allowing social media to be a potential tool for public health and medical research. This study mined Twitter data to assess whether social media could be used to predict syphilis cases in 2013 based on 2012 data. We collected 2012 and 2013 county-level primary and secondary (P&S) and early latent syphilis cases reported to the Center for Disease Control and Prevention, along with >8500 geolocated tweets in the United States that were filtered to include sexual risk-related keywords, including colloquial terms for intercourse. We assessed the relationship between syphilis-related tweets and actual case reports by county, controlling for socioeconomic indicators and prior year syphilis cases. We found a significant positive relationship between tweets and cases of P&S and early latent syphilis. This study shows that social media may be an additional tool to enhance syphilis prediction and surveillance. Copyright © 2017 Elsevier Inc. All rights reserved.
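
    A hedged sketch of the kind of county-level count regression such a study implies (simulated tweet and case counts with scikit-learn's Poisson regression; this is not the study's actual model, data, or socioeconomic controls):

      # Sketch: relate county syphilis-related tweet counts to next-year case
      # counts, controlling for prior-year cases.
      import numpy as np
      from sklearn.linear_model import PoissonRegressor

      rng = np.random.default_rng(6)
      n_counties = 300
      tweets_2012 = rng.poisson(20, n_counties)           # filtered tweets per county
      cases_2012 = rng.poisson(5, n_counties)             # prior-year P&S cases
      cases_2013 = rng.poisson(1 + 0.1 * tweets_2012 + 0.5 * cases_2012)

      X = np.column_stack([tweets_2012, cases_2012])
      model = PoissonRegressor(alpha=1e-6, max_iter=1000).fit(X, cases_2013)
      print("coefficient on tweets:", model.coef_[0])     # positive association expected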

  11. Prediction of protein mutant stability using classification and regression tool.

    PubMed

    Huang, Liang-Tsung; Saraboji, K; Ho, Shinn-Ying; Hwang, Shiow-Fen; Ponnuswamy, M N; Gromiha, M Michael

    2007-02-01

    Prediction of protein stability upon amino acid substitutions is an important problem in molecular biology, and solving it would help in designing stable mutants. In this work, we have analyzed the stability of protein mutants using two different datasets of 1396 and 2204 mutants obtained from the ProTherm database, respectively for free energy change due to thermal (DeltaDeltaG) and denaturant denaturations (DeltaDeltaG(H(2)O)). We have used a set of 48 physical, chemical, energetic and conformational properties of amino acid residues and computed the difference of amino acid properties for each mutant in both sets of data. These differences in amino acid properties have been related to protein stability (DeltaDeltaG and DeltaDeltaG(H(2)O)) and are used to train a classification and regression tool for predicting the stability of protein mutants. Further, we have tested the method with 4-fold, 5-fold and 10-fold cross-validation procedures. We found that the physical properties, shape and flexibility are important determinants of protein stability. The classification of mutants based on secondary structure (helix, strand, turn and coil) and solvent accessibility (buried, partially buried, partially exposed and exposed) distinguished the stabilizing/destabilizing mutants at an average accuracy of 81% and 80%, respectively for DeltaDeltaG and DeltaDeltaG(H(2)O). The correlation between the experimental and predicted stability change is 0.61 for DeltaDeltaG and 0.44 for DeltaDeltaG(H(2)O). Further, the free energy change due to the replacement of an amino acid residue has been predicted within an average error of 1.08 kcal/mol and 1.37 kcal/mol for thermal and chemical denaturation, respectively. The relative importance of secondary structure and solvent accessibility, and the influence of the dataset on prediction of protein mutant stability, have been discussed.
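
    A hedged sketch of a classification-and-regression-tree workflow of this flavor (simulated property-difference features and labels in place of the ProTherm-derived data):

      # Sketch: a decision tree classifying mutants as stabilizing/destabilizing
      # from differences in amino-acid properties between wild type and mutant.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(7)
      n_mutants, n_properties = 1396, 48
      dP = rng.standard_normal((n_mutants, n_properties))   # property differences (toy)
      ddG = 1.5 * dP[:, 0] - 0.8 * dP[:, 5] + rng.standard_normal(n_mutants)
      stabilizing = (ddG > 0).astype(int)                   # toy class labels

      tree = DecisionTreeClassifier(max_depth=4, random_state=0)
      print(cross_val_score(tree, dP, stabilizing, cv=5).mean())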

  12. Cardiovascular risk prediction tools for populations in Asia.

    PubMed

    Barzi, F; Patel, A; Gu, D; Sritara, P; Lam, T H; Rodgers, A; Woodward, M

    2007-02-01

    Cardiovascular risk equations are traditionally derived from the Framingham Study. The accuracy of this approach in Asian populations, where resources for risk factor measurement may be limited, is unclear. To compare "low-information" equations (derived using only age, systolic blood pressure, total cholesterol and smoking status) derived from the Framingham Study with those derived from the Asian cohorts, on the accuracy of cardiovascular risk prediction. Separate equations to predict the 8-year risk of a cardiovascular event were derived from Asian and Framingham cohorts. The performance of these equations, and a subsequently "recalibrated" Framingham equation, were evaluated among participants from independent Chinese cohorts. Six cohort studies from Japan, Korea and Singapore (Asian cohorts); six cohort studies from China; the Framingham Study from the US. 172,077 participants from the Asian cohorts; 25,682 participants from Chinese cohorts and 6053 participants from the Framingham Study. In the Chinese cohorts, 542 cardiovascular events occurred during 8 years of follow-up. Both the Asian cohorts and the Framingham equations discriminated cardiovascular risk well in the Chinese cohorts; the area under the receiver-operator characteristic curve was at least 0.75 for men and women. However, the Framingham risk equation systematically overestimated risk in the Chinese cohorts by an average of 276% among men and 102% among women. The corresponding average overestimation using the Asian cohorts equation was 11% and 10%, respectively. Recalibrating the Framingham risk equation using cardiovascular disease incidence from the non-Chinese Asian cohorts led to an overestimation of risk by an average of 4% in women and underestimation of risk by an average of 2% in men. A low-information Framingham cardiovascular risk prediction tool, which, when recalibrated with contemporary data, is likely to estimate future cardiovascular risk with similar accuracy in Asian

  13. Getting NuSTAR on target: predicting mast motion

    NASA Astrophysics Data System (ADS)

    Forster, Karl; Madsen, Kristin K.; Miyasaka, Hiromasa; Craig, William W.; Harrison, Fiona A.; Rana, Vikram R.; Markwardt, Craig B.; Grefenstette, Brian W.

    2016-07-01

    The Nuclear Spectroscopic Telescope Array (NuSTAR) is the first focusing high energy (3-79 keV) X-ray observatory, operating for four years from low Earth orbit. The X-ray detector arrays are located on the spacecraft bus with the optics modules mounted on a flexible mast of 10.14 m length. The motion of the telescope optical axis on the detectors during each observation is measured by a laser metrology system and matches the pre-launch predictions of the thermal flexing of the mast as the spacecraft enters and exits the Earth's shadow each orbit. However, an additional motion of the telescope field of view was discovered during observatory commissioning that is associated with the spacecraft attitude control system and an additional flexing of the mast correlated with the Solar aspect angle of the observation. We present the methodology developed to predict where any particular target coordinate will fall on the NuSTAR detectors based on the Solar aspect angle at the scheduled time of an observation. This may be applicable to future observatories that employ optics deployed on extendable masts. The automation of the prediction system has greatly improved observatory operations efficiency and the reliability of observation planning.

  14. Getting NuSTAR on Target: Predicting Mast Motion

    NASA Technical Reports Server (NTRS)

    Forster, Karl; Madsen, Kristin K.; Miyasaka, Hiromasa; Craig, William W.; Harrison, Fiona A.; Rana, Vikram R.; Markwardt, Craig B.; Grefenstette, Brian W.

    2017-01-01

    The Nuclear Spectroscopic Telescope Array (NuSTAR) is the first focusing high energy (3-79 keV) X-ray observatory, operating for four years from low Earth orbit. The X-ray detector arrays are located on the spacecraft bus with the optics modules mounted on a flexible mast of 10.14 m length. The motion of the telescope optical axis on the detectors during each observation is measured by a laser metrology system and matches the pre-launch predictions of the thermal flexing of the mast as the spacecraft enters and exits the Earth's shadow each orbit. However, an additional motion of the telescope field of view was discovered during observatory commissioning that is associated with the spacecraft attitude control system and an additional flexing of the mast correlated with the Solar aspect angle of the observation. We present the methodology developed to predict where any particular target coordinate will fall on the NuSTAR detectors based on the Solar aspect angle at the scheduled time of an observation. This may be applicable to future observatories that employ optics deployed on extendable masts. The automation of the prediction system has greatly improved observatory operations efficiency and the reliability of observation planning.

  15. Using exposure prediction tools to link exposure and ...

    EPA Pesticide Factsheets

    A few different exposure prediction tools were evaluated for use in the new in vitro-based safety assessment paradigm using di-2-ethylhexyl phthalate (DEHP) and dibutyl phthalate (DnBP) as case compounds. Daily intake of each phthalate was estimated using both high-throughput (HT) prediction models, such as the HT Stochastic Human Exposure and Dose Simulation model (SHEDS-HT) and the ExpoCast heuristic model, and non-HT approaches based on chemical-specific exposure estimations in the environment in conjunction with human exposure factors. Reverse dosimetry was performed using a published physiologically based pharmacokinetic (PBPK) model for phthalates and their metabolites to provide a comparison point. Daily intakes of DEHP and DnBP were estimated based on the urinary concentrations of their respective monoesters, mono-2-ethylhexyl phthalate (MEHP) and monobutyl phthalate (MnBP), reported in NHANES (2011–2012). The PBPK-reverse dosimetry estimates of daily intake at the 50th and 95th percentiles were 0.68 and 9.58 μg/kg/d for DEHP and 0.089 and 0.68 μg/kg/d for DnBP, respectively. For DEHP, the estimated median from PBPK-reverse dosimetry was about 3.6-fold higher than the ExpoCast estimate (0.68 and 0.18 μg/kg/d, respectively). For DnBP, the estimated median was similar to that predicted by ExpoCast (0.089 and 0.094 μg/kg/d, respectively). The SHEDS-HT prediction of DnBP intake from consumer product pathways alone was higher at 0.67 μg/kg/d. The PBPK-reve
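
    As a rough companion to the reverse-dosimetry comparison above, the sketch below shows a simple volume-based mass-balance back-calculation of parent-compound daily intake from a urinary metabolite concentration. This is not the PBPK reverse dosimetry used in the study; the molecular weights are approximate and the excretion fraction and example numbers are purely illustrative.

    ```python
    def daily_intake_ug_per_kg(urine_conc_ug_per_L: float,
                               urine_volume_L_per_day: float,
                               body_weight_kg: float,
                               mw_parent: float,
                               mw_metabolite: float,
                               urinary_excretion_fraction: float) -> float:
        """Simple mass-balance estimate of parent-compound daily intake from a
        urinary metabolite concentration (not the full PBPK reverse dosimetry
        used in the study above)."""
        metabolite_excreted_ug = urine_conc_ug_per_L * urine_volume_L_per_day
        parent_equivalent_ug = (metabolite_excreted_ug * (mw_parent / mw_metabolite)
                                / urinary_excretion_fraction)
        return parent_equivalent_ug / body_weight_kg

    # Illustrative numbers only (not from NHANES): MEHP in urine back-calculated to DEHP.
    print(daily_intake_ug_per_kg(4.0, 1.5, 70.0,
                                 mw_parent=390.6, mw_metabolite=278.3,
                                 urinary_excretion_fraction=0.06))
    ```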

  16. Exploring the potential of a structural alphabet-based tool for mining multiple target conformations and target flexibility insight.

    PubMed

    Regad, Leslie; Chéron, Jean-Baptiste; Triki, Dhoha; Senac, Caroline; Flatters, Delphine; Camproux, Anne-Claude

    2017-01-01

    Protein flexibility is often involved in binding with different partners and is essential for protein function. The growing number of macromolecular structures in the Protein Data Bank entries and their redundancy has become a major source of structural knowledge of the protein universe. The analysis of structural variability through available redundant structures of a target, called multiple target conformations (MTC), obtained using experimental or modeling methods, under different biological conditions or from different sources, is one way to explore protein flexibility. This analysis is essential to improve the understanding of various mechanisms associated with protein target function and flexibility. In this study, we explored structural variability of three biological targets by analyzing different MTC sets associated with these targets. To facilitate the study of these MTC sets, we have developed an efficient tool, SA-conf, dedicated to capturing and linking the amino acid and local structure variability and analyzing the target structural variability space. The advantage of SA-conf is that it could be applied to diverse sets composed of MTCs available in the PDB obtained using NMR and crystallography or homology models. This tool could also be applied to analyze MTC sets obtained by dynamics approaches. Our results showed that the SA-conf tool is effective in quantifying the structural variability of an MTC set and in localizing the structurally variable positions and regions of the target. By selecting adapted MTC subsets and comparing their variability detected by SA-conf, we highlighted different sources of target flexibility, such as flexibility induced by a binding partner, flexibility induced by mutation, and intrinsic flexibility. Our results support the value of mining available structures associated with a target using SA-conf to offer valuable insight into target flexibility and interaction mechanisms. The SA-conf executable script, with a set of pre-compiled binaries, is available at http://www.mti.univ-paris-diderot.fr/recherche/plateformes/logiciels.
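
    A minimal sketch of one way to quantify per-position structural variability across an MTC set, assuming each conformation has already been encoded as an aligned string of structural-alphabet letters (as SA-conf does); the entropy measure here is a generic choice, not necessarily the exact statistic SA-conf reports.

    ```python
    import math
    from collections import Counter

    def positional_entropy(alignment: list[str]) -> list[float]:
        """Shannon entropy of structural-alphabet letters at each aligned position,
        computed over a set of conformations of the same target; higher values
        flag structurally variable positions."""
        n_pos = len(alignment[0])
        entropies = []
        for i in range(n_pos):
            letters = [seq[i] for seq in alignment if seq[i] != '-']
            counts = Counter(letters)
            total = sum(counts.values())
            h = -sum((c / total) * math.log2(c / total) for c in counts.values()) if total else 0.0
            entropies.append(h)
        return entropies

    # Toy example: structural-letter strings for three conformations of one target.
    print(positional_entropy(["AAWQK", "AAWQR", "AAWTR"]))
    ```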

  17. Exploring the potential of a structural alphabet-based tool for mining multiple target conformations and target flexibility insight

    PubMed Central

    Chéron, Jean-Baptiste; Triki, Dhoha; Senac, Caroline; Flatters, Delphine; Camproux, Anne-Claude

    2017-01-01

    Protein flexibility is often involved in binding with different partners and is essential for protein function. The growing number of macromolecular structures in the Protein Data Bank entries and their redundancy has become a major source of structural knowledge of the protein universe. The analysis of structural variability through available redundant structures of a target, called multiple target conformations (MTC), obtained using experimental or modeling methods, under different biological conditions or from different sources, is one way to explore protein flexibility. This analysis is essential to improve the understanding of various mechanisms associated with protein target function and flexibility. In this study, we explored structural variability of three biological targets by analyzing different MTC sets associated with these targets. To facilitate the study of these MTC sets, we have developed an efficient tool, SA-conf, dedicated to capturing and linking the amino acid and local structure variability and analyzing the target structural variability space. The advantage of SA-conf is that it could be applied to diverse sets composed of MTCs available in the PDB obtained using NMR and crystallography or homology models. This tool could also be applied to analyze MTC sets obtained by dynamics approaches. Our results showed that the SA-conf tool is effective in quantifying the structural variability of an MTC set and in localizing the structurally variable positions and regions of the target. By selecting adapted MTC subsets and comparing their variability detected by SA-conf, we highlighted different sources of target flexibility, such as flexibility induced by a binding partner, flexibility induced by mutation, and intrinsic flexibility. Our results support the value of mining available structures associated with a target using SA-conf to offer valuable insight into target flexibility and interaction mechanisms. The SA-conf executable script, with a set of pre-compiled binaries, is available at http

  18. Virtual Screening of Phytochemicals to Novel Target (HAT) Rtt109 in Pneumocystis Jirovecii using Bioinformatics Tools.

    PubMed

    Sugumar, Ramya; Adithavarman, Abhinand Ponneri; Dakshinamoorthi, Anusha; David, Darling Chellathai; Ragunath, Padmavathi Kannan

    2016-03-01

    Pneumocystis jirovecii is a fungus that causes Pneumocystis pneumonia in HIV and other immunosuppressed patients. Treatment of Pneumocystis pneumonia with the currently available antifungals is challenging and associated with considerable adverse effects. There is a need to develop drugs against novel targets with minimal human toxicities. Histone Acetyl Transferase (HAT) Rtt109 is a potential therapeutic target in Pneumocystis jirovecii species. HAT is linked to transcription and is required to acetylate conserved lysine residues on histone proteins by transferring an acetyl group from acetyl CoA to form ε-N-acetyl-lysine. Therefore, inhibitors of HAT can be useful therapeutic options in Pneumocystis pneumonia. To screen phytochemicals against (HAT) Rtt109 using bioinformatics tools. The tertiary structure of Pneumocystis jirovecii (HAT) Rtt109 was modeled by homology modeling. The ideal template for modeling was obtained by performing PSI-BLAST of the protein sequence. Rtt109-AcCoA/Vps75 protein from Saccharomyces cerevisiae (PDB structure 3Q35) was chosen as the template. The target protein was modeled using Swiss Modeler and validated using Ramachandran plot and Errat 2. Comprehensive text mining was performed to identify phytochemical compounds with antipneumonia and fungicidal properties and these compounds were filtered based on Lipinski's Rule of 5. The chosen compounds were subjected to virtual screening against the target protein (HAT) Rtt109 using Molegro Virtual Docker 4.5. Osiris Property Explorer and Open Tox Server were used to predict ADME-T properties of the chosen phytochemicals. The tertiary structure model of HAT Rtt109 had a ProSA score of -6.57 and an Errat 2 score of 87.34. Structure validation analysis by Ramachandran plot for the model revealed 97% of amino acids were in the favoured region. Of all the phytochemicals subjected to virtual screening against the target protein (HAT) Rtt109, baicalin exhibited the highest binding affinity towards the
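
    A small sketch of the Lipinski Rule-of-5 filtering step mentioned above, using RDKit; the SMILES entry is a placeholder (aspirin), not one of the phytochemicals screened in the study.

    ```python
    # Requires RDKit (e.g. conda install -c conda-forge rdkit).
    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def passes_rule_of_five(smiles: str) -> bool:
        """Return True if the molecule violates at most one of Lipinski's criteria."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        violations = sum([
            Descriptors.MolWt(mol) > 500,
            Descriptors.MolLogP(mol) > 5,
            Lipinski.NumHDonors(mol) > 5,
            Lipinski.NumHAcceptors(mol) > 10,
        ])
        return violations <= 1

    # Placeholder compound (aspirin), standing in for the text-mined phytochemicals.
    candidates = {"example_compound": "CC(=O)Oc1ccccc1C(=O)O"}
    print({name: passes_rule_of_five(smi) for name, smi in candidates.items()})
    ```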

  19. Virtual Screening of Phytochemicals to Novel Target (HAT) Rtt109 in Pneumocystis Jirovecii using Bioinformatics Tools

    PubMed Central

    Adithavarman, Abhinand Ponneri; Dakshinamoorthi, Anusha; David, Darling Chellathai; Ragunath, Padmavathi Kannan

    2016-01-01

    Introduction Pneumocystis jirovecii is a fungus that causes Pneumocystis pneumonia in HIV and other immunosuppressed patients. Treatment of Pneumocystis pneumonia with the currently available antifungals is challenging and associated with considerable adverse effects. There is a need to develop drugs against novel targets with minimal human toxicities. Histone Acetyl Transferase (HAT) Rtt109 is a potential therapeutic target in Pneumocystis jirovecii species. HAT is linked to transcription and is required to acetylate conserved lysine residues on histone proteins by transferring an acetyl group from acetyl CoA to form ε-N-acetyl-lysine. Therefore, inhibitors of HAT can be useful therapeutic options in Pneumocystis pneumonia. Aim To screen phytochemicals against (HAT) Rtt109 using bioinformatics tools. Materials and Methods The tertiary structure of Pneumocystis jirovecii (HAT) Rtt109 was modeled by homology modeling. The ideal template for modeling was obtained by performing PSI-BLAST of the protein sequence. Rtt109-AcCoA/Vps75 protein from Saccharomyces cerevisiae (PDB structure 3Q35) was chosen as the template. The target protein was modeled using Swiss Modeler and validated using Ramachandran plot and Errat 2. Comprehensive text mining was performed to identify phytochemical compounds with antipneumonia and fungicidal properties and these compounds were filtered based on Lipinski’s Rule of 5. The chosen compounds were subjected to virtual screening against the target protein (HAT) Rtt109 using Molegro Virtual Docker 4.5. Osiris Property Explorer and Open Tox Server were used to predict ADME-T properties of the chosen phytochemicals. Results The tertiary structure model of HAT Rtt109 had a ProSA score of -6.57 and an Errat 2 score of 87.34. Structure validation analysis by Ramachandran plot for the model revealed 97% of amino acids were in the favoured region. Of all the phytochemicals subjected to virtual screening against the target protein (HAT) Rtt109, baicalin

  20. Ontology-based tools to expedite predictive model construction.

    PubMed

    Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey

    2014-01-01

    Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.

  1. Bioinformatics tools in predictive ecology: applications to fisheries

    PubMed Central

    Tucker, Allan; Duplisea, Daniel

    2012-01-01

    There has been a huge effort in the advancement of analytical techniques for molecular biological data over the past decade. This has led to many novel algorithms that are specialized to deal with data associated with biological phenomena, such as gene expression and protein interactions. In contrast, ecological data analysis has remained focused to some degree on off-the-shelf statistical techniques though this is starting to change with the adoption of state-of-the-art methods, where few assumptions can be made about the data and a more explorative approach is required, for example, through the use of Bayesian networks. In this paper, some novel bioinformatics tools for microarray data are discussed along with their ‘crossover potential’ with an application to fisheries data. In particular, a focus is made on the development of models that identify functionally equivalent species in different fish communities with the aim of predicting functional collapse. PMID:22144390

  2. Bioinformatics tools in predictive ecology: applications to fisheries.

    PubMed

    Tucker, Allan; Duplisea, Daniel

    2012-01-19

    There has been a huge effort in the advancement of analytical techniques for molecular biological data over the past decade. This has led to many novel algorithms that are specialized to deal with data associated with biological phenomena, such as gene expression and protein interactions. In contrast, ecological data analysis has remained focused to some degree on off-the-shelf statistical techniques though this is starting to change with the adoption of state-of-the-art methods, where few assumptions can be made about the data and a more explorative approach is required, for example, through the use of Bayesian networks. In this paper, some novel bioinformatics tools for microarray data are discussed along with their 'crossover potential' with an application to fisheries data. In particular, a focus is made on the development of models that identify functionally equivalent species in different fish communities with the aim of predicting functional collapse.

  3. ConoDictor: a tool for prediction of conopeptide superfamilies.

    PubMed

    Koua, Dominique; Brauer, Age; Laht, Silja; Kaplinski, Lauris; Favreau, Philippe; Remm, Maido; Lisacek, Frédérique; Stöcklin, Reto

    2012-07-01

    ConoDictor is a tool that enables fast and accurate classification of conopeptides into superfamilies based on their amino acid sequence. ConoDictor combines predictions from two complementary approaches: profile hidden Markov models and generalized profiles. Results appear in a browser as tables that can be downloaded in various formats. This application is particularly valuable in view of the exponentially increasing number of conopeptides that are being identified. ConoDictor was written in Perl using the common gateway interface module with a PHP submission page. Sequence matching is performed with hmmsearch from HMMER 3 and ps_scan.pl from the pftools 2.3 package. ConoDictor is freely accessible at http://conco.ebc.ee.
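
    A hedged sketch of the HMM half of such a classifier: run HMMER3's hmmsearch against per-superfamily profiles and keep the best-scoring hit per sequence. ConoDictor additionally combines generalized-profile (pftools) predictions, which this sketch omits; file names are placeholders.

    ```python
    import subprocess

    def run_hmmsearch(hmm_file: str, fasta_file: str, table_out: str) -> None:
        """Run HMMER3's hmmsearch and write a tabular summary (--tblout)."""
        subprocess.run(["hmmsearch", "--tblout", table_out, hmm_file, fasta_file],
                       check=True, capture_output=True)

    def best_superfamily_hits(table_out: str) -> dict[str, tuple[str, float]]:
        """Keep, for each query sequence, the profile with the lowest full-sequence E-value."""
        best: dict[str, tuple[str, float]] = {}
        with open(table_out) as fh:
            for line in fh:
                if line.startswith("#"):
                    continue
                fields = line.split()
                target, profile, evalue = fields[0], fields[2], float(fields[4])
                if target not in best or evalue < best[target][1]:
                    best[target] = (profile, evalue)
        return best

    # Usage sketch (placeholder file names):
    # run_hmmsearch("superfamilies.hmm", "conopeptides.fasta", "hits.tbl")
    # print(best_superfamily_hits("hits.tbl"))
    ```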

  4. Alarmins firing arthritis: Helpful diagnostic tools and promising therapeutic targets.

    PubMed

    Lavric, Miha; Miranda-García, María Auxiliadora; Holzinger, Dirk; Foell, Dirk; Wittkowski, Helmut

    2017-07-01

    Alarmins are endogenous molecules with homeostatic roles that have become a focus of research in inflammatory arthritis over the last two decades, mostly due to their ability to indicate tissue-related damage after active or passive release from injured cells. Ranging from HMGB1, S100A8/A9 and S100A12 proteins, through heat-shock proteins (HSPs) and purine metabolites (e.g. uric acid, ATP), to altered matrix proteins and interleukin-33 (IL-33), a number of alarmins have so far been shown to play a role in rheumatoid arthritis, psoriatic and juvenile idiopathic arthritis, as well as spondyloarthritis and gout. Although formerly linked to the initiation and chronification of inflammatory arthritis, driving auto- and paracrine inflammatory loops, more recent research has also unraveled the alarmins' role in the crosstalk between innate and adaptive immunity and in the resolution of inflammation. Providing a state-of-the-art overview of known alarmins, this review lists the known modes of action and pathologic contribution of alarmins to inflammatory arthritis, as well as the biomarker potential of alarmins in the clinical setting for tracking disease severity. Based upon research on animal experimental models (CIA, AIA) and clinical trials, the review also considers potentially viable strategies for modifying alarmin secretion and their interaction with target receptors (e.g. TLR, RAGE) with the purpose of attenuating arthritic disease.

  5. Non-Targeted Effects Models Predict Significantly Higher Mars Mission Cancer Risk than Targeted Effects Models

    DOE PAGES

    Cucinotta, Francis A.; Cacao, Eliedonna

    2017-05-12

    Cancer risk is an important concern for galactic cosmic ray (GCR) exposures, which consist of a wide-energy range of protons, heavy ions and secondary radiation produced in shielding and tissues. Relative biological effectiveness (RBE) factors for surrogate cancer endpoints in cell culture models and tumor induction in mice vary considerably, including significant variations for different tissues and mouse strains. Many studies suggest non-targeted effects (NTE) occur for low doses of high linear energy transfer (LET) radiation, leading to deviation from the linear dose response model used in radiation protection. Using the mouse Harderian gland tumor experiment, the only extensive data set for dose response modelling with a variety of particle types (>4), for the first time a particle track structure model of tumor prevalence is used to investigate the effects of NTEs in predictions of chronic GCR exposure risk. The NTE model led to a predicted risk 2-fold higher compared to a targeted effects model. The scarcity of data with animal models for tissues that dominate human radiation cancer risk, including lung, colon, breast, liver, and stomach, suggest that studies of NTEs in other tissues are urgently needed prior to long-term space missions outside the protection of the Earth’s geomagnetic sphere.
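
    To illustrate why a non-targeted term changes low-dose risk estimates, the sketch below contrasts a simple targeted-effects (TE) dose response with one that adds a roughly dose-independent non-targeted (NTE) contribution at low doses. The functional forms and parameters are simplified placeholders, not the published track-structure model.

    ```python
    import numpy as np

    # Illustrative parameters only; the published model is a particle track-structure
    # fit to Harderian-gland tumor data, not these simplified forms.
    ALPHA = 0.3    # targeted-effects slope per Gy
    LAM   = 0.2    # high-dose cell-kill attenuation per Gy
    ETA   = 0.05   # non-targeted (bystander) contribution, roughly constant above a tiny dose

    def prevalence_te(dose_gy: np.ndarray) -> np.ndarray:
        """Targeted-effects prevalence: linear in dose with exponential cell-kill roll-off."""
        return ALPHA * dose_gy * np.exp(-LAM * dose_gy)

    def prevalence_nte(dose_gy: np.ndarray) -> np.ndarray:
        """Add a non-targeted term that saturates at very low doses."""
        nte = ETA * (1.0 - np.exp(-dose_gy / 0.01))
        return prevalence_te(dose_gy) + nte

    doses = np.array([0.0, 0.01, 0.05, 0.1, 0.5, 1.0])
    print("TE :", prevalence_te(doses))
    print("NTE:", prevalence_nte(doses))   # diverges from TE most strongly at low doses
    ```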

  6. Non-Targeted Effects Models Predict Significantly Higher Mars Mission Cancer Risk than Targeted Effects Models

    SciT

    Cucinotta, Francis A.; Cacao, Eliedonna

    Cancer risk is an important concern for galactic cosmic ray (GCR) exposures, which consist of a wide-energy range of protons, heavy ions and secondary radiation produced in shielding and tissues. Relative biological effectiveness (RBE) factors for surrogate cancer endpoints in cell culture models and tumor induction in mice vary considerably, including significant variations for different tissues and mouse strains. Many studies suggest non-targeted effects (NTE) occur for low doses of high linear energy transfer (LET) radiation, leading to deviation from the linear dose response model used in radiation protection. Using the mouse Harderian gland tumor experiment, the only extensive data set for dose response modelling with a variety of particle types (>4), for the first time a particle track structure model of tumor prevalence is used to investigate the effects of NTEs in predictions of chronic GCR exposure risk. The NTE model led to a predicted risk 2-fold higher compared to a targeted effects model. The scarcity of data with animal models for tissues that dominate human radiation cancer risk, including lung, colon, breast, liver, and stomach, suggest that studies of NTEs in other tissues are urgently needed prior to long-term space missions outside the protection of the Earth’s geomagnetic sphere.

  7. Can working memory predict target-to-target interval effects in the P300?

    PubMed

    Steiner, Genevieve Z; Barry, Robert J; Gonsalvez, Craig J

    2013-09-01

    It has been suggested that the P300 component of the ERP is an electrophysiological index of memory-updating processes associated with task-relevant stimuli. Component magnitude varies with the time separating target stimuli (target-to-target interval: TTI), with longer TTIs eliciting larger P300 amplitudes. According to the template-update perspective, TTI effects observable in the P300 reflect the updating of stimulus-templates in working memory (WM). The current study explored whether young adults' memory-task ability could predict TTI effects in P300. EEG activity was recorded from 50 university students (aged 18-25 years) while they completed an auditory equiprobable Go/NoGo task with manipulations of TTIs. Participants also completed a CogState® battery and were sorted according to their WM score. ERPs were analysed using a temporal PCA. Two P300 components, P3b and the Slow Wave, were found to linearly increase in amplitude to longer TTIs. This TTI effect differed between groups only for the P3b component: The high WM group showed a steeper increase in P3b amplitude with TTI than the low WM group. These results suggest that TTI effects in P300 are directly related to WM processes.

  8. Comparison of Performance Predictions for New Low-Thrust Trajectory Tools

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Kos, Larry; Hopkins, Randall; Crane, Tracie

    2006-01-01

    Several low thrust trajectory optimization tools have been developed over the last 3.5 years by the Low Thrust Trajectory Tools development team. This toolset includes both low- to medium-fidelity and high-fidelity tools, which allow the analyst to quickly research a wide mission trade space and perform advanced mission design. These tools were tested using a set of reference trajectories that exercised each tool's unique capabilities. This paper compares the performance predictions of the various tools against several of the reference trajectories. The intent is to verify agreement between the high fidelity tools and to quantify the performance prediction differences between tools of different fidelity levels.

  9. MultiMiTar: a novel multi objective optimization based miRNA-target prediction method.

    PubMed

    Mitra, Ramkrishna; Bandyopadhyay, Sanghamitra

    2011-01-01

    Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to the lack of a gold standard of negative examples, miRNA-targeting site context-specific relevant features, and an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for the target transcripts that are translationally repressed at the protein level. In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multi-objective metaheuristic-based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context-specific features. The features are selected by using a novel feature selection technique, AMOSA-SVM, that integrates the multi-objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from -0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list that makes Multi
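
    For reference, the two headline metrics quoted above can be computed from a 2x2 confusion matrix as below; "average class-wise accuracy" is read here as the mean of sensitivity and specificity, which may differ slightly from the paper's exact definition, and the counts are illustrative.

    ```python
    import numpy as np

    def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
        """Matthews correlation coefficient from a 2x2 confusion matrix."""
        denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return (tp * tn - fp * fn) / denom if denom else 0.0

    def average_classwise_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
        """Mean of sensitivity and specificity (balanced accuracy)."""
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return 0.5 * (sensitivity + specificity)

    # Illustrative counts, not the MultiMiTar test set.
    print(mcc(80, 70, 30, 20), average_classwise_accuracy(80, 70, 30, 20))
    ```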

  10. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines

    PubMed Central

    2014-01-01

    Background It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. Results We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231
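
    A minimal sketch of converting predicted per-residue distance deviations into a single global score, using the common S-score transform; the d0 constant below is a typical choice and not necessarily the one SMOQ uses.

    ```python
    def global_quality(deviations_angstrom: list[float], d0: float = 3.8) -> float:
        """Convert predicted per-residue distance deviations into a single score in (0, 1],
        using the common transform 1 / (1 + (d_i / d0)^2) averaged over residues.
        The d0 constant here is a typical choice, not necessarily SMOQ's."""
        scores = [1.0 / (1.0 + (d / d0) ** 2) for d in deviations_angstrom]
        return sum(scores) / len(scores)

    # Illustrative per-residue deviations (in Angstroms).
    print(round(global_quality([1.2, 0.8, 3.5, 6.0, 2.637]), 3))
    ```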

  11. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines.

    PubMed

    Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin

    2014-04-28

    It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.

  12. Bitter or not? BitterPredict, a tool for predicting taste from chemical structure.

    PubMed

    Dagan-Wiener, Ayana; Nissim, Ido; Ben Abu, Natalie; Borgonovo, Gigliola; Bassoli, Angela; Niv, Masha Y

    2017-09-21

    Bitter taste is an innately aversive taste modality that is considered to protect animals from consuming toxic compounds. Yet, bitterness is not always noxious and some bitter compounds have beneficial effects on health. Hundreds of bitter compounds have been reported (and are accessible via the BitterDB, http://bitterdb.agri.huji.ac.il/dbbitter.php), but numerous additional bitter molecules are still unknown. The dramatic chemical diversity of bitterants makes bitterness prediction a difficult task. Here we present a machine learning classifier, BitterPredict, which predicts whether a compound is bitter or not, based on its chemical structure. BitterDB was used as the positive set, and non-bitter molecules were gathered from the literature to create the negative set. Adaptive Boosting (AdaBoost), a decision-tree-based machine-learning algorithm, was applied to molecules that were represented using physicochemical and ADME/Tox descriptors. BitterPredict correctly classifies over 80% of the compounds in the hold-out test set, and 70-90% of the compounds in three independent external sets and in sensory test validation, providing a quick and reliable tool for classifying large sets of compounds into bitter and non-bitter groups. BitterPredict suggests that about 40% of random molecules, and large portions of clinical and experimental drugs (66%) and of natural products (77%), are bitter.
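
    A small sketch of the modelling step: an AdaBoost classifier (whose default base learner is a depth-1 decision tree) trained on a descriptor matrix. The features and labels below are synthetic stand-ins for the physicochemical and ADME/Tox descriptors used in the study.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder descriptor matrix: rows = molecules, columns = descriptors;
    # labels: 1 = bitter, 0 = non-bitter. Real features would be computed with a
    # cheminformatics package, as in the study above.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 20))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
    ```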

  13. Comparison of the Nosocomial Pneumonia Mortality Prediction (NPMP) model with standard mortality prediction tools.

    PubMed

    Srinivasan, M; Shetty, N; Gadekari, S; Thunga, G; Rao, K; Kunhikatta, V

    2017-07-01

    Severity or mortality prediction of nosocomial pneumonia could aid in the effective triage of patients and assist physicians. To compare various severity assessment scoring systems for predicting intensive care unit (ICU) mortality in nosocomial pneumonia patients. A prospective cohort study was conducted in a tertiary care university-affiliated hospital in Manipal, India. One hundred patients with nosocomial pneumonia, admitted to the ICUs, who developed pneumonia >48 h after admission, were included. The Nosocomial Pneumonia Mortality Prediction (NPMP) model, developed in our hospital, was compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II), Mortality Probability Model II (MPM72-II), Simplified Acute Physiology Score II (SAPS II), Multiple Organ Dysfunction Score (MODS), Sequential Organ Failure Assessment (SOFA), Clinical Pulmonary Infection Score (CPIS), and Ventilator-Associated Pneumonia Predisposition, Insult, Response, Organ dysfunction (VAP-PIRO) scores. Data and clinical variables were collected on the day of pneumonia diagnosis. The outcome for the study was ICU mortality. The sensitivity and specificity of the various scoring systems were analysed by plotting receiver operating characteristic (ROC) curves and computing the area under the curve for each of the mortality prediction tools. NPMP, APACHE II, SAPS II, MPM72-II, SOFA, and VAP-PIRO were found to have similar and acceptable discrimination power as assessed by the area under the ROC curve. The AUC values for the above scores ranged from 0.735 to 0.762. CPIS and MODS showed the least discrimination. NPMP is a specific tool to predict mortality in nosocomial pneumonia and is comparable to other standard scores.
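
    The discrimination comparison above boils down to computing an ROC AUC per scoring system; a minimal sketch with synthetic data (not the study's cohort) is shown below.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Illustrative data: ICU mortality (1 = died) and two competing severity scores.
    rng = np.random.default_rng(1)
    mortality = rng.integers(0, 2, size=100)
    score_a = mortality * 1.5 + rng.normal(size=100)   # better-separating score
    score_b = mortality * 0.3 + rng.normal(size=100)   # weaker score

    for name, score in [("score A", score_a), ("score B", score_b)]:
        print(name, "AUC =", round(roc_auc_score(mortality, score), 3))
    ```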

  14. Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets

    ERIC Educational Resources Information Center

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…

  15. Simple tool for prediction of parotid gland sparing in intensity-modulated radiation therapy.

    PubMed

    Gensheimer, Michael F; Hummel-Kramer, Sharon M; Cain, David; Quang, Tony S

    2015-01-01

    Sparing one or both parotid glands is a key goal when planning head and neck cancer radiation treatment. If the planning target volume (PTV) overlaps one or both parotid glands substantially, it may not be possible to achieve adequate gland sparing. This finding results in physicians revising their PTV contours after an intensity-modulated radiation therapy (IMRT) plan has been run and reduces workflow efficiency. We devised a simple formula for predicting mean parotid gland dose from the overlap of the parotid gland and isotropically expanded PTV contours. We tested the tool using 44 patients from 2 institutions and found agreement between predicted and actual parotid gland doses (mean absolute error = 5.3 Gy). This simple method could increase treatment planning efficiency by improving the chance that the first plan presented to the physician will have optimal parotid gland sparing.
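
    A hedged sketch of the kind of simple predictor described above: a least-squares linear fit of mean parotid dose against the fraction of the gland overlapped by the expanded PTV. The training numbers are made up for illustration, and the published formula may differ in form.

    ```python
    import numpy as np

    # Hypothetical training data: fraction of the parotid volume inside the isotropically
    # expanded PTV, and the planned mean parotid dose (Gy). Real values come from past plans.
    overlap_fraction = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70])
    mean_dose_gy     = np.array([14.0, 18.0, 25.0, 34.0, 43.0, 55.0])

    # Least-squares linear fit: predicted mean dose = slope * overlap + intercept.
    slope, intercept = np.polyfit(overlap_fraction, mean_dose_gy, 1)

    def predict_mean_parotid_dose(overlap: float) -> float:
        """Predict mean parotid dose (Gy) from the overlap fraction for a new plan."""
        return slope * overlap + intercept

    print(round(predict_mean_parotid_dose(0.30), 1), "Gy")
    ```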

  16. Simple tool for prediction of parotid gland sparing in intensity-modulated radiation therapy

    SciT

    Gensheimer, Michael F.; Hummel-Kramer, Sharon M., E-mail: sharonhummel@comcast.net; Cain, David

    Sparing one or both parotid glands is a key goal when planning head and neck cancer radiation treatment. If the planning target volume (PTV) overlaps one or both parotid glands substantially, it may not be possible to achieve adequate gland sparing. This finding results in physicians revising their PTV contours after an intensity-modulated radiation therapy (IMRT) plan has been run and reduces workflow efficiency. We devised a simple formula for predicting mean parotid gland dose from the overlap of the parotid gland and isotropically expanded PTV contours. We tested the tool using 44 patients from 2 institutions and found agreement between predicted and actual parotid gland doses (mean absolute error = 5.3 Gy). This simple method could increase treatment planning efficiency by improving the chance that the first plan presented to the physician will have optimal parotid gland sparing.

  17. Personalized Prediction of Glaucoma Progression Under Different Target Intraocular Pressure Levels Using Filtered Forecasting Methods.

    PubMed

    Kazemian, Pooyan; Lavieri, Mariel S; Van Oyen, Mark P; Andrews, Chris; Stein, Joshua D

    2018-04-01

    To generate personalized forecasts of how patients with open-angle glaucoma (OAG) experience disease progression at different intraocular pressure (IOP) levels to aid clinicians with setting personalized target IOPs. Secondary analyses using longitudinal data from 2 randomized controlled trials. Participants with moderate or advanced OAG from the Collaborative Initial Glaucoma Treatment Study (CIGTS) or the Advanced Glaucoma Intervention Study (AGIS). By using perimetric and tonometric data from trial participants, we developed and validated Kalman Filter (KF) models for fast-, slow-, and nonprogressing patients with OAG. The KF can generate personalized and dynamically updated forecasts of OAG progression under different target IOP levels. For each participant, we determined how mean deviation (MD) would change if the patient maintains his/her IOP at 1 of 7 levels (6, 9, 12, 15, 18, 21, or 24 mmHg) over the next 5 years. We also model and predict changes to MD over the same time horizon if IOP is increased or decreased by 3, 6, and 9 mmHg from the level attained in the trials. Personalized estimates of the change in MD under different target IOP levels. A total of 571 participants (mean age, 64.2 years; standard deviation, 10.9) were followed for a mean of 6.5 years (standard deviation, 2.8). Our models predicted that, on average, fast progressors would lose 2.1, 6.7, and 11.2 decibels (dB) MD under target IOPs of 6, 15, and 24 mmHg, respectively, over 5 years. In contrast, on average, slow progressors would lose 0.8, 2.1, and 4.1 dB MD under the same target IOPs and time frame. When using our tool to quantify the OAG progression dynamics for all 571 patients, we found no statistically significant differences over 5 years between progression for black versus white, male versus female, and CIGTS versus AGIS participants under different target IOPs (P > 0.05 for all). To our knowledge, this is the first clinical decision-making tool that generates personalized
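
    A minimal sketch of the filtering idea, assuming a two-state (MD, MD slope) Kalman filter with a hypothetical IOP-dependent drift term; the published models are trained on CIGTS/AGIS data and are considerably richer than this.

    ```python
    import numpy as np

    DT = 0.5                                       # years between visits
    F = np.array([[1.0, DT], [0.0, 1.0]])          # state transition (MD, MD slope)
    H = np.array([[1.0, 0.0]])                     # we observe MD only
    Q = np.diag([0.05, 0.01])                      # process noise (assumed)
    R = np.array([[1.0]])                          # perimetry measurement noise (assumed)
    IOP_SLOPE_PER_MMHG = -0.03                     # hypothetical extra dB/year of loss per mmHg

    def kf_step(x, P, z, iop_mmhg, target_iop=15.0):
        """One predict/update cycle; pass z=None to forecast without a new visual field."""
        drift = np.array([0.0, IOP_SLOPE_PER_MMHG * (iop_mmhg - target_iop)])
        x_pred = F @ x + drift * DT
        P_pred = F @ P @ F.T + Q
        if z is not None:
            y = np.array([z]) - H @ x_pred
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_pred = x_pred + K @ y
            P_pred = (np.eye(2) - K @ H) @ P_pred
        return x_pred, P_pred

    x, P = np.array([-8.0, -0.4]), np.eye(2)       # initial MD (dB) and slope (dB/yr)
    for md_obs, iop in [(-8.3, 18), (-8.9, 18), (None, 12), (None, 12)]:
        x, P = kf_step(x, P, md_obs, iop)
    print("forecast MD after 2 years:", round(x[0], 2), "dB")
    ```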

  18. Development and Application of Predictive Tools for MHD Stability Limits in Tokamaks

    SciT

    Brennan, Dylan; Miller, G. P.

    This is a project to develop and apply analytic and computational tools to answer physics questions relevant to the onset of non-ideal magnetohydrodynamic (MHD) instabilities in toroidal magnetic confinement plasmas. The focused goal of the research is to develop predictive tools for these instabilities, including an inner layer solution algorithm, a resistive wall with control coils, and energetic particle effects. The production phase compares studies of instabilities in such systems using analytic techniques, PEST-III and NIMROD. Two important physics puzzles are targeted as guiding thrusts for the analyses. The first is to form an accurate description of the physics determining whether the resistive wall mode or a tearing mode will appear first as β is increased at low rotation and low error fields in DIII-D. The second is to understand the physical mechanism behind recent NIMROD results indicating strong damping and stabilization from energetic particle effects on linear resistive modes. The work seeks to develop a highly relevant predictive tool for ITER, advance the theoretical description of this physics in general, and analyze these instabilities in experiments such as ASDEX Upgrade, DIII-D, JET, JT-60U and NSTX. The awardee on this grant is the University of Tulsa. The research efforts are supervised principally by Dr. Brennan. Support is included for two graduate students, and a strong collaboration with Dr. John M. Finn of LANL. The work includes several ongoing collaborations with General Atomics, PPPL, and the NIMROD team, among others.

  19. Binding site and affinity prediction of general anesthetics to protein targets using docking.

    PubMed

    Liu, Renyu; Perez-Aguilar, Jose Manuel; Liang, David; Saven, Jeffery G

    2012-05-01

    The protein targets for general anesthetics remain unclear. A tool to predict anesthetic binding for potential binding targets is needed. In this study, we explored whether a computational method, AutoDock, could serve as such a tool. High-resolution crystal data of water-soluble proteins (cytochrome C, apoferritin, and human serum albumin), and a membrane protein (a pentameric ligand-gated ion channel from Gloeobacter violaceus [GLIC]) were used. Isothermal titration calorimetry (ITC) experiments were performed to determine anesthetic affinity in solution conditions for apoferritin. Docking calculations were performed using DockingServer with the Lamarckian genetic algorithm and the Solis and Wets local search method (http://www.dockingserver.com/web). Twenty general anesthetics were docked into apoferritin. The predicted binding constants were compared with those obtained from ITC experiments for potential correlations. In the case of apoferritin, details of the binding site and their interactions were compared with recent cocrystallization data. Docking calculations for 6 general anesthetics currently used in clinical settings (isoflurane, sevoflurane, desflurane, halothane, propofol, and etomidate) with known 50% effective concentration (EC50) values were also performed in all tested proteins. The binding constants derived from docking experiments were compared with known EC50 values and octanol/water partition coefficients for the 6 general anesthetics. All 20 general anesthetics docked unambiguously into the anesthetic binding site identified in the crystal structure of apoferritin. The binding constants for 20 anesthetics obtained from the docking calculations correlate significantly with those obtained from ITC experiments (P = 0.04). In the case of GLIC, the identified anesthetic binding sites in the crystal structure are among the docking predicted binding sites, but not the top ranked site. Docking calculations suggest a most probable binding site
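
    A small sketch of how docking energies can be placed on the same scale as ITC measurements: convert each predicted binding free energy to a dissociation constant via Kd = exp(dG/RT) and correlate the log Kd values. The numbers below are illustrative, not the study's data.

    ```python
    import math
    from scipy.stats import pearsonr

    R_KCAL = 0.0019872  # gas constant, kcal/(mol*K)

    def kd_from_binding_energy(delta_g_kcal_per_mol: float, temp_k: float = 298.15) -> float:
        """Dissociation constant implied by a binding free energy: Kd = exp(dG / RT)."""
        return math.exp(delta_g_kcal_per_mol / (R_KCAL * temp_k))

    # Illustrative docking energies (kcal/mol) and ITC-derived Kd values (M).
    dock_dg = [-5.1, -4.3, -6.0, -3.8, -4.9]
    itc_kd = [2.1e-4, 8.0e-4, 4.5e-5, 1.9e-3, 2.8e-4]

    dock_kd = [kd_from_binding_energy(dg) for dg in dock_dg]
    r, p = pearsonr([math.log10(k) for k in dock_kd], [math.log10(k) for k in itc_kd])
    print(f"log-Kd correlation r = {r:.2f}, p = {p:.3f}")
    ```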

  20. Binding Site and Affinity Prediction of General Anesthetics to Protein Targets Using Docking

    PubMed Central

    Liu, Renyu; Perez-Aguilar, Jose Manuel; Liang, David; Saven, Jeffery G.

    2012-01-01

    Background The protein targets for general anesthetics remain unclear. A tool to predict anesthetic binding for potential binding targets is needed. In this study, we explore whether a computational method, AutoDock, could serve as such a tool. Methods High-resolution crystal data of water soluble proteins (cytochrome C, apoferritin and human serum albumin), and a membrane protein (a pentameric ligand-gated ion channel from Gloeobacter violaceus, GLIC) were used. Isothermal titration calorimetry (ITC) experiments were performed to determine anesthetic affinity in solution conditions for apoferritin. Docking calculations were performed using DockingServer with the Lamarckian genetic algorithm and the Solis and Wets local search method (https://www.dockingserver.com/web). Twenty general anesthetics were docked into apoferritin. The predicted binding constants are compared with those obtained from ITC experiments for potential correlations. In the case of apoferritin, details of the binding site and their interactions were compared with recent co-crystallization data. Docking calculations for six general anesthetics currently used in clinical settings (isoflurane, sevoflurane, desflurane, halothane, propofol, and etomidate) with known EC50 were also performed in all tested proteins. The binding constants derived from docking experiments were compared with known EC50s and octanol/water partition coefficients for the six general anesthetics. Results All 20 general anesthetics docked unambiguously into the anesthetic binding site identified in the crystal structure of apoferritin. The binding constants for 20 anesthetics obtained from the docking calculations correlate significantly with those obtained from ITC experiments (p=0.04). In the case of GLIC, the identified anesthetic binding sites in the crystal structure are among the docking predicted binding sites, but not the top ranked site. Docking calculations suggest a most probable binding site located in the

  1. Target and Tissue Selectivity Prediction by Integrated Mechanistic Pharmacokinetic-Target Binding and Quantitative Structure Activity Modeling.

    PubMed

    Vlot, Anna H C; de Witte, Wilhelmus E A; Danhof, Meindert; van der Graaf, Piet H; van Westen, Gerard J P; de Lange, Elizabeth C M

    2017-12-04

    Selectivity is an important attribute of effective and safe drugs, and prediction of in vivo target and tissue selectivity would likely improve drug development success rates. However, a lack of understanding of the underlying (pharmacological) mechanisms and of directly applicable predictive methods complicates the prediction of selectivity. We explore the value of combining physiologically based pharmacokinetic (PBPK) modeling with quantitative structure-activity relationship (QSAR) modeling to predict the influence of the target dissociation constant (KD) and the target dissociation rate constant on target and tissue selectivity. The KD values of CB1 ligands in the ChEMBL database are predicted by QSAR random forest (RF) modeling for the CB1 receptor and known off-targets (TRPV1, mGlu5, 5-HT1a). Of these CB1 ligands, rimonabant, CP-55940, and Δ8-tetrahydrocannabinol, one of the active ingredients of cannabis, were selected for simulations of target occupancy for CB1, TRPV1, mGlu5, and 5-HT1a in three brain regions, to illustrate the principles of the combined PBPK-QSAR modeling. Our combined PBPK and target binding modeling demonstrated that the optimal values of KD and koff for target and tissue selectivity were dependent on target concentration and tissue distribution kinetics. Interestingly, if the target concentration is high and the perfusion of the target site is low, the optimal KD value is often not the lowest KD value, suggesting that optimization towards high drug-target affinity can decrease the benefit-risk ratio. The presented integrative structure-pharmacokinetic-pharmacodynamic modeling provides an improved understanding of tissue and target selectivity.
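
    The dependence on target concentration noted above is easy to see in the equilibrium binding equations; the sketch below computes occupancy both in the usual dilute-target limit and with explicit target depletion. Concentrations and KD values are arbitrary illustrative numbers in consistent units.

    ```python
    import numpy as np

    def equilibrium_occupancy(free_drug_conc: np.ndarray, kd: float) -> np.ndarray:
        """Fraction of target bound at equilibrium in the dilute-target limit: C / (C + KD)."""
        return free_drug_conc / (free_drug_conc + kd)

    def occupancy_with_target_depletion(total_drug: float, total_target: float, kd: float) -> float:
        """Occupancy when the target concentration is not negligible relative to the drug
        (quadratic solution of the binding equilibrium), relevant to the high-target,
        low-perfusion case discussed above."""
        b = total_drug + total_target + kd
        bound = (b - np.sqrt(b * b - 4.0 * total_drug * total_target)) / 2.0
        return bound / total_target

    print(equilibrium_occupancy(np.array([1.0, 10.0, 100.0]), kd=10.0))
    print(round(occupancy_with_target_depletion(total_drug=5.0, total_target=50.0, kd=10.0), 3))
    ```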

  2. lncRNATargets: A platform for lncRNA target prediction based on nucleic acid thermodynamics.

    PubMed

    Hu, Ruifeng; Sun, Xiaobo

    2016-08-01

    Many studies have shown that long noncoding RNAs (lncRNAs) perform various functions in critical biological processes. Advanced experimental and computational technologies allow access to more information on lncRNAs. Determining the functions and action mechanisms of these RNAs on a large scale is urgently needed. We provide lncRNATargets, a web-based platform for lncRNA target prediction based on nucleic acid thermodynamics. The nearest-neighbor (NN) model was used to calculate binding free energy. The main principle of the NN model for nucleic acids is that the identity and orientation of neighboring base pairs determine the stability of a given base pair. lncRNATargets features the following options: setting of a specific temperature, which allows use not only for humans but also for other animals or plants; processing of all lncRNAs in high throughput without RNA size limitation, which is superior to any other existing tool; and a web-based, user-friendly interface with colored result displays that allows easy access for nonskilled computer operators and provides a better understanding of the results. This technique provides an accurate calculation of the binding free energy of lncRNA-target dimers to predict whether these structures are well targeted together. lncRNATargets provides high-accuracy calculations, and this user-friendly program is available for free at http://www.herbbol.org:8001/lrt/.
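
    A minimal sketch of a nearest-neighbor free-energy sum for a fully paired duplex, the kind of calculation underlying the platform described above; the stacking and initiation parameters here are illustrative placeholders rather than a published parameter set.

    ```python
    # Minimal nearest-neighbor (NN) free-energy sum for a perfectly complementary RNA duplex.
    # The stacking parameters below are placeholders for illustration; a real calculation
    # uses the full published NN tables (including terminal corrections) at the
    # temperature of interest.
    NN_DG_37C = {  # kcal/mol per dinucleotide stack (illustrative values)
        "AA": -0.9, "AU": -1.1, "UA": -1.3, "AC": -2.1, "CA": -2.0,
        "AG": -1.7, "GA": -1.8, "CC": -3.3, "CG": -2.4, "GC": -3.4,
        "CU": -2.1, "UC": -1.8, "GG": -3.3, "GU": -2.1, "UG": -1.4, "UU": -0.9,
    }
    INITIATION_DG = 4.1  # illustrative duplex-initiation penalty

    def duplex_free_energy(sequence: str) -> float:
        """Sum NN stacking terms along one strand of a fully paired duplex."""
        dg = INITIATION_DG
        for i in range(len(sequence) - 1):
            dg += NN_DG_37C.get(sequence[i:i + 2].upper(), 0.0)
        return dg

    print(round(duplex_free_energy("AUGGCUACGU"), 2))
    ```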

  3. Automated antibody structure prediction using Accelrys tools: Results and best practices

    PubMed Central

    Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa

    2014-01-01

    We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions using either a single, chimeric or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the models submitted shows that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models shows that the models are of quite high quality, with local geometry assessment scores similar to those of the target X-ray structures. PMID:24833271

  4. In Silico Screening Based on Predictive Algorithms as a Design Tool for Exon Skipping Oligonucleotides in Duchenne Muscular Dystrophy

    PubMed Central

    Echigoya, Yusuke; Mouly, Vincent; Garcia, Luis; Yokota, Toshifumi; Duddy, William

    2015-01-01

    The use of antisense ‘splice-switching’ oligonucleotides to induce exon skipping represents a potential therapeutic approach to various human genetic diseases. It has achieved greatest maturity in exon skipping of the dystrophin transcript in Duchenne muscular dystrophy (DMD), for which several clinical trials are completed or ongoing, and a large body of data exists describing tested oligonucleotides and their efficacy. The rational design of an exon skipping oligonucleotide involves the choice of an antisense sequence, usually between 15 and 32 nucleotides, targeting the exon that is to be skipped. Although parameters describing the target site can be computationally estimated and several have been identified to correlate with efficacy, methods to predict efficacy are limited. Here, an in silico pre-screening approach is proposed, based on predictive statistical modelling. Previous DMD data were compiled together and, for each oligonucleotide, some 60 descriptors were considered. Statistical modelling approaches were applied to derive algorithms that predict exon skipping for a given target site. We confirmed (1) the binding energetics of the oligonucleotide to the RNA, and (2) the distance in bases of the target site from the splice acceptor site, as the two most predictive parameters, and we included these and several other parameters (while discounting many) into an in silico screening process, based on their capacity to predict high or low efficacy in either phosphorodiamidate morpholino oligomers (89% correctly predicted) and/or 2′-O-methyl RNA oligonucleotides (76% correctly predicted). Predictions correlated strongly with in vitro testing for sixteen de novo PMO sequences targeting various positions on DMD exons 44 (R² = 0.89) and 53 (R² = 0.89), one of which represents a potential novel candidate for clinical trials. We provide these algorithms together with a computational tool that facilitates screening to predict exon skipping efficacy at each

  5. Primer-BLAST: A tool to design target-specific primers for polymerase chain reaction

    PubMed Central

    2012-01-01

    Background Choosing appropriate primers is probably the single most important factor affecting the polymerase chain reaction (PCR). Specific amplification of the intended target requires that primers do not have matches to other targets in certain orientations and within certain distances that allow undesired amplification. The process of designing specific primers typically involves two stages. First, the primers flanking regions of interest are generated either manually or using software tools; then they are searched against an appropriate nucleotide sequence database using tools such as BLAST to examine the potential targets. However, the latter is not an easy process as one needs to examine many details between primers and targets, such as the number and the positions of matched bases, the primer orientations and distance between forward and reverse primers. The complexity of such analysis usually makes this a time-consuming and very difficult task for users, especially when the primers have a large number of hits. Furthermore, although the BLAST program has been widely used for primer target detection, it is in fact not an ideal tool for this purpose as BLAST is a local alignment algorithm and does not necessarily return complete match information over the entire primer range. Results We present a new software tool called Primer-BLAST to alleviate the difficulty in designing target-specific primers. This tool combines BLAST with a global alignment algorithm to ensure a full primer-target alignment and is sensitive enough to detect targets that have a significant number of mismatches to primers. Primer-BLAST allows users to design new target-specific primers in one step as well as to check the specificity of pre-existing primers. Primer-BLAST also supports placing primers based on exon/intron locations and excluding single nucleotide polymorphism (SNP) sites in primers. Conclusions We describe a robust and fully implemented general purpose primer design tool

  6. AAVSO Target Tool: A Web-Based Service for Tracking Variable Star Observations (Abstract)

    NASA Astrophysics Data System (ADS)

    Burger, D.; Stassun, K. G.; Barnes, C.; Kafka, S.; Beck, S.; Li, K.

    2018-06-01

    (Abstract only) The AAVSO Target Tool is a web-based interface for bringing stars in need of observation to the attention of AAVSO's network of amateur and professional astronomers. The site currently tracks over 700 targets of interest, collecting data from them on a regular basis from AAVSO's servers and sorting them based on priority. While the target tool does not require a login, users can obtain visibility times for each target by signing up and entering a telescope location. Other key features of the site include filtering by AAVSO observing section, sorting by different variable types, formatting the data for printing, and exporting the data to a CSV file. The AAVSO Target Tool builds upon seven years of experience developing web applications for astronomical data analysis, most notably on Filtergraph (Burger, D., et al. 2013, Astronomical Data Analysis Software and Systems XXII, Astronomical Society of the Pacific, San Francisco, 399), and is built using the web2py web framework based on the python programming language. The target tool is available at http://filtergraph.com/aavso.

  7. Risk Prediction Tool for Medical Appointment Attendance Among HIV-Infected Persons with Unsuppressed Viremia

    PubMed Central

    Person, Anna; Rebeiro, Peter; Kheshti, Asghar; Raffanti, Stephen; Pettit, April

    2015-01-01

    Abstract Successful treatment of HIV infection requires regular clinical follow-up. A previously published risk-prediction tool (RPT) utilizing data from the electronic health record (EHR) including medication adherence, previous appointment attendance, substance abuse, recent CD4+ count, prior antiretroviral therapy (ART) exposure, prior treatment failure, and recent HIV-1 viral load (VL) has been shown to predict virologic failure at 1 year. If this same tool could be used to predict the more immediate event of appointment attendance, high-risk patients could be identified and interventions could be targeted to improve this outcome. We conducted an observational cohort study at the Vanderbilt Comprehensive Care Clinic from August 2013 through March 2014. Patients with routine medical appointments and most recent HIV-1 VL >200 copies/mL were included. Risk scores for a modified RPT were calculated based on data from the EHR. Odds ratios (OR) for missing the next appointment were estimated using multivariable logistic regression. Among 510 persons included, median age was 39 years, 74% were male, 55% were black, median CD4+ count was 327 cells/mm3 [Interquartile Range (IQR): 142–560], and median HIV-1 VL was 21,818 copies/mL (IQR: 2,030–69,597). Medium [OR 3.95, 95% confidence interval (CI) 2.08–7.50, p-value<0.01] and high (OR 9.55, 95% CI 4.31–21.16, p-value<0.01) vs. low RPT risk scores were independently associated with missing the next appointment. RPT scores, constructed using readily available data, allow for risk-stratification of HIV medical appointment non-attendance and could support targeting limited resources to improve appointment adherence in groups most at-risk of poor HIV outcomes. PMID:25746288

  8. DIANA-microT web server: elucidating microRNA functions through target prediction.

    PubMed

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as the gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that helps in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. In this comparison, DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

  9. Predicting Great Lakes fish yields: tools and constraints

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  10. Prognostic and Prediction Tools in Bladder Cancer: A Comprehensive Review of the Literature.

    PubMed

    Kluth, Luis A; Black, Peter C; Bochner, Bernard H; Catto, James; Lerner, Seth P; Stenzl, Arnulf; Sylvester, Richard; Vickers, Andrew J; Xylinas, Evanguelos; Shariat, Shahrokh F

    2015-08-01

    This review focuses on risk assessment and prediction tools for bladder cancer (BCa). To review the current knowledge on risk assessment and prediction tools to enhance clinical decision making and counseling of patients with BCa. A literature search in English was performed using PubMed in July 2013. Relevant risk assessment and prediction tools for BCa were selected. More than 1600 publications were retrieved. Special attention was given to studies that investigated the clinical benefit of a prediction tool. Most prediction tools for BCa focus on the prediction of disease recurrence and progression in non-muscle-invasive bladder cancer or disease recurrence and survival after radical cystectomy. Although these tools are helpful, recent prediction tools aim to address a specific clinical problem, such as the prediction of organ-confined disease and lymph node metastasis to help identify patients who might benefit from neoadjuvant chemotherapy. Although a large number of prediction tools have been reported in recent years, many of them lack external validation. Few studies have investigated the clinical utility of any given model as measured by its ability to improve clinical decision making. There is a need for novel biomarkers to improve the accuracy and utility of prediction tools for BCa. Decision tools hold the promise of facilitating the shared decision process, potentially improving clinical outcomes for BCa patients. Prediction models need external validation and assessment of clinical utility before they can be incorporated into routine clinical care. We looked at models that aim to predict outcomes for patients with bladder cancer (BCa). We found a large number of prediction models that hold the promise of facilitating treatment decisions for patients with BCa. However, many models are missing confirmation in a different patient cohort, and only a few studies have tested the clinical utility of any given model as measured by its ability to improve clinical decision making.

  11. Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS): A web-based tool for addressing the challenges of cross-species extrapolation of chemical toxicity

    EPA Science Inventory

    Conservation of a molecular target across species can be used as a line-of-evidence to predict the likelihood of chemical susceptibility. The web-based Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool was developed to simplify, streamline, and quantitat...

  12. Predicting pathogen growth during short-term temperature abuse of raw pork, beef, and poultry products: use of an isothermal-based predictive tool.

    PubMed

    Ingham, Steven C; Fanslau, Melody A; Burnham, Greg M; Ingham, Barbara H; Norback, John P; Schaffner, Donald W

    2007-06-01

    A computer-based tool (available at: www.wisc.edu/foodsafety/meatresearch) was developed for predicting pathogen growth in raw pork, beef, and poultry meat. The tool, THERM (temperature history evaluation for raw meats), predicts the growth of pathogens in pork and beef (Escherichia coli O157:H7, Salmonella serovars, and Staphylococcus aureus) and on poultry (Salmonella serovars and S. aureus) during short-term temperature abuse. The model was developed as follows: 25-g samples of raw ground pork, beef, and turkey were inoculated with a five-strain cocktail of the target pathogen(s) and held at isothermal temperatures from 10 to 43.3 degrees C. Log CFU per sample data were obtained for each pathogen and used to determine lag-phase duration (LPD) and growth rate (GR) by DMFit software. The LPD and GR were used to develop the THERM predictive tool, into which chronological time and temperature data for raw meat processing and storage are entered. The THERM tool then predicts a delta log CFU value for the desired pathogen-product combination. The accuracy of THERM was tested in 20 different inoculation experiments that involved multiple products (coarse-ground beef, skinless chicken breast meat, turkey scapula meat, and ground turkey) and temperature-abuse scenarios. With the time-temperature data from each experiment, THERM accurately predicted pathogen growth and no growth (with growth defined as delta log CFU > 0.3) in 67, 85, and 95% of the experiments with E. coli O157:H7, Salmonella serovars, and S. aureus, respectively, and yielded fail-safe predictions in the remaining experiments. We conclude that THERM is a useful tool for qualitatively predicting pathogen behavior (growth and no growth) in raw meats. Potential applications include evaluating process deviations and critical limits under the HACCP (hazard analysis critical control point) system.
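
    The calculation THERM performs can be pictured with a minimal sketch like the one below: lag-phase duration and growth rate functions of temperature (the forms and parameters here are invented placeholders, not the published DMFit fits) are stepped through a chronological time-temperature history to accumulate a predicted delta log CFU once the lag phase has been consumed.

    # Minimal THERM-style sketch (not the published tool): lpd() and gr() are
    # illustrative placeholder fits of lag-phase duration and growth rate.
    def lpd(temp_c):          # lag-phase duration in hours at temp_c (assumed form)
        return max(0.5, 20.0 - 0.4 * temp_c)

    def gr(temp_c):           # growth rate in log CFU per hour at temp_c (assumed form)
        return max(0.0, 0.02 * (temp_c - 7.0))

    def delta_log_cfu(history):
        """history: list of (duration_h, temp_c) segments in chronological order."""
        lag_consumed = 0.0    # fraction of the lag phase used up so far
        growth = 0.0          # accumulated delta log CFU
        for duration, temp in history:
            if lag_consumed < 1.0:
                # Portion of this segment spent finishing the lag phase
                lag_needed = (1.0 - lag_consumed) * lpd(temp)
                in_lag = min(duration, lag_needed)
                lag_consumed += in_lag / lpd(temp)
                duration -= in_lag
            growth += gr(temp) * duration
        return growth

    abuse = [(2.0, 10.0), (6.0, 25.0), (4.0, 35.0)]   # hours at each temperature
    print(f"predicted delta log CFU: {delta_log_cfu(abuse):.2f}")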

  13. Drug Target Validation Methods in Malaria - Protein Interference Assay (PIA) as a Tool for Highly Specific Drug Target Validation.

    PubMed

    Meissner, Kamila A; Lunev, Sergey; Wang, Yuan-Ze; Linzke, Marleen; de Assis Batista, Fernando; Wrenger, Carsten; Groves, Matthew R

    2017-01-01

    The validation of drug targets in malaria and other human diseases remains a highly difficult and laborious process. In the vast majority of cases, highly specific small molecule tools to inhibit a protein's function in vivo are simply not available. Additionally, the use of genetic tools in the analysis of malarial pathways is challenging. These issues result in difficulties in specifically modulating a hypothetical drug target's function in vivo. The current "toolbox" of various methods and techniques to identify a protein's function in vivo remains very limited and there is a pressing need for expansion. New approaches are urgently required to support target validation in the drug discovery process. Oligomerisation is the natural assembly of multiple copies of a single protein into one object and this self-assembly is present in more than half of all protein structures. Thus, oligomerisation plays a central role in the generation of functional biomolecules. A key feature of oligomerisation is that the oligomeric interfaces between the individual parts of the final assembly are highly specific. However, these interfaces have not yet been systematically explored or exploited to dissect biochemical pathways in vivo. This mini review will describe the current state of the antimalarial toolset as well as the potentially druggable malarial pathways. A specific focus is drawn to the initial efforts to exploit oligomerisation surfaces in drug target validation. As an alternative to conventional methods, the Protein Interference Assay (PIA) can be used for specific distortion of the target protein function and pathway assessment in vivo.

  14. Predictive Modeling of Estrogen Receptor Binding Agents Using Advanced Cheminformatics Tools and Massive Public Data.

    PubMed

    Ribay, Kathryn; Kim, Marlene T; Wang, Wenyi; Pinolini, Daniel; Zhu, Hao

    2016-03-01

    Estrogen receptors (ERα) are a critical target for drug design as well as a potential source of toxicity when activated unintentionally. Thus, evaluating potential ERα binding agents is critical in both drug discovery and chemical toxicity areas. Computational tools, e.g., Quantitative Structure-Activity Relationship (QSAR) models, can predict potential ERα binding agents before chemical synthesis. The purpose of this project was to develop enhanced predictive models of ERα binding agents by utilizing advanced cheminformatics tools that can integrate publicly available bioassay data. The initial ERα binding agent data set, consisting of 446 binders and 8307 non-binders, was obtained from the Tox21 Challenge project organized by the NIH Chemical Genomics Center (NCGC). After removing the duplicates and inorganic compounds, this data set was used to create a training set (259 binders and 259 non-binders). This training set was used to develop QSAR models using chemical descriptors. The resulting models were then used to predict the binding activity of 264 external compounds, which were available to us after the models were developed. The cross-validation results for the training set [Correct Classification Rate (CCR) = 0.72] were much higher than the external predictivity for the unknown compounds (CCR = 0.59). To improve the conventional QSAR models, all compounds in the training set were used to search PubChem and generate a profile of their biological responses across thousands of bioassays. The most important bioassays were prioritized to generate a similarity index that was used to calculate the biosimilarity score between each pair of compounds. The nearest neighbors for each compound within the set were then identified and its ERα binding potential was predicted by its nearest neighbors in the training set. The hybrid model performance (CCR = 0.94 for cross validation; CCR = 0.68 for external prediction) showed significant improvement over the original QSAR models.
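
    The nearest-neighbour step of such a hybrid model might look roughly like the sketch below, which predicts a binder/non-binder label by majority vote over the most biosimilar training compounds; the similarity matrix and labels are simulated stand-ins, not the Tox21/PubChem data.

    # Illustrative sketch (not the authors' code): predict ERα binding from a
    # precomputed "biosimilarity" matrix by majority vote of each compound's
    # nearest neighbours in the training set. Data are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test = 50, 10
    train_labels = rng.integers(0, 2, n_train)       # 1 = binder, 0 = non-binder
    # biosim[i, j]: similarity of test compound i to training compound j,
    # e.g. derived from shared bioassay response profiles.
    biosim = rng.random((n_test, n_train))

    def predict_by_biosimilarity(biosim_row, labels, k=5):
        nearest = np.argsort(biosim_row)[::-1][:k]   # indices of k most similar
        return int(labels[nearest].mean() >= 0.5)    # majority vote

    preds = [predict_by_biosimilarity(row, train_labels) for row in biosim]
    print(preds)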

  15. Identification of novel microRNAs in Hevea brasiliensis and computational prediction of their targets

    PubMed Central

    2012-01-01

    Background Plants respond to external stimuli through fine regulation of gene expression partially ensured by small RNAs. Of these, microRNAs (miRNAs) play a crucial role. They negatively regulate gene expression by targeting the cleavage or translational inhibition of target messenger RNAs (mRNAs). In Hevea brasiliensis, environmental and harvesting stresses are known to affect natural rubber production. This study set out to identify abiotic stress-related miRNAs in Hevea using next-generation sequencing and bioinformatic analysis. Results Deep sequencing of small RNAs was carried out on plantlets subjected to severe abiotic stress using the Solexa technique. By combining the LeARN pipeline, data from the Plant microRNA database (PMRD) and Hevea EST sequences, we identified 48 conserved miRNA families already characterized in other plant species, and 10 putatively novel miRNA families. The results showed the most abundant size for miRNAs to be 24 nucleotides, except for seven families. Several MIR genes produced miRNAs of both 20-22 and 23-27 nucleotides. The two miRNA class sizes were detected for both conserved and putative novel miRNA families, suggesting their functional duality. The EST databases were scanned with conserved and novel miRNA sequences. MiRNA targets were computationally predicted and analysed. The predicted targets involved in "responses to stimuli" and in "antioxidant" and "transcription activities" are presented. Conclusions Deep sequencing of small RNAs combined with transcriptomic data is a powerful tool for identifying conserved and novel miRNAs when the complete genome is not yet available. Our study provided additional information for evolutionary studies and revealed potentially specific regulation of the control of redox status in Hevea. PMID:22330773

  16. IPMP 2013 - A comprehensive data analysis tool for predictive microbiology

    Predictive microbiology is an area of applied research in food science that uses mathematical models to predict the changes in the population of pathogenic or spoilage microorganisms in foods undergoing complex environmental changes during processing, transportation, distribution, and storage. It f...

  17. The Efficacy of Violence Prediction: A Meta-Analytic Comparison of Nine Risk Assessment Tools

    ERIC Educational Resources Information Center

    Yang, Min; Wong, Stephen C. P.; Coid, Jeremy

    2010-01-01

    Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their…

  18. Predictive Models of target organ and Systemic toxicities (BOSC)

    EPA Science Inventory

    The objective of this work is to predict the hazard classification and point of departure (PoD) of untested chemicals in repeat-dose animal testing studies. We used supervised machine learning to objectively evaluate the predictive accuracy of different classification and regress...

  19. Targeted Observation of ELL Instruction as a Tool in the Preparation of School Leaders

    ERIC Educational Resources Information Center

    Baecher, Laura; Knoll, Marcia; Patti, Janet

    2016-01-01

    Preparing school administrators to promote effective instruction of English language learners (ELLs) is an important dimension of today's educational leadership programs, requiring innovative program activities. This study explores school leadership candidates' use of an observation tool targeted to ELL instruction that incorporated guided video…

  20. Speaking Spontaneously in the Modern Foreign Languages Classroom: Tools for Supporting Successful Target Language Conversation

    ERIC Educational Resources Information Center

    Christie, Colin

    2016-01-01

    This article reports on the findings of a study into the conditions which promote spontaneous learner talk in the target language in the modern foreign languages (MFL) classroom. A qualitative case study approach was adopted. French lessons, with school students aged 11-16 years old, were observed and analysed with the aim of identifying tools and…

  1. Advances and Computational Tools towards Predictable Design in Biological Engineering

    PubMed Central

    2014-01-01

    The design process of complex systems in all the fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used or, when a part's functioning depends on its specific context, on the prediction of its context-dependent behaviour. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since a part's function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694

  2. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  3. Computer-based prediction of mitochondria-targeting peptides.

    PubMed

    Martelli, Pier Luigi; Savojardo, Castrense; Fariselli, Piero; Tasco, Gianluca; Casadio, Rita

    2015-01-01

    Computational methods are invaluable when protein sequences, directly derived from genomic data, need functional and structural annotation. Subcellular localization is a feature necessary for understanding a protein's role and the compartment where the mature protein is active, and it is very difficult to characterize experimentally. Mitochondrial proteins synthesized on cytosolic ribosomes carry specific patterns in the precursor sequence from which it is possible to recognize a peptide targeting the protein to its final destination. Here we discuss to what extent it is feasible to develop computational methods for detecting mitochondrial targeting peptides in precursor sequences and benchmark our and other methods on the human mitochondrial proteins endowed with experimentally characterized targeting peptides. Furthermore, we illustrate our newly implemented web server and its usage on the whole human proteome in order to infer mitochondrial targeting peptides, their cleavage sites, and whether or not the targeting peptide regions contain arginine-rich recurrent motifs. In this way, we add another 2,800 human proteins to the 124 already experimentally annotated with a mitochondrial targeting peptide.

  4. Cancer-associated fibroblasts as target and tool in cancer therapeutics and diagnostics.

    PubMed

    De Vlieghere, Elly; Verset, Laurine; Demetter, Pieter; Bracke, Marc; De Wever, Olivier

    2015-10-01

    Cancer-associated fibroblasts (CAFs) are drivers of tumour progression and are considered both a target and a tool in cancer diagnostic and therapeutic applications. An increased abundance of CAFs or a CAF signature is recognized as a poor prognostic marker in several cancer types. Tumour-environment biomimetics strongly improve our understanding of the communication between CAFs, cancer cells and other host cells. Several experimental drugs targeting CAFs are in clinical trials for multiple tumour entities; alternatively, CAFs can be exploited as a tool to characterize the functionality of circulating tumour cells, or to capture them in order to prevent metastasis. The continuous interaction between tissue engineers, biomaterial experts and cancer researchers creates the possibility to biomimic the tumour environment and provides new opportunities in cancer diagnostics and management.

  5. Jet Measurements for Development of Jet Noise Prediction Tools

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2006-01-01

    The primary focus of my presentation is the development of the jet noise prediction code JeNo with most examples coming from the experimental work that drove the theoretical development and validation. JeNo is a statistical jet noise prediction code, based upon the Lilley acoustic analogy. Our approach uses time-average 2-D or 3-D mean and turbulent statistics of the flow as input. The output is source distributions and spectral directivity.

  6. Prediction of Drug-Target Interaction Networks from the Integration of Protein Sequences and Drug Chemical Structures.

    PubMed

    Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Zhou, Yong; An, Ji-Yong

    2017-07-05

    Knowledge of drug-target interactions (DTI) plays an important role in discovering new drug candidates. Unfortunately, the experimental method to predict DTI has unavoidable shortcomings, including its time-consuming and expensive nature. This motivated us to develop an effective computational method to predict DTI based on protein sequence. In this paper, we propose a novel computational approach based on protein sequence, namely PDTPS (Predicting Drug Targets with Protein Sequence), to predict DTI. The PDTPS method combines Bi-gram probabilities (BIGP), Position Specific Scoring Matrix (PSSM), and Principal Component Analysis (PCA) with a Relevance Vector Machine (RVM). In order to evaluate the prediction capacity of PDTPS, experiments were carried out on enzyme, ion channel, GPCR, and nuclear receptor datasets using five-fold cross-validation tests. The proposed PDTPS method achieved average accuracies of 97.73%, 93.12%, 86.78%, and 87.78% on the enzyme, ion channel, GPCR and nuclear receptor datasets, respectively. The experimental results showed that our method has good prediction performance. Furthermore, to further evaluate the prediction performance of the proposed PDTPS method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the enzyme and ion channel datasets, and with other existing methods on all four datasets. The promising comparison results further demonstrate the efficiency and robustness of the proposed PDTPS method. This makes it a useful and suitable tool for predicting DTI, as well as other bioinformatics tasks.
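
    A simplified pipeline in the spirit of PDTPS is sketched below: placeholder sequence-derived feature vectors are standardized, reduced with PCA and classified under five-fold cross-validation. scikit-learn has no Relevance Vector Machine, so an SVM stands in for the RVM, and the feature matrix is random rather than real BIGP/PSSM descriptors.

    # Rough sketch of the kind of pipeline the abstract describes; the feature
    # matrix is a placeholder standing in for Bi-gram/PSSM descriptors, and an
    # SVM is used as a stand-in for the RVM.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 400))     # placeholder protein+drug feature vectors
    y = rng.integers(0, 2, 400)         # 1 = interacting pair, 0 = non-interacting

    pipeline = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")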

  7. Engineering Property Prediction Tools for Tailored Polymer Composite Structures (49465)

    SciT

    Nguyen, Ba Nghiep; Kunc, Vlastimil

    2009-12-29

    Process and constitutive models as well as characterization tools and testing methods were developed to determine stress-strain responses, damage development, strengths and creep of long-fiber thermoplastics (LFTs). The developed models were implemented in Moldflow and ABAQUS and have been validated against LFT data obtained experimentally.

  8. Predicting the need for muscle flap salvage after open groin vascular procedures: a clinical assessment tool.

    PubMed

    Fischer, John P; Nelson, Jonas A; Shang, Eric K; Wink, Jason D; Wingate, Nicholas A; Woo, Edward Y; Jackson, Benjamin M; Kovach, Stephen J; Kanchwala, Suhail

    2014-12-01

    Groin wound complications after open vascular surgery procedures are common, morbid, and costly. The purpose of this study was to generate a simple, validated, clinically usable risk assessment tool for predicting groin wound morbidity after infra-inguinal vascular surgery. A retrospective review of consecutive patients undergoing groin cutdowns for femoral access between 2005-2011 was performed. Patients necessitating salvage flaps were compared to those who did not, and a stepwise logistic regression was performed and validated using a bootstrap technique. Utilising this analysis, a simplified risk score was developed to predict the risk of developing a wound which would necessitate salvage. A total of 925 patients were included in the study. The salvage flap rate was 11.2% (n = 104). Predictors determined by logistic regression included prior groin surgery (OR = 4.0, p < 0.001), prosthetic graft (OR = 2.7, p < 0.001), coronary artery disease (OR = 1.8, p = 0.019), peripheral arterial disease (OR = 5.0, p < 0.001), and obesity (OR = 1.7, p = 0.039). Based upon the respective logistic coefficients, a simplified scoring system was developed to enable the preoperative risk stratification regarding the likelihood of a significant complication which would require a salvage muscle flap. The c-statistic for the regression demonstrated excellent discrimination at 0.89. This study presents a simple, internally validated risk assessment tool that accurately predicts wound morbidity requiring flap salvage in open groin vascular surgery patients. The preoperatively high-risk patient can be identified and selectively targeted as a candidate for a prophylactic muscle flap.
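
    One plausible way to turn the reported odds ratios into the kind of simplified preoperative score the abstract describes is shown below; the point values are a hypothetical rounding of the log-odds coefficients, not the authors' published weights.

    # Hypothetical illustration of deriving an additive risk score from
    # logistic-regression results. Odds ratios are taken from the abstract;
    # the point values are one plausible rounding, not the published score.
    import math

    odds_ratios = {
        "prior_groin_surgery": 4.0,
        "prosthetic_graft": 2.7,
        "coronary_artery_disease": 1.8,
        "peripheral_arterial_disease": 5.0,
        "obesity": 1.7,
    }

    # One common convention: points proportional to the log-odds (beta)
    # coefficients, scaled so the smallest predictor is worth 1 point.
    betas = {k: math.log(v) for k, v in odds_ratios.items()}
    unit = min(betas.values())
    points = {k: round(b / unit) for k, b in betas.items()}

    def risk_score(patient):
        """patient: dict of predictor -> bool."""
        return sum(points[p] for p, present in patient.items() if present)

    example = {"prior_groin_surgery": True, "obesity": True,
               "prosthetic_graft": False, "coronary_artery_disease": False,
               "peripheral_arterial_disease": False}
    print(points, risk_score(example))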

  9. Predictive Engineering Tools for Injection-Molded Long-Carbon-Thermoplastic Composites: Weight and Cost Analyses

    SciT

    Nguyen, Ba Nghiep; Fifield, Leonard S.; Gandhi, Umesh N.

    This project proposed to integrate, optimize and validate the fiber orientation and length distribution models previously developed and implemented in the Autodesk Simulation Moldflow Insight (ASMI) package for injection-molded long-carbon-fiber thermoplastic composites into a cohesive prediction capability. The current effort focused on rendering the developed models more robust and efficient for automotive industry part design to enable weight savings and cost reduction. The project goal has been achieved by optimizing the developed models, improving and integrating their implementations in ASMI, and validating them for a complex 3D LCF thermoplastic automotive part (Figure 1). Both PP and PA66 were used as resin matrices. After validating ASMI predictions for fiber orientation and fiber length for this complex part against the corresponding measured data, in collaboration with Toyota and Magna, PNNL developed a method using the predictive engineering tool to assess LCF/PA66 complex part design in terms of stiffness performance. Structural three-point bending analyses of the complex part and similar parts in steel were then performed for this purpose, and the team then demonstrated the use of stiffness-based complex part design assessment to evaluate weight savings relative to the body system target (≥ 35%) set in Table 2 of DE-FOA-0000648 (AOI #1). In addition, starting from the part-to-part analysis, the PE tools enabled an estimated weight reduction for the vehicle body system using 50 wt% LCF/PA66 parts relative to the current steel system. Also, from this analysis an estimate of the manufacturing cost, including the material cost, for making the equivalent part in steel has been determined and compared to the costs for making the LCF/PA66 part to determine the cost per "saved" pound.

  10. In vitro perturbations of targets in cancer hallmark processes predict rodent chemical carcinogenesis.

    PubMed

    Kleinstreuer, Nicole C; Dix, David J; Houck, Keith A; Kavlock, Robert J; Knudsen, Thomas B; Martin, Matthew T; Paul, Katie B; Reif, David M; Crofton, Kevin M; Hamilton, Kerry; Hunter, Ronald; Shah, Imran; Judson, Richard S

    2013-01-01

    Thousands of untested chemicals in the environment require efficient characterization of carcinogenic potential in humans. A proposed solution is rapid testing of chemicals using in vitro high-throughput screening (HTS) assays for targets in pathways linked to disease processes to build models for priority setting and further testing. We describe a model for predicting rodent carcinogenicity based on HTS data from 292 chemicals tested in 672 assays mapping to 455 genes. All data come from the EPA ToxCast project. The model was trained on a subset of 232 chemicals with in vivo rodent carcinogenicity data in the Toxicity Reference Database (ToxRefDB). Individual HTS assays strongly associated with rodent cancers in ToxRefDB were linked to genes, pathways, and hallmark processes documented to be involved in tumor biology and cancer progression. Rodent liver cancer endpoints were linked to well-documented pathways such as peroxisome proliferator-activated receptor signaling and TP53 and novel targets such as PDE5A and PLAUR. Cancer hallmark genes associated with rodent thyroid tumors were found to be linked to human thyroid tumors and autoimmune thyroid disease. A model was developed in which these genes/pathways function as hypothetical enhancers or promoters of rat thyroid tumors, acting secondary to the key initiating event of thyroid hormone disruption. A simple scoring function was generated to identify chemicals with significant in vitro evidence that was predictive of in vivo carcinogenicity in different rat tissues and organs. This scoring function was applied to an external test set of 33 compounds with carcinogenicity classifications from the EPA's Office of Pesticide Programs and successfully (p = 0.024) differentiated between chemicals classified as "possible"/"probable"/"likely" carcinogens and those designated as "not likely" or with "evidence of noncarcinogenicity." This model represents a chemical carcinogenicity prioritization tool supporting targeted
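
    A toy version of such a scoring function is sketched below: count a chemical's in vitro assay hits within tissue-specific cancer-hallmark gene sets and flag chemicals above a cutoff. The gene sets, hit lists and threshold are invented for illustration and do not reproduce the ToxCast model.

    # Toy scoring-function sketch; gene sets, hits and the cutoff are hypothetical.
    hallmark_genes = {"liver": {"PPARA", "TP53", "PDE5A", "PLAUR"},
                      "thyroid": {"TPO", "TSHR"}}

    def carcinogenicity_score(active_genes, tissue):
        """Number of assay hits falling in the tissue's hallmark gene set."""
        return len(active_genes & hallmark_genes[tissue])

    chemical_hits = {"chem_A": {"PPARA", "PLAUR", "ESR1"}, "chem_B": {"TSHR"}}
    for chem, genes in chemical_hits.items():
        score = carcinogenicity_score(genes, "liver")
        print(chem, score, "flag" if score >= 2 else "no flag")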

  11. Understanding Interrater Reliability and Validity of Risk Assessment Tools Used to Predict Adverse Clinical Events.

    PubMed

    Siedlecki, Sandra L; Albert, Nancy M

    This article describes how to assess the interrater reliability and validity of risk assessment tools, using easy-to-follow formulas, and provides calculations that demonstrate the principles discussed. Clinical nurse specialists should be able to identify risk assessment tools that provide high-quality interrater reliability and the highest validity for predicting true events of importance to clinical settings. Making best-practice recommendations for assessment tool use is critical to high-quality patient care and safe practices that impact patient outcomes and nursing resources. Optimal risk assessment tool selection requires knowledge about interrater reliability and tool validity. The clinical nurse specialist will understand the reliability and validity issues associated with risk assessment tools and be able to evaluate tools using basic calculations. Risk assessment tools are developed to objectively predict quality and safety events and ultimately reduce the risk of event occurrence through preventive interventions. To ensure high-quality tool use, clinical nurse specialists must critically assess tool properties. The better the tool's ability to predict adverse events, the more likely that event risk is mediated. Interrater reliability and validity assessment is a relatively easy skill to master and will result in better decisions when selecting or making recommendations for risk assessment tool use.
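
    As a concrete example of the kind of calculation the article advocates, the snippet below computes Cohen's kappa for two raters applying a binary risk assessment tool; the ratings are invented.

    # Cohen's kappa for two raters on a binary risk assessment tool.
    from collections import Counter

    rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

    n = len(rater_a)
    observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement by chance, from each rater's marginal frequencies
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected_agreement = sum((pa[c] / n) * (pb[c] / n)
                             for c in set(rater_a) | set(rater_b))

    kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
    print(f"kappa = {kappa:.2f}")   # 0.58 for these example ratings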

  12. Emerging Tools to Estimate and to Predict Exposures to ...

    EPA Pesticide Factsheets

    The timely assessment of the human and ecological risk posed by thousands of existing and emerging commercial chemicals is a critical challenge facing EPA in its mission to protect public health and the environment. The US EPA has been conducting research to enhance methods used to estimate and forecast exposures for tens of thousands of chemicals. This research is aimed at both assessing risks and supporting life cycle analysis, by developing new models and tools for high-throughput exposure screening and prioritization, as well as databases that support these and other tools, especially regarding consumer products. The models and data address usage, and take advantage of quantitative structure-activity relationships (QSARs) for both inherent chemical properties and function (why the chemical is a product ingredient). To make them more useful and widely available, the new tools, data and models are designed to be:
    • Flexible
    • Interoperable
    • Modular (useful to more than one, stand-alone application)
    • Open (publicly available software)
    Presented at the Society for Risk Analysis Forum: Risk Governance for Key Enabling Technologies, Venice, Italy, March 1-3, 2017

  13. Can eHealth tools enable health organizations to reach their target audience?

    PubMed

    Zbib, Ahmad; Hodgson, Corinne; Calderwood, Sarah

    2011-01-01

    Data from the health risk assessment operated by the Heart and Stroke Foundation showed that users were more likely to be female; married; have completed post-secondary education; and report hypertension, stroke, or being overweight or obese. In developing and operating eHealth tools for health promotion, organizations should compare users to their target population(s). eHealth tools may not be optimal for reaching some higher-risk sub-groups, and a range of social marketing approaches may be required.

  14. Antibody-protein interactions: benchmark datasets and prediction tools evaluation

    PubMed Central

    Ponomarenko, Julia V; Bourne, Philip E

    2007-01-01

    Background The ability to predict antibody binding sites (aka antigenic determinants or B-cell epitopes) for a given protein is a precursor to new vaccine design and diagnostics. Among the various methods of B-cell epitope identification X-ray crystallography is one of the most reliable methods. Using these experimental data computational methods exist for B-cell epitope prediction. As the number of structures of antibody-protein complexes grows, further interest in prediction methods using 3D structure is anticipated. This work aims to establish a benchmark for 3D structure-based epitope prediction methods. Results Two B-cell epitope benchmark datasets inferred from the 3D structures of antibody-protein complexes were defined. The first is a dataset of 62 representative 3D structures of protein antigens with inferred structural epitopes. The second is a dataset of 82 structures of antibody-protein complexes containing different structural epitopes. Using these datasets, eight web-servers developed for antibody and protein binding sites prediction have been evaluated. In no method did performance exceed a 40% precision and 46% recall. The values of the area under the receiver operating characteristic curve for the evaluated methods were about 0.6 for ConSurf, DiscoTope, and PPI-PRED methods and above 0.65 but not exceeding 0.70 for protein-protein docking methods when the best of the top ten models for the bound docking were considered; the remaining methods performed close to random. The benchmark datasets are included as a supplement to this paper. Conclusion It may be possible to improve epitope prediction methods through training on datasets which include only immune epitopes and through utilizing more features characterizing epitopes, for example, the evolutionary conservation score. Notwithstanding, overall poor performance may reflect the generality of antigenicity and hence the inability to decipher B-cell epitopes as an intrinsic feature of the protein. It

  15. Chapter 17. Extension of endogenous primers as a tool to detect micro-RNA targets.

    PubMed

    Vatolin, Sergei; Weil, Robert J

    2008-01-01

    Mammalian cells express a large number of small, noncoding RNAs, including micro-RNAs (miRNAs), that can regulate both the level of a target mRNA and the protein produced by the target mRNA. Recognition of miRNA targets is a complicated process, as a single target mRNA may be regulated by several miRNAs. The potential for combinatorial miRNA-mediated regulation of miRNA targets complicates diagnostic and therapeutic applications of miRNAs. Despite significant progress in understanding the biology of miRNAs and advances in computational predictions of miRNA targets, methods that permit direct physical identification of miRNA-mRNA complexes in eukaryotic cells are still required. Several groups have utilized coimmunoprecipitation of RNA associated with a protein(s) that is part of the RNA silencing macromolecular complex. This chapter describes a detailed but straightforward strategy that identifies miRNA targets based on the assumption that small RNAs base paired with a complementary target mRNA can be used as a primer to synthesize cDNA that may be used for cloning, identification, and functional analysis.

  16. Mass Transport through Nanostructured Membranes: Towards a Predictive Tool

    PubMed Central

    Darvishmanesh, Siavash; Van der Bruggen, Bart

    2016-01-01

    This study proposes a new mechanism to understand the transport of solvents through nanostructured membranes from a fundamental point of view. The findings are used to develop readily applicable mathematical models to predict solvent fluxes and solute rejections through solvent resistant membranes used for nanofiltration. The new model was developed based on a pore-flow type of transport. New parameters found to be of fundamental importance were introduced to the equation, i.e., the affinity of the solute and the solvent for the membrane expressed as the hydrogen-bonding contribution of the solubility parameter for the solute, solvent and membrane. A graphical map was constructed to predict the solute rejection based on the hydrogen-bonding contribution of the solubility parameter. The model was evaluated with performance data from the literature. Both the solvent flux and the solute rejection calculated with the new approach were similar to values reported in the literature. PMID:27918434

  17. In silico prediction of novel therapeutic targets using gene-disease association data.

    PubMed

    Ferrero, Enrico; Dunham, Ian; Sanseau, Philippe

    2017-08-29

    Target identification and validation is a pressing challenge in the pharmaceutical industry, with many of the programmes that fail for efficacy reasons showing poor association between the drug target and the disease. Computational prediction of successful targets could have a considerable impact on attrition rates in the drug discovery pipeline by significantly reducing the initial search space. Here, we explore whether gene-disease association data from the Open Targets platform is sufficient to predict therapeutic targets that are actively being pursued by pharmaceutical companies or are already on the market. To test our hypothesis, we train four different classifiers (a random forest, a support vector machine, a neural network and a gradient boosting machine) on partially labelled data and evaluate their performance using nested cross-validation and testing on an independent set. We then select the best performing model and use it to make predictions on more than 15,000 genes. Finally, we validate our predictions by mining the scientific literature for proposed therapeutic targets. We observe that the data types with the best predictive power are animal models showing a disease-relevant phenotype, differential expression in diseased tissue and genetic association with the disease under investigation. On a test set, the neural network classifier achieves over 71% accuracy with an AUC of 0.76 when predicting therapeutic targets in a semi-supervised learning setting. We use this model to gain insights into current and failed programmes and to predict 1431 novel targets, of which a highly significant proportion has been independently proposed in the literature. Our in silico approach shows that data linking genes and diseases is sufficient to predict novel therapeutic targets effectively and confirms that this type of evidence is essential for formulating or strengthening hypotheses in the target discovery process. Ultimately, more rapid and automated target
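
    A stripped-down version of the model-comparison step might look like the sketch below, which scores the four classifier families by cross-validated AUC on placeholder gene-disease features; the real study used nested cross-validation on partially labelled Open Targets data.

    # Simplified model-comparison sketch; data are placeholders, not Open Targets.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(300, 20))      # e.g. per-gene association scores
    y = rng.integers(0, 2, 300)         # 1 = known therapeutic target

    models = {
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "SVM": SVC(probability=True, random_state=0),
        "neural network": MLPClassifier(max_iter=1000, random_state=0),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: AUC = {auc:.2f}")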

  18. Development of Antimicrobial Peptide Prediction Tool for Aquaculture Industries.

    PubMed

    Gautam, Aditi; Sharma, Asuda; Jaiswal, Sarika; Fatma, Samar; Arora, Vasu; Iquebal, M A; Nandi, S; Sundaray, J K; Jayasankar, P; Rai, Anil; Kumar, Dinesh

    2016-09-01

    Microbial diseases in fish, plants, animals and humans are rising constantly; thus, the discovery of their antidotes is imperative. The use of antibiotics in aquaculture further compounds the problem through the development of resistance and the consequent consumer health risk from bio-magnification. Antimicrobial peptides (AMPs) have been highly promising as natural alternatives to chemical antibiotics. Though AMPs are molecules of the innate immune defense of all advanced eukaryotic organisms, fish, being heavily dependent on their innate immune defense, have been a good source of AMPs with much wider applicability. A machine learning-based prediction method using wet-laboratory-validated fish AMPs can accelerate AMP discovery using available fish genomic and proteomic data. Earlier AMP prediction servers are based on multi-phyla/species data, and we report here the world's first AMP prediction server for fishes. It is freely accessible at http://webapp.cabgrid.res.in/fishamp/ . A total of 151 AMPs related to fish collected from various databases and published literature were taken for this study. For model development and prediction, N-terminus residues, C-terminus residues and full sequences were considered. The best models used polynomial (degree 2), linear and radial basis function kernels, with accuracies of 97, 99 and 97%, respectively. We found that the performance of support vector machine-based models is superior to that of artificial neural networks. This in silico approach can drastically reduce the time and cost of AMP discovery. The accelerated discovery of lead AMP molecules, with potential applications in areas as diverse as fish and human health (as antibiotic substitutes, immunomodulators, antitumor agents, vaccine adjuvants and inactivators) and packaged food, can be of great importance to industry.
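
    The modelling approach can be illustrated with the sketch below: amino-acid-composition features computed from toy sequences are classified with SVMs using linear, degree-2 polynomial and RBF kernels, mirroring the kernels reported; the sequences and labels are invented, so the accuracies are not meaningful.

    # AMP-style classification sketch with invented sequences and labels.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """Fraction of each of the 20 standard residues in the sequence."""
        return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

    sequences = ["GLFDIVKKVVGALGSL", "MKTAYIAKQRQISFVK", "FLPLIGRVLSGIL",
                 "MADEEKLPPGWEKRMS"] * 25
    labels = np.array([1, 0, 1, 0] * 25)        # 1 = AMP, 0 = non-AMP (toy labels)
    X = np.array([composition(s) for s in sequences])

    for kernel, kwargs in [("linear", {}), ("poly", {"degree": 2}), ("rbf", {})]:
        acc = cross_val_score(SVC(kernel=kernel, **kwargs), X, labels, cv=5).mean()
        print(f"{kernel}: accuracy = {acc:.2f}")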

  19. Prediction of boiling points of organic compounds by QSPR tools.

    PubMed

    Dai, Yi-min; Zhu, Zhi-ping; Cao, Zhong; Zhang, Yue-fei; Zeng, Ju-lan; Li, Xun

    2013-07-01

    The novel electro-negativity topological descriptors YC and WC were derived from molecular structure using the equilibrium electro-negativity of atoms and the relative bond lengths of the molecule. The quantitative structure-property relationships (QSPR) between the descriptors YC and WC, together with the path number parameter P3, and the normal boiling points of 80 alkanes, 65 unsaturated hydrocarbons and 70 alcohols were obtained separately. The high-quality prediction models were evidenced by the coefficient of determination (R²), the standard error (S), the average absolute error (AAE) and predictive parameters (Q²ext, R²cv, R²m). According to the regression equations, the influences of the length of the carbon backbone, the size and degree of branching of a molecule, and the role of functional groups on the normal boiling point were analyzed. Comparison with reference models demonstrated that novel topological descriptors based on the equilibrium electro-negativity of atoms and the relative bond length are useful molecular descriptors for predicting the normal boiling points of organic compounds.
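
    A generic QSPR workflow of this kind is sketched below with synthetic data standing in for the YC, WC and P3 descriptors: fit a multiple linear regression of boiling point on the descriptors and report R², S and AAE.

    # Generic QSPR sketch (not the authors' descriptors); data are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    n = 80
    descriptors = rng.normal(size=(n, 3))          # stand-ins for YC, WC, P3
    true_coef = np.array([40.0, 25.0, 10.0])
    bp = 350 + descriptors @ true_coef + rng.normal(scale=5.0, size=n)

    model = LinearRegression().fit(descriptors, bp)
    pred = model.predict(descriptors)
    residuals = bp - pred
    r2 = model.score(descriptors, bp)
    s = np.sqrt(np.sum(residuals**2) / (n - descriptors.shape[1] - 1))
    aae = np.mean(np.abs(residuals))
    print(f"R^2 = {r2:.3f}, S = {s:.2f}, AAE = {aae:.2f}")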

  20. Concepts and tools for predictive modeling of microbial dynamics.

    PubMed

    Bernaerts, Kristel; Dens, Els; Vereecken, Karen; Geeraerd, Annemie H; Standaert, Arnout R; Devlieghere, Frank; Debevere, Johan; Van Impe, Jan F

    2004-09-01

    Description of microbial cell (population) behavior as influenced by dynamically changing environmental conditions intrinsically needs dynamic mathematical models. In the past, major effort has been put into the modeling of microbial growth and inactivation within a constant environment (static models). In the early 1990s, differential equation models (dynamic models) were introduced in the field of predictive microbiology. Here, we present a general dynamic model-building concept describing microbial evolution under dynamic conditions. Starting from an elementary model building block, the model structure can be gradually complexified to incorporate increasing numbers of influencing factors. Based on two case studies, the fundamentals of both macroscopic (population) and microscopic (individual) modeling approaches are revisited. These illustrations deal with the modeling of (i) microbial lag under variable temperature conditions and (ii) interspecies microbial interactions mediated by lactic acid production (product inhibition). Current and future research trends should address the need for (i) more specific measurements at the cell and/or population level, (ii) measurements under dynamic conditions, and (iii) more comprehensive (mechanistically inspired) model structures. In the context of quantitative microbial risk assessment, complexity of the mathematical model must be kept under control. An important challenge for the future is determination of a satisfactory trade-off between predictive power and manageability of predictive microbiology models.
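
    A minimal example of the dynamic (differential-equation) approach is given below: a logistic-type growth model written directly in log10 cell density, with a Ratkowsky-style square-root temperature dependence, integrated over a staged temperature profile. The model form and parameter values are assumed for illustration only.

    # Minimal dynamic-model illustration; the model form and parameters are assumed.
    import numpy as np
    from scipy.integrate import solve_ivp

    def temperature(t):                    # staged temperature profile (degC)
        return 8.0 if t < 10 else 20.0

    def mu_max(temp_c, b=0.03, t_min=2.0):  # Ratkowsky-type square-root model
        return (b * max(temp_c - t_min, 0.0)) ** 2

    def dydt(t, y, n_max=9.0):
        n = y[0]                           # log10 cell density
        return [mu_max(temperature(t)) * (1 - n / n_max)]

    sol = solve_ivp(dydt, (0, 30), [3.0], dense_output=True)
    print(f"log10 N after 30 h: {sol.y[0, -1]:.2f}")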

  1. Using Search Engine Data as a Tool to Predict Syphilis.

    PubMed

    Young, Sean D; Torrone, Elizabeth A; Urata, John; Aral, Sevgi O

    2018-07-01

    Researchers have suggested that social media and online search data might be used to monitor and predict syphilis and other sexually transmitted diseases. Because people at risk for syphilis might seek sexual health and risk-related information on the internet, we investigated associations between internet state-level search query data (e.g., Google Trends) and reported weekly syphilis cases. We obtained weekly counts of reported primary and secondary syphilis for 50 states from 2012 to 2014 from the US Centers for Disease Control and Prevention. We collected weekly internet search query data regarding 25 risk-related keywords from 2012 to 2014 for 50 states using Google Trends. We joined 155 weeks of Google Trends data with 1-week lag to weekly syphilis data for a total of 7750 data points. Using the least absolute shrinkage and selection operator, we trained three linear mixed models on the first 10 weeks of each year. We validated models for 2012 and 2014 for the following 52 weeks and the 2014 model for the following 42 weeks. The models, consisting of different sets of keyword predictors for each year, accurately predicted 144 weeks of primary and secondary syphilis counts for each state, with an overall average R of 0.9 and overall average root mean squared error of 4.9. We used Google Trends search data from the prior week to predict cases of syphilis in the following weeks for each state. Further research could explore how search data could be integrated into public health monitoring systems.
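
    The modelling strategy can be sketched as below: search-volume predictors lagged by one week, a LASSO-penalised linear model trained on the first ten weeks, and prediction of the remaining weeks. The data are simulated stand-ins for the Google Trends and surveillance series, and a plain LASSO replaces the paper's linear mixed models.

    # Lagged-LASSO sketch; data are simulated stand-ins.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    weeks, n_terms = 155, 25
    searches = rng.random((weeks, n_terms))       # weekly search volumes
    cases = 5 + 10 * searches[:, 0] + rng.normal(scale=1.0, size=weeks)

    X = searches[:-1]       # searches in week t ...
    y = cases[1:]           # ... predict cases in week t+1 (1-week lag)

    train = slice(0, 10)    # train on the first 10 weeks, as in the paper
    model = Lasso(alpha=0.01).fit(X[train], y[train])
    pred = model.predict(X[10:])
    rmse = np.sqrt(np.mean((pred - y[10:]) ** 2))
    print(f"selected terms: {np.sum(model.coef_ != 0)}, RMSE = {rmse:.2f}")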

  2. Contextual remapping in visual search after predictable target-location changes.

    PubMed

    Conci, Markus; Sun, Luning; Müller, Hermann J

    2011-07-01

    Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.

  3. Cluster analysis as a prediction tool for pregnancy outcomes.

    PubMed

    Banjari, Ines; Kenjerić, Daniela; Šolić, Krešimir; Mandić, Milena L

    2015-03-01

    Considering the specific physiological changes during gestation and thinking of pregnancy as a "critical window", classification of pregnant women in early pregnancy can be considered crucial. The paper demonstrates the use of a method based on an approach from intelligent data mining, cluster analysis. Cluster analysis is a statistical method that makes it possible to group individuals based on sets of identifying variables. The method was chosen in order to determine whether pregnant women can be classified in early pregnancy and to analyze unknown correlations between different variables so that certain outcomes could be predicted. 222 pregnant women from two general obstetric offices were recruited. The main focus was on the characteristics of these pregnant women: their age, pre-pregnancy body mass index (BMI) and haemoglobin value. Cluster analysis achieved a 94.1% classification accuracy rate, with three branches or groups of pregnant women showing statistically significant correlations with pregnancy outcomes. The results show that pregnant women of both older age and higher pre-pregnancy BMI have a significantly higher incidence of delivering a baby of higher birth weight but gain significantly less weight during pregnancy. Their babies are also longer, and these women have a significantly higher probability of complications during pregnancy (gestosis) and a higher probability of induced or caesarean delivery. We can conclude that the cluster analysis method can appropriately classify pregnant women in early pregnancy to predict certain outcomes.
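
    A toy version of the clustering step is shown below, using k-means on standardized age, pre-pregnancy BMI and haemoglobin and then profiling the clusters against an outcome; k-means is a generic stand-in, since the abstract does not specify the clustering algorithm, and the data are simulated.

    # Toy clustering sketch; k-means is a stand-in and the data are simulated.
    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(11)
    df = pd.DataFrame({
        "age": rng.normal(29, 5, 222),
        "prepregnancy_bmi": rng.normal(24, 4, 222),
        "haemoglobin": rng.normal(125, 10, 222),
        "birth_weight": rng.normal(3400, 450, 222),
    })

    features = StandardScaler().fit_transform(
        df[["age", "prepregnancy_bmi", "haemoglobin"]])
    df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    print(df.groupby("cluster")[["age", "prepregnancy_bmi", "birth_weight"]].mean())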

  4. About Using Predictive Models and Tools To Assess Chemicals under TSCA

    EPA Pesticide Factsheets

    As part of EPA's effort to promote chemical safety, OPPT provides public access to predictive models and tools which can help inform the public on the hazards and risks of substances and improve chemical management decisions.

  5. Literature-based condition-specific miRNA-mRNA target prediction.

    PubMed

    Oh, Minsik; Rhee, Sungmin; Moon, Ji Hwan; Chae, Heejoon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun

    2017-01-01

    miRNAs are small non-coding RNAs that regulate gene expression by binding to the 3'-UTR of genes. Many recent studies have reported that miRNAs play important biological roles by regulating specific mRNAs or genes. Many sequence-based target prediction algorithms have been developed to predict miRNA targets. However, these methods are not designed for condition-specific target predictions and produce many false positives; thus, expression-based target prediction algorithms have been developed for condition-specific target predictions. A typical strategy to utilize expression data is to leverage the negative control roles of miRNAs on genes. To control false positives, a stringent cutoff value is typically set, but in this case, these methods tend to reject many true target relationships, i.e., false negatives. To overcome these limitations, additional information should be utilized. The literature is probably the best resource that we can utilize. Recent literature mining systems compile millions of articles with experiments designed for specific biological questions, and the systems provide a function to search for specific information. To utilize the literature information, we used a literature mining system, BEST, that automatically extracts information from the literature in PubMed and that allows the user to perform searches of the literature with any English words. By integrating omics data analysis methods and BEST, we developed Context-MMIA, a miRNA-mRNA target prediction method that combines expression data analysis results and the literature information extracted based on the user-specified context. In the pathway enrichment analysis using genes included in the top 200 miRNA-targets, Context-MMIA outperformed the four existing target prediction methods that we tested. In another test on whether prediction methods can reproduce experimentally validated target relationships, Context-MMIA outperformed the four existing target prediction methods. In summary

  6. On Why Targets Evoke P3 Components in Prediction Tasks: Drawing an Analogy between Prediction and Matching Tasks

    PubMed Central

    Verleger, Rolf; Cäsar, Stephanie; Siller, Bastian; Śmigasiewicz, Kamila

    2017-01-01

    P3 is the most conspicuous component in recordings of stimulus-evoked EEG potentials from the human scalp, occurring whenever some task has to be performed with the stimuli. The process underlying P3 has been assumed to be the updating of expectancies. More recently, P3 has been related to decision processing and to activation of established stimulus-response associations (S/R-link hypothesis). However, so far this latter approach has not provided a conception about how to explain the occurrence of P3 with predicted stimuli, although P3 was originally discovered in a prediction task. The present article proposes such a conception. We assume that the internal responses right or wrong both become associatively linked to each predicted target and that one of these two response alternatives gets activated as a function of match or mismatch of the target to the preceding prediction. This seems similar to comparison tasks where responses depend on the matching of the target stimulus with a preceding first stimulus (S1). Based on this idea, this study compared the effects of frequencies of first events (predictions or S1) on target-evoked P3s in prediction and comparison tasks. Indeed, frequencies not only of targets but also of first events had similar effects across tasks on target-evoked P3s. These results support the notion that P3 evoked by predicted stimuli reflects activation of appropriate internal “match” or “mismatch” responses, which is compatible with S/R-link hypothesis. PMID:29066965

  7. Affinity resins as new tools for identifying target proteins of ascorbic acid.

    PubMed

    Iwaoka, Yuji; Nishino, Kohei; Ishikawa, Takahiro; Ito, Hideyuki; Sawa, Yoshihiro; Tai, Akihiro

    2018-02-12

    l-Ascorbic acid (AA) has diverse physiological functions, but little is known about the functional mechanisms of AA. In this study, we synthesized two types of affinity resin on which AA is immobilized in a stable form to identify new AA-targeted proteins, which can provide important clues for elucidating unknown functional mechanisms of AA. To our knowledge, an affinity resin on which AA as a ligand is immobilized has not been prepared, because AA is very unstable and rapidly degraded in an aqueous solution. By using the affinity resins, cytochrome c (cyt c) was identified as an AA-targeted protein, and we showed that oxidized cyt c exhibits specific affinity for AA. These results suggest that two kinds of AA-affinity resin can be powerful tools to identify new target proteins of AA.

  8. A pollutant removal prediction tool for stormwater derived diffuse pollution.

    PubMed

    Revitt, D Michael; Scholes, Lian; Ellis, J Bryan

    2008-01-01

    This report describes the development of a methodology to theoretically assess the effectiveness of structural BMPs with regard to their treatment of selected stormwater pollutants (metals, polyaromatic hydrocarbons and herbicides). The result is a prioritisation, in terms of pollutant removal efficiency, of 15 different BMPs which can inform stormwater managers and other stakeholders of the best available options for the treatment of urban runoff pollutants of particular environmental concern. Regardless of the selected pollutant, infiltration basins and sub-surface flow constructed wetlands are predicted to perform most efficiently with lagoons, porous asphalt and sedimentation tanks being the least effective systems for the removal of pollutants. The limitations of the approach in terms of the variabilities in BMP designs and applications are considered.

  9. A clinical tool for predicting survival in ALS.

    PubMed

    Knibb, Jonathan A; Keren, Noa; Kulka, Anna; Leigh, P Nigel; Martin, Sarah; Shaw, Christopher E; Tsuda, Miho; Al-Chalabi, Ammar

    2016-12-01

    Amyotrophic lateral sclerosis (ALS) is a progressive and usually fatal neurodegenerative disease. Survival from diagnosis varies considerably. Several prognostic factors are known, including site of onset (bulbar or limb), age at symptom onset, delay from onset to diagnosis and the use of riluzole and non-invasive ventilation (NIV). Clinicians and patients would benefit from a practical way of using these factors to provide an individualised prognosis. 575 consecutive patients with incident ALS from the South-East England register for ALS (SEALS), a population-based registry, were studied. Their survival was modelled as a two-step process: the time from diagnosis to respiratory muscle involvement, followed by the time from respiratory involvement to death. The effects of predictor variables were assessed separately for each time interval. Younger age at symptom onset, longer delay from onset to diagnosis and riluzole use were associated with slower progression to respiratory involvement, and NIV use was associated with lower mortality after respiratory involvement, each with a clinically significant effect size. Riluzole may have a greater effect in younger patients and those with a longer delay to diagnosis. A patient's survival time has a roughly 50% chance of falling between half and twice the predicted median. A simple and clinically applicable graphical method of predicting an individual patient's survival from diagnosis is presented. The model should be validated in an independent cohort and extended to include other important prognostic factors.

  10. Predictive saccade in the absence of smooth pursuit: interception of moving targets in the archer fish.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Vasserman, Genadiy; Segev, Ronen

    2012-12-15

    Interception of fast-moving targets is a demanding task many animals solve. To handle it successfully, mammals employ both saccadic and smooth pursuit eye movements in order to confine the target to their area centralis. But how can non-mammalian vertebrates, which lack smooth pursuit, intercept moving targets? We studied this question by exploring eye movement strategies employed by archer fish, an animal that possesses an area centralis, lacks smooth pursuit eye movements, but can intercept moving targets by shooting jets of water at them. We tracked the gaze direction of fish during interception of moving targets and found that they employ saccadic eye movements based on prediction of target position when it is hit. The fish fixates on the target's initial position for ∼0.2 s from the onset of its motion, a time period used to predict whether a shot can be made before the projection of the target exits the area centralis. If the prediction indicates otherwise, the fish performs a saccade that overshoots the center of gaze beyond the present target projection on the retina, such that after the saccade the moving target remains inside the area centralis long enough to prepare and perform a shot. These results add to the growing body of knowledge on biological target tracking and may shed light on the mechanism underlying this behavior in other animals with no neural system for the generation of smooth pursuit eye movements.

  11. Biomimetic Dissolution: A Tool to Predict Amorphous Solid Dispersion Performance.

    PubMed

    Puppolo, Michael M; Hughey, Justin R; Dillon, Traciann; Storey, David; Jansen-Varnum, Susan

    2017-11-01

    The presented study describes the development of a membrane permeation non-sink dissolution method that can provide analysis of complete drug speciation and emulate the in vivo performance of poorly water-soluble Biopharmaceutical Classification System class II compounds. The designed membrane permeation methodology permits evaluation of free/dissolved/unbound drug from amorphous solid dispersion formulations with the use of a two-cell apparatus, biorelevant dissolution media, and a biomimetic polymer membrane. It offers insight into oral drug dissolution, permeation, and absorption. Amorphous solid dispersions of felodipine were prepared by hot melt extrusion and spray drying techniques and evaluated for in vitro performance. Prior to ranking performance of extruded and spray-dried felodipine solid dispersions, optimization of the dissolution methodology was performed for parameters such as agitation rate, membrane type, and membrane pore size. The particle size and zeta potential were analyzed during dissolution experiments to understand drug/polymer speciation and supersaturation sustainment of felodipine solid dispersions. Bland-Altman analysis was performed to measure the agreement or equivalence between dissolution profiles acquired using polymer membranes and porcine intestines and to establish the biomimetic nature of the treated polymer membranes. The utility of the membrane permeation dissolution methodology is seen during the evaluation of felodipine solid dispersions produced by spray drying and hot melt extrusion. The membrane permeation dissolution methodology can suggest formulation performance and be employed as a screening tool for selection of candidates to move forward to pharmacokinetic studies. Furthermore, the presented model is a cost-effective technique.
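
    The abstract above reports a Bland-Altman comparison without giving details; the sketch below is a minimal, generic Bland-Altman calculation on hypothetical percent-dissolved values (invented numbers, not the felodipine data), illustrating the bias and limits-of-agreement statistics such an analysis produces.

        import numpy as np

        # Hypothetical percent-dissolved values at matched time points (illustrative only).
        membrane = np.array([5.0, 12.0, 21.0, 33.0, 41.0, 47.0, 52.0, 55.0])
        intestine = np.array([6.0, 14.0, 20.0, 30.0, 43.0, 49.0, 51.0, 57.0])

        # Bland-Altman statistics: mean difference (bias) and 95% limits of agreement.
        diff = membrane - intestine
        bias = diff.mean()
        sd = diff.std(ddof=1)
        loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

        print(f"bias = {bias:.2f}%, limits of agreement = [{loa_low:.2f}%, {loa_high:.2f}%]")
        # The pairs ((membrane + intestine) / 2, diff) would normally be plotted; agreement is
        # supported when most differences fall within the limits and the bias is close to zero.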

  12. Proteasix: a tool for automated and large-scale prediction of proteases involved in naturally occurring peptide generation.

    PubMed

    Klein, Julie; Eales, James; Zürbig, Petra; Vlahou, Antonia; Mischak, Harald; Stevens, Robert

    2013-04-01

    In this study, we have developed Proteasix, an open-source peptide-centric tool that can be used to predict in silico the proteases involved in naturally occurring peptide generation. We developed a curated cleavage site (CS) database, containing 3500 entries on human protease/CS combinations. On top of this database, we built a tool, Proteasix, which allows CS retrieval and protease association from a list of peptides. To establish the proof of concept of the approach, we used a list of 1388 peptides identified from human urine samples, and compared the prediction to the analysis of 1003 randomly generated amino acid sequences. Metalloprotease activity was predominantly involved in urinary peptide generation, particularly for peptides associated with extracellular matrix remodelling, compared with proteins from other origins. In comparison, random sequences returned almost no results, highlighting the specificity of the prediction. This study provides a tool that can facilitate the linking of identified protein fragments to predicted protease activity, and thereby to presumed mechanisms of disease. Experiments are needed to confirm the in silico hypotheses; nevertheless, this approach may be of great help to better understand the molecular mechanisms of disease and to define new biomarkers and therapeutic targets. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
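
    As a rough illustration of the peptide-centric idea described above, the sketch below matches the N- and C-terminal cleavage positions of observed peptides against a toy protease/cleavage-site table; all entries and coordinates are invented placeholders, not records from the Proteasix database.

        # Toy cleavage-site table: (protein, cleavage position) -> proteases reported to cut there.
        # Entries are hypothetical placeholders, not values from the Proteasix database.
        cleavage_sites = {
            ("COL1A1", 178): ["MMP2", "MMP9"],
            ("COL1A1", 203): ["MMP9"],
            ("UMOD", 587): ["HPN"],
        }

        # Observed peptides given as (protein, start, stop) coordinates on the parent protein.
        peptides = [("COL1A1", 179, 203), ("UMOD", 560, 587)]

        for protein, start, stop in peptides:
            # N-terminal cleavage lies between residues start-1 and start;
            # C-terminal cleavage lies between residues stop and stop+1.
            n_term = cleavage_sites.get((protein, start - 1), [])
            c_term = cleavage_sites.get((protein, stop), [])
            print(f"{protein} {start}-{stop}: N-term {n_term or 'unknown'}, C-term {c_term or 'unknown'}")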

  13. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    PubMed

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
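
    The following sketch illustrates the general idea of a structure-independent naïve Bayesian model: compounds are represented only by binary activity profiles across a target panel (synthetic data here), and activity against a held-out target is predicted from the profile over the remaining targets. It is not the authors' model, and scikit-learn's BernoulliNB is used purely for illustration.

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_compounds, n_targets = 500, 50

        # Synthetic binary activity matrix (compounds x targets); correlations between
        # targets are what an activity-profile model can exploit.
        latent = rng.random((n_compounds, 5))
        weights = rng.random((5, n_targets))
        signal = latent @ weights
        profiles = (signal > np.median(signal)).astype(int)

        # Predict activity against the last target from the profile over the other targets.
        X, y = profiles[:, :-1], profiles[:, -1]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        model = BernoulliNB().fit(X_tr, y_tr)
        print("held-out accuracy:", round(model.score(X_te, y_te), 3))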

  14. Computational Predictions Provide Insights into the Biology of TAL Effector Target Sites

    PubMed Central

    Grau, Jan; Wolf, Annett; Reschke, Maik; Bonas, Ulla; Posch, Stefan; Boch, Jens

    2013-01-01

    Transcription activator-like (TAL) effectors are injected into host plant cells by Xanthomonas bacteria to function as transcriptional activators for the benefit of the pathogen. The DNA binding domain of TAL effectors is composed of conserved amino acid repeat structures containing repeat-variable diresidues (RVDs) that determine DNA binding specificity. In this paper, we present TALgetter, a new approach for predicting TAL effector target sites based on a statistical model. In contrast to previous approaches, the parameters of TALgetter are estimated from training data computationally. We demonstrate that TALgetter successfully predicts known TAL effector target sites and often yields a greater number of predictions that are consistent with up-regulation in gene expression microarrays than an existing approach, Target Finder of the TALE-NT suite. We study the binding specificities estimated by TALgetter and find that different RVDs differ in their importance for transcriptional activation. In subsequent studies, the predictions of TALgetter indicate a previously unreported positional preference of TAL effector target sites relative to the transcription start site. In addition, several TAL effectors are predicted to bind to the TATA-box, which might constitute one general mode of transcriptional activation by TAL effectors. Scrutinizing the predicted target sites of TALgetter, we propose several novel TAL effector virulence targets in rice and sweet orange. TAL-mediated induction of the candidates is supported by gene expression microarrays. Validity of these targets is also supported by functional analogy to known TAL effector targets, by an over-representation of TAL effector targets with similar function, or by a biological function related to pathogen infection. Hence, these predicted TAL effector virulence targets are promising candidates for studying the virulence function of TAL effectors. TALgetter is implemented as part of an open-source Java library.

  15. Prostate cancer: predicting high-risk prostate cancer-a novel stratification tool.

    PubMed

    Buck, Jessica; Chughtai, Bilal

    2014-05-01

    Currently, numerous systems exist for the identification of high-risk prostate cancer, but few of these systems can guide treatment strategies. A new stratification tool that uses common diagnostic factors can help to predict outcomes after radical prostatectomy. The tool aids physicians in the identification of appropriate candidates for aggressive, local treatment.

  16. Variability in Predictions from Online Tools: A Demonstration Using Internet-Based Melanoma Predictors.

    PubMed

    Zabor, Emily C; Coit, Daniel; Gershenwald, Jeffrey E; McMasters, Kelly M; Michaelson, James S; Stromberg, Arnold J; Panageas, Katherine S

    2018-02-22

    Prognostic models are increasingly being made available online, where they can be publicly accessed by both patients and clinicians. These online tools are an important resource for patients to better understand their prognosis and for clinicians to make informed decisions about treatment and follow-up. The goal of this analysis was to highlight the possible variability in multiple online prognostic tools in a single disease. To demonstrate the variability in survival predictions across online prognostic tools, we applied a single validation dataset to three online melanoma prognostic tools. Data on melanoma patients treated at Memorial Sloan Kettering Cancer Center between 2000 and 2014 were retrospectively collected. Calibration was assessed using calibration plots and discrimination was assessed using the C-index. In this demonstration project, we found important differences across the three models that led to variability in individual patients' predicted survival across the tools, especially in the lower range of predictions. In a validation test using a single-institution data set, calibration and discrimination varied across the three models. This study underscores the potential variability both within and across online tools, and highlights the importance of using methodological rigor when developing a prognostic model that will be made publicly available online. The results also reinforce that careful development and thoughtful interpretation, including understanding a given tool's limitations, are required in order for online prognostic tools that provide survival predictions to be a useful resource for both patients and clinicians.
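
    As a small, generic illustration of the discrimination assessment mentioned above, the snippet below computes Harrell's concordance index (C-index) for right-censored survival data in plain NumPy; the follow-up times, event indicators and risk scores are invented.

        import numpy as np

        def concordance_index(time, event, risk):
            """Harrell's C-index: fraction of comparable pairs ordered correctly by risk."""
            concordant, comparable = 0.0, 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # A pair is comparable if subject i had the event before subject j's time.
                    if event[i] == 1 and time[i] < time[j]:
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / comparable

        # Hypothetical follow-up times (months), event indicators and model risk scores.
        time = np.array([5, 12, 20, 7, 30, 25])
        event = np.array([1, 1, 0, 1, 0, 1])
        risk = np.array([0.9, 0.6, 0.2, 0.8, 0.1, 0.3])
        print("C-index:", round(concordance_index(time, event, risk), 3))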

  17. Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.

    PubMed

    Zhang, Wen; Chen, Yanlin; Li, Dingfang

    2017-11-25

    Interactions between drugs and target proteins provide important information for drug discovery. To date, experiments have identified only a small number of drug-target interactions. Therefore, the development of computational methods for drug-target interaction prediction is an urgent task of theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. First, we calculate drug-drug linear neighborhood similarity in the feature spaces by considering how to reconstruct each data point from its neighbors. Then, we take these similarities as a manifold over the drugs and assume that the manifold is unchanged in the interaction space. Finally, we predict unobserved interactions between known drugs and targets by using the drug-drug linear neighborhood similarity and the known drug-target interactions. The experiments show that LPLNI can make high-accuracy predictions on four benchmark datasets using only known drug-target interactions. Furthermore, we consider incorporating chemical structures into LPLNI models. Experimental results demonstrate that the model with integrated information (LPLNI-II) can produce improved performances, better than other state-of-the-art methods. Known drug-target interactions are thus an important information source for computational predictions, and the usefulness of the proposed method is demonstrated by cross-validation and a case study.
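
    A simplified sketch of the label-propagation idea follows: a row-normalized drug-drug neighborhood weight matrix W is built from feature distances (a simplification of the constrained neighborhood reconstruction used by LPLNI), and the known interaction matrix Y0 is propagated by iterating Y <- alpha*W*Y + (1 - alpha)*Y0. All data are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_drugs, n_targets, k, alpha = 30, 8, 5, 0.8

        features = rng.random((n_drugs, 20))                           # drug feature vectors (synthetic)
        Y0 = (rng.random((n_drugs, n_targets)) < 0.1).astype(float)    # known interactions (synthetic)

        # Neighborhood weights: keep the k nearest drugs, weight by inverse distance,
        # then row-normalize.  (LPLNI derives these weights by a constrained
        # reconstruction of each drug from its neighbors; this is a simplification.)
        dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        W = np.zeros((n_drugs, n_drugs))
        for i in range(n_drugs):
            nbrs = np.argsort(dist[i])[:k]
            W[i, nbrs] = 1.0 / (dist[i, nbrs] + 1e-9)
        W /= W.sum(axis=1, keepdims=True)

        # Label propagation on the interaction matrix.
        Y = Y0.copy()
        for _ in range(50):
            Y = alpha * W @ Y + (1 - alpha) * Y0

        scores = Y * (1 - Y0)   # rank only unobserved drug-target pairs
        print("top predicted pair:", np.unravel_index(np.argmax(scores), scores.shape))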

  18. DEEP--a tool for differential expression effector prediction.

    PubMed

    Degenhardt, Jost; Haubrock, Martin; Dönitz, Jürgen; Wingender, Edgar; Crass, Torsten

    2007-07-01

    High-throughput methods for measuring transcript abundance, like SAGE or microarrays, are widely used for determining differences in gene expression between different tissue types, dignities (normal/malignant) or time points. Further analysis of such data frequently aims at the identification of gene interaction networks that form the causal basis for the observed properties of the systems under examination. To this end, it is usually not sufficient to rely on the measured gene expression levels alone; rather, additional biological knowledge has to be taken into account in order to generate useful hypotheses about the molecular mechanism leading to the realization of a certain phenotype. We present a method that combines gene expression data with biological expert knowledge on molecular interaction networks, as described by the TRANSPATH database on signal transduction, to predict additional--and not necessarily differentially expressed--genes or gene products which might participate in processes specific for either of the examined tissues or conditions. In a first step, significance values for over-expression in tissue/condition A or B are assigned to all genes in the expression data set. Genes with a significance value exceeding a certain threshold are used as starting points for the reconstruction of a graph with signaling components as nodes and signaling events as edges. In a subsequent graph traversal process, again starting from the previously identified differentially expressed genes, all encountered nodes 'inherit' all their starting nodes' significance values. In a final step, the graph is visualized, the nodes being colored according to a weighted average of their inherited significance values. Each node's, or sub-network's, predominant color, ranging from green (significant for tissue/condition A) over yellow (not significant for either tissue/condition) to red (significant for tissue/condition B), thus gives an immediate visual clue as to which molecules or sub-networks might participate in processes specific for either of the examined tissues or conditions.
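
    The toy sketch below illustrates only the propagation step described above: seed nodes carry significance values for condition A or B, every node reachable from a seed inherits those values, and the average inherited value determines a node's leaning. The network topology and values are invented, not TRANSPATH content.

        import networkx as nx

        # Toy signaling network (directed edges: signaling events).  Invented topology.
        G = nx.DiGraph([("TNF", "TRAF2"), ("TRAF2", "NFKB1"), ("NFKB1", "IL6"),
                        ("EGF", "EGFR"), ("EGFR", "NFKB1")])

        # Seed significance values from differential expression:
        # positive -> condition B, negative -> condition A (invented numbers).
        seeds = {"TNF": +2.5, "EGF": -1.5}

        inherited = {n: [] for n in G.nodes}
        for seed, value in seeds.items():
            inherited[seed].append(value)
            for node in nx.descendants(G, seed):      # every node reachable from the seed
                inherited[node].append(value)

        for node, values in inherited.items():
            score = sum(values) / len(values) if values else 0.0
            leaning = "B" if score > 0 else "A" if score < 0 else "neutral"
            print(f"{node:6s} score={score:+.2f} -> condition {leaning}")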

  19. A Comparison of Predictive Thermo and Water Solvation Property Prediction Tools and Experimental Data for Selected Traditional Chemical Warfare Agents and Simulants II: COSMO RS and COSMOTherm

    DTIC Science & Technology

    2017-04-01

    This report compares thermodynamic and water solvation property predictions from the COSMO-RS and COSMOtherm tools with experimental data for selected traditional chemical warfare agents and simulants.

  20. C-mii: a tool for plant miRNA and target identification.

    PubMed

    Numnark, Somrak; Mhuantong, Wuttichai; Ingsriswang, Supawadee; Wichadakul, Duangdao

    2012-01-01

    MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and an understanding of the input and output of the connected programs to reproduce. To overcome these limitations and programming complexities, we propose C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's-eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets.

  1. C-mii: a tool for plant miRNA and target identification

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and an understanding of the input and output of the connected programs to reproduce. Results To overcome these limitations and programming complexities, we propose C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's-eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. Conclusions C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets.

  2. Evaluation in medical education: A topical review of target parameters, data collection tools and confounding factors.

    PubMed

    Schiekirka, Sarah; Feufel, Markus A; Herrmann-Lingen, Christoph; Raupach, Tobias

    2015-01-01

    Evaluation is an integral part of education in German medical schools. According to the quality standards set by the German Society for Evaluation, evaluation tools must provide an accurate and fair appraisal of teaching quality. Thus, data collection tools must be highly reliable and valid. This review summarises the current literature on evaluation of medical education with regard to the possible dimensions of teaching quality, the psychometric properties of survey instruments and potential confounding factors. We searched PubMed, PsycINFO and PSYNDEX for literature on evaluation in medical education and included studies published up until June 30, 2011 as well as articles identified in the "grey literature". Results are presented as a narrative review. We identified four dimensions of teaching quality: structure, process, teacher characteristics, and outcome. Student ratings are predominantly used to address the first three dimensions, and a number of reliable tools are available for this purpose. However, potential confounders of student ratings pose a threat to the validity of these instruments. Outcome is usually operationalised in terms of student performance on examinations, but methodological problems may limit the usability of these data for evaluation purposes. In addition, not all examinations at German medical schools meet current quality standards. The choice of tools for evaluating medical education should be guided by the dimension that is targeted by the evaluation. Likewise, evaluation results can only be interpreted within the context of the construct addressed by the data collection tool that was used as well as its specific confounding factors.

  3. Small-molecule inhibitors of the receptor tyrosine kinases: promising tools for targeted cancer therapies.

    PubMed

    Hojjat-Farsangi, Mohammad

    2014-08-08

    Chemotherapeutic and cytotoxic drugs are widely used in the treatment of cancer. In spite of improvements in patients' quality of life, their effectiveness is compromised by several disadvantages. This creates a demand for new, effective strategies that focus on tumor cells and have minimal side effects. Targeted cancer therapies and personalized medicine have been defined as a new type of emerging treatment. Small molecule inhibitors (SMIs) are among the most effective drugs for targeted cancer therapy. The growing number of approved SMIs of receptor tyrosine kinases (RTKs), i.e., tyrosine kinase inhibitors (TKIs), in clinical oncology reflects the increasing attention to and application of these therapeutic tools. Most of the currently approved RTK-TKIs in preclinical and clinical settings are multi-targeted inhibitors with several side effects. Only a few specific/selective RTK-TKIs have been developed for the treatment of cancer patients, and these have shown less deleterious effects compared with multi-targeted inhibitors. This review intends to highlight the importance of developing specific/selective TKIs with fewer side effects and more manageable profiles. The article provides an overview of: (1) the characteristics and function of RTKs and TKIs; (2) recent advances in the improvement of specific/selective RTK-TKIs in preclinical or clinical settings; and (3) emerging RTKs for targeted cancer therapies by TKIs.

  4. Global analysis of bacterial transcription factors to predict cellular target processes.

    PubMed

    Doerks, Tobias; Andrade, Miguel A; Lathe, Warren; von Mering, Christian; Bork, Peer

    2004-03-01

    Whole-genome sequences are now available for >100 bacterial species, giving unprecedented power to comparative genomics approaches. We have applied genome-context methods to predict target processes that are regulated by transcription factors (TFs). Of 128 orthologous groups of proteins annotated as TFs, to date, 36 are functionally uncharacterized; in our analysis we predict a probable cellular target process or biochemical pathway for half of these functionally uncharacterized TFs.

  5. Cas9-based tools for targeted genome editing and transcriptional control.

    PubMed

    Xu, Tao; Li, Yongchao; Van Nostrand, Joy D; He, Zhili; Zhou, Jizhong

    2014-03-01

    Development of tools for targeted genome editing and regulation of gene expression has significantly expanded our ability to elucidate the mechanisms of interesting biological phenomena and to engineer desirable biological systems. Recent rapid progress in the study of a clustered, regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated (Cas) protein system in bacteria has facilitated the development of newly facile and programmable platforms for genome editing and transcriptional control in a sequence-specific manner. The core RNA-guided Cas9 endonuclease in the type II CRISPR system has been harnessed to realize gene mutation and DNA deletion and insertion, as well as transcriptional activation and repression, with multiplex targeting ability, just by customizing 20-nucleotide RNA components. Here we describe the molecular basis of the type II CRISPR/Cas system and summarize applications and factors affecting its utilization in model organisms. We also discuss the advantages and disadvantages of Cas9-based tools in comparison with widely used customizable tools, such as Zinc finger nucleases and transcription activator-like effector nucleases.

  6. A dual selection based, targeted gene replacement tool for Magnaporthe grisea and Fusarium oxysporum.

    PubMed

    Khang, Chang Hyun; Park, Sook-Young; Lee, Yong-Hwan; Kang, Seogchan

    2005-06-01

    Rapid progress in fungal genome sequencing presents many new opportunities for functional genomic analysis of fungal biology through the systematic mutagenesis of the genes identified through sequencing. However, the lack of efficient tools for targeted gene replacement is a limiting factor for fungal functional genomics, as it often necessitates the screening of a large number of transformants to identify the desired mutant. We developed an efficient method of gene replacement and evaluated factors affecting the efficiency of this method using two plant pathogenic fungi, Magnaporthe grisea and Fusarium oxysporum. This method is based on Agrobacterium tumefaciens-mediated transformation with a mutant allele of the target gene flanked by the herpes simplex virus thymidine kinase (HSVtk) gene as a conditional negative selection marker against ectopic transformants. The HSVtk gene product converts 5-fluoro-2'-deoxyuridine to a compound toxic to diverse fungi. Because ectopic transformants express HSVtk, while gene replacement mutants lack HSVtk, growing transformants on a medium amended with 5-fluoro-2'-deoxyuridine facilitates the identification of targeted mutants by counter-selecting against ectopic transformants. In addition to M. grisea and F. oxysporum, the method and associated vectors are likely to be applicable to manipulating genes in a broad spectrum of fungi, thus potentially serving as an efficient, universal functional genomic tool for harnessing the growing body of fungal genome sequence data to study fungal biology.

  7. A simple prediction tool for inhaled corticosteroid response in asthmatic children.

    PubMed

    Wu, Yi-Fan; Su, Ming-Wei; Chiang, Bor-Luen; Yang, Yao-Hsu; Tsai, Ching-Hui; Lee, Yungling L

    2017-12-07

    Inhaled corticosteroids are recommended as the first-line controller medication for childhood asthma owing to their multiple clinical benefits. However, heterogeneity in the response to these drugs remains a significant clinical problem. Children aged 5 to 18 years with mild to moderate persistent asthma were recruited into the Taiwanese Consortium of Childhood Asthma Study. Their responses to inhaled corticosteroids were assessed based on their improvements in the asthma control test and peak expiratory flow. The predictors of responsiveness were demographic and clinical features that are available in primary care settings. We developed a prediction model using logistic regression and simplified it into a practical tool, assessing its predictive performance using the area under the receiver operating characteristic curve. Of the 73 asthmatic children with baseline and follow-up outcome measurements for inhaled corticosteroid treatment, 24 (33%) were defined as non-responders. The tool consists of three predictors yielding a total score between 0 and 5: age at physician diagnosis of asthma, sex, and exhaled nitric oxide. Sensitivity and specificity of the tool for predicting inhaled corticosteroid non-responsiveness, at a score cut-off of 3, were 0.75 and 0.69, respectively. The area under the receiver operating characteristic curve for the prediction tool was 0.763. Our prediction tool represents a simple and low-cost method for predicting the response to inhaled corticosteroid treatment in asthmatic children.
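
    To illustrate the general pattern of such a point score, the sketch below assigns hypothetical points to three predictors, applies the score to a synthetic cohort and reports the AUC and the sensitivity/specificity at a cut-off of 3; the point assignments and thresholds are illustrative assumptions, not the published tool.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        n = 200

        # Synthetic cohort: age at asthma diagnosis (years), sex (1 = male), FeNO (ppb).
        age_dx = rng.integers(2, 16, n)
        male = rng.integers(0, 2, n)
        feno = rng.normal(25, 10, n).clip(5, 60)

        # Hypothetical point assignments (illustrative only, not the published scoring):
        score = (age_dx >= 6).astype(int) * 2 + male * 1 + (feno >= 30).astype(int) * 2

        # Synthetic outcome loosely linked to the score: 1 = non-responder to ICS.
        non_responder = (rng.random(n) < 0.15 + 0.1 * score).astype(int)

        print("AUC:", round(roc_auc_score(non_responder, score), 3))
        print("sens/spec at score >= 3:",
              round(((score >= 3) & (non_responder == 1)).sum() / non_responder.sum(), 2),
              round(((score < 3) & (non_responder == 0)).sum() / (non_responder == 0).sum(), 2))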

  8. Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus

    PubMed Central

    Ma, Rui; Cui, He; Lee, Sang-Hun; Anastasio, Thomas J.

    2013-01-01

    Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets. PMID:23365185

  9. TarPmiR: a new approach for microRNA target site prediction.

    PubMed

    Ding, Jun; Li, Xiaoman; Hu, Haiyan

    2016-09-15

    The identification of microRNA (miRNA) target sites is fundamentally important for studying gene regulation. There are dozens of computational methods available for miRNA target site prediction. Despite their existence, we still cannot reliably identify miRNA target sites, partially due to our limited understanding of the characteristics of miRNA target sites. The recently published CLASH (crosslinking ligation and sequencing of hybrids) data provide an unprecedented opportunity to study the characteristics of miRNA target sites and improve miRNA target site prediction methods. Applying four different machine learning approaches to the CLASH data, we identified seven new features of miRNA target sites. Combining these new features with those commonly used by existing miRNA target prediction algorithms, we developed an approach called TarPmiR for miRNA target site prediction. Testing on two human and one mouse non-CLASH datasets, we showed that TarPmiR predicted more than 74.2% of true miRNA target sites in each dataset. Compared with three existing approaches, we demonstrated that TarPmiR is superior to these existing approaches in terms of better recall and better precision. The TarPmiR software is freely available at http://hulab.ucf.edu/research/projects/miRNA/TarPmiR/. Contacts: haihu@cs.ucf.edu or xiaoman@mail.ucf.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  10. Nanodelivery Systems as New Tools for Immunostimulant or Vaccine Administration: Targeting the Fish Immune System

    PubMed Central

    Ji, Jie; Torrealba, Debora; Ruyra, Àngels; Roher, Nerea

    2015-01-01

    Fish disease treatments have progressed significantly over the last few years and have moved from the massive use of antibiotics to the development of vaccines mainly based on inactivated bacteria. Today, the incorporation of immunostimulants and antigens into nanomaterials provide us with new tools to enhance the performance of immunostimulation. Nanoparticles are dispersions or solid particles designed with specific physical properties (size, surface charge, or loading capacity), which allow controlled delivery and therefore improved targeting and stimulation of the immune system. The use of these nanodelivery platforms in fish is in the initial steps of development. Here we review the advances in the application of nanoparticles to fish disease prevention including: the type of biomaterial, the type of immunostimulant or vaccine loaded into the nanoparticles, and how they target the fish immune system. PMID:26492276

  11. Target-motion prediction for robotic search and rescue in wilderness environments.

    PubMed

    Macwan, Ashish; Nejat, Goldie; Benhabib, Beno

    2011-10-01

    This paper presents a novel modular methodology for predicting a lost person's (motion) behavior for autonomous coordinated multirobot wilderness search and rescue. The new concept of isoprobability curves is introduced and developed, which represents a unique mechanism for identifying the target's probable location at any given time within the search area while accounting for influences such as terrain topology, target physiology and psychology, clues found, etc. The isoprobability curves are propagated over time and space. The significant tangible benefit of the proposed target-motion prediction methodology is demonstrated through a comparison to a nonprobabilistic approach, as well as through a simulated realistic wilderness search scenario.
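
    A very small sketch of the underlying idea: the lost person's location is represented as a probability distribution over a grid and propagated with a terrain-weighted random-walk step, and an isoprobability region is read off as the set of cells containing a chosen share of the probability mass. Grid size, kernel and terrain weights are invented, and grid edges wrap for brevity.

        import numpy as np

        size, steps = 41, 30
        prob = np.zeros((size, size))
        prob[size // 2, size // 2] = 1.0        # last known position

        # Terrain "ease of travel" weights (invented): harder terrain on the left half.
        terrain = np.ones((size, size))
        terrain[:, : size // 2] = 0.5

        def step(p):
            """One time step: spread probability to 4-neighbours, weighted by terrain."""
            new = np.zeros_like(p)
            for dy, dx in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
                new += 0.2 * np.roll(np.roll(p, dy, axis=0), dx, axis=1)
            new *= terrain
            return new / new.sum()              # renormalize to a probability distribution

        for _ in range(steps):
            prob = step(prob)

        # An isoprobability "curve" at a chosen level: cells enclosing e.g. 90% of the mass.
        flat = np.sort(prob.ravel())[::-1]
        threshold = flat[np.searchsorted(np.cumsum(flat), 0.9)]
        print("cells inside the 90% region:", int((prob >= threshold).sum()))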

  12. Predicting tool life in turning operations using neural networks and image processing

    NASA Astrophysics Data System (ADS)

    Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D.

    2018-05-01

    A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the parameter of tool wear, VB, is measured with conventional methods and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges and the subsequent model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software that estimates tool wear using edge images. Although the fully automated solution, the Neural Wear software for tool wear recognition plus the ANN model of tool life prediction, presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.
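
    The sketch below imitates only the second step on synthetic wear curves: a small neural network maps cutting time to flank wear VB using data from two edges, is evaluated on a third, and tool life is read off where predicted VB crosses an assumed 0.3 mm criterion. It is not the Neural Wear software or the published model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        def edge_data(noise):
            """Synthetic flank-wear curve VB(t) for one cutting edge (invented model)."""
            t = np.linspace(0, 40, 80)                      # cutting time, minutes
            vb = 0.05 + 0.004 * t + 0.0001 * t ** 2 + rng.normal(0, noise, t.size)
            return t, vb

        t1, vb1 = edge_data(0.01)
        t2, vb2 = edge_data(0.01)
        t3, vb3 = edge_data(0.01)                           # held-out edge

        X_train = np.concatenate([t1, t2]).reshape(-1, 1)
        y_train = np.concatenate([vb1, vb2])

        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                             random_state=0).fit(X_train, y_train)

        pred = model.predict(t3.reshape(-1, 1))
        print("mean abs error on third edge [mm]:", round(np.abs(pred - vb3).mean(), 4))

        criterion = 0.30                                    # assumed VB tool-life criterion, mm
        life = t3[np.argmax(pred >= criterion)] if (pred >= criterion).any() else None
        print("predicted tool life [min]:", life)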

  13. Drug Target Prediction and Repositioning Using an Integrated Network-Based Approach

    PubMed Central

    Emig, Dorothea; Ivliev, Alexander; Pustovalova, Olga; Lancashire, Lee; Bureeva, Svetlana; Nikolsky, Yuri; Bessarabova, Marina

    2013-01-01

    The discovery of novel drug targets is a significant challenge in drug development. Although the human genome comprises approximately 30,000 genes, proteins encoded by fewer than 400 are used as drug targets in the treatment of diseases. Therefore, novel drug targets are extremely valuable as the source for first in class drugs. On the other hand, many of the currently known drug targets are functionally pleiotropic and involved in multiple pathologies. Several of them are exploited for treating multiple diseases, which highlights the need for methods to reliably reposition drug targets to new indications. Network-based methods have been successfully applied to prioritize novel disease-associated genes. In recent years, several such algorithms have been developed, some focusing on local network properties only, and others taking the complete network topology into account. Common to all approaches is the understanding that novel disease-associated candidates are in close overall proximity to known disease genes. However, the relevance of these methods to the prediction of novel drug targets has not yet been assessed. Here, we present a network-based approach for the prediction of drug targets for a given disease. The method allows both repositioning drug targets known for other diseases to the given disease and the prediction of unexploited drug targets which are not used for treatment of any disease. Our approach takes as input a disease gene expression signature and a high-quality interaction network and outputs a prioritized list of drug targets. We demonstrate the high performance of our method and highlight the usefulness of the predictions in three case studies. We present novel drug targets for scleroderma and different types of cancer with their underlying biological processes. Furthermore, we demonstrate the ability of our method to identify non-suspected repositioning candidates using diabetes type 1 as an example. PMID:23593264
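
    A toy version of the proximity principle used by such network-based methods: candidate proteins are ranked by their average shortest-path distance in an interaction network to a disease gene signature. The network and gene sets below are invented.

        import networkx as nx

        # Toy protein-protein interaction network (invented edges).
        G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"),
                      ("E", "F"), ("D", "G"), ("F", "G"), ("G", "H")])

        disease_genes = {"A", "C"}                 # disease expression signature (invented)
        candidates = [n for n in G.nodes if n not in disease_genes]

        def proximity(node):
            """Average shortest-path distance from a node to the disease gene set."""
            return sum(nx.shortest_path_length(G, node, d) for d in disease_genes) / len(disease_genes)

        ranked = sorted(candidates, key=proximity)
        for node in ranked:
            print(node, round(proximity(node), 2))
        # The closest candidates would be prioritized as potential (repositioned) drug targets.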

  14. Drug-target interaction prediction via class imbalance-aware ensemble learning.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2016-12-22

    Multiple computational methods for predicting drug-target interactions have been developed to facilitate the drug discovery process. These methods use available data on known drug-target interactions to train classifiers with the purpose of predicting new undiscovered interactions. However, a key challenge regarding these data that has not yet been addressed by these methods, namely class imbalance, is potentially degrading the prediction performance. Class imbalance can be divided into two sub-problems. Firstly, the number of known interacting drug-target pairs is much smaller than that of non-interacting drug-target pairs. This imbalance ratio between interacting and non-interacting drug-target pairs is referred to as the between-class imbalance. Between-class imbalance degrades prediction performance due to the bias in prediction results towards the majority class (i.e. the non-interacting pairs), leading to more prediction errors in the minority class (i.e. the interacting pairs). Secondly, there are multiple types of drug-target interactions in the data, with some types having relatively fewer members (or being less represented) than others. This variation in representation of the different interaction types leads to another kind of imbalance referred to as the within-class imbalance. In within-class imbalance, prediction results are biased towards the better represented interaction types, leading to more prediction errors in the less represented interaction types. We propose an ensemble learning method that incorporates techniques to address the issues of between-class imbalance and within-class imbalance. Experiments show that the proposed method improves results over four state-of-the-art methods. In addition, we simulated cases for new drugs and targets, for which no prior interactions are known, to see how our method would perform in predicting their interactions. Our method displayed satisfactory prediction performance in these simulated cases as well.
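
    A compact sketch of how the between-class imbalance can be handled by an ensemble: each base classifier sees all minority (interacting) pairs plus a different random undersample of the majority (non-interacting) pairs, and the scores are averaged. The features are synthetic and the base learner arbitrary; the published method additionally addresses within-class imbalance.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n_pos, n_neg, n_feat = 100, 2000, 20      # heavy between-class imbalance

        X = np.vstack([rng.normal(0.7, 1, (n_pos, n_feat)), rng.normal(0.0, 1, (n_neg, n_feat))])
        y = np.array([1] * n_pos + [0] * n_neg)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

        pos_idx = np.where(y_tr == 1)[0]
        neg_idx = np.where(y_tr == 0)[0]

        scores = np.zeros(len(y_te))
        n_members = 15
        for _ in range(n_members):
            # Each member: all minority samples + an equally sized majority undersample.
            sub_neg = rng.choice(neg_idx, size=len(pos_idx), replace=False)
            idx = np.concatenate([pos_idx, sub_neg])
            clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr[idx], y_tr[idx])
            scores += clf.predict_proba(X_te)[:, 1]
        scores /= n_members

        print("ensemble AUC:", round(roc_auc_score(y_te, scores), 3))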

  15. Augmenting Predictive Modeling Tools with Clinical Insights for Care Coordination Program Design and Implementation.

    PubMed

    Johnson, Tracy L; Brewer, Daniel; Estacio, Raymond; Vlasimsky, Tara; Durfee, Michael J; Thompson, Kathy R; Everhart, Rachel M; Rinehart, Deborath J; Batal, Holly

    2015-01-01

    The Center for Medicare and Medicaid Innovation (CMMI) awarded Denver Health's (DH) integrated, safety net health care system $19.8 million to implement a "population health" approach into the delivery of primary care. This major practice transformation builds on the Patient Centered Medical Home (PCMH) and Wagner's Chronic Care Model (CCM) to achieve the "Triple Aim": improved health for populations, care to individuals, and lower per capita costs. This paper presents a case study of how DH integrated published predictive models and front-line clinical judgment to implement a clinically actionable, risk stratification of patients. This population segmentation approach was used to deploy enhanced care team staff resources and to tailor care-management services to patient need, especially for patients at high risk of avoidable hospitalization. Developing, implementing, and gaining clinical acceptance of the Health Information Technology (HIT) solution for patient risk stratification was a major grant objective. In addition to describing the Information Technology (IT) solution itself, we focus on the leadership and organizational processes that facilitated its multidisciplinary development and ongoing iterative refinement, including the following: team composition, target population definition, algorithm rule development, performance assessment, and clinical-workflow optimization. We provide examples of how dynamic business intelligence tools facilitated clinical accessibility for program design decisions by enabling real-time data views from a population perspective down to patient-specific variables. We conclude that population segmentation approaches that integrate clinical perspectives with predictive modeling results can better identify high opportunity patients amenable to medical home-based, enhanced care team interventions.

  16. FSPP: A Tool for Genome-Wide Prediction of smORF-Encoded Peptides and Their Functions

    PubMed Central

    Li, Hui; Xiao, Li; Zhang, Lili; Wu, Jiarui; Wei, Bin; Sun, Ninghui; Zhao, Yi

    2018-01-01

    smORFs are small open reading frames of less than 100 codons. Recent low-throughput experiments showed that many smORF-encoded peptides (SEPs) play crucial roles in processes such as the regulation of transcription or translation, transport across membranes and antimicrobial activity. In order to identify more functional SEPs, genome-wide prediction tools are needed to guide low-throughput experiments. In this study, we put forward a functional smORF-encoded peptide predictor (FSPP), which aims to predict authentic SEPs and their functions in a high-throughput manner. FSPP uses the overlap of SEPs detected by Ribo-seq and mass spectrometry as its target set. Using expression data at the transcription and translation levels, FSPP builds two co-expression networks. Combining these with co-location relations, FSPP constructs a compound network and then annotates SEPs with the functions of adjacent nodes. Tested on 38 sequenced samples from 5 human cell lines, FSPP successfully predicted 856 out of 960 annotated proteins. Interestingly, FSPP also highlighted 568 functional SEPs from these samples. On comparison, the roles predicted by FSPP were consistent with known functions. These results suggest that FSPP is a reliable tool for the identification of functional small peptides. FSPP source code can be acquired at https://www.bioinfo.org/FSPP. PMID:29675032

  17. Analysis and Visualization Tool for Targeted Amplicon Bisulfite Sequencing on Ion Torrent Sequencers

    PubMed Central

    Pabinger, Stephan; Ernst, Karina; Pulverer, Walter; Kallmeyer, Rainer; Valdes, Ana M.; Metrustry, Sarah; Katic, Denis; Nuzzo, Angelo; Kriegner, Albert; Vierlinger, Klemens; Weinhaeusel, Andreas

    2016-01-01

    Targeted sequencing of PCR amplicons generated from bisulfite-deaminated DNA is a flexible, cost-effective way to study methylation of a sample at single CpG resolution and perform subsequent multi-target, multi-sample comparisons. Currently, no platform-specific protocol, support, or analysis solution is provided to perform targeted bisulfite sequencing on a Personal Genome Machine (PGM). Here, we present a novel tool, called TABSAT, for analyzing targeted bisulfite sequencing data generated on Ion Torrent sequencers. The workflow starts with raw sequencing data, performs quality assessment, and uses a tailored version of Bismark to map the reads to a reference genome. The pipeline visualizes results as lollipop plots and is able to deduce specific methylation patterns present in a sample. The obtained profiles are then summarized and compared between samples. In order to assess the performance of the targeted bisulfite sequencing workflow, 48 samples were used to generate 53 different Bisulfite-Sequencing PCR amplicons from each sample, resulting in 2,544 amplicon targets. We obtained a mean coverage of 282X using 1,196,822 aligned reads. Next, we compared the sequencing results of these targets to the methylation level of the corresponding sites on an Illumina 450k methylation chip. The calculated average Pearson correlation coefficient of 0.91 confirms the sequencing results with one of the industry-leading CpG methylation platforms and shows that targeted amplicon bisulfite sequencing provides an accurate and cost-efficient method for DNA methylation studies, e.g., to provide platform-independent confirmation of Illumina Infinium 450k methylation data. TABSAT offers a novel way to analyze data generated by Ion Torrent instruments and can also be used with data from the Illumina MiSeq platform. It can be easily accessed via the Platomics platform, which offers a web-based graphical user interface along with sample and parameter storage. TABSAT is freely available.

  18. Hinge Moment Coefficient Prediction Tool and Control Force Analysis of Extra-300 Aerobatic Aircraft

    NASA Astrophysics Data System (ADS)

    Nurohman, Chandra; Arifianto, Ony; Barecasco, Agra

    2018-04-01

    This paper presents the development of a tool for predicting the hinge moment coefficients of subsonic aircraft based on Roskam’s method, including its validation and its application to the Extra-300. The hinge moment coefficients are used to predict the stick forces of the aircraft during several aerobatic maneuvers, i.e., the inside loop, half Cuban 8, split-S, and aileron roll. The maximum longitudinal stick force, 566.97 N, occurs in the inside loop, while the maximum lateral stick force, 340.82 N, occurs in the aileron roll. Furthermore, validation of the hinge moment prediction method is performed using Cessna 172 data.
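
    As a hedged illustration of how a predicted hinge moment coefficient translates into a control force, the snippet below applies the standard textbook relation H = Ch*q*Se*ce and Fs = G*H with invented values for airspeed, elevator geometry and control gearing (not Extra-300 data).

        # Invented, order-of-magnitude inputs (not Extra-300 data).
        rho = 1.225          # air density, kg/m^3
        V = 80.0             # true airspeed, m/s
        Ch = -0.012          # elevator hinge moment coefficient (from a prediction tool)
        S_e = 0.9            # elevator area aft of the hinge line, m^2
        c_e = 0.25           # elevator mean chord aft of the hinge line, m
        G = 1.4              # control gearing, rad of surface deflection per m of stick travel

        q = 0.5 * rho * V ** 2                 # dynamic pressure, Pa
        hinge_moment = Ch * q * S_e * c_e      # N*m
        stick_force = G * hinge_moment         # N (sign gives push/pull direction)

        print(f"dynamic pressure = {q:.0f} Pa")
        print(f"hinge moment = {hinge_moment:.1f} N*m, stick force = {abs(stick_force):.1f} N")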

  19. Research of maneuvering target prediction and tracking technology based on IMM algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Zheng; Mao, Yao; Deng, Chao; Liu, Qiong; Chen, Jing

    2016-09-01

    Maneuvering target prediction and tracking technology is widely used in both military and civilian applications, and it has long been both a focus of research and a difficult problem. In the electro-optical acquisition-tracking-pointing (ATP) system, the traditional maneuvering targets are primarily ballistic targets, large aircraft and other large objects. Such targets move fast along strongly regular trajectories, and Kalman filtering and polynomial fitting perform well when used to track them. In recent years, small unmanned aerial vehicles have developed rapidly because they are small, nimble and simple to operate. Although they are close-in, slow and small targets in the ATP observation system, these vehicles are highly maneuverable. Moreover, because they are manually operated, their acceleration changes greatly and they move erratically. Consequently, prediction and tracking precision is low when traditional algorithms are used to track their maneuvers, such as speeding up, turning and climbing. The interacting multiple model (IMM) algorithm uses multiple models, with interactions between them, to match the target's real trajectory. The IMM algorithm can switch models based on a Markov chain to adapt to changes in the target's trajectory, so it is well suited to the prediction and tracking problems posed by small unmanned aerial vehicles because it adapts better to irregular movement. This paper sets up a model set consisting of the constant velocity (CV), constant acceleration (CA), constant turn (CT) and current statistical models. Simulations and analysis of real trajectory data from small unmanned aerial vehicles show that prediction and tracking based on the interacting multiple model algorithm achieves relatively lower tracking error and improved tracking precision.
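
    The sketch below is a stripped-down, one-dimensional multiple-model filter: a constant-velocity and a constant-acceleration Kalman filter run in parallel, model probabilities are updated from measurement likelihoods, and the estimates are fused. It omits the IMM mixing/interaction step and uses an invented trajectory and noise levels, so it only illustrates the multiple-model principle.

        import numpy as np

        rng = np.random.default_rng(5)
        dt, steps, r = 0.1, 100, 0.5            # time step, number of steps, meas. noise std

        # Two motion models in 1-D: constant velocity (CV) and constant acceleration (CA).
        F_cv = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 0]])
        F_ca = np.array([[1, dt, 0.5 * dt**2], [0, 1, dt], [0, 0, 1]])
        H = np.array([[1.0, 0.0, 0.0]])
        Q = np.eye(3) * 0.01
        R = np.array([[r**2]])

        # Simulated target: cruises, then accelerates (a rough stand-in for a UAV maneuver).
        truth = np.zeros((steps, 3))
        truth[0] = [0, 2, 0]
        for k in range(1, steps):
            a = 0.0 if k < 50 else 3.0
            truth[k, 2] = a
            truth[k, 1] = truth[k-1, 1] + a * dt
            truth[k, 0] = truth[k-1, 0] + truth[k-1, 1] * dt
        z = truth[:, 0] + rng.normal(0, r, steps)

        def kf_step(F, x, P, meas):
            """One Kalman predict/update step; returns state, covariance, likelihood."""
            x_pred, P_pred = F @ x, F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            innov = meas - (H @ x_pred)[0]
            lik = np.exp(-0.5 * innov**2 / S[0, 0]) / np.sqrt(2 * np.pi * S[0, 0])
            x_new = x_pred + (K * innov).ravel()
            P_new = (np.eye(3) - K @ H) @ P_pred
            return x_new, P_new, lik

        states = {m: np.array([z[0], 0.0, 0.0]) for m in ("CV", "CA")}
        covs = {m: np.eye(3) for m in ("CV", "CA")}
        mu = {"CV": 0.5, "CA": 0.5}                      # model probabilities
        F_of = {"CV": F_cv, "CA": F_ca}

        for k in range(1, steps):
            liks = {}
            for m in ("CV", "CA"):
                states[m], covs[m], liks[m] = kf_step(F_of[m], states[m], covs[m], z[k])
            total = sum(mu[m] * liks[m] for m in mu)
            mu = {m: mu[m] * liks[m] / total for m in mu}

        fused = sum(mu[m] * states[m] for m in mu)
        print("final model probabilities:", {m: round(p, 2) for m, p in mu.items()})
        print("fused position estimate vs truth:", round(fused[0], 2), "vs", round(truth[-1, 0], 2))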

  20. Target Highlights in CASP9: Experimental Target Structures for the Critical Assessment of Techniques for Protein Structure Prediction

    PubMed Central

    Kryshtafovych, Andriy; Moult, John; Bartual, Sergio G.; Bazan, J. Fernando; Berman, Helen; Casteel, Darren E.; Christodoulou, Evangelos; Everett, John K.; Hausmann, Jens; Heidebrecht, Tatjana; Hills, Tanya; Hui, Raymond; Hunt, John F.; Jayaraman, Seetharaman; Joachimiak, Andrzej; Kennedy, Michael A.; Kim, Choel; Lingel, Andreas; Michalska, Karolina; Montelione, Gaetano T.; Otero, José M.; Perrakis, Anastassis; Pizarro, Juan C.; van Raaij, Mark J.; Ramelot, Theresa A.; Rousseau, Francois; Tong, Liang; Wernimont, Amy K.; Young, Jasmine; Schwede, Torsten

    2011-01-01

    One goal of the CASP Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction is to identify the current state of the art in protein structure prediction and modeling. A fundamental principle of CASP is blind prediction on a set of relevant protein targets, i.e. the participating computational methods are tested on a common set of experimental target proteins, for which the experimental structures are not known at the time of modeling. Therefore, the CASP experiment would not have been possible without broad support of the experimental protein structural biology community. In this manuscript, several experimental groups discuss the structures of the proteins which they provided as prediction targets for CASP9, highlighting structural and functional peculiarities of these structures: the long tail fibre protein gp37 from bacteriophage T4, the cyclic GMP-dependent protein kinase Iβ (PKGIβ) dimerization/docking domain, the ectodomain of the JTB (Jumping Translocation Breakpoint) transmembrane receptor, Autotaxin (ATX) in complex with an inhibitor, the DNA-Binding J-Binding Protein 1 (JBP1) domain essential for biosynthesis and maintenance of DNA base-J (β-D-glucosyl-hydroxymethyluracil) in Trypanosoma and Leishmania, an as yet uncharacterized 73-residue domain from Ruminococcus gnavus with a fold typical for PDZ-like domains, a domain from the Phycobilisome (PBS) core-membrane linker (LCM) phycobiliprotein ApcE from Synechocystis, the Heat shock protein 90 (Hsp90) activators PFC0360w and PFC0270w from Plasmodium falciparum, and 2-oxo-3-deoxygalactonate kinase from Klebsiella pneumoniae. PMID:22020785

  1. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
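
    A small sketch of the kind of calculation implied above: the tool's processing stages are modelled as states of a discrete-time Markov chain with an absorbing failure state, and the probability of reaching successful completion is computed. The states and transition probabilities are invented.

        import numpy as np

        # States: 0 = acquire, 1 = parse, 2 = report (done), 3 = failure (absorbing).
        # Invented per-step transition probabilities for an architecture-level sketch.
        P = np.array([
            [0.10, 0.88, 0.00, 0.02],   # acquire -> parse with p=0.88, fail with p=0.02
            [0.00, 0.20, 0.78, 0.02],   # parse   -> report with p=0.78
            [0.00, 0.00, 1.00, 0.00],   # report  (absorbing success state)
            [0.00, 0.00, 0.00, 1.00],   # failure (absorbing)
        ])

        start = np.array([1.0, 0.0, 0.0, 0.0])
        dist = start @ np.linalg.matrix_power(P, 50)   # distribution after 50 steps

        print("P(successful completion) =", round(dist[2], 4))
        print("P(failure)               =", round(dist[3], 4))
        # Reliability of the whole tool follows from the component transition probabilities,
        # so alternative architectures can be compared by editing the matrix P.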

  2. Ramping and Uncertainty Prediction Tool - Analysis and Visualization of Wind Generation Impact on Electrical Grid

    SciT

    Etingov, Pavel; Makarov, Yuri; Subbarao, Kris

    RUT software is designed for use by the Balancing Authorities to predict and display additional requirements caused by the variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. This deficiency of balancing resources can cause serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty including wind, solar and load forecast errors. The tool evaluates required generation for a worst case scenario, with a user-specified confidence level.

  3. External validation of a simple clinical tool used to predict falls in people with Parkinson disease

    PubMed Central

    Duncan, Ryan P.; Cavanaugh, James T.; Earhart, Gammon M.; Ellis, Terry D.; Ford, Matthew P.; Foreman, K. Bo; Leddy, Abigail L.; Paul, Serene S.; Canning, Colleen G.; Thackeray, Anne; Dibble, Leland E.

    2015-01-01

    Background Assessment of fall risk in an individual with Parkinson disease (PD) is a critical yet often time consuming component of patient care. Recently a simple clinical prediction tool based only on fall history in the previous year, freezing of gait in the past month, and gait velocity <1.1 m/s was developed and accurately predicted future falls in a sample of individuals with PD. Methods We sought to externally validate the utility of the tool by administering it to a different cohort of 171 individuals with PD. Falls were monitored prospectively for 6 months following predictor assessment. Results The tool accurately discriminated future fallers from non-fallers (area under the curve [AUC] = 0.83; 95% CI 0.76–0.89), comparable to the developmental study. Conclusion The results validated the utility of the tool for allowing clinicians to quickly and accurately identify an individual’s risk of an impending fall. PMID:26003412

  4. External validation of a simple clinical tool used to predict falls in people with Parkinson disease.

    PubMed

    Duncan, Ryan P; Cavanaugh, James T; Earhart, Gammon M; Ellis, Terry D; Ford, Matthew P; Foreman, K Bo; Leddy, Abigail L; Paul, Serene S; Canning, Colleen G; Thackeray, Anne; Dibble, Leland E

    2015-08-01

    Assessment of fall risk in an individual with Parkinson disease (PD) is a critical yet often time-consuming component of patient care. Recently, a simple clinical prediction tool based only on fall history in the previous year, freezing of gait in the past month, and gait velocity <1.1 m/s was developed and accurately predicted future falls in a sample of individuals with PD. We sought to externally validate the utility of the tool by administering it to a different cohort of 171 individuals with PD. Falls were monitored prospectively for 6 months following predictor assessment. The tool accurately discriminated future fallers from non-fallers (area under the curve [AUC] = 0.83; 95% CI 0.76-0.89), comparable to the developmental study. The results validated the utility of the tool for allowing clinicians to quickly and accurately identify an individual's risk of an impending fall. Copyright © 2015 Elsevier Ltd. All rights reserved.
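
    The tool rests on three predictors named in the abstract: a fall in the previous year, freezing of gait in the past month, and gait velocity below 1.1 m/s. The sketch below simply counts how many predictors are present; treating each as one point is an assumption for illustration, since the original tool maps predictor combinations to fall probabilities.

        def fall_risk_score(fell_last_year: bool,
                            freezing_last_month: bool,
                            gait_velocity_m_s: float) -> int:
            """Count how many of the three published predictors are present.

            One point per predictor is an illustrative assumption; the original
            tool reports a predicted fall probability per predictor combination.
            """
            return (int(fell_last_year)
                    + int(freezing_last_month)
                    + int(gait_velocity_m_s < 1.1))

        # Example: prior faller with normal gait speed and no freezing of gait -> 1
        print(fall_risk_score(True, False, 1.25))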

  5. Musite, a tool for global prediction of general and kinase-specific phosphorylation sites.

    PubMed

    Gao, Jianjiong; Thelen, Jay J; Dunker, A Keith; Xu, Dong

    2010-12-01

    Reversible protein phosphorylation is one of the most pervasive post-translational modifications, regulating diverse cellular processes in various organisms. High throughput experimental studies using mass spectrometry have identified many phosphorylation sites, primarily from eukaryotes. However, the vast majority of phosphorylation sites remain undiscovered, even in well studied systems. Because mass spectrometry-based experimental approaches for identifying phosphorylation events are costly, time-consuming, and biased toward abundant proteins and proteotypic peptides, in silico prediction of phosphorylation sites is potentially a useful alternative strategy for whole proteome annotation. Because of various limitations, current phosphorylation site prediction tools were not well designed for comprehensive assessment of proteomes. Here, we present a novel software tool, Musite, specifically designed for large scale predictions of both general and kinase-specific phosphorylation sites. We collected phosphoproteomics data in multiple organisms from several reliable sources and used them to train prediction models by a comprehensive machine-learning approach that integrates local sequence similarities to known phosphorylation sites, protein disorder scores, and amino acid frequencies. Application of Musite on several proteomes yielded tens of thousands of phosphorylation site predictions at a high stringency level. Cross-validation tests show that Musite achieves some improvement over existing tools in predicting general phosphorylation sites, and it is at least comparable with those for predicting kinase-specific phosphorylation sites. In Musite V1.0, we have trained general prediction models for six organisms and kinase-specific prediction models for 13 kinases or kinase families. Although the current pretrained models were not correlated with any particular cellular conditions, Musite provides a unique functionality for training customized prediction models

  6. Predicting Drug-Target Interactions for New Drug Compounds Using a Weighted Nearest Neighbor Profile.

    PubMed

    van Laarhoven, Twan; Marchiori, Elena

    2013-01-01

    In silico discovery of interactions between drug compounds and target proteins is of core importance for improving the efficiency of the laborious and costly experimental determination of drug-target interaction. Drug-target interaction data are available for many classes of pharmaceutically useful target proteins including enzymes, ion channels, GPCRs and nuclear receptors. However, current drug-target interaction databases contain a small number of drug-target pairs which are experimentally validated interactions. In particular, for some drug compounds (or targets) no interaction data are available at all. This motivates the need for developing methods that predict interacting pairs with high accuracy also for these 'new' drug compounds (or targets). We show that a simple weighted nearest neighbor procedure is highly effective for this task. We integrate this procedure into a recent machine learning method for drug-target interaction prediction that we developed in previous work. Results of experiments indicate that the resulting method predicts true interactions with high accuracy also for new drug compounds and achieves results comparable to or better than those of recent state-of-the-art algorithms. Software is publicly available at http://cs.ru.nl/~tvanlaarhoven/drugtarget2013/.
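
    The following is a minimal sketch, in the spirit of the abstract, of a weighted nearest-neighbour interaction profile for a new drug: the profiles of known drugs are combined with weights that decay over the chemical-similarity ranking. The decay parameter and the toy similarity values are assumptions, not the authors' settings.

        import numpy as np

        def wnn_profile(sim_to_known: np.ndarray,
                        known_profiles: np.ndarray,
                        decay: float = 0.8) -> np.ndarray:
            """Infer an interaction profile for a new drug as a weighted sum of the
            profiles of known drugs, ranked by chemical similarity.

            sim_to_known   -- similarity of the new drug to each known drug, shape (n_drugs,)
            known_profiles -- 0/1 interaction matrix of known drugs, shape (n_drugs, n_targets)
            decay          -- geometric weight decay over the similarity ranking (assumed value)
            """
            order = np.argsort(-sim_to_known)                  # most similar first
            weights = decay ** np.arange(len(order)) * sim_to_known[order]
            profile = weights @ known_profiles[order]
            return profile / (weights.sum() + 1e-12)           # normalize to [0, 1]

        # Toy example: 3 known drugs, 4 targets
        sims = np.array([0.9, 0.5, 0.2])
        Y = np.array([[1, 0, 1, 0],
                      [0, 1, 1, 0],
                      [0, 0, 0, 1]])
        print(wnn_profile(sims, Y).round(2))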

  7. Synergistic target combination prediction from curated signaling networks: Machine learning meets systems biology and pharmacology.

    PubMed

    Chua, Huey Eng; Bhowmick, Sourav S; Tucker-Kellogg, Lisa

    2017-10-01

    Given a signaling network, the target combination prediction problem aims to predict efficacious and safe target combinations for combination therapy. State-of-the-art in silico methods use Monte Carlo simulated annealing (mcsa) to modify a candidate solution stochastically, and use the Metropolis criterion to accept or reject the proposed modifications. However, such stochastic modifications ignore the impact of the choice of targets and their activities on the combination's therapeutic effect and off-target effects, which directly affect the solution quality. In this paper, we present mascot, a method that addresses this limitation by leveraging two additional heuristic criteria to minimize off-target effects and achieve synergy for candidate modification. Specifically, off-target effects measure the unintended response of a signaling network to the target combination and are often associated with toxicity. Synergy occurs when a pair of targets exerts effects that are greater than the sum of their individual effects, and is generally a beneficial strategy for maximizing effect while minimizing toxicity. mascot leverages a machine learning-based target prioritization method, which prioritizes potential targets in a given disease-associated network to select more effective targets (better therapeutic effect and/or lower off-target effects); and Loewe additivity theory from pharmacology, which assesses the non-additive effects in a combination drug treatment to select synergistic target activities. Our experimental study on two disease-related signaling networks demonstrates the superiority of mascot in comparison to existing approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
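
    For reference, the baseline the abstract contrasts against is Monte Carlo simulated annealing with the Metropolis acceptance criterion. Below is a generic, minimal annealing loop over a candidate target set; the scoring function and the modification move are placeholder assumptions, not the authors' objective or mascot's heuristics.

        import math
        import random

        def simulated_annealing(initial, score, propose,
                                t0=1.0, cooling=0.95, n_iter=1000, seed=0):
            """Generic simulated annealing with the Metropolis acceptance criterion.

            `score` is a placeholder objective (higher = better therapeutic effect,
            lower off-target effect); `propose` stochastically modifies a candidate
            target combination. Both are assumptions for illustration.
            """
            rng = random.Random(seed)
            current, current_score = initial, score(initial)
            best, best_score = current, current_score
            temperature = t0
            for _ in range(n_iter):
                candidate = propose(current, rng)
                cand_score = score(candidate)
                delta = cand_score - current_score
                # Metropolis criterion: always accept improvements; accept worse
                # solutions with probability exp(delta / T).
                if delta >= 0 or rng.random() < math.exp(delta / temperature):
                    current, current_score = candidate, cand_score
                if current_score > best_score:
                    best, best_score = current, current_score
                temperature *= cooling
            return best, best_score

        # Toy usage: choose a subset of ~2 targets out of 5 maximizing a dummy score.
        universe = list(range(5))
        dummy_score = lambda s: sum(s) - 3 * abs(len(s) - 2)
        def flip(s, rng):
            t = rng.choice(universe)           # toggle one target in or out
            return tuple(sorted(set(s) ^ {t}))
        print(simulated_annealing((0,), dummy_score, flip))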

  8. Predicting drug-target interactions by dual-network integrated logistic matrix factorization

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Bryant, Stephen H.; Wang, Yanli

    2017-01-01

    In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function, but also on the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset for increasing the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
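
    At the core of logistic matrix factorization, the interaction probability for a drug-target pair is a sigmoid of the inner product of their latent vectors. The sketch below shows only that prediction step plus a simplified neighborhood smoothing, with random latent factors and a random similarity matrix as stand-ins; the kernel diffusion steps and the actual training of DNILMF are omitted, and all sizes are assumptions.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        rng = np.random.default_rng(42)
        n_drugs, n_targets, k = 6, 5, 3          # assumed sizes and latent dimension

        # Latent factors U (drugs) and V (targets); in DNILMF these would be learned
        # by maximizing a regularized logistic likelihood of the observed interactions.
        U = rng.normal(scale=0.5, size=(n_drugs, k))
        V = rng.normal(scale=0.5, size=(n_targets, k))

        # Predicted probability that drug i interacts with target j.
        P = sigmoid(U @ V.T)
        print(P.round(2))

        # Simplified neighborhood smoothing: blend each drug's scores with a
        # similarity-weighted average of its neighbors' scores (random similarities here).
        S = rng.random((n_drugs, n_drugs))
        np.fill_diagonal(S, 0)
        W = S / S.sum(axis=1, keepdims=True)
        P_smoothed = 0.5 * P + 0.5 * (W @ P)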

  9. ACTP: A webserver for predicting potential targets and relevant pathways of autophagy-modulating compounds

    PubMed Central

    Ouyang, Liang; Cai, Haoyang; Liu, Bo

    2016-01-01

    Autophagy (macroautophagy) is well known as an evolutionarily conserved lysosomal degradation process for long-lived proteins and damaged organelles. Recently, accumulating evidence has revealed a series of small-molecule compounds that may activate or inhibit autophagy with therapeutic potential for human diseases. However, targeting autophagy for drug discovery still remains in its infancy. In this study, we developed a webserver called Autophagic Compound-Target Prediction (ACTP) (http://actp.liu-lab.com/) that could predict autophagic targets and relevant pathways for a given compound. The flexible docking of the submitted small-molecule compound(s) to potential autophagic targets is performed by backend reverse docking. The webpage then returns structure-based scores and relevant pathways for each predicted target. Thus, these results provide a basis for the rapid prediction of potential targets/pathways of possible autophagy-activating or autophagy-inhibiting compounds without labor-intensive experiments. Moreover, ACTP will help shed light on identifying more novel autophagy-activating or autophagy-inhibiting compounds for future therapeutic implications. PMID:26824420

  10. Drug-target interaction prediction using ensemble learning and dimensionality reduction.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-10-01

    Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
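
    A minimal scikit-learn sketch of the general recipe described above (random feature subspacing, dimensionality reduction, homogeneous base learners, score averaging) is given below. The synthetic data, the use of PCA, the tree settings and the scoring on the training set are all assumptions for brevity, not the authors' configuration.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=500, n_features=60, n_informative=15,
                                   random_state=0)   # stand-in for drug-target pair features

        n_learners, subspace_size, n_components = 10, 40, 10
        scores = np.zeros(len(y))

        for i in range(n_learners):
            # 1) feature subspacing to inject diversity into the ensemble
            cols = rng.choice(X.shape[1], size=subspace_size, replace=False)
            # 2) dimensionality reduction on the subspaced features (PCA as one option)
            Xr = PCA(n_components=n_components, random_state=i).fit_transform(X[:, cols])
            # 3) homogeneous base learner (decision tree -> an "EnsemDT"-style variant)
            clf = DecisionTreeClassifier(max_depth=6, random_state=i).fit(Xr, y)
            scores += clf.predict_proba(Xr)[:, 1]

        scores /= n_learners   # 4) aggregate base-learner scores into final predictions
        print("mean ensemble score for positives:", scores[y == 1].mean().round(3))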

  11. Predictive distractor context facilitates attentional selection of high, but not intermediate and low, salience targets.

    PubMed

    Töllner, Thomas; Conci, Markus; Müller, Hermann J

    2015-03-01

    It is well established that we can focally attend to a specific region in visual space without shifting our eyes, so as to extract action-relevant sensory information from covertly attended locations. The underlying mechanisms that determine how fast we engage our attentional spotlight in visual-search scenarios, however, remain controversial. One dominant view advocated by perceptual decision-making models holds that the times taken for focal-attentional selection are mediated by an internal template that biases perceptual coding and selection decisions exclusively through target-defining feature coding. This notion directly predicts that search times remain unaffected whether or not participants can anticipate the upcoming distractor context. Here we tested this hypothesis by employing an illusory-figure localization task that required participants to search for an invariant target amongst a variable distractor context, which gradually changed--either randomly or predictably--as a function of distractor-target similarity. We observed a graded decrease in internal focal-attentional selection times--correlated with external behavioral latencies--for distractor contexts of higher relative to lower similarity to the target. Critically, for low but not intermediate and high distractor-target similarity, these context-driven effects were cortically and behaviorally amplified when participants could reliably predict the type of distractors. This interactive pattern demonstrates that search guidance signals can integrate information about distractor, in addition to target, identities to optimize distractor-target competition for focal-attentional selection. © 2014 Wiley Periodicals, Inc.

  12. Atomic Oxygen Erosion Yield Predictive Tool for Spacecraft Polymers in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; de Groh, Kim K.; Backus, Jane A.

    2008-01-01

    A predictive tool was developed to estimate the low Earth orbit (LEO) atomic oxygen erosion yield of polymers based on the results of the Polymer Erosion and Contamination Experiment (PEACE) Polymers experiment flown as part of the Materials International Space Station Experiment 2 (MISSE 2). The MISSE 2 PEACE experiment accurately measured the erosion yield of a wide variety of polymers and pyrolytic graphite. The 40 different materials tested were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The resulting erosion yield data were used to develop a predictive tool which utilizes chemical structure and physical properties of polymers that can be measured in ground laboratory testing to predict the in-space atomic oxygen erosion yield of a polymer. The properties include chemical structure, bonding information, density and ash content. The resulting predictive tool has a correlation coefficient of 0.914 when compared with actual MISSE 2 space data for 38 polymers and pyrolytic graphite. The intent of the predictive tool is to enable estimates of atomic oxygen erosion yields for new polymers without requiring expensive and time-consuming in-space testing.

  13. Ensemble-sensitivity Analysis Based Observation Targeting for Mesoscale Convection Forecasts and Factors Influencing Observation-Impact Prediction

    NASA Astrophysics Data System (ADS)

    Hill, A.; Weiss, C.; Ancell, B. C.

    2017-12-01

    The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas that have initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process are impacting our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) If targeted observations hold more positive impact than non-targeted (i.e. randomly chosen) observations; (2) If there are lead-time constraints to targeting for convection; (3) How inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) If there exist differences between targeted observations at the surface versus aloft; and (5) How physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection between 2011 and 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. Target locations for surface and radiosonde observations are computed 6, 12, and
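
    At its core, ensemble sensitivity is a member-wise linear regression of a scalar forecast metric onto an earlier model state variable, i.e. dJ/dx_i ≈ cov(J, x_i)/var(x_i). The sketch below computes such a sensitivity field and a crude targeting ranking from purely synthetic ensemble data; the ensemble size, state dimension and metric are assumptions, and the real workflow (WRF/DART, localization, variance-reduction estimates) is not represented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_members, n_grid = 50, 400        # assumed ensemble size and state dimension

        # Synthetic earlier-time ensemble state (e.g. surface pressure at grid points)
        X = rng.normal(size=(n_members, n_grid))
        # Synthetic scalar forecast metric, built to depend on a few grid points plus noise.
        J = 2.0 * X[:, 10] - 1.5 * X[:, 200] + rng.normal(scale=0.5, size=n_members)

        # Ensemble sensitivity dJ/dx_i = cov(J, x_i) / var(x_i), from member anomalies.
        Xa = X - X.mean(axis=0)
        Ja = J - J.mean()
        sensitivity = (Xa * Ja[:, None]).mean(axis=0) / Xa.var(axis=0)

        # Candidate targeting locations: large |sensitivity| combined with large spread,
        # i.e. where an extra observation is expected to reduce forecast variance most.
        impact = np.abs(sensitivity) * X.std(axis=0)
        print("top targeting grid points:", np.argsort(-impact)[:5])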

  14. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    NASA Astrophysics Data System (ADS)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures such as bearing failure lead to rotor eccentricity. In this case study of a 550 kW blower motor, a rotor bar crack was detected by current signature analysis and vibration monitoring confirmed the same. In later months, vibration monitoring of a similar motor predicted a bearing failure and current signature analysis confirmed the same. In both cases, after dismantling the motor, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, with two case studies.

  15. The search for drug-targetable diagnostic, prognostic and predictive biomarkers in chronic graft-versus-host disease.

    PubMed

    Ren, Hong-Gang; Adom, Djamilatou; Paczesny, Sophie

    2018-05-01

    Chronic graft-versus-host disease (cGVHD) continues to be the leading cause of late morbidity and mortality after allogeneic hematopoietic stem cell transplantation (allo-HSCT), which is an increasingly applied curative method for both benign and malignant hematologic disorders. Biomarker identification is crucial for the development of noninvasive and cost-effective cGVHD diagnostic, prognostic, and predictive tests for use in the clinic. Furthermore, biomarkers may help to gain better insight into ongoing pathophysiological processes. The recent widespread application of omics technologies including genomics, transcriptomics, proteomics and cytomics provided opportunities to discover novel biomarkers. Areas covered: This review focuses on biomarkers identified through omics that play a critical role in target identification for drug development, and that were verified in at least two independent cohorts. It also summarizes the current status of omics tools used to identify these useful cGVHD targets. We briefly list the biomarkers identified and verified so far. We further address challenges associated with their exploitation and application in the management of cGVHD patients. Finally, insights on biomarkers that are drug targetable and represent potential therapeutic targets are discussed. Expert commentary: We focus on biomarkers that play an essential role in target identification.

  16. A computational tool to predict the evolutionarily conserved protein-protein interaction hot-spot residues from the structure of the unbound protein.

    PubMed

    Agrawal, Neeraj J; Helk, Bernhard; Trout, Bernhardt L

    2014-01-21

    Identifying hot-spot residues - residues that are critical to protein-protein binding - can help to elucidate a protein's function and assist in designing therapeutic molecules to target those residues. We present a novel computational tool, termed spatial-interaction-map (SIM), to predict the hot-spot residues of an evolutionarily conserved protein-protein interaction from the structure of an unbound protein alone. SIM can predict the protein hot-spot residues with an accuracy of 36-57%. Thus, the SIM tool can be used to predict the yet unknown hot-spot residues for many proteins for which the structure of the protein-protein complexes are not available, thereby providing a clue to their functions and an opportunity to design therapeutic molecules to target these proteins. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  17. On-Line, Self-Learning, Predictive Tool for Determining Payload Thermal Response

    NASA Technical Reports Server (NTRS)

    Jen, Chian-Li; Tilwick, Leon

    2000-01-01

    This paper will present the results of a joint ManTech / Goddard R&D effort, currently under way, to develop and test a computer-based, on-line, predictive simulation model for use by facility operators to predict the thermal response of a payload during thermal vacuum testing. Thermal response was identified as an area that could benefit from the algorithms developed by Dr. Jen for complex computer simulations. Most thermal vacuum test setups are unique since no two payloads have the same thermal properties. This requires operators to rely on their past experience to conduct the test, taking time to learn how the payload responds while limiting any risk of exceeding hot or cold temperature limits. The predictive tool being developed is intended to be used with the new Thermal Vacuum Data System (TVDS) developed at Goddard for the Thermal Vacuum Test Operations group. This model can learn the thermal response of the payload by reading a few data points from the TVDS, accepting the payload's current temperature as the initial condition for prediction. The model can then be used as a predictive tool to estimate the future payload temperatures according to a predetermined shroud temperature profile. If the error of prediction is too large, the model can be asked to re-learn the new situation on-line in real-time and give a new prediction. Based on some preliminary tests, we feel this predictive model can forecast the payload temperature of the entire test cycle within 5 degrees Celsius after it has learned three times at the beginning of the test. The tool will allow the operator to play "what-if" experiments to decide on the best shroud temperature set-point control strategy. This tool will save money by minimizing guesswork and optimizing transitions as well as making the testing process safer and easier to conduct.
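
    The following is a minimal sketch of the general idea of learning a thermal response from a few observed data points and then forecasting along a planned shroud profile. It assumes a simple lumped first-order model, dT/dt = k (T_shroud - T), which is an assumption for illustration and not the ManTech/Goddard algorithm; all numbers are made up.

        import numpy as np

        def estimate_k(times, t_payload, t_shroud):
            """Least-squares estimate of k from finite-difference temperature data."""
            dT = np.diff(t_payload) / np.diff(times)
            drive = t_shroud[:-1] - t_payload[:-1]
            return float(np.dot(drive, dT) / np.dot(drive, drive))

        def predict(t0, k, shroud_profile, dt=60.0):
            """Forward-Euler prediction of payload temperature for a planned shroud profile."""
            temps = [t0]
            for ts in shroud_profile:
                temps.append(temps[-1] + dt * k * (ts - temps[-1]))
            return np.array(temps)

        # Synthetic "observed" warm-up data (degrees C, seconds; assumed values)
        times = np.array([0, 300, 600, 900, 1200.0])
        payload = np.array([20.0, 24.5, 28.4, 31.8, 34.7])
        shroud = np.full_like(times, 60.0)

        k_hat = estimate_k(times, payload, shroud)
        forecast = predict(payload[-1], k_hat, shroud_profile=[60.0] * 30, dt=300.0)
        print(f"k = {k_hat:.5f} 1/s, predicted payload T after 2.5 h: {forecast[-1]:.1f} C")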

  18. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.

  19. Predicting the Noise of High Power Fluid Targets Using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Moore, Michael; Covrig Dusa, Silviu

    The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest power LH2 target in the world and the first to be designed with Computational Fluid Dynamics (CFD) at Jefferson Lab. The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q² = 0.025 GeV²). This target satisfied the design goals of <1% luminosity reduction and <5% contribution to the total asymmetry width (the Qweak target achieved 2%, or 55 ppm). State-of-the-art time-dependent CFD simulations are being developed to improve the predictions of target noise on the time scale of the electron beam helicity period. These predictions will be benchmarked with the Qweak target data. This work is an essential component in future designs of very high power low noise targets like MOLLER (5 kW, target noise asymmetry contribution <25 ppm) and MESA (4.5 kW).

  20. Predicting essential genes for identifying potential drug targets in Aspergillus fumigatus.

    PubMed

    Lu, Yao; Deng, Jingyuan; Rhodes, Judith C; Lu, Hui; Lu, Long Jason

    2014-06-01

    Aspergillus fumigatus (Af) is a ubiquitous and opportunistic pathogen capable of causing acute, invasive pulmonary disease in susceptible hosts. Despite current therapeutic options, mortality associated with invasive Af infections remains unacceptably high, increasing 357% since 1980. Therefore, there is an urgent need for the development of novel therapeutic strategies, including more efficacious drugs acting on new targets. Thus, as noted in a recent review, "the identification of essential genes in fungi represents a crucial step in the development of new antifungal drugs". Expanding the target space by rapidly identifying new essential genes has thus been described as "the most important task of genomics-based target validation". In previous research, we were the first to show that essential gene annotation can be reliably transferred among four distantly related prokaryotic species. In this study, we extend our machine learning approach to much more complex eukaryotic fungal species. A compendium of essential genes is predicted in Af by transferring known essential gene annotations from another filamentous fungus, Neurospora crassa. This approach predicts essential genes by integrating diverse types of intrinsic and context-dependent genomic features encoded in microbial genomes. The predicted essential datasets contained 1674 genes. We validated our results by comparing our predictions with known essential genes in Af, comparing our predictions with those predicted by homology mapping, and testing conditionally expressed alleles. We applied several layers of filters and selected a set of potential drug targets from the predicted essential genes. Finally, we have conducted wet lab knockout experiments to verify our predictions, which further validates the accuracy and wide applicability of the machine learning approach. The approach presented here significantly extended our ability to predict essential genes beyond orthologs and made it possible to

  1. Predicting miRNA targets for head and neck squamous cell carcinoma using an ensemble method.

    PubMed

    Gao, Hong; Jin, Hui; Li, Guijun

    2018-01-01

    This study aimed to uncover potential microRNA (miRNA) targets in head and neck squamous cell carcinoma (HNSCC) using an ensemble method which combined 3 different methods: Pearson's correlation coefficient (PCC), Lasso and a causal inference method (i.e., intervention calculus when the directed acyclic graph (DAG) is absent [IDA]), based on Borda count election. The Borda count election method was used to integrate the top 100 predicted targets of each miRNA generated by individual methods. Afterwards, to validate the performance ability of our method, we checked the TarBase v6.0, miRecords v2013, miRWalk v2.0 and miRTarBase v4.5 databases to validate predictions for miRNAs. Pathway enrichment analysis of target genes in the top 1,000 miRNA-messenger RNA (mRNA) interactions was conducted to focus on significant KEGG pathways. Finally, we extracted target genes based on occurrence frequency ≥3. Based on an absolute value of PCC >0.7, we found 33 miRNAs and 288 mRNAs for further analysis. We extracted 10 target genes with predicted frequencies not less than 3. The target gene MYO5C possessed the highest frequency, which was predicted by 7 different miRNAs. Significantly, a total of 8 pathways were identified; the pathways of cytokine-cytokine receptor interaction and chemokine signaling pathway were the most significant. We successfully predicted target genes and pathways for HNSCC relying on miRNA expression data, mRNA expression profile, an ensemble method and pathway information. Our results may offer new information for the diagnosis and estimation of the prognosis of HNSCC.
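
    The integration step described above is a Borda count over per-method ranked target lists (the top 100 per miRNA in the study). The sketch below aggregates toy ranked lists; the scoring of absent genes and tie handling are assumptions, and the gene names besides MYO5C are placeholders.

        from collections import defaultdict

        def borda_aggregate(ranked_lists, list_len=None):
            """Aggregate ranked gene lists by Borda count.

            Each list is ordered best-first; a gene at rank r in a list of length L
            receives L - r points. Genes absent from a list get 0 from that list
            (an assumption for this sketch).
            """
            scores = defaultdict(int)
            for ranking in ranked_lists:
                L = list_len or len(ranking)
                for r, gene in enumerate(ranking):
                    scores[gene] += L - r
            return sorted(scores.items(), key=lambda kv: -kv[1])

        # Toy rankings standing in for the PCC, Lasso and IDA outputs
        pcc   = ["MYO5C", "GENE2", "GENE3", "GENE4"]
        lasso = ["GENE2", "MYO5C", "GENE5", "GENE3"]
        ida   = ["MYO5C", "GENE5", "GENE2", "GENE6"]
        print(borda_aggregate([pcc, lasso, ida]))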

  2. Cost Minimization Using an Artificial Neural Network Sleep Apnea Prediction Tool for Sleep Studies

    PubMed Central

    Teferra, Rahel A.; Grant, Brydon J. B.; Mindel, Jesse W.; Siddiqi, Tauseef A.; Iftikhar, Imran H.; Ajaz, Fatima; Aliling, Jose P.; Khan, Meena S.; Hoffmann, Stephen P.

    2014-01-01

    Rationale: More than a million polysomnograms (PSGs) are performed annually in the United States to diagnose obstructive sleep apnea (OSA). Third-party payers now advocate a home sleep test (HST), rather than an in-laboratory PSG, as the diagnostic study for OSA regardless of clinical probability, but the economic benefit of this approach is not known. Objectives: We determined the diagnostic performance of OSA prediction tools including the newly developed OSUNet, based on an artificial neural network, and performed a cost-minimization analysis when the prediction tools are used to identify patients who should undergo HST. Methods: The OSUNet was trained to predict the presence of OSA in a derivation group of patients who underwent an in-laboratory PSG (n = 383). Validation group 1 consisted of in-laboratory PSG patients (n = 149). The network was trained further in 33 patients who underwent HST and then was validated in a separate group of 100 HST patients (validation group 2). Likelihood ratios (LRs) were compared with two previously published prediction tools. The total costs from the use of the three prediction tools and the third-party approach within a clinical algorithm were compared. Measurements and Main Results: The OSUNet had a higher +LR in all groups compared with the STOP-BANG and the modified neck circumference (MNC) prediction tools. The +LRs for STOP-BANG, MNC, and OSUNet in validation group 1 were 1.1 (1.0–1.2), 1.3 (1.1–1.5), and 2.1 (1.4–3.1); and in validation group 2 they were 1.4 (1.1–1.7), 1.7 (1.3–2.2), and 3.4 (1.8–6.1), respectively. With an OSA prevalence less than 52%, the use of all three clinical prediction tools resulted in cost savings compared with the third-party approach. Conclusions: The routine requirement of an HST to diagnose OSA regardless of clinical probability is more costly compared with the use of OSA clinical prediction tools that identify patients who should undergo this procedure when OSA is expected to
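
    For reference, the positive likelihood ratios quoted above are sensitivity / (1 - specificity), and a +LR can be turned into a post-test probability through the pre-test odds. The sketch below computes both from made-up confusion-matrix counts; none of the numbers correspond to the study's cohorts.

        def positive_lr(tp, fn, fp, tn):
            """+LR = sensitivity / (1 - specificity)."""
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return sens / (1.0 - spec)

        def post_test_probability(pretest_p, lr):
            """Convert a pre-test probability to a post-test probability via odds * LR."""
            odds = pretest_p / (1.0 - pretest_p) * lr
            return odds / (1.0 + odds)

        # Made-up counts for a prediction tool evaluated against polysomnography
        lr_plus = positive_lr(tp=60, fn=20, fp=30, tn=90)
        print(round(lr_plus, 2), round(post_test_probability(0.52, lr_plus), 2))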

  3. Development and Validation of an Empiric Tool to Predict Favorable Neurologic Outcomes Among PICU Patients.

    PubMed

    Gupta, Punkaj; Rettiganti, Mallikarjuna; Gossett, Jeffrey M; Daufeldt, Jennifer; Rice, Tom B; Wetzel, Randall C

    2018-01-01

    To create a novel tool to predict favorable neurologic outcomes during ICU stay among children with critical illness. Logistic regression models using adaptive lasso methodology were used to identify independent factors associated with favorable neurologic outcomes. A mixed effects logistic regression model was used to create the final prediction model including all predictors selected from the lasso model. Model validation was performed using a 10-fold internal cross-validation approach. Virtual Pediatric Systems (VPS, LLC, Los Angeles, CA) database. Patients less than 18 years old admitted to one of the participating ICUs in the Virtual Pediatric Systems database were included (2009-2015). None. A total of 160,570 patients from 90 hospitals qualified for inclusion. Of these, 1,675 patients (1.04%) were associated with a decline in the Pediatric Cerebral Performance Category scale of at least 2 between ICU admission and ICU discharge (unfavorable neurologic outcome). The independent factors associated with unfavorable neurologic outcome included higher weight at ICU admission, higher Pediatric Index of Mortality-2 score at ICU admission, cardiac arrest, stroke, seizures, head/nonhead trauma, use of conventional mechanical ventilation and high-frequency oscillatory ventilation, prolonged length of ICU stay, and prolonged use of mechanical ventilation. The presence of chromosomal anomaly, cardiac surgery, and utilization of nitric oxide were associated with favorable neurologic outcome. The final online prediction tool can be accessed at https://soipredictiontool.shinyapps.io/GNOScore/. Our model predicted 139,688 patients with favorable neurologic outcomes in an internal validation sample, when the observed number of patients with favorable neurologic outcomes was 139,591. The area under the receiver operating curve for the validation model was 0.90. This proposed prediction tool encompasses 20 risk factors into one probability to predict

  4. "In silico" mechanistic studies as predictive tools in microwave-assisted organic synthesis.

    PubMed

    Rodriguez, A M; Prieto, P; de la Hoz, A; Díaz-Ortiz, A

    2011-04-07

    Computational calculations can be used as a predictive tool in Microwave-Assisted Organic Synthesis (MAOS). A DFT study on Intramolecular Diels-Alder reactions (IMDA) indicated that the activation energy of the reaction and the polarity of the stationary points are two fundamental parameters to determine "a priori" if a reaction can be improved by using microwave irradiation.

  5. Predicting Knowledge Workers' Participation in Voluntary Learning with Employee Characteristics and Online Learning Tools

    ERIC Educational Resources Information Center

    Hicks, Catherine

    2018-01-01

    Purpose: This paper aims to explore predicting employee learning activity via employee characteristics and usage for two online learning tools. Design/methodology/approach: Statistical analysis focused on observational data collected from user logs. Data are analyzed via regression models. Findings: Findings are presented for over 40,000…

  6. Lung cancer in symptomatic patients presenting in primary care: a systematic review of risk prediction tools

    PubMed Central

    Schmidt-Hansen, Mia; Berendse, Sabine; Hamilton, Willie; Baldwin, David R

    2017-01-01

    Background: Lung cancer is the leading cause of cancer deaths. Around 70% of patients first presenting to specialist care have advanced disease, at which point current treatments have little effect on survival. The issue for primary care is how to recognise patients earlier and investigate appropriately. This requires an assessment of the risk of lung cancer. Aim: The aim of this study was to systematically review the existing risk prediction tools for patients presenting in primary care with symptoms that may indicate lung cancer. Design and setting: Systematic review of primary care data. Method: Medline, PreMedline, Embase, the Cochrane Library, Web of Science, and ISI Proceedings (1980 to March 2016) were searched. The final list of included studies was agreed between two of the authors, who also appraised and summarised them. Results: Seven studies with between 1482 and 2,406,127 patients were included. The tools were all based on UK primary care data, but differed in complexity of development, number/type of variables examined/included, and outcome time frame. There were four multivariable tools with internal validation areas under the curve between 0.88 and 0.92. The tools all had a number of limitations, and none have been externally validated, or had their clinical and cost impact examined. Conclusion: There is insufficient evidence for the recommendation of any one of the available risk prediction tools. However, some multivariable tools showed promising discrimination. What is needed to guide clinical practice is both external validation of the existing tools and a comparative study, so that the best tools can be incorporated into clinical decision tools used in primary care. PMID:28483820

  7. RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.

    PubMed

    Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab

    2012-01-01

    RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages) and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available at http://www.cemb.edu.pk/sw.html. Abbreviations: RDNAnalyzer - Random DNA Analyser; GUI - Graphical user interface; XAML - Extensible Application Markup Language.

  8. RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis

    PubMed Central

    Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab

    2012-01-01

    RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages) and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available. Availability: http://www.cemb.edu.pk/sw.html. Abbreviations: RDNAnalyzer - Random DNA Analyser; GUI - Graphical user interface; XAML - Extensible Application Markup Language. PMID:23055611
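
    The two records above describe RDNAnalyzer as using and extending the Nussinov dynamic programming algorithm. The sketch below is the textbook Nussinov recursion for maximizing nested complementary base pairs, applied here to DNA pairs A-T and G-C with an assumed minimum loop length; it is not RDNAnalyzer's own implementation.

        def nussinov_max_pairs(seq, min_loop=3):
            """Nussinov dynamic programme: maximum number of nested complementary pairs."""
            pairs = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):          # distance between i and j
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                  # case 1: j left unpaired
                    for k in range(i, j - min_loop):     # case 2: j pairs with some k
                        if (seq[k], seq[j]) in pairs:
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + dp[k + 1][j - 1])
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov_max_pairs("GGGAAATCCC"))   # simple hairpin-like example -> 3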

  9. Can we predict Acute Medical readmissions using the BOOST tool? A retrospective case note review.

    PubMed

    Lee, Geraldine A; Freedman, Daniel; Beddoes, Penelope; Lyness, Emily; Nixon, Imogen; Srivastava, Vivek

    2016-01-01

    Readmissions within 30 days of hospital discharge are a problem. The aim was to determine if the Better Outcomes for Older Adults through Safe Transitions (BOOST) risk assessment tool was applicable within the UK. Readmitted patients over 65 were identified retrospectively via a case note review. BOOST assessment was applied with 1 point for each risk factor. 324 patients were readmitted (mean age 77 years) with a median of 7 days between discharge and readmission. The median BOOST score was 3 (IQR 2-4) with polypharmacy evident in 88% and prior hospitalisation in 70%. The tool correctly predicted 90% of readmissions using two or more risk factors and 99.1% if one risk factor was included. The BOOST assessment tool appears appropriate for predicting readmissions; however, further analysis is required to determine its precision.

  10. CRISPR/Cas9-based tools for targeted genome editing and replication control of HBV.

    PubMed

    Peng, Cheng; Lu, Mengji; Yang, Dongliang

    2015-10-01

    Hepatitis B virus (HBV) infection remains a major global health problem because current therapies rarely eliminate HBV infections to achieve a complete cure. A different treatment paradigm to effectively clear HBV infection and eradicate latent viral reservoirs is urgently required. In recent years, the development of a new RNA-guided gene-editing tool, the CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats/CRISPR-associated nuclease 9) system, has greatly facilitated site-specific mutagenesis and represents a very promising potential therapeutic tool for diseases, including for eradication of invasive pathogens such as HBV. Here, we review recent advances in the use of CRISPR/Cas9, which is designed to target HBV specific DNA sequences to inhibit HBV replication and to induce viral genome mutation, in cell lines or animal models. Advantages, limitations and possible solutions, and proposed directions for future research are discussed to highlight the opportunities and challenges of CRISPR/Cas9 as a new, potentially curative therapy for chronic hepatitis B infection.

  11. Mechanisms, Prediction, and Prevention of ACL Injuries: Cut Risk With Three Sharpened and Validated Tools

    PubMed Central

    Hewett, Timothy E.; Myer, Gregory D.; Ford, Kevin R.; Paterno, Mark V.; Quatman, Carmen E.

    2017-01-01

    Economic and societal pressures influence modern medical practice to develop and implement prevention strategies. Anterior cruciate ligament (ACL) injury devastates the knee joint, leading to short-term disability and long-term sequelae. Due to the high risk of long-term osteoarthritis in all treatment populations following ACL injury, prevention is the only effective intervention for this life-altering disruption in knee health. The “Sequence of Prevention” Model provides a framework to monitor progress towards the ultimate goal of preventing ACL injuries. Utilizing this model, our multidisciplinary collaborative research team has spent the last decade working to delineate injury mechanisms, identify injury risk factors, predict which athletes are at risk for injury, and develop ACL injury prevention programs. Within this model of injury prevention, modifiable factors (biomechanical and neuromuscular) related to injury mechanisms likely provide the best opportunity for intervention strategies aimed at decreasing the risk of ACL injury, particularly in female athletes. Knowledge advancements have led to the development of potential solutions that allow athletes to compete with lowered risk of ACL injury. Design and integration of personalized clinical assessment tools and targeted prevention strategies for athletes at high risk for ACL injury may transform current prevention practices and ultimately significantly reduce ACL injury incidence. This 2016 OREF Clinical Research Award focuses on the authors' work and contributions to the field. The authors acknowledge the many research groups who have contributed to the current state of knowledge in the fields of ACL injury mechanisms, injury risk screening and injury prevention strategies. PMID:27612195

  12. P-type ATPases as drug targets: tools for medicine and science.

    PubMed

    Yatime, Laure; Buch-Pedersen, Morten J; Musgaard, Maria; Morth, J Preben; Lund Winther, Anne-Marie; Pedersen, Bjørn P; Olesen, Claus; Andersen, Jens Peter; Vilsen, Bente; Schiøtt, Birgit; Palmgren, Michael G; Møller, Jesper V; Nissen, Poul; Fedosova, Natalya

    2009-04-01

    P-type ATPases catalyze the selective active transport of ions like H+, Na+, K+, Ca2+, Zn2+, and Cu2+ across diverse biological membrane systems. Many members of the P-type ATPase protein family, such as the Na+,K+-, H+,K+-, Ca2+-, and H+-ATPases, are involved in the development of pathophysiological conditions or provide critical function to pathogens. Therefore, they seem to be promising targets for future drugs and novel antifungal agents and herbicides. Here, we review the current knowledge about P-type ATPase inhibitors and their present use as tools in science, medicine, and biotechnology. Recent structural information on a variety of P-type ATPase family members signifies that all P-type ATPases can be expected to share a similar basic structure and a similar basic machinery of ion transport. The ion transport pathway crossing the membrane lipid bilayer is constructed of two access channels leading from either side of the membrane to the ion binding sites at a central cavity. The selective opening and closure of the access channels allows vectorial access/release of ions from the binding sites. Recent structural information along with new homology modeling of diverse P-type ATPases in complex with known ligands demonstrate that the most proficient way for the development of efficient and selective drugs is to target their ion transport pathway.

  13. Lysosomal Rerouting of Hsp70 Trafficking as a Potential Immune Activating Tool for Targeting Melanoma

    PubMed Central

    Juhász, Kata; Thuenauer, Roland; Spachinger, Andrea; Duda, Ernő; Horváth, Ibolya; Vígh, László; Sonnleitner, Alois; Balogi, Zsolt

    2013-01-01

    Tumor-specific cell surface localization and release of the stress-inducible heat shock protein 70 (Hsp70) stimulate the immune system against cancer cells. A key immune stimulatory function of tumor-derived Hsp70 has been exemplified with the murine melanoma cell model, B16 overexpressing exogenous Hsp70. Despite this therapeutic potential, the mechanisms of Hsp70 transport to the surface and release remained poorly understood. We investigated principles of Hsp70 trafficking in B16 melanoma cells with low and high levels of Hsp70. In cells with a low level of Hsp70, apparent trafficking of Hsp70 was mediated by endosomes. Excess Hsp70 triggered a series of changes such as a switch of Hsp70 trafficking from endosomes to lysosomes and a concomitant accumulation of Hsp70 in lysosomes. Moreover, lysosomal rerouting resulted in an elevated concentration of surface Hsp70 and enabled active release of Hsp70. In fact, hyperthermia, a clinically applicable approach, triggered immediate active lysosomal release of soluble Hsp70 from cells with excess Hsp70. Furthermore, excess Hsp70 enabled targeting of internalized surface Hsp70 to lysosomes, allowing in turn heat-induced secretion of surface Hsp70. Altogether, we show that excess Hsp70 expressed in B16 melanoma cells diverts Hsp70 trafficking from endosomes to lysosomes, thereby supporting its surface localization and lysosomal release. Controlled excess-induced lysosomal rerouting and secretion of Hsp70 is proposed as a promising tool to stimulate anti-tumor immunity targeting melanoma. PMID:22920897

  14. Evaluation of in silico tools to predict the skin sensitization potential of chemicals.

    PubMed

    Verheyen, G R; Braeken, E; Van Deun, K; Van Miert, S

    2017-01-01

    Public domain and commercial in silico tools were compared for their performance in predicting the skin sensitization potential of chemicals. The packages were either statistical based (Vega, CASE Ultra) or rule based (OECD Toolbox, Toxtree, Derek Nexus). In practice, several of these in silico tools are used in gap filling and read-across, but here their use was limited to make predictions based on presence/absence of structural features associated to sensitization. The top 400 ranking substances of the ATSDR 2011 Priority List of Hazardous Substances were selected as a starting point. Experimental information was identified for 160 chemically diverse substances (82 positive and 78 negative). The prediction for skin sensitization potential was compared with the experimental data. Rule-based tools perform slightly better, with accuracies ranging from 0.6 (OECD Toolbox) to 0.78 (Derek Nexus), compared with statistical tools that had accuracies ranging from 0.48 (Vega) to 0.73 (CASE Ultra - LLNA weak model). Combining models increased the performance, with positive and negative predictive values up to 80% and 84%, respectively. However, the number of substances that were predicted positive or negative for skin sensitization in both models was low. Adding more substances to the dataset will increase the confidence in the conclusions reached. The insights obtained in this evaluation are incorporated in a web database www.asopus.weebly.com that provides a potential end user context for the scope and performance of different in silico tools with respect to a common dataset of curated skin sensitization data.
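
    The accuracy, positive predictive value and negative predictive value quoted above are standard confusion-matrix ratios, and the "combining models" step amounts to calling a substance only where the tools agree. The sketch below computes these quantities and a two-tool consensus from made-up labels and predictions; the numbers do not reproduce the study's results.

        def confusion_metrics(y_true, y_pred):
            """Accuracy, positive predictive value and negative predictive value."""
            tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
            tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
            fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
            fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
            return {"accuracy": (tp + tn) / len(y_true),
                    "ppv": tp / (tp + fp) if tp + fp else float("nan"),
                    "npv": tn / (tn + fn) if tn + fn else float("nan")}

        # Made-up experimental labels and predictions from two hypothetical tools
        y_true = [1, 1, 1, 0, 0, 0, 1, 0]
        tool_a = [1, 1, 0, 0, 1, 0, 1, 0]
        tool_b = [1, 0, 1, 0, 0, 0, 1, 1]

        print("tool A:", confusion_metrics(y_true, tool_a))
        # Consensus call only where both tools agree; other substances stay undecided
        consensus = [a if a == b else None for a, b in zip(tool_a, tool_b)]
        agreed = [(t, c) for t, c in zip(y_true, consensus) if c is not None]
        print("consensus:", confusion_metrics([t for t, _ in agreed], [c for _, c in agreed]))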

  15. Gene silencing in Tribolium castaneum as a tool for the targeted identification of candidate RNAi targets in crop pests.

    PubMed

    Knorr, Eileen; Fishilevich, Elane; Tenbusch, Linda; Frey, Meghan L F; Rangasamy, Murugesan; Billion, Andre; Worden, Sarah E; Gandra, Premchand; Arora, Kanika; Lo, Wendy; Schulenberg, Greg; Valverde-Garcia, Pablo; Vilcinskas, Andreas; Narva, Kenneth E

    2018-02-01

    RNAi shows potential as an agricultural technology for insect control, yet a relatively low number of robust lethal RNAi targets have been demonstrated to control insects of agricultural interest. In the current study, a selection of lethal RNAi target genes from the iBeetle (Tribolium castaneum) screen was used to demonstrate efficacy of orthologous targets in the economically important coleopteran pests Diabrotica virgifera virgifera and Meligethes aeneus. Transcript orthologs of 50 selected genes were analyzed in D. v. virgifera diet-based RNAi bioassays; 21 of these RNAi targets showed mortality and 36 showed growth inhibition. Low dose injection- and diet-based dsRNA assays in T. castaneum and D. v. virgifera, respectively, enabled the identification of the four highly potent RNAi target genes: Rop, dre4, ncm, and RpII140. Maize was genetically engineered to express dsRNA directed against these prioritized candidate target genes. T0 plants expressing Rop, dre4, or RpII140 RNA hairpins showed protection from D. v. virgifera larval feeding damage. dsRNA targeting Rop, dre4, ncm, and RpII140 in M. aeneus also caused high levels of mortality both by injection and feeding. In summary, high throughput systems for model organisms can be successfully used to identify potent RNA targets for difficult-to-work-with agricultural insect pests.

  16. Reservoir characterization of the Upper Jurassic geothermal target formations (Molasse Basin, Germany): role of thermofacies as exploration tool

    NASA Astrophysics Data System (ADS)

    Homuth, S.; Götz, A. E.; Sass, I.

    2015-06-01

    The Upper Jurassic carbonates of the southern German Molasse Basin have been the target of numerous geothermal combined heat and power production projects since the year 2000. A production-orientated reservoir characterization is therefore of high economic interest. Outcrop analogue studies enable reservoir property prediction by determination and correlation of lithofacies-related thermo- and petrophysical parameters. A thermofacies classification of the carbonate formations serves to identify heterogeneities and production zones. The hydraulic conductivity is mainly controlled by tectonic structures and karstification, whilst the type and grade of karstification are facies-related. The rock permeability has only a minor effect on the reservoir's sustainability. Physical parameters determined on oven-dried samples have to be corrected, applying reservoir transfer models to water-saturated reservoir conditions. To validate these calculated parameters, a Thermo-Triaxial-Cell simulating the temperature and pressure conditions of the reservoir is used and calorimetric and thermal conductivity measurements under elevated temperature conditions are performed. Additionally, core and cutting material from a 1600 m deep research drilling and a 4850 m (total vertical depth, measured depth: 6020 m) deep well is used to validate the reservoir property predictions. Under reservoir conditions a decrease in permeability of 2-3 orders of magnitude is observed due to the thermal expansion of the rock matrix. For tight carbonates the matrix permeability is temperature-controlled; the thermophysical matrix parameters are density-controlled. Density typically increases with depth and especially with higher dolomite content. Therefore, thermal conductivity increases; however, temperature, the dominant factor, decreases the thermal conductivity. Specific heat capacity typically increases with increasing depth and temperature. The lithofacies-related characterization and prediction of reservoir

  17. Statistical Tools And Artificial Intelligence Approaches To Predict Fracture In Bulk Forming Processes

    NASA Astrophysics Data System (ADS)

    Di Lorenzo, R.; Ingarao, G.; Fonti, V.

    2007-05-01

    The crucial task in the prevention of ductile fracture is the availability of a tool for predicting the occurrence of such defects. The technical literature reports extensive investigation of this topic, with contributions from many authors following different approaches. The main class of approaches concerns the development of fracture criteria: generally, such criteria are expressed by determining a critical value of a damage function that depends on the stress and strain paths, and ductile fracture is assumed to occur when this critical value is reached during the analysed process. There is a relevant drawback to the use of ductile fracture criteria: each criterion usually performs well in predicting fracture for particular stress-strain paths, i.e. it works very well for certain processes but may give poor results for others. On the other hand, approaches based on damage mechanics formulations are very effective from a theoretical point of view, but they are very complex and their proper calibration is quite difficult. In this paper, two different approaches are investigated to predict fracture occurrence in cold forming operations. The final aim of the proposed method is the achievement of a tool with general reliability, i.e. one able to predict fracture for different forming processes. The proposed approach represents a step forward within a research project focused on the utilization of innovative predictive tools for ductile fracture. The paper presents a comparison between an artificial neural network design procedure and an approach based on statistical tools; both approaches were aimed at predicting fracture occurrence/absence based on a set of stress and strain path data. The proposed approach is based on the utilization of experimental data available, for a given material, on fracture occurrence in different processes. In more detail, the approach consists in the analysis of

  18. Visuo-vestibular interaction: predicting the position of a visual target during passive body rotation.

    PubMed

    Mackrous, I; Simoneau, M

    2011-11-10

    Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate in predicting the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target, and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined by fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but a similar final spatial error (2.9°). For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but the learning rate was much slower (i.e. 43.29 trials). Transferring to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°) but did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
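
    The learning rates reported above come from fitting a falling exponential to per-trial spatial error. The sketch below shows one way such a fit could be done; the trial counts, noise level and parameter values are invented for illustration and are not the study's data.

```python
# Minimal sketch (not the authors' analysis): estimate a learning rate by
# fitting a falling exponential to per-trial spatial error. The "tau"
# parameter is the number of trials over which error decays by a factor of e.
import numpy as np
from scipy.optimize import curve_fit

def falling_exponential(trial, initial_error, final_error, tau):
    return final_error + (initial_error - final_error) * np.exp(-trial / tau)

trials = np.arange(1, 81)
true_curve = falling_exponential(trials, 16.2, 2.0, 34.0)      # assumed values
errors = true_curve + np.random.default_rng(1).normal(0, 1.0, trials.size)

params, _ = curve_fit(falling_exponential, trials, errors, p0=(15.0, 2.0, 20.0))
print("initial error %.1f deg, final error %.1f deg, learning rate %.1f trials"
      % tuple(params))
```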

  19. The Predictive Accuracy of PREDICT: A Personalized Decision-Making Tool for Southeast Asian Women With Breast Cancer

    PubMed Central

    Wong, Hoong-Seam; Subramaniam, Shridevi; Alias, Zarifah; Taib, Nur Aishah; Ho, Gwo-Fuang; Ng, Char-Hong; Yip, Cheng-Har; Verkooijen, Helena M.; Hartman, Mikael; Bhoo-Pathy, Nirmala

    2015-01-01

    Abstract Web-based prognostication tools may provide a simple and economically feasible option to aid prognostication and selection of chemotherapy in early breast cancers. We validated PREDICT, a free online breast cancer prognostication and treatment benefit tool, in a resource-limited setting. All 1480 patients who underwent complete surgical treatment for stages I to III breast cancer from 1998 to 2006 were identified from the prospective breast cancer registry of University Malaya Medical Centre, Kuala Lumpur, Malaysia. Calibration was evaluated by comparing the model-predicted overall survival (OS) with patients’ actual OS. Model discrimination was tested using receiver-operating characteristic (ROC) analysis. Median age at diagnosis was 50 years. The median tumor size at presentation was 3 cm and 54% of patients had lymph node-negative disease. About 55% of women had estrogen receptor-positive breast cancer. Overall, the model-predicted 5 and 10-year OS was 86.3% and 77.5%, respectively, whereas the observed 5 and 10-year OS was 87.6% (difference: −1.3%) and 74.2% (difference: 3.3%), respectively; P values for goodness-of-fit test were 0.18 and 0.12, respectively. The program was accurate in most subgroups of patients, but significantly overestimated survival in patients aged <40 years, and in those receiving neoadjuvant chemotherapy. PREDICT performed well in terms of discrimination; areas under ROC curve were 0.78 (95% confidence interval [CI]: 0.74–0.81) and 0.73 (95% CI: 0.68–0.78) for 5 and 10-year OS, respectively. Based on its accurate performance in this study, PREDICT may be clinically useful in prognosticating women with breast cancer and personalizing breast cancer treatment in resource-limited settings. PMID:25715267

  20. The predictive accuracy of PREDICT: a personalized decision-making tool for Southeast Asian women with breast cancer.

    PubMed

    Wong, Hoong-Seam; Subramaniam, Shridevi; Alias, Zarifah; Taib, Nur Aishah; Ho, Gwo-Fuang; Ng, Char-Hong; Yip, Cheng-Har; Verkooijen, Helena M; Hartman, Mikael; Bhoo-Pathy, Nirmala

    2015-02-01

    Web-based prognostication tools may provide a simple and economically feasible option to aid prognostication and selection of chemotherapy in early breast cancers. We validated PREDICT, a free online breast cancer prognostication and treatment benefit tool, in a resource-limited setting. All 1480 patients who underwent complete surgical treatment for stages I to III breast cancer from 1998 to 2006 were identified from the prospective breast cancer registry of University Malaya Medical Centre, Kuala Lumpur, Malaysia. Calibration was evaluated by comparing the model-predicted overall survival (OS) with patients' actual OS. Model discrimination was tested using receiver-operating characteristic (ROC) analysis. Median age at diagnosis was 50 years. The median tumor size at presentation was 3 cm and 54% of patients had lymph node-negative disease. About 55% of women had estrogen receptor-positive breast cancer. Overall, the model-predicted 5 and 10-year OS was 86.3% and 77.5%, respectively, whereas the observed 5 and 10-year OS was 87.6% (difference: -1.3%) and 74.2% (difference: 3.3%), respectively; P values for goodness-of-fit test were 0.18 and 0.12, respectively. The program was accurate in most subgroups of patients, but significantly overestimated survival in patients aged <40 years, and in those receiving neoadjuvant chemotherapy. PREDICT performed well in terms of discrimination; areas under ROC curve were 0.78 (95% confidence interval [CI]: 0.74-0.81) and 0.73 (95% CI: 0.68-0.78) for 5 and 10-year OS, respectively. Based on its accurate performance in this study, PREDICT may be clinically useful in prognosticating women with breast cancer and personalizing breast cancer treatment in resource-limited settings.
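
    The validation above rests on two standard checks: calibration (comparing model-predicted with observed overall survival) and discrimination (area under the ROC curve). The sketch below shows these two checks on simulated data; the cohort size is borrowed from the abstract but every value is synthetic, not the Malaysian registry data.

```python
# Hedged sketch of the two validation steps described above: calibration
# (predicted vs observed survival proportions) and discrimination (ROC AUC).
# All numbers below are synthetic placeholders, not the study cohort.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1480
predicted_5yr_surv = rng.beta(8, 2, n)                       # model-predicted probabilities
died_within_5yr = rng.binomial(1, 1 - predicted_5yr_surv)    # simulated outcomes

# Calibration: compare mean predicted survival with observed survival
observed = 1 - died_within_5yr.mean()
predicted = predicted_5yr_surv.mean()
print(f"predicted 5-yr OS {predicted:.1%}, observed {observed:.1%}, "
      f"difference {predicted - observed:+.1%}")

# Discrimination: AUC for predicting death within 5 years
auc = roc_auc_score(died_within_5yr, 1 - predicted_5yr_surv)
print(f"ROC AUC = {auc:.2f}")
```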

  1. Drug-therapy networks and the prediction of novel drug targets

    PubMed Central

    Spiro, Zoltan; Kovacs, Istvan A; Csermely, Peter

    2008-01-01

    A recent study in BMC Pharmacology presents a network of drugs and the therapies in which they are used. Network approaches open new ways of predicting novel drug targets and overcoming the cellular robustness that can prevent drugs from working. PMID:18710588

  2. A new approach to human microRNA target prediction using ensemble pruning and rotation forest.

    PubMed

    Mousavi, Reza; Eftekhari, Mahdi; Haghighi, Mehdi Ghezelbash

    2015-12-01

    MicroRNAs (miRNAs) are small non-coding RNAs that have important functions in gene regulation. Since finding miRNA targets experimentally is costly and time-consuming, the use of machine learning methods is a growing research area for miRNA target prediction. In this paper, a new approach is proposed that uses two popular ensemble strategies, i.e. Ensemble Pruning and Rotation Forest (EP-RTF), to predict human miRNA targets. For EP, the approach utilizes a Genetic Algorithm (GA). In other words, a subset of classifiers from the heterogeneous ensemble is first selected by GA. Next, the selected classifiers are trained based on the RTF method and then combined using weighted majority voting. In addition to seeking a better subset of classifiers, the parameter of RTF is also optimized by GA. Findings of the present study confirm that the newly developed EP-RTF outperforms (in terms of classification accuracy, sensitivity, and specificity) the previously applied methods over four datasets in the field of human miRNA target prediction. Diversity-error diagrams reveal that the proposed ensemble approach constructs individual classifiers that are more accurate, and usually more diverse, than those of the other ensemble approaches. Given these experimental results, we highly recommend EP-RTF for improving the performance of miRNA target prediction.
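
    As a rough illustration of the ensemble-pruning step described above, the sketch below selects a subset of heterogeneous classifiers with a tiny genetic algorithm (selection plus mutation only, no crossover) and combines them by weighted majority voting. It omits the Rotation Forest feature-subspace rotation entirely and uses synthetic data, so it is a schematic of the idea rather than the published EP-RTF method.

```python
# Illustrative sketch only: GA-based ensemble pruning with weighted majority
# voting, in the spirit of the EP step described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Heterogeneous pool of base classifiers
pool = [DecisionTreeClassifier(max_depth=d, random_state=0) for d in (2, 4, 8)] \
     + [GaussianNB(), KNeighborsClassifier(5), LogisticRegression(max_iter=1000)]
pool = [clf.fit(X_tr, y_tr) for clf in pool]
preds = np.array([clf.predict(X_val) for clf in pool])
weights = np.array([(p == y_val).mean() for p in preds])   # accuracy-based weights

def fitness(mask):
    """Validation accuracy of the weighted majority vote of the selected members."""
    if mask.sum() == 0:
        return 0.0
    w = weights[mask]
    vote = (w @ preds[mask]) / w.sum()          # weighted mean of 0/1 predictions
    return ((vote > 0.5).astype(int) == y_val).mean()

# Tiny genetic algorithm over binary inclusion masks (selection + mutation only)
popsize, n_gen = 20, 30
population = rng.integers(0, 2, size=(popsize, len(pool))).astype(bool)
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-popsize // 2:]]          # keep best half
    children = parents[rng.integers(0, len(parents), popsize - len(parents))].copy()
    children ^= rng.random(children.shape) < 0.1                      # bit-flip mutation
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("selected classifiers:", np.flatnonzero(best), "val accuracy:", fitness(best))
```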

  3. The predictive value of fall assessment tools for patients admitted to hospice care.

    PubMed

    Patrick, Rebecca J; Slobodian, Dana; Debanne, Sara; Huang, Ying; Wellman, Charles

    2017-09-01

    Fall assessment tools are commonly used to evaluate the likelihood of a fall. For patients found to be at high risk, patient-specific fall prevention interventions are implemented. The purposes of this study were to describe the population, evaluate and compare the efficacy of fall assessment tools, and suggest the best use for these tools in hospice. Data were downloaded from the electronic medical record for all patients who were admitted to and died in hospice care in 2013. Variables included demographic and clinical characteristics and the initial fall assessment scores that had been computed on admission to hospice care using our standard fall assessment tool. To facilitate comparison among three tools, additional fall assessment calculations were made for each patient using the Morse Fall Scale and MACH-10, two tools commonly used in a variety of healthcare settings. Data were available for 3446 hospice patients. Female patients were less likely to fall than males; fallers lived longer than non-fallers; and patients with a primary dementia diagnosis fell 10 days sooner than those with a primary non-dementia diagnosis. A comparison of the three fall assessment tools revealed that no tool had a good positive predictive value, but each demonstrated a good negative predictive value. Fall assessment scores should not be used as the sole predictor of the likelihood of a fall, and are best used as a supplement to clinical judgement. Patients with a primary dementia diagnosis are likely to fall earlier in their hospice care than those with other primary diagnoses.
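
    A minimal sketch of the tool-comparison metrics discussed above (positive and negative predictive value from a 2x2 table) is given below; the counts are invented, chosen only to reproduce the low-PPV/high-NPV pattern reported.

```python
# Minimal sketch: positive and negative predictive value from a 2x2 confusion
# table for a fall assessment tool. Counts below are hypothetical.
def predictive_values(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # of patients flagged "high risk", the fraction who fell
    npv = tn / (tn + fn)   # of patients flagged "low risk", the fraction who did not fall
    return ppv, npv

ppv, npv = predictive_values(tp=60, fp=240, fn=20, tn=680)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # low PPV, high NPV pattern
```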

  4. Initial Assessment of the Risk Assessment and Prediction Tool in a Heterogeneous Neurosurgical Patient Population.

    PubMed

    Piazza, Matthew; Sharma, Nikhil; Osiemo, Benjamin; McClintock, Scott; Missimer, Emily; Gardiner, Diana; Maloney, Eileen; Callahan, Danielle; Smith, J Lachlan; Welch, William; Schuster, James; Grady, M Sean; Malhotra, Neil R

    2018-05-21

    Bundled care payments are increasingly being explored for neurosurgical interventions. In this setting, a skilled nursing facility (SNF) is less desirable from a cost perspective than discharge to home, underscoring the need for better preoperative prediction of postoperative disposition. The objective of this observational study was to assess the capability of the Risk Assessment and Prediction Tool (RAPT) and other preoperative variables to determine expected disposition prior to surgery in a heterogeneous neurosurgical cohort. Patients aged 50 yr or more undergoing elective neurosurgery were enrolled from June 2016 to February 2017 (n = 623). Logistic regression was used to identify preoperative characteristics predictive of discharge disposition. Results from the multivariate analysis were used to create novel grading scales for the prediction of discharge disposition that were subsequently compared to the RAPT score using receiver operating characteristic (ROC) analysis. A higher RAPT score significantly predicted home disposition (P < .001). Age 65 and greater, dichotomized RAPT walk score, and spinal surgery below L2 were independent predictors of SNF discharge in multivariate analysis. A grading scale utilizing these variables had superior discriminatory power between SNF and home/rehab discharge when compared with the RAPT score alone (P = .004). Our analysis identified age, lower lumbar/lumbosacral surgery, and RAPT walk score as independent predictors of discharge to SNF, and demonstrated superior predictive power compared with the total RAPT score when combined in a novel grading scale. These tools may identify patients who may benefit from expedited discharge to subacute care facilities and decrease inpatient hospital resource utilization following surgery.
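
    The comparison above boils down to contrasting the ROC AUC of a single score with that of a composite grading scale built from a few independent predictors. The sketch below illustrates this on simulated data; the variables, coefficients and outcome model are assumptions, not the study's cohort.

```python
# Hedged sketch (not the study's data): compare the discriminatory power of a
# single total score against a composite grading scale using ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 623
age_65_plus = rng.binomial(1, 0.5, n)
rapt_walk_low = rng.binomial(1, 0.4, n)
lower_lumbar = rng.binomial(1, 0.3, n)
rapt_total = rng.integers(1, 13, n)                      # assumed total score range
logit = -2.0 + 1.2 * age_65_plus + 1.0 * rapt_walk_low + 0.8 * lower_lumbar
snf = rng.binomial(1, 1 / (1 + np.exp(-logit)))          # simulated SNF discharge

print("total score alone AUC:", round(roc_auc_score(snf, -rapt_total), 2))

X = np.column_stack([age_65_plus, rapt_walk_low, lower_lumbar])
scale = LogisticRegression().fit(X, snf)
print("composite scale AUC:  ", round(roc_auc_score(snf, scale.predict_proba(X)[:, 1]), 2))
```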

  5. MicroRNAs in Leukemias: Emerging Diagnostic Tools and Therapeutic Targets

    PubMed Central

    Mian, Yousaf A.; Zeleznik-Le, Nancy J.

    2010-01-01

    MicroRNAs (miRNA) are small non-coding RNAs of ~22 nucleotides that regulate the translation and stability of mRNA to control different functions of the cell. Misexpression of miRNA has been linked to disruption of normal cellular functions, which results in various disorders including cancers such as leukemias. MicroRNA involvement in disease has been the subject of much attention and is increasing our current understanding of disease biology. Such linkages have been determined by high-throughput studies, which provide a framework for characterizing differential miRNA expression levels correlating to different cytogenetic abnormalities and their corresponding malignancies. In addition, functional studies of particular miRNAs have begun to define the effects of miRNA on predicted mRNA targets. It is clear that miRNAs can serve as molecular markers of leukemias and the hope is that they can also serve as new therapeutic targets. Studies are beginning to elucidate how to deliver therapeutic antagonists to attenuate overexpressed miRNAs and to replace underexpressed miRNAs. In this review, we: i) discuss the current understanding of miRNA function and expression in normal hematopoiesis, ii) provide examples of miRNAs that are misregulated in leukemias, and iii) evaluate the current status and potential future directions for the burgeoning field of antisense oligonucleotides and other therapeutic attempts to intervene in miRNA disregulation in leukemias. PMID:20370647

  6. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and a validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction
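
    One simple way to realize the uniform-coverage idea described above is to bin genotypes in principal-component space and sample at most one genotype per occupied cell, then compare the spread of the resulting training set with that of a random sample. The sketch below does exactly that on a synthetic marker matrix with two subpopulations; all settings are illustrative assumptions, not the paper's construction algorithm.

```python
# Illustrative sketch (not the paper's code): cover the genetic space uniformly
# by binning genotypes in principal-component space and sampling at most one
# genotype per occupied cell, then compare spread against random sampling.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Synthetic marker matrix with two subpopulations of unequal size (structure)
markers = np.vstack([rng.binomial(2, 0.2, (250, 500)),
                     rng.binomial(2, 0.7, (50, 500))]).astype(float)
scores = PCA(n_components=2).fit_transform(markers)

def uniform_sample(scores, n_target, n_bins=8):
    """Pick at most one genotype per cell of a grid over the PC space.
    May return fewer than n_target genotypes if few cells are occupied."""
    edges = [np.linspace(scores[:, j].min(), scores[:, j].max(), n_bins + 1)
             for j in range(scores.shape[1])]
    cells = {}
    for i, point in enumerate(scores):
        key = tuple(int(np.searchsorted(edges[j], point[j])) for j in range(len(edges)))
        cells.setdefault(key, []).append(i)
    picked = [int(rng.choice(idx)) for idx in cells.values()]
    rng.shuffle(picked)
    return np.array(picked[:n_target])

train_uniform = uniform_sample(scores, n_target=40)
train_random = rng.choice(len(markers), size=len(train_uniform), replace=False)
# Spread along PC1 as a crude proxy for coverage of the genetic space
print("uniform sampling PC1 spread:", round(float(scores[train_uniform, 0].std()), 2))
print("random  sampling PC1 spread:", round(float(scores[train_random, 0].std()), 2))
```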

  7. Genome-wide prediction of vaccine targets for human herpes simplex viruses using Vaxign reverse vaccinology

    PubMed Central

    2013-01-01

    Herpes simplex virus (HSV) types 1 and 2 (HSV-1 and HSV-2) are the most common infectious agents of humans. No safe and effective HSV vaccines have been licensed. Reverse vaccinology is an emerging and revolutionary vaccine development strategy that starts with the prediction of vaccine targets by informatics analysis of genome sequences. Vaxign (http://www.violinet.org/vaxign) is the first web-based vaccine design program based on reverse vaccinology. In this study, we used Vaxign to analyze 52 herpesvirus genomes, including 3 HSV-1 genomes, one HSV-2 genome, 8 other human herpesvirus genomes, and 40 non-human herpesvirus genomes. The HSV-1 strain 17 genome, which contains 77 proteins, was used as the seed genome. These 77 proteins are conserved in two other HSV-1 strains (strain F and strain H129). Two envelope glycoproteins, gJ and gG, do not have orthologs in HSV-2 or the 8 other human herpesviruses. Seven HSV-1 proteins (including gJ and gG) do not have orthologs in any of the 40 non-human herpesviruses. Nineteen proteins are conserved in all human herpesviruses, including the capsid scaffold protein UL26.5 (NP_044628.1). As the only HSV-1 protein predicted to be an adhesin, UL26.5 is a promising vaccine target. MHC class I and II epitopes were predicted by the Vaxign Vaxitop prediction program and by IEDB prediction programs recently installed and incorporated in Vaxign. Our comparative analysis found that the two programs identified largely the same top epitopes, but also that some results predicted to be positive by one program might not be positive according to the other. Overall, our Vaxign computational prediction provides many promising candidates for rational HSV vaccine development. The method is generic and can also be used to predict other viral vaccine targets. PMID:23514126

  8. A computational approach for predicting off-target toxicity of antiviral ribonucleoside analogues to mitochondrial RNA polymerase.

    PubMed

    Freedman, Holly; Winter, Philip; Tuszynski, Jack; Tyrrell, D Lorne; Houghton, Michael

    2018-06-22

    In the development of antiviral drugs that target viral RNA-dependent RNA polymerases, off-target toxicity caused by the inhibition of the human mitochondrial RNA polymerase (POLRMT) is a major liability. Therefore, it is essential that all new ribonucleoside analogue drugs be accurately screened for POLRMT inhibition. A computational tool that can accurately predict NTP binding to POLRMT could assist in evaluating any potential toxicity and in designing possible salvaging strategies. Using the available crystal structure of POLRMT bound to an RNA transcript, here we created a model of POLRMT with an NTP molecule bound in the active site. Furthermore, we implemented a computational screening procedure that determines the relative binding free energy of an NTP analogue to POLRMT by free energy perturbation (FEP), i.e. a simulation in which the natural NTP molecule is slowly transformed into the analogue and back. In each direction, the transformation was performed over 40 ns of simulation on our IBM Blue Gene Q supercomputer. This procedure was validated across a panel of drugs for which experimental dissociation constants were available, showing that NTP relative binding free energies could be predicted to within 0.97 kcal/mol of the experimental values on average. These results demonstrate for the first time that free-energy simulation can be a useful tool for predicting binding affinities of NTP analogues to a polymerase. We expect that our model, together with similar models of viral polymerases, will be very useful in the screening and future design of NTP inhibitors of viral polymerases that have no mitochondrial toxicity. © 2018 Freedman et al.

  9. Insights into an original pocket-ligand pair classification: a promising tool for ligand profile prediction.

    PubMed

    Pérot, Stéphanie; Regad, Leslie; Reynès, Christelle; Spérandio, Olivier; Miteva, Maria A; Villoutreix, Bruno O; Camproux, Anne-Claude

    2013-01-01

    Pockets are today at the cornerstone of modern drug discovery projects and at the crossroad of several research fields, from structural biology to mathematical modeling. Being able to predict whether a small molecule could bind to one or more protein targets, or whether a protein could bind to some given ligands, is very useful for drug discovery endeavors and for anticipating binding to off- and anti-targets. To date, several studies have explored such questions, from chemogenomic approaches to reverse docking methods. Most of these studies have been performed either from the viewpoint of ligands or from that of targets. However, it seems valuable to use information from both ligands and target binding pockets. Hence, we present a multivariate approach relating ligand properties to protein pocket properties from the analysis of known ligand-protein interactions. We explored and optimized the pocket-ligand pair space by combining pocket and ligand descriptors using Principal Component Analysis and developed a classification engine on this paired space, revealing five main clusters of pocket-ligand pairs sharing specific and similar structural or physico-chemical properties. These pocket-ligand pair clusters highlight correspondences between pocket and ligand topological and physico-chemical properties and capture relevant information with respect to protein-ligand interactions. Based on these pocket-ligand correspondences, a protocol for predicting the clusters sharing similarity in terms of recognition characteristics is developed for a given pocket-ligand complex and achieves high performance. It is then extended to cluster prediction for a given pocket, in order to acquire knowledge about its expected ligand profile, or to cluster prediction for a given ligand, in order to acquire knowledge about its expected pocket profile. This prediction approach shows promising results and could contribute to predicting some ligand properties critical for binding to a given pocket, and conversely, some key pocket

  10. Insights into an Original Pocket-Ligand Pair Classification: A Promising Tool for Ligand Profile Prediction

    PubMed Central

    Reynès, Christelle; Spérandio, Olivier; Miteva, Maria A.; Villoutreix, Bruno O.; Camproux, Anne-Claude

    2013-01-01

    Pockets are today at the cornerstone of modern drug discovery projects and at the crossroad of several research fields, from structural biology to mathematical modeling. Being able to predict whether a small molecule could bind to one or more protein targets, or whether a protein could bind to some given ligands, is very useful for drug discovery endeavors and for anticipating binding to off- and anti-targets. To date, several studies have explored such questions, from chemogenomic approaches to reverse docking methods. Most of these studies have been performed either from the viewpoint of ligands or from that of targets. However, it seems valuable to use information from both ligands and target binding pockets. Hence, we present a multivariate approach relating ligand properties to protein pocket properties from the analysis of known ligand-protein interactions. We explored and optimized the pocket-ligand pair space by combining pocket and ligand descriptors using Principal Component Analysis and developed a classification engine on this paired space, revealing five main clusters of pocket-ligand pairs sharing specific and similar structural or physico-chemical properties. These pocket-ligand pair clusters highlight correspondences between pocket and ligand topological and physico-chemical properties and capture relevant information with respect to protein-ligand interactions. Based on these pocket-ligand correspondences, a protocol for predicting the clusters sharing similarity in terms of recognition characteristics is developed for a given pocket-ligand complex and achieves high performance. It is then extended to cluster prediction for a given pocket, in order to acquire knowledge about its expected ligand profile, or to cluster prediction for a given ligand, in order to acquire knowledge about its expected pocket profile. This prediction approach shows promising results and could contribute to predicting some ligand properties critical for binding to a given pocket, and conversely, some key pocket
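
    A bare-bones sketch of the paired-descriptor analysis described above is given below: pocket and ligand descriptors are concatenated, reduced by Principal Component Analysis, and the pocket-ligand pairs are grouped into five clusters. Descriptor names, dimensions and data are placeholders, not those of the study.

```python
# Rough sketch of a pocket-ligand paired-space analysis: combine pocket and
# ligand descriptors, reduce with PCA, then cluster the pairs (synthetic data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_pairs = 300
pocket_descriptors = rng.normal(size=(n_pairs, 6))   # e.g. volume, hydrophobicity, ...
ligand_descriptors = rng.normal(size=(n_pairs, 8))   # e.g. MW, logP, ring count, ...

pairs = np.hstack([pocket_descriptors, ligand_descriptors])
components = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(pairs))

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(components)
print("pairs per cluster:", np.bincount(clusters))
```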

  11. Plasticity Tool for Predicting Shear Nonlinearity of Unidirectional Laminates Under Multiaxial Loading

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Bomarito, Geoffrey F.

    2016-01-01

    This study implements a plasticity tool to predict the nonlinear shear behavior of unidirectional composite laminates under multiaxial loadings, with an intent to further develop the tool for use in composite progressive damage analysis. The steps for developing the plasticity tool include establishing a general quadratic yield function, deriving the incremental elasto-plastic stress-strain relations using the yield function with associated flow rule, and integrating the elasto-plastic stress-strain relations with a modified Euler method and a substepping scheme. Micromechanics analyses are performed to obtain normal and shear stress-strain curves that are used in determining the plasticity parameters of the yield function. By analyzing a micromechanics model, a virtual testing approach is used to replace costly experimental tests for obtaining stress-strain responses of composites under various loadings. The predicted elastic moduli and Poisson's ratios are in good agreement with experimental data. The substepping scheme for integrating the elasto-plastic stress-strain relations is suitable for working with displacement-based finite element codes. An illustration problem is solved to show that the plasticity tool can predict the nonlinear shear behavior for a unidirectional laminate subjected to multiaxial loadings.
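
    The integration scheme mentioned above (a modified Euler method with substepping for the elasto-plastic stress-strain relations) can be illustrated on a much simpler one-dimensional elastic/linear-hardening law. The sketch below is only a schematic of adaptive substepping with error control under assumed material constants; it does not implement the paper's quadratic yield function or multiaxial formulation.

```python
# Schematic of modified-Euler integration with adaptive substepping, applied to
# a 1D elastic/linear-hardening law. Error control compares the two Euler
# estimates of the stress increment.
import numpy as np

E, H, sigma_y0 = 70e3, 5e3, 50.0   # assumed elastic modulus, hardening modulus, yield stress (MPa)

def tangent_modulus(sigma, eps_p):
    """Elasto-plastic tangent: elastic below the current yield stress, else reduced."""
    sigma_y = sigma_y0 + H * eps_p
    return E if abs(sigma) < sigma_y else E * H / (E + H)

def integrate(d_eps_total, sigma=0.0, eps_p=0.0, tol=1e-4):
    remaining, d_eps = 1.0, 1.0          # pseudo-time fraction of the strain increment
    while remaining > 0.0:
        d_eps = min(d_eps, remaining)
        de = d_eps * d_eps_total
        # Two Euler estimates (modified Euler / Heun)
        D1 = tangent_modulus(sigma, eps_p)
        d_sigma1 = D1 * de
        D2 = tangent_modulus(sigma + d_sigma1, eps_p + de * (1 - D1 / E))
        d_sigma2 = D2 * de
        error = 0.5 * abs(d_sigma2 - d_sigma1) / max(abs(sigma + 0.5 * (d_sigma1 + d_sigma2)), 1.0)
        if error > tol:
            d_eps *= max(0.2, 0.9 * np.sqrt(tol / error))        # reject, shrink substep
            continue
        sigma += 0.5 * (d_sigma1 + d_sigma2)
        eps_p += de * (1 - 0.5 * (D1 + D2) / E)                  # plastic part of the substep
        remaining -= d_eps
        d_eps *= min(1.5, 0.9 * np.sqrt(tol / max(error, 1e-12)))  # grow next substep
    return sigma, eps_p

sigma, eps_p = integrate(d_eps_total=0.002)   # apply a 0.2% strain increment
print(f"stress = {sigma:.1f} MPa, plastic strain = {eps_p:.5f}")
```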

  12. Role of retinal slip in the prediction of target motion during smooth and saccadic pursuit.

    PubMed

    de Brouwer, S; Missal, M; Lefèvre, P

    2001-08-01

    Visual tracking of moving targets requires the combination of smooth pursuit eye movements with catch-up saccades. In primates, catch-up saccades usually take place only during pursuit initiation because pursuit gain is close to unity. This contrasts with the lower and more variable gain of smooth pursuit in cats, where smooth eye movements are intermingled with catch-up saccades during steady-state pursuit. In this paper, we studied in detail the role of retinal slip in the prediction of target motion during smooth and saccadic pursuit in the cat. We found that the typical pattern of pursuit in the cat was a combination of smooth eye movements with saccades. During smooth pursuit initiation, there was a correlation between peak eye acceleration and target velocity. During pursuit maintenance, eye velocity oscillated at approximately 3 Hz around a steady-state value. The average gain of smooth pursuit was approximately 0.5. Trained cats were able to continue pursuing in the absence of a visible target, suggesting a role of the prediction of future target motion in this species. The analysis of catch-up saccades showed that the smooth-pursuit motor command is added to the saccadic command during catch-up saccades and that both position error and retinal slip are taken into account in their programming. The influence of retinal slip on catch-up saccades showed that prediction about future target motion is used in the programming of catch-up saccades. Altogether, these results suggest that pursuit systems in primates and cats are qualitatively similar, with a lower average gain in the cat and that prediction affects both saccades and smooth eye movements during pursuit.

  13. Comparative Analysis of Predicted Plastid-Targeted Proteomes of Sequenced Higher Plant Genomes

    PubMed Central

    Schaeffer, Scott; Harper, Artemus; Raja, Rajani; Jaiswal, Pankaj; Dhingra, Amit

    2014-01-01

    Plastids are actively involved in numerous plant processes critical to growth, development and adaptation. They play a primary role in photosynthesis, pigment and monoterpene synthesis, gravity sensing, starch and fatty acid synthesis, as well as oil and protein storage. We applied two complementary methods to analyze the recently published apple genome (Malus × domestica) to identify putative plastid-targeted proteins, the first using TargetP and the second using a custom workflow utilizing a set of predictive programs. Apple shares roughly 40% of its 10,492 putative plastid-targeted proteins with the Arabidopsis (Arabidopsis thaliana) plastid-targeted proteome as identified by the Chloroplast 2010 project, and ∼57% of its entire proteome with Arabidopsis. This suggests that the plastid-targeted proteomes of apple and Arabidopsis differ, and interestingly alludes to the presence of differential targeting of homologs between the two species. Co-expression analysis of 2,224 genes encoding putative plastid-targeted apple proteins suggests that they play a role in plant development and intermediary metabolism. Further, an inter-specific comparison of Arabidopsis, Prunus persica (Peach), Malus × domestica (Apple), Populus trichocarpa (Black cottonwood), Fragaria vesca (Woodland Strawberry), Solanum lycopersicum (Tomato) and Vitis vinifera (Grapevine) also identified a large number of novel species-specific plastid-targeted proteins. This analysis also revealed the presence of alternatively targeted homologs across species. Two separate analyses revealed that a small subset of proteins, one representing 289 protein clusters and the other 737 unique protein sequences, is conserved between seven plastid-targeted angiosperm proteomes. The majority of the novel proteins were annotated to play roles in stress response, transport, catabolic processes, and cellular component organization. Our results suggest that the current state of knowledge regarding

  14. Predictive Tools for Severe Dengue Conforming to World Health Organization 2009 Criteria

    PubMed Central

    Carrasco, Luis R.; Leo, Yee Sin; Cook, Alex R.; Lee, Vernon J.; Thein, Tun L.; Go, Chi Jong; Lye, David C.

    2014-01-01

    Background Dengue causes 50 million infections per year, posing a large disease and economic burden in tropical and subtropical regions. Only a proportion of dengue cases require hospitalization, and predictive tools to triage dengue patients at greater risk of complications may optimize usage of limited healthcare resources. For severe dengue (SD), proposed by the World Health Organization (WHO) 2009 dengue guidelines, predictive tools are lacking. Methods We undertook a retrospective study of adult dengue patients in Tan Tock Seng Hospital, Singapore, from 2006 to 2008. Demographic, clinical and laboratory variables at presentation from dengue polymerase chain reaction-positive and serology-positive patients were used to predict the development of SD after hospitalization using generalized linear models (GLMs). Principal findings Predictive tools compatible with well-resourced and resource-limited settings (the latter not requiring laboratory measurements) performed acceptably, with optimism-corrected specificities of 29% and 27%, respectively, for 90% sensitivity. A higher risk of severe dengue (SD) was associated with female gender, a lower than normal hematocrit level, abdominal distension, vomiting and fever on admission. A lower risk of SD was associated with older age (in a cohort with an interquartile range of 27–47 years of age), leucopenia and fever duration on admission. Among the warning signs proposed by WHO 2009, we found support for abdominal pain or tenderness and vomiting as predictors of combined forms of SD. Conclusions The application of these predictive tools in the clinical setting may reduce unnecessary admissions by 19%, allowing the allocation of scarce public health resources to patients according to the severity of outcomes. PMID:25010515
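
    A hedged sketch of the modelling strategy described above follows: fit a logistic generalized linear model for severe dengue on admission variables, fix sensitivity at 90%, and read off the corresponding specificity. The predictors, coefficients and simulated data are placeholders rather than the Singapore cohort.

```python
# Hedged sketch: logistic GLM for severe dengue on admission variables, then
# choose the operating point that reaches 90% sensitivity and report the
# specificity there. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(11)
n = 1500
female = rng.binomial(1, 0.4, n)
vomiting = rng.binomial(1, 0.3, n)
abdominal_distension = rng.binomial(1, 0.1, n)
age = rng.integers(18, 70, n)
logit = -3.0 + 0.5 * female + 0.7 * vomiting + 0.9 * abdominal_distension - 0.02 * (age - 35)
severe = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([female, vomiting, abdominal_distension, age])
risk = LogisticRegression(max_iter=1000).fit(X, severe).predict_proba(X)[:, 1]

fpr, tpr, thresholds = roc_curve(severe, risk)
idx = np.argmax(tpr >= 0.90)                 # first threshold reaching 90% sensitivity
print(f"at sensitivity {tpr[idx]:.2f}, specificity = {1 - fpr[idx]:.2f}")
```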

  15. Numerical tools to predict the environmental loads for offshore structures under extreme weather conditions

    NASA Astrophysics Data System (ADS)

    Wu, Yanling

    2018-05-01

    In this paper, extreme waves were generated using the open-source computational fluid dynamics (CFD) tools OpenFOAM and Waves2FOAM with linear and nonlinear NewWave input, and were used to conduct numerical simulations of the wave impact process. Numerical tools based on first-order NewWave (with and without stretching) and second-order NewWave are investigated. Simulations to predict the force loading on an offshore platform under extreme weather conditions are implemented and compared.

  16. In silico target prediction for elucidating the mode of action of herbicides including prospective validation.

    PubMed

    Chiddarwar, Rucha K; Rohrer, Sebastian G; Wolf, Antje; Tresch, Stefan; Wollenhaupt, Sabrina; Bender, Andreas

    2017-01-01

    The rapid emergence of pesticide resistance has given rise to a demand for herbicides with new modes of action (MoA). In the agrochemical sector, with the availability of experimental high-throughput screening (HTS) data, it is now possible to utilize in silico target prediction methods in the early discovery phase to suggest the MoA of a compound via data mining of bioactivity data. While having been established in the pharmaceutical context, in the agrochemical area this approach poses rather different challenges, as we have found in this work, partially due to different chemistry, but even more so due to different (usually smaller) amounts of data and different ways of conducting HTS. With the aim of applying computational methods to facilitate herbicide target identification, 48,000 bioactivity data points against 16 herbicide targets were processed to train Laplacian-modified Naïve Bayesian (NB) classification models. The herbicide target prediction model ("HerbiMod") is an ensemble of 16 binary classification models which are evaluated by internal, external and prospective validation sets. In addition to the experimental inactives, 10,000 random agrochemical inactives were included in the training process, which was shown to improve the overall balanced accuracy of our models by up to 40%. For all the models, performance in terms of balanced accuracy of ≥80% was achieved in five-fold cross-validation. Ranking target predictions was addressed by means of z-scores, which improved predictivity over using raw scores alone. An external test set of 247 compounds from ChEMBL and a prospective test set of 394 compounds from BASF SE, tested against five well-studied herbicide targets (ACC, ALS, HPPD, PDS and PROTOX), were used for further validation. Only 4% of the compounds in the external test set lay in the applicability domain, and extrapolation (and correct prediction) was hence impossible, which on the one hand was surprising, and on the other hand illustrated the utilization of
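
    As a rough illustration of the modelling approach described above, the sketch below trains one Bernoulli Naive Bayes model per target with Laplace smoothing on binary fingerprints and converts raw scores to per-model z-scores for ranking; the fingerprints, labels and the exact z-score convention are assumptions, not the HerbiMod implementation.

```python
# Illustrative sketch: per-target Naive Bayes models with Laplace smoothing on
# binary fingerprints, ranked via z-scores computed against each model's own
# score distribution. Data are synthetic.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(2)
n_compounds, n_bits, n_targets = 2000, 256, 16
fingerprints = rng.binomial(1, 0.1, (n_compounds, n_bits))
activity = rng.binomial(1, 0.15, (n_compounds, n_targets))   # synthetic labels

models = [BernoulliNB(alpha=1.0).fit(fingerprints, activity[:, t])  # alpha=1: Laplace smoothing
          for t in range(n_targets)]

# Per-model score distributions over the training compounds, for standardisation
raw_train = np.column_stack([m.predict_log_proba(fingerprints)[:, 1] for m in models])
mu, sd = raw_train.mean(axis=0), raw_train.std(axis=0)

query = rng.binomial(1, 0.1, (1, n_bits))                    # a new compound
raw_query = np.array([m.predict_log_proba(query)[0, 1] for m in models])
z = (raw_query - mu) / sd          # one way to standardise scores per target model
print("top-3 predicted targets (by z-score):", np.argsort(-z)[:3])
```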

  17. Which screening tools can predict injury to the lower extremities in team sports?: a systematic review.

    PubMed

    Dallinga, Joan M; Benjaminse, Anne; Lemmink, Koen A P M

    2012-09-01

    Injuries to the lower extremities are common in team sports such as soccer, basketball, volleyball, football and field hockey. Considering the personal grief, disabling consequences and high costs caused by injuries to the lower extremities, the importance of preventing these injuries is evident. From this point of view it is important to know which screening tools can identify athletes who are at risk of injury to their lower extremities. The aim of this article is to determine the predictive values of anthropometric and/or physical screening tests for injuries to the leg, anterior cruciate ligament (ACL), knee, hamstring, groin and ankle in team sports. A systematic review was conducted in MEDLINE (1966 to September 2011), EMBASE (1989 to September 2011) and CINAHL (1982 to September 2011). Based on inclusion criteria defined a priori, titles, abstracts and full texts were analysed to find relevant studies. The analysis showed that different screening tools can be predictive for injuries to the knee, ACL, hamstring, groin and ankle. For injuries in general, there is some support in the literature to suggest that general joint laxity is a predictive measure for leg injuries. The anterior right/left reach distance >4 cm and the composite reach distance <4.0% of limb length in girls, measured with the star excursion balance test (SEBT), may predict leg injuries. Furthermore, increasing age, a lower hamstring/quadriceps (H:Q) ratio and a decreased range of motion (ROM) of hip abduction may predict the occurrence of leg injuries. Hyperextension of the knee, side-to-side differences in anterior-posterior knee laxity and differences in knee abduction moment between both legs are suggested to be predictive tests for sustaining an ACL injury, and height was a predictive screening tool for knee ligament injuries. There is some evidence that as age increases, the probability of sustaining a hamstring injury increases. Debate exists in the analysed literature regarding

  18. Prediction of Thermal Fatigue in Tooling for Die-casting Copper via Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Sakhuja, Amit; Brevick, Jerald R.

    2004-06-01

    Recent research by the Copper Development Association (CDA) has demonstrated the feasibility of die-casting electric motor rotors using copper. Electric motors using copper rotors are significantly more energy efficient relative to motors using aluminum rotors. However, one of the challenges in copper rotor die-casting is low tool life. Experiments have shown that the higher molten metal temperature of copper (1085 °C), as compared to aluminum (660 °C), accelerates the onset of thermal fatigue or heat checking in traditional H-13 tool steel. This happens primarily because the mechanical properties of H-13 tool steel decrease significantly above 650 °C. Potential approaches to mitigate the heat checking problem include: 1) identification of potential tool materials having better high-temperature mechanical properties than H-13, and 2) reduction of the magnitude of cyclic thermal excursions experienced by the tooling by increasing the bulk die temperature. A preliminary assessment of alternative tool materials has led to the selection of the nickel-based alloys Haynes 230 and Inconel 617 as potential candidates. These alloys were selected based on their elevated-temperature physical and mechanical properties. Therefore, the overall objective of this research work was to predict the number of copper rotor die-casting cycles to the onset of heat checking (tool life) as a function of bulk die temperature (up to 650 °C) for the Haynes 230 and Inconel 617 alloys. To achieve these goals, a 2D thermo-mechanical FEA was performed to evaluate strain ranges on selected die surfaces. The method of Universal Slopes (strain-life method) was then employed for thermal fatigue life predictions.
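
    The Universal Slopes (strain-life) relation mentioned above estimates cycles to failure from the total strain range and basic tensile properties; a small worked sketch under assumed material values (not the Haynes 230 or Inconel 617 data) is shown below.

```python
# Worked sketch of the Universal Slopes (strain-life) estimate: solve
#   delta_eps = 3.5*(sigma_u/E)*N**-0.12 + eps_f**0.6 * N**-0.6
# for the cycles to failure N, given a strain range from FEA. Material values
# below are illustrative placeholders, not the alloy data from the study.
from scipy.optimize import brentq

def universal_slopes_strain(N, sigma_u, E, eps_f):
    return 3.5 * (sigma_u / E) * N**-0.12 + eps_f**0.6 * N**-0.6

def cycles_to_failure(delta_eps, sigma_u, E, eps_f):
    return brentq(lambda N: universal_slopes_strain(N, sigma_u, E, eps_f) - delta_eps,
                  1.0, 1e9)

# Assumed properties: ultimate strength (MPa), Young's modulus (MPa), true fracture ductility
N_f = cycles_to_failure(delta_eps=0.006, sigma_u=800.0, E=180e3, eps_f=0.5)
print(f"predicted cycles to the onset of heat checking: {N_f:,.0f}")
```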

  19. Predictive model of outcome of targeted nodal assessment in colorectal cancer.

    PubMed

    Nissan, Aviram; Protic, Mladjan; Bilchik, Anton; Eberhardt, John; Peoples, George E; Stojadinovic, Alexander

    2010-02-01

    Improvement in staging accuracy is the principal aim of targeted nodal assessment in colorectal carcinoma. Technical factors independently predictive of false negative (FN) sentinel lymph node (SLN) mapping should be identified to facilitate operative decision making. The objectives were to define independent predictors of FN SLN mapping and to develop a predictive model that could support surgical decisions. Data were analyzed from 2 completed prospective clinical trials involving 278 patients with colorectal carcinoma undergoing SLN mapping. The clinical outcome of interest was FN SLN(s), defined as SLN(s) with no apparent tumor cells in the presence of non-SLN metastases. To assess the independent predictive effect of a covariate for a nominal response (FN SLN), a logistic regression model was constructed and parameters estimated using maximum likelihood. A probabilistic Bayesian model was also trained and cross-validated using 10-fold train-and-test sets to predict FN SLN mapping. The area under the curve (AUC) from receiver operating characteristic curves of these predictions was calculated to determine the predictive value of the model. The number of SLNs (<3; P = 0.03) and tumor-replaced nodes (P < 0.01) independently predicted FN SLN. Cross validation of the model created with Bayesian network analysis effectively predicted FN SLN (area under the curve = 0.84-0.86). The positive and negative predictive values of the model are 83% and 97%, respectively. This study supports a minimum threshold of 3 nodes for targeted nodal assessment in colorectal cancer, and establishes a sufficient basis to conclude that SLN mapping and biopsy cannot be justified in the presence of clinically apparent tumor-replaced nodes.

  20. Pretest predictions of surface strain and fluid pressures in mercury targets undergoing thermal shock

    SciT

    Taleyarkhan, R.P.; Kim, S.H.; Haines, J.

    The authors provide a perspective overview of pretest modeling and analysis work related to thermal shock effects in spallation neutron source targets that were designed for conducting thermal shock experiments at the Los Alamos Neutron Science Center (LANSCE). Data to be derived are to be used for benchmarking computational tools as well as to assess the efficacy of optical gauges for monitoring dynamic fluid pressures and phenomena such as the onset of cavitation.

  1. SU-D-BRB-01: A Predictive Planning Tool for Stereotactic Radiosurgery

    SciT

    Palefsky, S; Roper, J; Elder, E

    Purpose: To demonstrate the feasibility of a predictive planning tool which provides SRS planning guidance based on simple patient anatomical properties: PTV size, PTV shape and distance from critical structures. Methods: Ten framed SRS cases treated at the Winship Cancer Institute of Emory University were analyzed to extract data on PTV size, sphericity (shape), and distance from critical structures such as the brainstem and optic chiasm. The cases consisted of five pairs; each pair consisted of two cases with a similar diagnosis (such as pituitary adenoma or arteriovenous malformation) that were treated with different techniques: DCA or IMRS. A Naive Bayes classifier was trained on these data to establish the conditions under which each treatment modality was used. This model was validated by classifying ten other randomly selected cases into DCA or IMRS classes, calculating the probability of each technique, and comparing the results to the treated technique. Results: Of the ten cases used to validate the model, nine had their technique predicted correctly. The three cases treated with IMRS were all identified as such; their probabilities of being treated with IMRS ranged between 59% and 100%. Six of the seven cases treated with DCA were correctly classified; these probabilities ranged between 51% and 95%. One case treated with DCA was incorrectly predicted to be an IMRS plan. The model's confidence in this case was 91%. Conclusion: These findings indicate that a predictive planning tool based on simple patient anatomical properties can predict the SRS technique used for treatment. The algorithm operated with 90% accuracy. With further validation on larger patient populations, this tool may be used clinically to guide planners in choosing an appropriate treatment technique. The prediction algorithm could also be adapted to guide selection of treatment parameters such as treatment modality and number of fields for radiotherapy across anatomical sites.
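
    A hedged sketch of the kind of classifier described above is shown below: a Gaussian Naive Bayes model predicting the SRS technique (DCA vs IMRS) from PTV volume, sphericity and distance to the nearest critical structure. All training values are invented and the feature set is assumed, not taken from the abstract's ten cases.

```python
# Hedged sketch: Naive Bayes prediction of SRS technique from simple
# anatomical properties. Training values below are invented placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# columns: PTV volume (cc), sphericity (0-1), distance to critical structure (mm)
X_train = np.array([[0.8, 0.92, 15.0], [1.5, 0.88, 12.0], [2.2, 0.90, 20.0],
                    [6.5, 0.55, 3.0],  [4.8, 0.60, 2.5],  [8.1, 0.48, 1.0],
                    [1.1, 0.85, 9.0],  [5.9, 0.52, 4.0],  [0.6, 0.95, 18.0],
                    [7.3, 0.50, 2.0]])
y_train = np.array(["DCA", "DCA", "DCA", "IMRS", "IMRS", "IMRS",
                    "DCA", "IMRS", "DCA", "IMRS"])

model = GaussianNB().fit(X_train, y_train)

new_case = np.array([[3.0, 0.65, 5.0]])
technique = model.predict(new_case)[0]
probability = model.predict_proba(new_case).max()
print(f"suggested technique: {technique} (confidence {probability:.0%})")
```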

  2. Guidelines for reporting and using prediction tools for genetic variation analysis.

    PubMed

    Vihinen, Mauno

    2013-02-01

    Computational prediction methods are widely used for the analysis of human genome sequence variants and their effects on gene/protein function, splice site aberration, pathogenicity, and disease risk. New methods are frequently developed. We believe that guidelines are essential for those writing articles about new prediction methods, as well as for those applying these tools in their research, so that the necessary details are reported. This will enable readers to gain the full picture of technical information, performance, and interpretation of results, and to facilitate comparisons of related methods. Here, we provide instructions on how to describe new methods, report datasets, and assess the performance of predictive tools. We also discuss what details of predictor implementation are essential for authors to understand. Similarly, these guidelines for the use of predictors provide instructions on what needs to be delineated in the text, as well as how researchers can avoid unwarranted conclusions. They are applicable to most prediction methods currently utilized. By applying these guidelines, authors will help reviewers, editors, and readers to more fully comprehend prediction methods and their use. © 2012 Wiley Periodicals, Inc.

  3. Predicting the size-dependent tissue accumulation of agents released from vascular targeted nanoconstructs

    NASA Astrophysics Data System (ADS)

    de Tullio, Marco D.; Singh, Jaykrishna; Pascazio, Giuseppe; Decuzzi, Paolo

    2014-03-01

    Vascular targeted nanoparticles have been developed for the delivery of therapeutic and imaging agents in cancer and cardiovascular diseases. However, to the authors' knowledge, a comprehensive systematic analysis of their delivery efficiency is still missing. Here, a computational model is developed to predict the vessel wall accumulation of agents released from vascular targeted nanoconstructs. The transport problem for the released agent is solved using a finite volume scheme in terms of three governing parameters, each varied over a range of values: the local wall shear rate, the wall filtration velocity, and the agent diffusion coefficient. It is shown that the percentage of released agent adsorbing on the vessel walls in the vicinity of the vascular targeted nanoconstructs decreases with an increase in shear rate and with a decrease in filtration velocity and agent diffusivity. In particular, in tumor microvessels, characterized by lower shear rates and higher filtration velocities, an agent with a low diffusivity (i.e. a 50 nm particle) is predicted to deposit a limited fraction of the total released dose on the vessel wall, whereas drug molecules, exhibiting a smaller size and a much higher diffusion coefficient, are predicted to accumulate to a larger extent. In healthy vessels, characterized by higher shear rates and lower filtration velocities, the largest majority of the released agent is redistributed directly in the circulation. These data suggest that drug molecules and small nanoparticles only can be efficiently released from vascular targeted nanoconstructs towards the diseased vessel walls and tissue.

  4. ASTRYD: A new numerical tool for aircraft cabin and environmental noise prediction

    NASA Astrophysics Data System (ADS)

    Berhault, J.-P.; Venet, G.; Clerc, C.

    ASTRYD is an analytical tool, developed originally for underwater applications, that computes acoustic pressure distribution around three-dimensional bodies in closed spaces like aircraft cabins. The program accepts data from measurements or other simulations, processes them in the time domain, and delivers temporal evolutions of the acoustic pressures and accelerations, as well as the radiated/diffracted pressure at arbitrary points located in the external/internal space. A typical aerospace application is prediction of acoustic load on satellites during the launching phase. An aeronautic application is engine noise distribution on a business jet body for prediction of environmental and cabin noise.

  5. DrugECs: An Ensemble System with Feature Subspaces for Accurate Drug-Target Interaction Prediction

    PubMed Central

    Jiang, Jinjian; Wang, Nian; Zhang, Jun

    2017-01-01

    Background Drug-target interaction is key in drug discovery, especially in the design of new lead compounds. However, the work of finding a new lead compound for a specific target is complicated and prone to error. Therefore computational techniques are commonly adopted in drug design, as they can save time and costs to a significant extent. Results To address this issue, a new prediction system is proposed in this work to identify drug-target interactions. First, drug-target pairs are encoded with a fragment technique and the software "PaDEL-Descriptor." The fragment technique, used for encoding target proteins, divides each protein sequence into several ordered fragments and encodes each fragment with several physicochemical properties of amino acids. The software "PaDEL-Descriptor" creates encoding vectors for drug molecules. Second, the dataset of drug-target pairs is resampled and several overlapping subsets are obtained, which are then input into a kNN (k-Nearest Neighbor) classifier to build an ensemble system. Conclusion Experimental results on the drug-target dataset showed that our method performs better and runs faster than state-of-the-art predictors. PMID:28744468
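
    As a rough sketch of the ensemble step described above (and only that step), the snippet below trains kNN classifiers on overlapping resampled subsets of drug-target pairs and combines them by voting; it uses scikit-learn's bagging wrapper and synthetic features in place of the fragment and PaDEL encodings.

```python
# Rough sketch (assumptions, not the published system): an ensemble of kNN
# classifiers, each trained on a resampled, overlapping subset of pairs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for encoded drug-target pairs: drug descriptors + protein fragment features
X, y = make_classification(n_samples=1000, n_features=60, n_informative=20,
                           random_state=0)

ensemble = BaggingClassifier(estimator=KNeighborsClassifier(n_neighbors=5),
                             n_estimators=15,      # 15 overlapping resampled subsets
                             max_samples=0.6,      # each subset uses 60% of the pairs
                             bootstrap=True,
                             random_state=0)
print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```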

  6. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.

    PubMed

    Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To solve the problem of information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of each feature for the result, and attributes with smaller F values are filtered out in a rough selection step. Secondly, the degree of redundancy is calculated using the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, an SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model is freely accessible at our web server.

  7. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model

    PubMed Central

    Xiaolei, Wang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To solve the problem of information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of each feature for the result, and attributes with smaller F values are filtered out in a rough selection step. Secondly, the degree of redundancy is calculated using the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, an SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model is freely accessible at our web server. PMID:28497044
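
    A minimal sketch of the AVC-style pipeline described above is given below: ANOVA F-value filtering (rough selection), Pearson-correlation redundancy filtering (refinement), then an SVM. The feature matrix, the number of features kept and the 0.9 redundancy threshold are assumptions for illustration, not the published settings.

```python
# Minimal sketch: ANOVA F-value filter, correlation-based redundancy filter,
# then an SVM. Features and labels are synthetic stand-ins for encoded peptides.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=120, n_informative=25,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Step 1: rough selection by ANOVA F value
F, _ = f_classif(X, y)
keep = np.argsort(F)[-60:]                       # keep the 60 most significant features

# Step 2: refinement - drop features highly correlated with an already kept one
corr = np.abs(np.corrcoef(X[:, keep], rowvar=False))
selected = []
for i in range(len(keep)):
    if all(corr[i, j] < 0.9 for j in selected):  # 0.9 = assumed redundancy threshold
        selected.append(i)
final_features = keep[selected]

# Step 3: SVM on the refined feature set
acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, final_features], y, cv=5).mean()
print(f"{len(final_features)} features kept, CV accuracy = {acc:.2f}")
```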

  8. Genome scale enzyme–metabolite and drug–target interaction predictions using the signature molecular descriptor

    DOE PAGES

    Faulon, Jean-Loup; Misra, Milind; Martin, Shawn; ...

    2007-11-23

    Motivation: Identifying protein enzymatic or pharmacological activities is an important area of research in biology and chemistry. Biological and chemical databases are increasingly being populated with linkages between protein sequences and chemical structures. Additionally, there is now sufficient information to apply machine-learning techniques to predict interactions between chemicals and proteins at a genome scale. Current machine-learning techniques use as input either protein sequences and structures or chemical information. We propose here a method to infer protein–chemical interactions using heterogeneous input consisting of both protein sequence and chemical information. Results: Our method relies on expressing proteins and chemicals with a common cheminformatics representation. We demonstrate our approach by predicting whether proteins can catalyze reactions not present in training sets. We also predict whether a given drug can bind a target, in the absence of prior binding information for that drug and target. Such predictions cannot be made with current machine-learning techniques, which require binding information for individual reactions or individual targets.

  9. A Systematic Prediction of Drug-Target Interactions Using Molecular Fingerprints and Protein Sequences.

    PubMed

    Huang, Yu-An; You, Zhu-Hong; Chen, Xing

    2018-01-01

    Drug-Target Interactions (DTI) play a crucial role in discovering new drug candidates and finding new proteins to target for drug development. Although the number of detected DTI obtained by high-throughput techniques has been increasing, the number of known DTI is still limited. On the other hand, the experimental methods for detecting the interactions between drugs and proteins are costly and inefficient. Therefore, computational approaches for predicting DTI are drawing increasing attention in recent years. In this paper, we report a novel computational model for predicting DTI using an extremely randomized trees model and protein amino acid information. More specifically, the protein sequence is represented as a Pseudo Substitution Matrix Representation (Pseudo-SMR) descriptor in which the influence of biological evolutionary information is retained. For the representation of drug molecules, a novel fingerprint feature vector is utilized to describe their substructure information. Then the DTI pair is characterized by concatenating the two vector spaces of protein sequence and drug substructure. Finally, the proposed method is explored for predicting DTI on four benchmark datasets: Enzyme, Ion Channel, GPCRs and Nuclear Receptor. The experimental results demonstrate that this method achieves promising prediction accuracies of 89.85%, 87.87%, 82.99% and 81.67%, respectively. For further evaluation, we compared the performance of the Extremely Randomized Trees model with that of the state-of-the-art Support Vector Machine classifier. We also compared the proposed model with existing computational models and confirmed 15 potential drug-target interactions by searching existing databases. The experimental results show that the proposed method is feasible and promising for predicting drug-target interactions for new drug candidate screening based on sizeable features.
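
    The general recipe above (concatenate a protein feature vector with a drug fingerprint and classify the pair with extremely randomized trees) can be sketched as follows; the features are random placeholders rather than Pseudo-SMR descriptors or real substructure fingerprints.

```python
# Hedged sketch: classify concatenated protein + drug feature vectors with
# extremely randomized trees. All features and labels below are synthetic.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_pairs = 1200
protein_features = rng.normal(size=(n_pairs, 400))        # stand-in for Pseudo-SMR
drug_fingerprints = rng.binomial(1, 0.1, (n_pairs, 256))  # stand-in for substructure bits
# Synthetic interaction labels with a weak dependence on the features
interacts = ((protein_features[:, 0] + drug_fingerprints[:, 0]) > 0.5).astype(int)

pairs = np.hstack([protein_features, drug_fingerprints])
model = ExtraTreesClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(model, pairs, interacts, cv=5).mean().round(3))
```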

  10. Comparing sixteen scoring functions for predicting biological activities of ligands for protein targets.

    PubMed

    Xu, Weijun; Lucke, Andrew J; Fairlie, David P

    2015-04-01

    Accurately predicting relative binding affinities and biological potencies for ligands that interact with proteins remains a significant challenge for computational chemists. Most evaluations of docking and scoring algorithms have focused on enhancing ligand affinity for a protein by optimizing docking poses and enrichment factors during virtual screening. However, there is still relatively limited information on the accuracy of commercially available docking and scoring software programs for correctly predicting binding affinities and biological activities of structurally related inhibitors of different enzyme classes. Presented here is a comparative evaluation of eight molecular docking programs (Autodock Vina, Fitted, FlexX, Fred, Glide, GOLD, LibDock, MolDock) using sixteen docking and scoring functions to predict the rank-order activity of different ligand series for six pharmacologically important protein and enzyme targets (Factor Xa, Cdk2 kinase, Aurora A kinase, COX-2, pla2g2a, β Estrogen receptor). Use of Fitted gave an excellent correlation (Pearson 0.86, Spearman 0.91) between predicted and experimental binding only for Cdk2 kinase inhibitors. FlexX and GOLDScore produced good correlations (Pearson>0.6) for hydrophilic targets such as Factor Xa, Cdk2 kinase and Aurora A kinase. By contrast, pla2g2a and COX-2 emerged as difficult targets for scoring functions to predict ligand activities. Although possessing a high hydrophobicity in its binding site, β Estrogen receptor produced reasonable correlations using LibDock (Pearson 0.75, Spearman 0.68). These findings can assist medicinal chemists to better match scoring functions with ligand-target systems for hit-to-lead optimization using computer-aided drug design approaches. Copyright © 2015 Elsevier Inc. All rights reserved.
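
    As a small worked example of the rank-order evaluation used in the study above, the snippet below correlates docking scores with experimental activities using Pearson and Spearman coefficients; the scores and pIC50 values are invented.

```python
# Worked example: Pearson and Spearman correlation between docking scores and
# experimental activities for a hypothetical ligand series.
from scipy.stats import pearsonr, spearmanr

docking_scores = [-9.2, -8.7, -8.1, -7.9, -7.4, -6.8, -6.5, -6.1]   # more negative = better
experimental_pic50 = [8.1, 7.9, 7.2, 7.5, 6.8, 6.1, 6.4, 5.9]

# Negate scores so that "higher = predicted more active" before correlating
predicted_activity = [-s for s in docking_scores]
print("Pearson  r   = %.2f" % pearsonr(predicted_activity, experimental_pic50)[0])
print("Spearman rho = %.2f" % spearmanr(predicted_activity, experimental_pic50)[0])
```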

  11. Development of nonlinear acoustic propagation analysis tool toward realization of loud noise environment prediction in aeronautics

    SciT

    Kanamori, Masashi, E-mail: kanamori.masashi@jaxa.jp; Takahashi, Takashi, E-mail: takahashi.takashi@jaxa.jp; Aoyama, Takashi, E-mail: aoyama.takashi@jaxa.jp

    2015-10-28

    This paper introduces a prediction tool for the propagation of loud noise, with applications to aeronautics in mind. The tool, named SPnoise, is based on the HOWARD approach, which can express the almost exact multidimensionality of the diffraction effect at the cost of back scattering. In particular, this paper addresses the prediction of the effect of atmospheric turbulence on sonic boom, one of the important issues in aeronautics. Thanks to the simple and efficient modeling of the atmospheric turbulence, SPnoise successfully re-creates the feature of the effect, which often emerges in the region just behind the front and rear shock waves in the sonic boom signature.

  12. Geriatric Assessment and Tools for Predicting Treatment Toxicity in Older Adults With Cancer.

    PubMed

    Li, Daneng; Soto-Perez-de-Celis, Enrique; Hurria, Arti

    Cancer is a disease of older adults, and the majority of new cancer cases and deaths occur in people 65 years or older. However, fewer data are available regarding the risks and benefits of cancer treatment in older adults, and commonly used assessments in oncology fail to adequately evaluate factors that affect treatment efficacy and outcomes in older patients. The geriatric assessment is a multidisciplinary evaluation that provides detailed information about a patient's functional status, comorbidities, psychological state, social support, nutritional status, and cognitive function. Among older patients with cancer, geriatric assessment has been shown to identify patients at risk of poorer overall survival, and geriatric assessment-based tools are significantly more effective in predicting chemotherapy toxicity than other currently utilized measures. In this review, we summarize the components of the geriatric assessment and provide information about existing tools used to predict treatment toxicity in older patients with cancer.

  13. Prediction of microRNA target genes using an efficient genetic algorithm-based decision tree.

    PubMed

    Rabiee-Ghahfarrokhi, Behzad; Rafiei, Fariba; Niknafs, Ali Akbar; Zamani, Behzad

    2015-01-01

    MicroRNAs (miRNAs) are small, non-coding RNA molecules that regulate gene expression in almost all plants and animals. They play an important role in key processes, such as proliferation, apoptosis, and pathogen-host interactions. Nevertheless, the mechanisms by which miRNAs act are not fully understood. The first step toward unraveling the function of a particular miRNA is the identification of its direct targets. This step has proven to be quite challenging in animals, primarily because of incomplete complementarity between miRNAs and target mRNAs. In recent years, the use of machine-learning techniques has greatly improved the prediction of miRNA targets, avoiding the need for costly and time-consuming experiments to identify miRNA targets experimentally. Among the most important machine-learning algorithms are decision trees, which classify data based on extracted rules. In the present work, we used a genetic algorithm in combination with a C4.5 decision tree for the prediction of miRNA targets. We applied the proposed method to a validated human dataset and achieved nearly 93.9% classification accuracy, which may be attributable to the selection of the best rules.
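
    As a rough illustration of the classification step, the sketch below trains an entropy-based decision tree on synthetic miRNA:mRNA duplex features with scikit-learn. Note that scikit-learn does not implement C4.5, and the genetic-algorithm rule selection from the paper is not reproduced; the features, labels and parameters are placeholders.

```python
# Sketch: entropy-based decision tree for miRNA target classification.
# Features (e.g. seed-match score, free energy, conservation, accessibility) are synthetic;
# the paper's GA-driven rule selection is not reproduced here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                   # placeholder miRNA:mRNA duplex features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic "target / non-target" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=1)
tree.fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```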

  14. Prediction of microRNA target genes using an efficient genetic algorithm-based decision tree

    PubMed Central

    Rabiee-Ghahfarrokhi, Behzad; Rafiei, Fariba; Niknafs, Ali Akbar; Zamani, Behzad

    2015-01-01

    MicroRNAs (miRNAs) are small, non-coding RNA molecules that regulate gene expression in almost all plants and animals. They play an important role in key processes, such as proliferation, apoptosis, and pathogen–host interactions. Nevertheless, the mechanisms by which miRNAs act are not fully understood. The first step toward unraveling the function of a particular miRNA is the identification of its direct targets. This step has proven to be quite challenging in animals, primarily because of incomplete complementarity between miRNAs and target mRNAs. In recent years, the use of machine-learning techniques has greatly improved the prediction of miRNA targets, avoiding the need for costly and time-consuming experiments to identify miRNA targets experimentally. Among the most important machine-learning algorithms are decision trees, which classify data based on extracted rules. In the present work, we used a genetic algorithm in combination with a C4.5 decision tree for the prediction of miRNA targets. We applied the proposed method to a validated human dataset and achieved nearly 93.9% classification accuracy, which may be attributable to the selection of the best rules. PMID:26649272

  15. Geographic profiling as a novel spatial tool for targeting infectious disease control

    PubMed Central

    2011-01-01

    Background Geographic profiling is a statistical tool originally developed in criminology to prioritise large lists of suspects in cases of serial crime. Here, we use two data sets - one historical and one modern - to show how it can be used to locate the sources of infectious disease. Results First, we re-analyse data from a classic epidemiological study, the 1854 London cholera outbreak. Using 321 disease sites as input, we evaluate the locations of 13 neighbourhood water pumps. The Broad Street pump - the outbreak's source - ranks first, situated in the top 0.2% of the geoprofile. We extend our study with an analysis of reported malaria cases in Cairo, Egypt, using 139 disease case locations to rank 59 mosquitogenic local water sources, seven of which tested positive for the vector Anopheles sergentii. Geographic profiling ranks six of these seven sites in positions 1-6, all in the top 2% of the geoprofile. In both analyses the method outperformed other measures of spatial central tendency. Conclusions We suggest that geographic profiling could form a useful component of integrated control strategies relating to a wide variety of infectious diseases, since evidence-based targeting of interventions is more efficient, environmentally friendly and cost-effective than untargeted intervention. PMID:21592339
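
    Geographic profiling ranks candidate sources by aggregating a distance-decay function over all case locations. The sketch below uses a simple negative-exponential decay, which is only one of several decay models used in the literature; the coordinates and decay constant are invented for illustration.

```python
# Sketch: rank candidate water sources by a simple distance-decay geoprofile score.
# Coordinates are invented; real geographic profiling uses calibrated decay functions.
import math

cases = [(0.2, 0.1), (0.3, 0.2), (0.1, 0.3), (0.4, 0.1)]             # disease case locations
sources = {"pump_A": (0.25, 0.15), "pump_B": (0.9, 0.8), "pump_C": (0.5, 0.5)}

def geoprofile_score(source, cases, decay=5.0):
    """Sum of exponentially decaying contributions from every case location."""
    sx, sy = source
    return sum(math.exp(-decay * math.hypot(sx - cx, sy - cy)) for cx, cy in cases)

ranked = sorted(sources.items(), key=lambda kv: geoprofile_score(kv[1], cases), reverse=True)
for name, coords in ranked:
    print(name, round(geoprofile_score(coords, cases), 3))
```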

  16. Geographic profiling as a novel spatial tool for targeting infectious disease control.

    PubMed

    Le Comber, Steven C; Rossmo, D Kim; Hassan, Ali N; Fuller, Douglas O; Beier, John C

    2011-05-18

    Geographic profiling is a statistical tool originally developed in criminology to prioritise large lists of suspects in cases of serial crime. Here, we use two data sets--one historical and one modern--to show how it can be used to locate the sources of infectious disease. First, we re-analyse data from a classic epidemiological study, the 1854 London cholera outbreak. Using 321 disease sites as input, we evaluate the locations of 13 neighbourhood water pumps. The Broad Street pump--the outbreak's source--ranks first, situated in the top 0.2% of the geoprofile. We extend our study with an analysis of reported malaria cases in Cairo, Egypt, using 139 disease case locations to rank 59 mosquitogenic local water sources, seven of which tested positive for the vector Anopheles sergentii. Geographic profiling ranks six of these seven sites in positions 1-6, all in the top 2% of the geoprofile. In both analyses the method outperformed other measures of spatial central tendency. We suggest that geographic profiling could form a useful component of integrated control strategies relating to a wide variety of infectious diseases, since evidence-based targeting of interventions is more efficient, environmentally friendly and cost-effective than untargeted intervention.

  17. A pointing facilitation system for motor-impaired users combining polynomial smoothing and time-weighted gradient target prediction models.

    PubMed

    Blow, Nikolaus; Biswas, Pradipta

    2017-01-01

    As computers become more and more essential for everyday life, people who cannot use them are missing out on an important tool. The predominant method of interaction with a screen is the mouse, and difficulty in using a mouse can be a huge obstacle for people who would otherwise gain great value from using a computer. If mouse pointing were made easier, many users who were previously unable to use a computer efficiently might be able to do so. The present article aimed to improve pointing speeds for people with arm or hand impairments. The authors investigated different smoothing and prediction models on a stored data set involving 25 people, and the best of these algorithms were chosen. A web-based prototype was developed combining a polynomial smoothing algorithm with a time-weighted gradient target prediction model. The adapted interface gave an average improvement of 13.5% in target selection times in a 10-person study of representative users of the system. A demonstration video of the system is available at https://youtu.be/sAzbrKHivEY.
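
    The core idea — smooth the noisy cursor trajectory and extrapolate it toward the intended target — can be sketched as follows. The snippet fits a quadratic to recent cursor samples with numpy.polyfit and extrapolates one step ahead; the window size and polynomial degree are arbitrary choices, not the tuned parameters of the published system.

```python
# Sketch: polynomial smoothing of a noisy cursor trajectory and a one-step-ahead prediction.
# Window size and polynomial degree are illustrative, not the tuned values from the study.
import numpy as np

t = np.arange(20)                                    # time steps of recent cursor samples
x = 0.8 * t + np.random.default_rng(2).normal(0, 1.5, size=t.size)  # noisy x-coordinate

coeffs = np.polyfit(t, x, deg=2)                     # quadratic smoothing of the trajectory
smoothed = np.polyval(coeffs, t)
x_next = np.polyval(coeffs, t[-1] + 1)               # extrapolate towards the intended target
print("last raw x: %.2f  smoothed: %.2f  predicted next: %.2f" % (x[-1], smoothed[-1], x_next))
```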

  18. Comparison of various tool wear prediction methods during end milling of metal matrix composite

    NASA Astrophysics Data System (ADS)

    Wiciak, Martyna; Twardowski, Paweł; Wojciechowski, Szymon

    2018-02-01

    In this paper, the problem of tool wear prediction during milling of the hard-to-cut metal matrix composite Duralcan™ is presented. The conducted research involved measurements of the acceleration of vibrations during milling with constant cutting conditions and evaluation of the flank wear. Subsequently, the analysis of vibrations in the time and frequency domains, as well as the correlation of the obtained measures with the tool wear values, was conducted. The validation of tool wear diagnosis in relation to selected diagnostic measures was carried out with the use of one-variable and two-variable regression models, as well as with the application of artificial neural networks (ANN). The comparative analysis of the obtained results enable.

  19. A Business Analytics Software Tool for Monitoring and Predicting Radiology Throughput Performance.

    PubMed

    Jones, Stephen; Cournane, Seán; Sheehy, Niall; Hederman, Lucy

    2016-12-01

    Business analytics (BA) is increasingly being utilised by radiology departments to analyse and present data. It encompasses statistical analysis, forecasting and predictive modelling and is used as an umbrella term for decision support and business intelligence systems. The primary aim of this study was to determine whether utilising BA technologies could contribute towards improved decision support and resource management within radiology departments. A set of information technology requirements were identified with key stakeholders, and a prototype BA software tool was designed, developed and implemented. A qualitative evaluation of the tool was carried out through a series of semi-structured interviews with key stakeholders. Feedback was collated, and emergent themes were identified. The results indicated that BA software applications can provide visibility of radiology performance data across all time horizons. The study demonstrated that the tool could potentially assist with improving operational efficiencies and management of radiology resources.

  20. Bigger data, collaborative tools and the future of predictive drug discovery

    NASA Astrophysics Data System (ADS)

    Ekins, Sean; Clark, Alex M.; Swamidass, S. Joshua; Litterman, Nadia; Williams, Antony J.

    2014-10-01

    Over the past decade we have seen a growth in the provision of chemistry data and cheminformatics tools as either free websites or software as a service commercial offerings. These have transformed how we find molecule-related data and use such tools in our research. There have also been efforts to improve collaboration between researchers either openly or through secure transactions using commercial tools. A major challenge in the future will be how such databases and software approaches handle larger amounts of data as they accumulate from high-throughput screening while still enabling the user to draw insights, make predictions and move projects forward. We now discuss how information from some drug discovery datasets can be made more accessible and how privacy of data should not overwhelm the desire to share it at an appropriate time with collaborators. We also discuss additional software tools that could be made available and provide our thoughts on the future of predictive drug discovery in this age of big data. We use some examples from our own research on neglected diseases, collaborations, mobile apps and algorithm development to illustrate these ideas.

  1. Bigger Data, Collaborative Tools and the Future of Predictive Drug Discovery

    PubMed Central

    Clark, Alex M.; Swamidass, S. Joshua; Litterman, Nadia; Williams, Antony J.

    2014-01-01

    Over the past decade we have seen a growth in the provision of chemistry data and cheminformatics tools as either free websites or software as a service (SaaS) commercial offerings. These have transformed how we find molecule-related data and use such tools in our research. There have also been efforts to improve collaboration between researchers either openly or through secure transactions using commercial tools. A major challenge in the future will be how such databases and software approaches handle larger amounts of data as they accumulate from high-throughput screening while still enabling the user to draw insights, make predictions and move projects forward. We now discuss how information from some drug discovery datasets can be made more accessible and how privacy of data should not overwhelm the desire to share it at an appropriate time with collaborators. We also discuss additional software tools that could be made available and provide our thoughts on the future of predictive drug discovery in this age of big data. We use some examples from our own research on neglected diseases, collaborations, mobile apps and algorithm development to illustrate these ideas. PMID:24943138

  2. iPat: intelligent prediction and association tool for genomic research.

    PubMed

    Chen, Chunpeng James; Zhang, Zhiwu

    2018-06-01

    The ultimate goal of genomic research is to effectively predict phenotypes from genotypes so that medical management can improve human health and molecular breeding can increase agricultural production. Genomic prediction or selection (GS) plays a complementary role to genome-wide association studies (GWAS), which is the primary method to identify genes underlying phenotypes. Unfortunately, most computing tools cannot perform data analyses for both GWAS and GS. Furthermore, the majority of these tools are executed through a command-line interface (CLI), which requires programming skills. Non-programmers struggle to use them efficiently because of the steep learning curves and the intolerance of incorrect data formats and of mistakes when inputting keywords and parameters. To address these problems, this study developed a software package, named the Intelligent Prediction and Association Tool (iPat), with a user-friendly graphical user interface. With iPat, GWAS or GS can be performed using a pointing device to simply drag and/or click on graphical elements to specify input data files, choose input parameters and select analytical models. Models available to users include those implemented in third-party CLI packages such as GAPIT, PLINK, FarmCPU, BLINK, rrBLUP and BGLR. Users can choose any data format and conduct analyses with any of these packages. File conversions are automatically conducted for the specified input data and selected packages. A GWAS-assisted genomic prediction method was implemented to perform genomic prediction using any GWAS method such as FarmCPU. iPat was written in Java for adaptation to multiple operating systems including Windows, Mac and Linux. The iPat executable file, user manual, tutorials and example datasets are freely available at http://zzlab.net/iPat. zhiwu.zhang@wsu.edu.

  3. A collaborative environment for developing and validating predictive tools for protein biophysical characteristics

    NASA Astrophysics Data System (ADS)

    Johnston, Michael A.; Farrell, Damien; Nielsen, Jens Erik

    2012-04-01

    The exchange of information between experimentalists and theoreticians is crucial to improving the predictive ability of theoretical methods and hence our understanding of the related biology. However, many barriers exist which prevent the flow of information between the two disciplines. Enabling effective collaboration requires that experimentalists can easily apply computational tools to their data and share their data with theoreticians, and that both the experimental data and computational results are accessible to the wider community. We present a prototype collaborative environment for developing and validating predictive tools for protein biophysical characteristics. The environment is built on two central components: a new Python-based integration module which allows theoreticians to provide and manage remote access to their programs, and PEATDB, a program for storing and sharing experimental data from protein biophysical characterisation studies. We demonstrate our approach by integrating PEATSA, a web-based service for predicting changes in protein biophysical characteristics, into PEATDB. Furthermore, we illustrate how the resulting environment aids method development using the Potapov dataset of experimentally measured ΔΔGfold values, previously employed to validate and train protein stability prediction algorithms.

  4. Risk determination after an acute myocardial infarction: review of 3 clinical risk prediction tools.

    PubMed

    Scruth, Elizabeth Ann; Page, Karen; Cheng, Eugene; Campbell, Michelle; Worrall-Carter, Linda

    2012-01-01

    The objective of the study was to provide comprehensive information for the clinical nurse specialist (CNS) on commonly used clinical prediction (risk assessment) tools used to estimate the risk of a secondary cardiac or noncardiac event and mortality in patients undergoing primary percutaneous coronary intervention (PCI) for ST-elevation myocardial infarction (STEMI). The evolution and widespread adoption of primary PCI represent major advances in the treatment of acute myocardial infarction, specifically STEMI. The American College of Cardiology and the American Heart Association have recommended early risk stratification for patients presenting with acute coronary syndromes using several clinical risk scores to identify patients' mortality and secondary event risk after PCI. Clinical nurse specialists are integral to any performance improvement strategy. Their knowledge and understanding of clinical prediction tools will be essential in carrying out important assessments, identifying and managing risk in patients who have sustained a STEMI, and enhancing discharge education, including counseling on medications and lifestyle changes. Over the past 2 decades, risk scores have been developed from clinical trials to facilitate risk assessment. There are several risk scores that can be used to determine in-hospital and short-term survival. This article critiques the most common tools: the Thrombolysis in Myocardial Infarction risk score, the Global Registry of Acute Coronary Events risk score, and the Controlled Abciximab and Device Investigation to Lower Late Angioplasty Complications risk score. The importance of incorporating risk screening assessment tools (that are important for clinical prediction models) to guide therapeutic management of patients cannot be underestimated. The ability to forecast secondary risk after a STEMI will assist in determining which patients would require the most aggressive level of treatment and monitoring postintervention including

  5. Recommendation Techniques for Drug-Target Interaction Prediction and Drug Repositioning.

    PubMed

    Alaimo, Salvatore; Giugno, Rosalba; Pulvirenti, Alfredo

    2016-01-01

    The usage of computational methods in drug discovery is common practice. More recently, by exploiting the wealth of biological knowledge bases, a novel approach called drug repositioning has emerged. Several computational methods are available, and these attempt a high-level integration of all available knowledge in order to discover unknown mechanisms. In this chapter, we review drug-target interaction prediction methods based on recommendation systems. We also describe some extensions which go beyond the bipartite network case.

  6. Community-based oral health promotion practices targeted at children and adolescents in Finland--developing an assessment tool.

    PubMed

    Blomqvist, Pia; Ojala, Ellinoora; Kettunen, Tarja; Poskiparta, Marita; Kasila, Kirsti

    2014-06-01

    To develop an assessment tool for evaluating oral health promotion practices and to evaluate community-based oral health promotion practices targeted at children and adolescents with this tool. A theoretical framework about health promotion planning, implementation and evaluation was made on the basis of a literature review. Then, information about Finnish community-based oral health promotion practices (n=12) targeted at children and adolescents was collected using semi-structured interviews. Also, related documents, for example action plans and reports, were collected when available. Next, an assessment tool based on the theoretical framework was developed, and the recorded and transcribed interview data and other documents were evaluated with this tool. The assessment tool proved to be practical: it pointed out the strengths and weaknesses of the practices. The tool revealed strengths in the implementation and deficiencies in the planning and evaluation of oral health promotion practices. One-quarter of the 12 practices assessed could be considered 'good practices'. There is a need to improve the planning and evaluation of oral health promotion practices. The assessment tool developed in this study might be useful for practitioners both in the field of oral health promotion and general health promotion. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Biodiversity in environmental assessment-current practice and tools for prediction

    SciT

    Gontier, Mikael; Balfors, Berit; Moertberg, Ulla

    Habitat loss and fragmentation are major threats to biodiversity. Environmental impact assessment and strategic environmental assessment are essential instruments used in physical planning to address such problems. Yet there are no well-developed methods for quantifying and predicting impacts of fragmentation on biodiversity. In this study, a literature review was conducted on GIS-based ecological models that have potential as prediction tools for biodiversity assessment. Further, a review of environmental impact statements for road and railway projects from four European countries was performed, to study how impact prediction concerning biodiversity issues was addressed. The results of the study showed the existing gap between research in GIS-based ecological modelling and current practice in biodiversity assessment within environmental assessment.

  8. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth
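
    One simple way to combine the outputs of several target prediction methods is rank aggregation. The sketch below averages the per-method ranks of candidate genes; the three score lists are hypothetical method outputs, and this is a simplification for illustration, not the Pearson+IDA+Lasso ensemble evaluated in the paper.

```python
# Sketch: combine per-method target rankings into an ensemble ranking by averaging ranks.
# The three score lists are hypothetical outputs of individual prediction methods.
from scipy.stats import rankdata

genes = ["GENE1", "GENE2", "GENE3", "GENE4"]
method_scores = {
    "correlation": [0.9, 0.2, 0.6, 0.4],
    "causal":      [0.7, 0.1, 0.8, 0.3],
    "regression":  [0.8, 0.3, 0.5, 0.6],
}

# Rank within each method (higher score = better rank), then average across methods.
avg_rank = sum(rankdata([-s for s in scores]) for scores in method_scores.values()) / len(method_scores)
ensemble = sorted(zip(genes, avg_rank), key=lambda kv: kv[1])
print(ensemble)   # genes ordered by ensemble rank (lower = more likely target)
```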

  9. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and

  10. Wing Leading Edge RCC Rapid Response Damage Prediction Tool (IMPACT2)

    NASA Technical Reports Server (NTRS)

    Clark, Robert; Cottter, Paul; Michalopoulos, Constantine

    2013-01-01

    This rapid response computer program predicts Orbiter Wing Leading Edge (WLE) damage caused by ice or foam impact during a Space Shuttle launch (Program "IMPACT2"). The program was developed after the Columbia accident in order to quickly assess WLE damage due to ice, foam, or metal impact (if any) during a Shuttle launch. IMPACT2 simulates an impact event in a few minutes for foam impactors, and in seconds for ice and metal impactors. The damage criterion is derived from results obtained from one sophisticated commercial program, which requires hours to carry out simulations of the same impact events. The program was designed to run much faster than the commercial program, with prediction of projectile threshold velocities within 10 to 15% of commercial-program values. The mathematical model involves coupling of Orbiter wing normal modes of vibration to nonlinear or linear spring-mass models. IMPACT2 solves nonlinear or linear impact problems using classical normal modes of vibration of a target, and nonlinear/linear time-domain equations for the projectile. Impact loads and stresses developed in the target are computed as functions of time. This model is novel because of its speed of execution. A typical model of foam, or another projectile characterized by material nonlinearities, impacting an RCC panel is executed in minutes instead of the hours needed by the commercial programs. Target damage due to impact can be assessed quickly, provided that target vibration modes and allowable stress are known.

  11. FDA approved drugs complexed to their targets: evaluating pose prediction accuracy of docking protocols.

    PubMed

    Bohari, Mohammed H; Sastry, G Narahari

    2012-09-01

    Efficient drug discovery programs can be designed by utilizing existing pools of knowledge from already approved drugs. This can be achieved in one way by repositioning drugs approved for some indications to newer indications. The complex of a drug with its target gives fundamental insight into molecular recognition and a clear understanding of the putative binding site. Five popular docking protocols, Glide, Gold, FlexX, Cdocker and LigandFit, have been evaluated on a dataset of 199 FDA approved drug-target complexes for their accuracy in predicting the experimental pose. Performance for all the protocols is assessed at default settings, with a root mean square deviation (RMSD) between the experimental ligand pose and the docked pose of less than 2.0 Å as the success criterion in predicting the pose. Glide (38.7%) is found to be the most accurate for the top-ranked pose and Cdocker (58.8%) for the top RMSD pose. Ligand flexibility is a major bottleneck in the failure of docking protocols to correctly predict the pose. Resolution of the crystal structure shows an inverse relationship with the performance of the docking protocols. All the protocols perform optimally when a balanced type of hydrophilic and hydrophobic interaction or a dominant hydrophilic interaction exists. Overall, in 16 different target classes, hydrophobic interactions dominate in the binding site, and maximum success is achieved for all the docking protocols in the nuclear hormone receptor class, while performance for the rest of the classes varied based on the individual protocol.
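
    The success criterion used above is an RMSD below 2.0 Å between the docked and crystallographic poses. A minimal sketch of that computation is shown below; the coordinates are invented, and a real comparison additionally requires consistent atom ordering and, for symmetric ligands, symmetry-corrected matching.

```python
# Sketch: RMSD between a docked pose and the crystal pose over matched heavy atoms.
# Coordinates are invented; a real comparison requires consistent atom ordering
# and, for symmetric ligands, symmetry-corrected matching.
import numpy as np

crystal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.4, 0.0]])
docked  = np.array([[0.2, 0.1, 0.0], [1.6, 0.2, 0.1], [1.4, 1.6, 0.2]])

rmsd = np.sqrt(np.mean(np.sum((crystal - docked) ** 2, axis=1)))
print(f"RMSD = {rmsd:.2f} Å  ->  {'success' if rmsd < 2.0 else 'failure'}")
```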

  12. SELF-BLM: Prediction of drug-target interactions via self-training SVM.

    PubMed

    Keum, Jongsoo; Nam, Hojung

    2017-01-01

    Predicting drug-target interactions is important for the development of novel drugs and the repositioning of drugs. To predict such interactions, there are a number of methods based on drug and target protein similarity. Although these methods, such as the bipartite local model (BLM), show promise, they often categorize unknown interactions as negative interactions. Therefore, these methods are not ideal for finding potential drug-target interactions that have not yet been validated as positive interactions. Thus, here we propose a method that integrates machine learning techniques, such as self-training support vector machine (SVM) and BLM, to develop a self-training bipartite local model (SELF-BLM) that facilitates the identification of potential interactions. The method first categorizes unknown interactions into unlabeled interactions and negative interactions using a clustering method. Then, using the BLM method and self-training SVM, the unlabeled interactions are self-trained and final local classification models are constructed. When applied to four classes of proteins that include enzymes, G-protein coupled receptors (GPCRs), ion channels, and nuclear receptors, SELF-BLM showed the best performance for predicting not only known interactions but also potential interactions in three protein classes compared to other related studies. The implemented software and supporting data are available at https://github.com/GIST-CSBL/SELF-BLM.
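
    The self-training idea can be sketched with scikit-learn's SelfTrainingClassifier wrapped around an SVC, where unknown pairs carry the label -1. This is a simplified stand-in on assumed synthetic data, not the published SELF-BLM implementation, which additionally separates unlabeled from negative pairs by clustering and builds per-target local models.

```python
# Sketch: self-training SVM on partially labeled interaction data.
# Unlabeled pairs are marked with -1, as required by SelfTrainingClassifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))                 # placeholder drug-target pair features
y = (X[:, 0] > 0).astype(int)                  # synthetic interaction labels
y_partial = y.copy()
y_partial[rng.random(300) < 0.7] = -1          # hide 70% of labels to mimic unknown pairs

base = SVC(probability=True, kernel="rbf")
model = SelfTrainingClassifier(base, threshold=0.8)
model.fit(X, y_partial)
print("labels assigned during self-training:", int((model.transduction_ != -1).sum()))
```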

  13. Predicting Drug-Target Interaction Networks Based on Functional Groups and Biological Features

    PubMed Central

    Shi, Xiao-He; Hu, Le-Le; Kong, Xiangyin; Cai, Yu-Dong; Chou, Kuo-Chen

    2010-01-01

    Background Study of drug-target interaction networks is an important topic for drug development. It is both time-consuming and costly to determine compound-protein interactions or potential drug-target interactions by experiments alone. As a complement, in silico prediction methods can provide us with very useful information in a timely manner. Methods/Principal Findings To realize this, drug compounds are encoded with functional groups and proteins are encoded by biological features including biochemical and physicochemical properties. The optimal feature selection procedures are adopted by means of the mRMR (Maximum Relevance Minimum Redundancy) method. Instead of classifying the proteins as a whole family, target proteins are divided into four groups: enzymes, ion channels, G-protein-coupled receptors and nuclear receptors. Thus, four independent predictors are established using the Nearest Neighbor algorithm as their operation engine, with each to predict the interactions between drugs and one of the four protein groups. As a result, the overall success rates by the jackknife cross-validation tests achieved with the four predictors are 85.48%, 80.78%, 78.49%, and 85.66%, respectively. Conclusion/Significance Our results indicate that the network prediction system thus established is quite promising and encouraging. PMID:20300175

  14. Gestational Diabetes Mellitus Risk score: A practical tool to predict Gestational Diabetes Mellitus risk in Tanzania.

    PubMed

    Patrick Nombo, Anna; Wendelin Mwanri, Akwilina; Brouwer-Brolsma, Elske M; Ramaiya, Kaushik L; Feskens, Edith

    2018-05-28

    Universal screening for hyperglycemia during pregnancy may be impractical in resource-constrained countries. Therefore, the aim of this study was to develop a simple, non-invasive practical tool to predict undiagnosed gestational diabetes mellitus (GDM) in Tanzania. We used cross-sectional data from 609 pregnant women, without known diabetes, collected in six health facilities in Dar es Salaam city (urban). Women underwent screening for GDM during antenatal clinic visits. Smoking habit, alcohol consumption, pre-existing hypertension, birth weight of the previous child, high parity, gravida, previous caesarean section, age, MUAC ≥28 cm, previous stillbirth, haemoglobin level, gestational age (weeks), family history of type 2 diabetes, intake of sweetened drinks (soda), physical activity, and vegetable and fruit consumption were considered as potential predictors of GDM. Multivariate logistic regression modelling was used to create the prediction model, using a cut-off value of 2.5 to minimise the number of undiagnosed GDM cases (false negatives). Mid-upper arm circumference (MUAC) ≥28 cm, previous stillbirth, and family history of type 2 diabetes were identified as significant risk factors for GDM, with a sensitivity, specificity, positive predictive value, and negative predictive value of 69%, 53%, 12% and 95%, respectively. Moreover, the inclusion of these three predictors resulted in an area under the curve (AUC) of 0.64 (0.56-0.72), indicating that the current tool correctly classifies 64% of high-risk individuals. The findings of this study indicate that MUAC, previous stillbirth, and family history of type 2 diabetes significantly predict GDM development in this Tanzanian population. However, the developed non-invasive practical tool to predict undiagnosed GDM only identified 6 out of 10 individuals at risk of developing GDM. Thus, further development of the tool is warranted, for instance by testing the impact of other known risk factors such as maternal age

  15. An Empiric HIV Risk Scoring Tool to Predict HIV-1 Acquisition in African Women.

    PubMed

    Balkus, Jennifer E; Brown, Elizabeth; Palanee, Thesla; Nair, Gonasagrie; Gafoor, Zakir; Zhang, Jingyang; Richardson, Barbra A; Chirenje, Zvavahera M; Marrazzo, Jeanne M; Baeten, Jared M

    2016-07-01

    To develop and validate an HIV risk assessment tool to predict HIV acquisition among African women. Data were analyzed from 3 randomized trials of biomedical HIV prevention interventions among African women (VOICE, HPTN 035, and FEM-PrEP). We implemented standard methods for the development of clinical prediction rules to generate a risk-scoring tool to predict HIV acquisition over the course of 1 year. Performance of the score was assessed through internal and external validations. The final risk score resulting from multivariable modeling included age, married/living with a partner, partner provides financial or material support, partner has other partners, alcohol use, detection of a curable sexually transmitted infection, and herpes simplex virus 2 serostatus. Point values for each factor ranged from 0 to 2, with a maximum possible total score of 11. Scores ≥5 were associated with HIV incidence >5 per 100 person-years and identified 91% of incident HIV infections from among only 64% of women. The area under the curve (AUC) for predictive ability of the score was 0.71 (95% confidence interval [CI]: 0.68 to 0.74), indicating good predictive ability. Risk score performance was generally similar with internal cross-validation (AUC = 0.69; 95% CI: 0.66 to 0.73) and external validation in HPTN 035 (AUC = 0.70; 95% CI: 0.65 to 0.75) and FEM-PrEP (AUC = 0.58; 95% CI: 0.51 to 0.65). A discrete set of characteristics that can be easily assessed in clinical and research settings was predictive of HIV acquisition over 1 year. The use of a validated risk score could improve efficiency of recruitment into HIV prevention research and inform scale-up of HIV prevention strategies in women at highest risk.
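
    The scoring itself is simple additive arithmetic: each factor contributes 0-2 points, and a total of 5 or more flags high risk. The sketch below shows that calculation; the per-factor point values are placeholders, since the abstract does not list the published weights.

```python
# Sketch: summing a simple additive risk score and applying the >=5 threshold.
# The individual point values are illustrative placeholders, not the published weights.
def hiv_risk_score(points_by_factor):
    """points_by_factor maps each risk factor to its assigned points (0-2)."""
    return sum(points_by_factor.values())

example = {
    "age_group": 2, "not_married_or_cohabiting": 1, "partner_support": 0,
    "partner_has_other_partners": 1, "alcohol_use": 1, "curable_sti": 0, "hsv2_positive": 1,
}
score = hiv_risk_score(example)
print(score, "-> high risk" if score >= 5 else "-> lower risk")
```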

  16. Identification of HMX1 target genes: A predictive promoter model approach

    PubMed Central

    Boulling, Arnaud; Wicht, Linda

    2013-01-01

    Purpose A homozygous mutation in the H6 family homeobox 1 (HMX1) gene is responsible for a new oculoauricular defect leading to eye and auricular developmental abnormalities as well as early retinal degeneration (MIM 612109). However, the HMX1 pathway remains poorly understood, and in the first approach to better understand the pathway’s function, we sought to identify the target genes. Methods We developed a predictive promoter model (PPM) approach using a comparative transcriptomic analysis in the retina at P15 of a mouse model lacking functional Hmx1 (dmbo mouse) and its respective wild-type. This PPM was based on the hypothesis that HMX1 binding site (HMX1-BS) clusters should be more represented in promoters of HMX1 target genes. The most differentially expressed genes in the microarray experiment that contained HMX1-BS clusters were used to generate the PPM, which was then statistically validated. Finally, we developed two genome-wide target prediction methods: one that focused on conserving PPM features in human and mouse and one that was based on the co-occurrence of HMX1-BS pairs fitting the PPM, in human or in mouse, independently. Results The PPM construction revealed that sarcoglycan, gamma (35kDa dystrophin-associated glycoprotein) (Sgcg), teashirt zinc finger homeobox 2 (Tshz2), and solute carrier family 6 (neurotransmitter transporter, glycine) (Slc6a9) genes represented Hmx1 targets in the mouse retina at P15. Moreover, the genome-wide target prediction revealed that mouse genes belonging to the retinal axon guidance pathway were targeted by Hmx1. Expression of these three genes was experimentally validated using a quantitative reverse transcription PCR approach. The inhibitory activity of Hmx1 on Sgcg, as well as protein tyrosine phosphatase, receptor type, O (Ptpro) and Sema3f, two targets identified by the PPM, were validated with luciferase assay. Conclusions Gene expression analysis between wild-type and dmbo mice allowed us to develop a PPM

  17. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction.

    PubMed

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. However, it cannot capture the higher-order statistical properties of natural images, so its detection performance is not satisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, to date no researchers have explicitly proposed a kernel FKT (KFKT) or investigated its detection performance. To accurately detect potential small targets in infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.
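
    The tracking half of the framework relies on Kalman prediction of the target's next image-plane position. The sketch below implements a generic constant-velocity Kalman predict/update cycle in numpy; the noise covariances, initial state and measurements are assumed values, not parameters from the paper.

```python
# Sketch: constant-velocity Kalman filter predicting the next image-plane position
# of a small target. State = [x, y, vx, vy]; noise covariances are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)      # process noise (assumed)
R = 1.0 * np.eye(2)       # measurement noise (assumed)

x = np.array([10.0, 20.0, 1.0, 0.5])   # initial state estimate
P = np.eye(4)

for z in [np.array([11.2, 20.4]), np.array([12.1, 21.1]), np.array([13.0, 21.4])]:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detected position from the current frame
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("predicted next position:", (F @ x)[:2])
```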

  18. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction

    NASA Astrophysics Data System (ADS)

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. However, it cannot capture the higher-order statistical properties of natural images, so its detection performance is not satisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, to date no researchers have explicitly proposed a kernel FKT (KFKT) or investigated its detection performance. To accurately detect potential small targets in infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.

  19. New tools for targeted disruption of cholinergic synaptic transmission in Drosophila melanogaster.

    PubMed

    Mejia, Monica; Heghinian, Mari D; Marí, Frank; Godenschwege, Tanja A

    2013-01-01

    Nicotinic acetylcholine receptors (nAChRs) are pentameric ligand-gated ion channels. The α7 subtype of nAChRs is involved in neurological pathologies such as Parkinson's disease, Alzheimer's disease, addiction, epilepsy and autism spectrum disorders. The Drosophila melanogaster α7 (Dα7) has the closest sequence homology to the vertebrate α7 subunit and it can form homopentameric receptors just like its vertebrate counterpart. The Dα7 subunits are essential for the function of the Giant Fiber circuit, which mediates the escape response of the fly. To further characterize the receptor function, we generated different missense mutations in the Dα7 nAChR's ligand binding domain. We characterized the effects of targeted expression of two UAS-constructs carrying a single mutation, D197A and Y195T, as well as a UAS-construct carrying a triple D77T, L117Q, I196P mutation in a Dα7 null mutant and in a wild type background. Expression of the triple mutation was able to restore the function of the circuit in Dα7 null mutants and had no disruptive effects when expressed in wild type. In contrast, both single mutations severely disrupted the synaptic transmission of Dα7-dependent but not glutamatergic or gap junction dependent synapses in wild type background, and did not or only partially rescued the synaptic defects of the null mutant. These observations are consistent with the formation of hybrid receptors, consisting of D197A or Y195T subunits and wild type Dα7 subunits, in which the binding of acetylcholine or acetylcholine-induced conformational changes of the Dα7 receptor are altered, causing inhibition of cholinergic responses. Thus targeted expression of D197A or Y195T can be used to selectively disrupt synaptic transmission of Dα7-dependent synapses in neuronal circuits. Hence, these constructs can be used as tools to study learning and memory or addiction associated behaviors by allowing the manipulation of neuronal processing in the circuits without

  20. New Tools for Targeted Disruption of Cholinergic Synaptic Transmission in Drosophila melanogaster

    PubMed Central

    Mejia, Monica; Heghinian, Mari D.; Marí, Frank; Godenschwege, Tanja A.

    2013-01-01

    Nicotinic acetylcholine receptors (nAChRs) are pentameric ligand-gated ion channels. The α7 subtype of nAChRs is involved in neurological pathologies such as Parkinson’s disease, Alzheimer’s disease, addiction, epilepsy and autism spectrum disorders. The Drosophila melanogaster α7 (Dα7) has the closest sequence homology to the vertebrate α7 subunit and it can form homopentameric receptors just like its vertebrate counterpart. The Dα7 subunits are essential for the function of the Giant Fiber circuit, which mediates the escape response of the fly. To further characterize the receptor function, we generated different missense mutations in the Dα7 nAChR’s ligand binding domain. We characterized the effects of targeted expression of two UAS-constructs carrying a single mutation, D197A and Y195T, as well as a UAS-construct carrying a triple D77T, L117Q, I196P mutation in a Dα7 null mutant and in a wild type background. Expression of the triple mutation was able to restore the function of the circuit in Dα7 null mutants and had no disruptive effects when expressed in wild type. In contrast, both single mutations severely disrupted the synaptic transmission of Dα7-dependent but not glutamatergic or gap junction dependent synapses in wild type background, and did not or only partially rescued the synaptic defects of the null mutant. These observations are consistent with the formation of hybrid receptors, consisting of D197A or Y195T subunits and wild type Dα7 subunits, in which the binding of acetylcholine or acetylcholine-induced conformational changes of the Dα7 receptor are altered, causing inhibition of cholinergic responses. Thus targeted expression of D197A or Y195T can be used to selectively disrupt synaptic transmission of Dα7-dependent synapses in neuronal circuits. Hence, these constructs can be used as tools to study learning and memory or addiction associated behaviors by allowing the manipulation of neuronal processing in the circuits

  1. Target Fortification of Breast Milk: Predicting the Final Osmolality of the Feeds

    PubMed Central

    Choi, Arum; Fusch, Gerhard; Rochow, Niels; Fusch, Christoph

    2016-01-01

    For preterm infants, it is common practice to add human milk fortifiers to native breast milk to enhance protein and calorie supply, because the growth rates and nutritional requirements of preterm infants are considerably higher than those of term infants. However, macronutrient intake may still be inadequate because the composition of native breast milk shows individual inter- and intra-sample variation. Target fortification (TFO) of breast milk is a new nutritional regime aiming to reduce such variation by individually measuring and adding deficient macronutrients. Added TFO components contribute to the final osmolality of milk feeds. It is important to predict the final osmolality of TFO breast milk to ensure current osmolality recommendations are followed, to minimize feeding intolerance and necrotizing enterocolitis. This study aims to develop and validate equations to predict the osmolality of TFO milk batches. To establish prediction models, the osmolalities of either native or supplemented breast milk with known amounts of fat, protein, and carbohydrates were analyzed. To validate the prediction models, the osmolalities of each macronutrient and combinations of macronutrients were measured in an independent sample set. Additionally, osmolality was measured in TFO milk samples obtained from a previous clinical study and compared with the osmolality predicted using the prediction equations. Following the addition of 1 g of carbohydrates (glucose polymer), 1 g of hydrolyzed protein, or 1 g of whey protein per 100 mL breast milk, the average increase in osmolality was 20, 38, and 4 mOsm/kg, respectively. Adding fat decreased osmolality only marginally due to a dilution effect. Measured and predicted osmolalities of combinations of macronutrients, as well as of single macronutrients (R2 = 0.93), were highly correlated. Using clinical data (n = 696), the average difference between the measured and predicted osmolality was 3 ± 11 mOsm/kg and was not statistically significant. In
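
    The reported increments (roughly 20, 38 and 4 mOsm/kg per gram of glucose polymer, hydrolyzed protein and whey protein added per 100 mL) suggest a simple linear prediction, sketched below. The 290 mOsm/kg baseline osmolality is an assumed value for illustration, and the small dilution effect of added fat is ignored.

```python
# Sketch: linear prediction of fortified-milk osmolality from added macronutrients.
# Increments per gram/100 mL follow the abstract; the 290 mOsm/kg baseline is assumed.
INCREMENT = {"glucose_polymer": 20.0, "hydrolyzed_protein": 38.0, "whey_protein": 4.0}

def predict_osmolality(baseline_mosm_per_kg, grams_added_per_100ml):
    added = sum(INCREMENT[k] * g for k, g in grams_added_per_100ml.items())
    return baseline_mosm_per_kg + added

print(predict_osmolality(290.0, {"glucose_polymer": 1.0, "hydrolyzed_protein": 0.5}))
# -> 290 + 20*1.0 + 38*0.5 = 329 mOsm/kg
```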

  2. Changes in predictive cuing modulate the hemispheric distribution of the P1 inhibitory response to attentional targets.

    PubMed

    Lasaponara, Stefano; D' Onofrio, Marianna; Dragone, Alessio; Pinto, Mario; Caratelli, Ludovica; Doricchi, Fabrizio

    2017-05-01

    Brain activity related to orienting of attention with spatial cues and brain responses to attentional targets are influenced by the probabilistic contingency between cues and targets. Compared to predictive cues, cues that predict the location of targets at chance level reduce the filtering out of uncued locations and the costs of reorienting attention to targets presented at these locations. Slagter et al. (2016) have recently suggested that the larger target-related P1 component that is found in the hemisphere ipsilateral to validly cued targets reflects stimulus-driven inhibition in the processing of the unstimulated side of space contralateral to the same hemisphere. Here we verified whether the strength of this inhibition and the amplitude of the corresponding P1 wave are modulated by the probabilistic link between cues and targets. Healthy participants performed a task of endogenous orienting, once with predictive and once with non-predictive directional cues. In the non-predictive condition we observed a drop in the amplitude of the P1 ipsilateral to the target and in the costs of reorienting. No change in the inter-hemispheric latencies of the P1 was found between the two predictive conditions. The N1 facilitatory component was unaffected by predictive cuing. These results show that the predictive context modulates the strength of the inhibitory P1 response and that this modulation is not matched by changes in the inter-hemispheric interaction between the P1 generators of the two hemispheres. Copyright © 2017. Published by Elsevier Ltd.

  3. Burridge-Knopoff Model as an Educational and Demonstrational Tool in Seismicity Prediction

    NASA Astrophysics Data System (ADS)

    Kato, M.

    2007-12-01

    While our efforts are ongoing, the fact that predicting destructive earthquakes is not straightforward is hard to convey to the general public. Japan is prone to two types of destructive earthquakes: interplate events along the Japan Trench and Nankai Trough, and intraplate events that often occur beneath megacities. The periodicity of interplate earthquakes is usually explained by the elastic rebound theory, but we are aware that the historical seismicity along the Nankai Trough is not simply periodic. Inland intraplate events have geologically postulated recurrence intervals far longer than a human lifetime, and we do not have ample knowledge to model their behavior, which includes interaction among intraplate and interplate events. To demonstrate that the accumulation and release of elastic energy is complex even in a simple system, we propose to utilize the Burridge-Knopoff (BK) model as a demonstrational tool. This original one-dimensional model is easy to construct and handle, making it an effective educational tool for classroom use as well. Our simulator is a simple realization of the original one-dimensional BK model, which consists of small blocks, springs and a motor. Accumulation and release of strain is visibly observable, and by guessing when the next large events occur we learn intuitively that observation of strain accumulation is only one element in predicting large events. Quantitative analysis of the system is also possible by measuring the movement of the blocks. While the long-term average of strain energy is controlled by the loading rate, the observed seismicity is neither time-predictable nor slip-predictable. The time between successive events is never constant. The distribution of released energy obeys a power law, similar to the Ishimoto-Iida and Gutenberg-Richter laws. This tool is also useful for demonstrating the nonlinear behavior of a complex system.
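
    The stick-slip behaviour that the demonstration aims to convey can also be sketched numerically. The snippet below is a highly simplified, quasi-static cellular-automaton analogue of the 1-D Burridge-Knopoff model (closer to the Olami-Feder-Christensen variant than to the original spring-block equations); all parameters are arbitrary.

```python
# Sketch: a highly simplified, quasi-static 1-D Burridge-Knopoff-style stick-slip model.
# Blocks accumulate stress from slow loading; a block slips when its stress exceeds a
# threshold and passes part of the released stress to its neighbours (cascade = event).
import numpy as np

rng = np.random.default_rng(4)
n_blocks, threshold, alpha = 50, 1.0, 0.4        # alpha = fraction passed to each neighbour
stress = rng.uniform(0, threshold, n_blocks)

event_sizes = []
for _ in range(5000):
    stress += 1e-3                               # uniform slow tectonic loading
    size = 0
    while (over := np.where(stress >= threshold)[0]).size:   # cascade of slips
        for i in over:
            release = stress[i]
            stress[i] = 0.0
            if i > 0: stress[i - 1] += alpha * release
            if i < n_blocks - 1: stress[i + 1] += alpha * release
            size += 1
    if size:
        event_sizes.append(size)

print("number of events:", len(event_sizes), " largest event:", max(event_sizes))
```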

  4. Inflammation-driven malnutrition: a new screening tool predicts outcome in Crohn's disease.

    PubMed

    Jansen, Irene; Prager, Matthias; Valentini, Luzia; Büning, Carsten

    2016-09-01

    Malnutrition is a frequent feature in Crohn's disease (CD), affects patient outcome and must be recognised. For chronic inflammatory diseases, recent guidelines recommend the development of combined malnutrition and inflammation risk scores. We aimed to design and evaluate a new screening tool that combines both malnutrition and inflammation parameters that might help predict clinical outcome. In a prospective cohort study, we examined fifty-five patients with CD in remission (Crohn's disease activity index (CDAI) <200) at 0 and 6 months. We assessed disease activity (CDAI, Harvey-Bradshaw index), inflammation (C-reactive protein (CRP), faecal calprotectin (FC)), malnutrition (BMI, subjective global assessment (SGA), serum albumin, handgrip strength), body composition (bioelectrical impedance analysis) and administered the newly developed 'Malnutrition Inflammation Risk Tool' (MIRT; containing BMI, unintentional weight loss over 3 months and CRP). All parameters were evaluated regarding their ability to predict disease outcome prospectively at 6 months. At baseline, more than one-third of patients showed elevated inflammatory markers despite clinical remission (36·4 % CRP ≥5 mg/l, 41·5 % FC ≥100 µg/g). Prevalence of malnutrition at baseline according to BMI, SGA and serum albumin was 2-16 %. At 6 months, MIRT significantly predicted outcome in numerous nutritional and clinical parameters (SGA, CD-related flares, hospitalisations and surgeries). In contrast, SGA, handgrip strength, BMI, albumin and body composition had no influence on the clinical course. The newly developed MIRT was found to reliably predict clinical outcome in CD patients. This screening tool might be used to facilitate clinical decision making, including treatment of both inflammation and malnutrition in order to prevent complications.

  5. Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties.

    PubMed

    Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean

    2010-11-01

    Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of Smiles Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predicted value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
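
    The categorical models above are compared via Cohen's kappa, sensitivity, specificity and positive predicted value. As a minimal sketch, these statistics can be computed from a 2x2 confusion matrix as follows; the counts are invented, not the paper's metabolic-stability results.

```python
# Sketch: computing kappa, sensitivity, specificity and PPV from a 2x2 confusion matrix.
# The counts are invented, not the paper's metabolic-stability results.
tp, fp, fn, tn = 120, 70, 90, 720
n = tp + fp + fn + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)

observed_agreement = (tp + tn) / n
expected_agreement = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"kappa={kappa:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} PPV={ppv:.2f}")
```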

  6. A novel multi-target regression framework for time-series prediction of drug efficacy

    PubMed Central

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-01

    Extracting knowledge from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special because of their small sample size and high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescriptions. The main purpose of our study is to characterise the correlations within a TCM prescription. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values at different time points and add the targets of previous time points as features to predict the value at the current time. Several experiments were conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly demonstrate the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework shows the best performance and appears to be more suitable for this task. PMID:28098186
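
    The core idea of the framework, as described above, is to append the target values from earlier time points to the feature vector used for the current time point and to score models with leave-one-out cross-validation. The sketch below illustrates that bookkeeping with scikit-learn's SVR on synthetic data; the data shapes, kernel settings and chaining of test-time predictions are assumptions for illustration, not the authors' TCM dataset or tuned pipeline.

```python
# Hedged sketch of the lagged-target idea described above: the target value
# from the previous time point is appended to the feature vector used to
# predict the current time point, and models are scored with leave-one-out
# cross-validation. Data shapes, the SVR settings and the chaining of test
# predictions are illustrative assumptions, not the authors' TCM dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_samples, n_features, n_times = 20, 5, 4
X = rng.normal(size=(n_samples, n_features))   # baseline covariates
Y = rng.normal(size=(n_samples, n_times))      # efficacy at times t1..t4

errors = []
for train, test in LeaveOneOut().split(X):
    preds = np.zeros((len(test), n_times))
    prev_train = np.zeros(len(train))          # no earlier target at t1
    prev_test = np.zeros(len(test))
    for t in range(n_times):
        Xt_train = np.column_stack([X[train], prev_train])
        Xt_test = np.column_stack([X[test], prev_test])
        model = SVR(kernel="rbf", C=1.0).fit(Xt_train, Y[train, t])
        preds[:, t] = model.predict(Xt_test)
        prev_train, prev_test = Y[train, t], preds[:, t]   # chain forward
    errors.append(np.abs(preds - Y[test]).mean())

print("leave-one-out mean absolute error:", round(float(np.mean(errors)), 3))
```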

  7. A novel multi-target regression framework for time-series prediction of drug efficacy.

    PubMed

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-18

    Extracting knowledge from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special because of their small sample size and high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescriptions. The main purpose of our study is to characterise the correlations within a TCM prescription. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values at different time points and add the targets of previous time points as features to predict the value at the current time. Several experiments were conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly demonstrate the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework shows the best performance and appears to be more suitable for this task.

  8. Predicting complication risk in spine surgery: a prospective analysis of a novel risk assessment tool.

    PubMed

    Veeravagu, Anand; Li, Amy; Swinney, Christian; Tian, Lu; Moraff, Adrienne; Azad, Tej D; Cheng, Ivan; Alamin, Todd; Hu, Serena S; Anderson, Robert L; Shuer, Lawrence; Desai, Atman; Park, Jon; Olshen, Richard A; Ratliff, John K

    2017-07-01

    OBJECTIVE The ability to assess the risk of adverse events based on known patient factors and comorbidities would provide more effective preoperative risk stratification. Present risk assessment in spine surgery is limited. An adverse event prediction tool was developed to predict the risk of complications after spine surgery and tested on a prospective patient cohort. METHODS The spinal Risk Assessment Tool (RAT), a novel instrument for the assessment of risk for patients undergoing spine surgery that was developed based on an administrative claims database, was prospectively applied to 246 patients undergoing 257 spinal procedures over a 3-month period. Prospectively collected data were used to compare the RAT to the Charlson Comorbidity Index (CCI) and the American College of Surgeons National Surgery Quality Improvement Program (ACS NSQIP) Surgical Risk Calculator. Study end point was occurrence and type of complication after spine surgery. RESULTS The authors identified 69 patients (73 procedures) who experienced a complication over the prospective study period. Cardiac complications were most common (10.2%). Receiver operating characteristic (ROC) curves were calculated to compare complication outcomes using the different assessment tools. Area under the curve (AUC) analysis showed comparable predictive accuracy between the RAT and the ACS NSQIP calculator (0.670 [95% CI 0.60-0.74] in RAT, 0.669 [95% CI 0.60-0.74] in NSQIP). The CCI was not accurate in predicting complication occurrence (0.55 [95% CI 0.48-0.62]). The RAT produced mean probabilities of 34.6% for patients who had a complication and 24% for patients who did not (p = 0.0003). The generated predicted values were stratified into low, medium, and high rates. For the RAT, the predicted complication rate was 10.1% in the low-risk group (observed rate 12.8%), 21.9% in the medium-risk group (observed 31.8%), and 49.7% in the high-risk group (observed 41.2%). The ACS NSQIP calculator consistently
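
    The head-to-head comparison reported above reduces to computing an area under the ROC curve for each tool's predicted probability against observed complications. A minimal sketch of that comparison, using synthetic scores rather than the study cohort, is shown below.

```python
# Sketch of the ROC/AUC comparison described above: each tool's predicted
# complication probability is scored against the observed outcome. The
# synthetic scores below only illustrate the computation; they are not the
# prospective cohort data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
complication = rng.integers(0, 2, size=200)                   # observed outcome
rat_prob = 0.15 * complication + rng.uniform(0, 1, 200)       # RAT output
nsqip_prob = 0.15 * complication + rng.uniform(0, 1, 200)     # NSQIP output

print("RAT   AUC:", round(roc_auc_score(complication, rat_prob), 3))
print("NSQIP AUC:", round(roc_auc_score(complication, nsqip_prob), 3))
```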

  9. Comprehensive modeling of microRNA targets predicts functional non-conserved and non-canonical sites.

    PubMed

    Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina

    2010-01-01

    mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.

  10. Perioperative Respiratory Adverse Events in Pediatric Ambulatory Anesthesia: Development and Validation of a Risk Prediction Tool.

    PubMed

    Subramanyam, Rajeev; Yeramaneni, Samrat; Hossain, Mohamed Monir; Anneken, Amy M; Varughese, Anna M

    2016-05-01

    Perioperative respiratory adverse events (PRAEs) are the most common cause of serious adverse events in children receiving anesthesia. The primary aim of this study was to develop and validate a risk prediction tool for the occurrence of PRAE from the onset of anesthesia induction until discharge from the postanesthesia care unit in children younger than 18 years undergoing elective ambulatory anesthesia for surgery and radiology. The incidence of PRAE was studied. We analyzed data from 19,059 patients from our department's quality improvement database. The predictor variables were age, sex, ASA physical status, morbid obesity, preexisting pulmonary disorder, preexisting neurologic disorder, and location of ambulatory anesthesia (surgery or radiology). Composite PRAE was defined as the presence of any 1 of the following events: intraoperative bronchospasm, intraoperative laryngospasm, postoperative apnea, postoperative laryngospasm, postoperative bronchospasm, or postoperative prolonged oxygen requirement. Development and validation of the risk prediction tool for PRAE were performed by using a split-sampling technique, dividing the database into 2 independent cohorts based on the year in which the patient received ambulatory anesthesia for surgery or radiology, and fitting logistic regression models. A risk score was developed based on the regression coefficients from the validation tool. The performance of the risk prediction tool was assessed by using tests of discrimination and calibration. The overall incidence of composite PRAE was 2.8%. The derivation cohort included 8904 patients, and the validation cohort included 10,155 patients. The risk of PRAE was 3.9% in the development cohort and 1.8% in the validation cohort. Age ≤ 3 years (versus >3 years), ASA physical status II or III (versus ASA physical status I), morbid obesity, preexisting pulmonary disorder, and surgery (versus radiology) significantly predicted the occurrence of PRAE in a multivariable logistic regression
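
    The abstract describes deriving an additive risk score from multivariable logistic regression coefficients. The sketch below shows one common way to do this (scaling coefficients by the smallest effect and rounding to integer points) on simulated data; the predictor names, simulated prevalences and rounding rule are assumptions, not the published PRAE score.

```python
# Hedged sketch of deriving an additive point score from multivariable
# logistic regression coefficients, as the abstract describes in outline.
# Predictor names follow the abstract, but the simulated data, coefficients
# and rounding rule are assumptions, not the published PRAE score.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
X = pd.DataFrame({
    "age_le_3": rng.integers(0, 2, n),
    "asa_2_or_3": rng.integers(0, 2, n),
    "morbid_obesity": rng.integers(0, 2, n),
    "pulmonary_disorder": rng.integers(0, 2, n),
    "surgery_vs_radiology": rng.integers(0, 2, n),
})
true_beta = np.array([0.9, 0.5, 0.7, 1.0, 0.6])
p = 1.0 / (1.0 + np.exp(-(-3.5 + X.values @ true_beta)))
y = rng.uniform(size=n) < p                       # simulated PRAE outcome

model = LogisticRegression().fit(X, y)
unit = np.abs(model.coef_[0]).min()               # smallest effect = 1 point
points = np.rint(model.coef_[0] / unit).astype(int)
print("points per predictor:", dict(zip(X.columns, points)))
print("example risk scores:", (X.values @ points)[:10])
```

    Rounding coefficients to integer points trades a little discrimination for bedside usability, which is the usual rationale behind published clinical risk scores.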

  11. A tool for prediction of risk of rehospitalisation and mortality in the hospitalised elderly: secondary analysis of clinical trial data

    PubMed Central

    Alassaad, Anna; Melhus, Håkan; Hammarlund-Udenaes, Margareta; Bertilsson, Maria; Gillespie, Ulrika; Sundström, Johan

    2015-01-01

    Objectives To construct and internally validate a risk score, the ‘80+ score’, for revisits to hospital and mortality for older patients, incorporating aspects of pharmacotherapy. Our secondary aim was to compare the discriminatory ability of the score with that of three validated tools for measuring inappropriate prescribing: Screening Tool of Older Person's Prescriptions (STOPP), Screening Tool to Alert doctors to Right Treatment (START) and Medication Appropriateness Index (MAI). Setting Two acute internal medicine wards at Uppsala University hospital. Patient data were used from a randomised controlled trial investigating the effects of a comprehensive clinical pharmacist intervention. Participants Data from 368 patients, aged 80 years and older, admitted to one of the study wards. Primary outcome measure Time to rehospitalisation or death during the year after discharge from hospital. Candidate variables were selected among a large number of clinical and drug-specific variables. After a selection process, a score for risk estimation was constructed. The 80+ score was internally validated, and the discriminatory ability of the score and of STOPP, START and MAI was assessed using C-statistics. Results Seven variables were selected. Impaired renal function, pulmonary disease, malignant disease, living in a nursing home, being prescribed an opioid or being prescribed a drug for peptic ulcer or gastroesophageal reflux disease were associated with an increased risk, while being prescribed an antidepressant drug (tricyclic antidepressants not included) was linked to a lower risk of the outcome. These variables made up the components of the 80+ score. The C-statistics were 0.71 (80+), 0.57 (STOPP), 0.54 (START) and 0.63 (MAI). Conclusions We developed and internally validated a score for prediction of risk of rehospitalisation and mortality in hospitalised older people. The score discriminated risk better than available tools for inappropriate prescribing

  12. MARK’s Quadrant scoring system: a symptom-based targeted screening tool for gastric cancer

    PubMed Central

    Tata, Mahadevan D.; Gurunathan, Ramesh; Palayan, Kandasami

    2014-01-01

    Background Gastric cancer is notably one of the leading causes of cancer-related death in the world. In Malaysia, these patients present in the advanced stage, thus narrowing the treatment options and making successful curative resection nearly impossible. Failure to identify high-risk patients and delays in diagnostic endoscopy procedures contributed to the delay in diagnosis. The aim of the study was to develop and validate a scoring system (MARK’s Quadrant) which can identify symptomatic patients who are at risk for gastric cancer. Methods A 3-phase approach was undertaken: Phase 1: development of the weighted scoring system; Phase 2: estimating the positive predictive value of MARK’s Quadrant; and Phase 3: a) testing the validity of MARK’s Quadrant in an open-access endoscopy system; and b) comparing its usefulness to the conventional referral system. Results In phases 1 and 2, MARK’s Quadrant with weighted symptoms was developed. The sensitivity of MARK’s Quadrant is 88% and the specificity is 45.5% for detecting cancerous and precancerous gastric lesions. This was confirmed by the prospective data from phase 3 of this study, where the diagnostic yield of MARK’s Quadrant for detecting any pathological lesion was 95.2%. The score has a high accuracy of 75%; hence, compared with the routine referral system, it has odds ratios (95% CI) of 10.98 (4.63-26.00), 6.71 (4.46-10.09) and 0.95 (0.06-0.15) (P<0.001 for each) for the diagnosis of cancer, precancerous lesions and benign lesions, respectively. Conclusion MARK’s Quadrant is a useful tool to detect early gastric cancer among symptomatic patients in a low incidence region. PMID:24714557

  13. Petri net-based prediction of therapeutic targets that recover abnormally phosphorylated proteins in muscle atrophy.

    PubMed

    Jung, Jinmyung; Kwon, Mijin; Bae, Sunghwa; Yim, Soorin; Lee, Doheon

    2018-03-05

    Muscle atrophy, an involuntary loss of muscle mass, is involved in various diseases and sometimes leads to mortality. However, therapeutics for muscle atrophy thus far have had limited effects. Here, we present a new approach for therapeutic target prediction using Petri net simulation of the status of phosphorylation, with a reasonable assumption that the recovery of abnormally phosphorylated proteins can be a treatment for muscle atrophy. The Petri net model was employed to simulate phosphorylation status in three states, i.e. reference, atrophic and each gene-inhibited state based on the myocyte-specific phosphorylation network. Here, we newly devised a phosphorylation specific Petri net that involves two types of transitions (phosphorylation or de-phosphorylation) and two types of places (activation with or without phosphorylation). Before predicting therapeutic targets, the simulation results in reference and atrophic states were validated by Western blotting experiments detecting five marker proteins, i.e. RELA, SMAD2, SMAD3, FOXO1 and FOXO3. Finally, we determined 37 potential therapeutic targets whose inhibition recovers the phosphorylation status from an atrophic state as indicated by the five validated marker proteins. In the evaluation, we confirmed that the 37 potential targets were enriched for muscle atrophy-related terms such as actin and muscle contraction processes, and they were also significantly overlapping with the genes associated with muscle atrophy reported in the Comparative Toxicogenomics Database (p-value < 0.05). Furthermore, we noticed that they included several proteins that could not be characterized by the shortest path analysis. The three potential targets, i.e. BMPR1B, ROCK, and LEPR, were manually validated with the literature. In this study, we suggest a new approach to predict potential therapeutic targets of muscle atrophy with an analysis of phosphorylation status simulated by Petri net. We generated a list of the potential
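
    A Petri net of the kind described above is a bipartite structure of places (here, protein phosphorylation states) and transitions (phosphorylation or dephosphorylation events) in which a transition fires when all of its input places hold tokens. The toy two-protein net below illustrates the token game only; it is not the authors' myocyte-specific network or their simulation code.

```python
# Toy token-game Petri net in the spirit of the simulation described above:
# places hold tokens for protein states, transitions model phosphorylation
# and dephosphorylation and fire when their input places are marked. This
# two-protein net is invented for illustration; it is not the authors'
# myocyte-specific phosphorylation network.
from collections import Counter

marking = Counter({"KinaseA_active": 1, "ProteinB_unphos": 1})

# transition name -> (tokens consumed, tokens produced)
transitions = {
    "phosphorylate_B": ({"KinaseA_active": 1, "ProteinB_unphos": 1},
                        {"KinaseA_active": 1, "ProteinB_phos": 1}),
    "dephosphorylate_B": ({"ProteinB_phos": 1},
                          {"ProteinB_unphos": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[place] >= k for place, k in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for place, k in inputs.items():
        marking[place] -= k
    for place, k in outputs.items():
        marking[place] += k

for step in range(4):
    for name in transitions:
        if enabled(name):
            fire(name)
            print(f"step {step}: fired {name} -> {dict(marking)}")
            break
```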

  14. A Clinical Tool for the Prediction of Venous Thromboembolism in Pediatric Trauma Patients.

    PubMed

    Connelly, Christopher R; Laird, Amy; Barton, Jeffrey S; Fischer, Peter E; Krishnaswami, Sanjay; Schreiber, Martin A; Zonies, David H; Watters, Jennifer M

    2016-01-01

    Although rare, the incidence of venous thromboembolism (VTE) in pediatric trauma patients is increasing, and the consequences of VTE in children are significant. Studies have demonstrated increasing VTE risk in older pediatric trauma patients and improved VTE rates with institutional interventions. While national evidence-based guidelines for VTE screening and prevention are in place for adults, none exist for pediatric patients, to our knowledge. To develop a risk prediction calculator for VTE in children admitted to the hospital after traumatic injury to assist efforts in developing screening and prophylaxis guidelines for this population. Retrospective review of 536,423 pediatric patients 0 to 17 years old using the National Trauma Data Bank from January 1, 2007, to December 31, 2012. Five mixed-effects logistic regression models of varying complexity were fit on a training data set. Model validity was determined by comparison of the area under the receiver operating characteristic curve (AUROC) for the training and validation data sets from the original model fit. A clinical tool to predict the risk of VTE based on individual patient clinical characteristics was developed from the optimal model. Diagnosis of VTE during hospital admission. Venous thromboembolism was diagnosed in 1141 of 536,423 children (overall rate, 0.2%). The AUROCs in the training data set were high (range, 0.873-0.946) for each model, with minimal AUROC attenuation in the validation data set. A prediction tool was developed from a model that achieved a balance of high performance (AUROCs, 0.945 and 0.932 in the training and validation data sets, respectively; P = .048) and parsimony. Points are assigned to each variable considered (Glasgow Coma Scale score, age, sex, intensive care unit admission, intubation, transfusion of blood products, central venous catheter placement, presence of pelvic or lower extremity fractures, and major surgery), and the points total is converted to a VTE

  15. Transcriptome-wide identification of Rauvolfia serpentina microRNAs and prediction of their potential targets.

    PubMed

    Prakash, Pravin; Rajakani, Raja; Gupta, Vikrant

    2016-04-01

    MicroRNAs (miRNAs) are small non-coding RNAs of ∼ 19-24 nucleotides (nt) in length and considered as potent regulators of gene expression at transcriptional and post-transcriptional levels. Here we report the identification and characterization of 15 conserved miRNAs belonging to 13 families from Rauvolfia serpentina through in silico analysis of available nucleotide dataset. The identified mature R. serpentina miRNAs (rse-miRNAs) ranged between 20 and 22nt in length, and the average minimal folding free energy index (MFEI) value of rse-miRNA precursor sequences was found to be -0.815 kcal/mol. Using the identified rse-miRNAs as query, their potential targets were predicted in R. serpentina and other plant species. Gene Ontology (GO) annotation showed that predicted targets of rse-miRNAs include transcription factors as well as genes involved in diverse biological processes such as primary and secondary metabolism, stress response, disease resistance, growth, and development. Few rse-miRNAs were predicted to target genes of pharmaceutically important secondary metabolic pathways such as alkaloids and anthocyanin biosynthesis. Phylogenetic analysis showed the evolutionary relationship of rse-miRNAs and their precursor sequences to homologous pre-miRNA sequences from other plant species. The findings under present study besides giving first hand information about R. serpentina miRNAs and their targets, also contributes towards the better understanding of miRNA-mediated gene regulatory processes in plants. Copyright © 2015 Elsevier Ltd. All rights reserved.
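
    The minimal folding free energy index mentioned above is, as commonly defined in plant miRNA studies, the adjusted MFE (MFE per 100 nucleotides) divided by the GC percentage of the precursor. A small sketch of that calculation is given below; the precursor sequence and MFE value are placeholders, not R. serpentina data, and the exact definition used by the authors may differ.

```python
# Sketch of the minimal folding free energy index (MFEI) as it is commonly
# defined in plant miRNA studies: MFEI = AMFE / GC%, where
# AMFE = (MFE / precursor length) * 100. The precursor sequence and MFE
# value below are placeholders, not R. serpentina data.
def mfei(mfe_kcal_per_mol: float, precursor: str) -> float:
    seq = precursor.upper()
    gc_percent = 100.0 * sum(base in "GC" for base in seq) / len(seq)
    amfe = (mfe_kcal_per_mol / len(seq)) * 100.0      # adjusted MFE
    return amfe / gc_percent

toy_precursor = "CUGGAUCCGGCUAGCUAGGCUAGGCUUAGGCUAGGCAUCGAUCGAUGC"
print(round(mfei(-45.3, toy_precursor), 3))
```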

  16. A human-hearing-related prediction tool for soundscapes and community noise

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    2002-11-01

    There are several methods of calculation available for the prediction of the A-weighted sound-pressure level of environmental noise, which are, however, not suitable for a qualified prediction of the residents' annoyance and physiological strain. The subjectively felt noise quality does not only depend on the A-weighted sound-pressure level, but also on other psychoacoustical parameters, such as loudness, roughness, sharpness, etc. In addition to these physical and psychoacoustical aspects of noise, the so-called psychological or cognitive aspects have to be considered, too, which means that the listeners' expectations, their mental attitude, as well as the information content of the noise finally influence the noise quality perceived by the individual persons. Within the scope of a research project SVEN (Sound Quality of Vehicle Exterior Noise), which is promoted by the EC, a new tool has been developed which allows a binaural simulation and prediction of the environmental noise to evaluate the influence of different contributions by the sound events with respect to the psychoacoustical parameters, the spatial distribution, movement, and frequency. By means of this tool it is now possible to consider completely new aspects regarding the audible perception of noise when establishing a soundscape or when planning community noise.

  17. Development of a CME-associated geomagnetic storm intensity prediction tool

    NASA Astrophysics Data System (ADS)

    Wu, C. C.; DeHart, J. M.

    2015-12-01

    From 1995 to 2012, the Wind spacecraft recorded 168 magnetic cloud (MC) events. Among those events, 79 were found to have upstream shock waves, and their source locations on the Sun were identified. Using a recipe based on the interplanetary magnetic field (IMF) Bz initial turning direction after the shock (Wu et al., 1996, GRL), it was found that the north-south polarity of 66 (83.5%) of the 79 events was accurately predicted. These events were tested and further analyzed, reaffirming that the Bz initial turning direction was accurate. The results also indicate that the 37 of the 79 MCs originating from the north (of the Sun) averaged a Dst_min of -119 nT, whereas the 42 MCs originating from the south (of the Sun) averaged -89 nT. In an effort to provide this research to others, a website was built that incorporates various tools and pictures to predict the intensity of geomagnetic storms. The tool is capable of predicting geomagnetic storms with different ranges of Dst_min (from no storm to gigantic storms). This work was supported by the Naval Research Lab HBCU/MI Internship program and the Chief of Naval Research.

  18. Tools for beach health data management, data processing, and predictive model implementation

    2013-01-01

    This fact sheet describes utilities created for management of recreational waters to provide efficient data management, data aggregation, and predictive modeling as well as a prototype geographic information system (GIS)-based tool for data visualization and summary. All of these utilities were developed to assist beach managers in making decisions to protect public health. The Environmental Data Discovery and Transformation (EnDDaT) Web service identifies, compiles, and sorts environmental data from a variety of sources that help to define climatic, hydrologic, and hydrodynamic characteristics including multiple data sources within the U.S. Geological Survey and the National Oceanic and Atmospheric Administration. The Great Lakes Beach Health Database (GLBH-DB) and Web application was designed to provide a flexible input, export, and storage platform for beach water quality and sanitary survey monitoring data to complement beach monitoring programs within the Great Lakes. A real-time predictive modeling strategy was implemented by combining the capabilities of EnDDaT and the GLBH-DB for timely, automated prediction of beach water quality. The GIS-based tool was developed to map beaches based on their physical and biological characteristics, which was shared with multiple partners to provide concepts and information for future Web-accessible beach data outlets.

  19. Calibration of Multiple In Silico Tools for Predicting Pathogenicity of Mismatch Repair Gene Missense Substitutions

    PubMed Central

    Thompson, Bryony A.; Greenblatt, Marc S.; Vallee, Maxime P.; Herkert, Johanna C.; Tessereau, Chloe; Young, Erin L.; Adzhubey, Ivan A.; Li, Biao; Bell, Russell; Feng, Bingjian; Mooney, Sean D.; Radivojac, Predrag; Sunyaev, Shamil R.; Frebourg, Thierry; Hofstra, Robert M.W.; Sijmons, Rolf H.; Boucher, Ken; Thomas, Alun; Goldgar, David E.; Spurdle, Amanda B.; Tavtigian, Sean V.

    2015-01-01

    Classification of rare missense substitutions observed during genetic testing for patient management is a considerable problem in clinical genetics. The Bayesian integrated evaluation of unclassified variants is a solution originally developed for BRCA1/2. Here, we take a step toward an analogous system for the mismatch repair (MMR) genes (MLH1, MSH2, MSH6, and PMS2) that confer colon cancer susceptibility in Lynch syndrome by calibrating in silico tools to estimate prior probabilities of pathogenicity for MMR gene missense substitutions. A qualitative five-class classification system was developed and applied to 143 MMR missense variants. This identified 74 missense substitutions suitable for calibration. These substitutions were scored using six different in silico tools (Align-Grantham Variation Grantham Deviation, multivariate analysis of protein polymorphisms [MAPP], Mut-Pred, PolyPhen-2.1, Sorting Intolerant From Tolerant, and Xvar), using curated MMR multiple sequence alignments where possible. The output from each tool was calibrated by regression against the classifications of the 74 missense substitutions; these calibrated outputs are interpretable as prior probabilities of pathogenicity. MAPP was the most accurate tool and MAPP + PolyPhen-2.1 provided the best-combined model (R2 = 0.62 and area under receiver operating characteristic = 0.93). The MAPP + PolyPhen-2.1 output is sufficiently predictive to feed as a continuous variable into the quantitative Bayesian integrated evaluation for clinical classification of MMR gene missense substitutions. PMID:22949387
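
    The calibration step described above regresses each tool's raw output against known variant classifications so that the output can be read as a prior probability of pathogenicity. The sketch below shows the idea with a simple logistic calibration on synthetic scores; the paper's actual procedure used the qualitative five-class assignments and combined MAPP + PolyPhen-2.1 outputs, which this toy example does not reproduce.

```python
# Hedged sketch of the calibration idea: an in silico tool's raw score is
# regressed against known variant classifications so that its output can be
# read as a prior probability of pathogenicity. Scores and labels below are
# synthetic; the paper's actual calibration used the qualitative five-class
# assignments and combined MAPP + PolyPhen-2.1 outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
pathogenic = rng.integers(0, 2, size=74)                      # classified variants
tool_score = 1.5 * pathogenic + rng.normal(0.0, 1.0, size=74) # raw tool output

calibrator = LogisticRegression().fit(tool_score.reshape(-1, 1), pathogenic)
new_variant_scores = np.array([[-1.0], [0.5], [2.0]])
priors = calibrator.predict_proba(new_variant_scores)[:, 1]
print("calibrated prior probabilities:", np.round(priors, 2))
```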

  20. The PREM score: a graphical tool for predicting survival in very preterm births.

    PubMed

    Cole, T J; Hey, E; Richmond, S

    2010-01-01

    To develop a tool for predicting survival to term in babies born more than 8 weeks early using only information available at or before birth. 1456 non-malformed very preterm babies of 22-31 weeks' gestation born in 2000-3 in the north of England and 3382 births of 23-31 weeks born in 2000-4 in Trent. Survival to term, predicted from information available at birth, and at the onset of labour or delivery. Development of a logistic regression model (the prematurity risk evaluation measure or PREM score) based on gestation, birth weight for gestation and base deficit from umbilical cord blood. Gestation was by far the most powerful predictor of survival to term, and as few as 5 extra days can double the chance of survival. Weight for gestation also had a powerful but non-linear effect on survival, with weight between the median and 85th centile predicting the highest survival. Using this information survival can be predicted almost as accurately before birth as after, although base deficit further improves the prediction. A simple graph is described that shows how the two main variables gestation and weight for gestation interact to predict the chance of survival. The PREM score can be used to predict the chance of survival at or before birth almost as accurately as existing measures influenced by post-delivery condition, to balance risk at entry into a controlled trial and to adjust for differences in "case mix" when assessing the quality of perinatal care.

  1. A numerical tool for reproducing driver behaviour: experiments and predictive simulations.

    PubMed

    Casucci, M; Marchitto, M; Cacciabue, P C

    2010-03-01

    This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.

  2. A discrete event simulation tool to support and predict hospital and clinic staffing.

    PubMed

    DeRienzo, Christopher M; Shaw, Ryan J; Meanor, Phillip; Lada, Emily; Ferranti, Jeffrey; Tanaka, David

    2017-06-01

    We demonstrate how to develop a simulation tool to help healthcare managers and administrators predict and plan for staffing needs in a hospital neonatal intensive care unit using administrative data. We developed a discrete event simulation model of nursing staff needed in a neonatal intensive care unit and then validated the model against historical data. The process flow was translated into a discrete event simulation model. Results demonstrated that the model can be used to give a respectable estimate of annual admissions, transfers, and deaths based upon two different staffing levels. The discrete event simulation tool model can provide healthcare managers and administrators with (1) a valid method of modeling patient mix, patient acuity, staffing needs, and costs in the present state and (2) a forecast of how changes in a unit's staffing, referral patterns, or patient mix would affect a unit in a future state.
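
    A discrete event simulation of the kind described above can be sketched in a few lines with the SimPy library: patients arrive at random intervals, occupy a staffed bed for a sampled length of stay, and annual admissions are tallied. All rates and the capacity below are illustrative assumptions, not the unit's administrative data or the authors' validated model.

```python
# Minimal SimPy sketch in the spirit of the model described above: infants
# arrive at random intervals, occupy one of a fixed number of staffed beds
# for a sampled length of stay, and annual admissions are tallied. All rates
# and the capacity are illustrative assumptions, not the unit's data.
import random
import simpy

ARRIVAL_RATE = 3.0      # mean admissions per day (assumed)
MEAN_LOS = 12.0         # mean length of stay in days (assumed)
STAFFED_BEDS = 40       # beds the nursing roster can cover (assumed)
admitted = 0

def stay(env, unit):
    with unit.request() as bed:
        yield bed
        yield env.timeout(random.expovariate(1.0 / MEAN_LOS))

def arrivals(env, unit):
    global admitted
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        admitted += 1
        env.process(stay(env, unit))

env = simpy.Environment()
unit = simpy.Resource(env, capacity=STAFFED_BEDS)
env.process(arrivals(env, unit))
env.run(until=365)      # one simulated year
print("simulated admissions in one year:", admitted)
```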

  3. Development of a screening tool to predict malnutrition among children under two years old in Zambia

    PubMed Central

    Hasegawa, Junko; Ito, Yoichi M; Yamauchi, Taro

    2017-01-01

    ABSTRACT Background: Maternal and child undernutrition is an important issue, particularly in low- and middle-income countries. Children at high risk of malnutrition should be prioritized to receive necessary interventions to minimize such risk. Several risk factors have been proposed; however, until now, there has been no appropriate evaluation method to identify these children. In sub-Saharan Africa, children commonly receive regular check-ups from community health workers. A simple and easy nutrition assessment method is therefore needed for use by semi-professional health workers. Objectives: The aim of this study was to develop and test a practical screening tool for community use in predicting growth stunting in children under two years in rural Zambia. Methods: Field research was conducted from July to August 2014 in Southern Province, Zambia. Two hundred and sixty-four mother-child pairs participated in the study. Anthropometric measurements were performed on all children and mothers, and all mothers were interviewed. Risk factors for the screening test were estimated by using least absolute shrinkage and selection operator analysis. After re-evaluating all participants using the new screening tool, a receiver operating characteristic curve was drawn to set the cut-off value. Sensitivity and specificity were also calculated. Results: The screening tool included age, weight-for-age Z-score status, birth weight, feeding status, history of sibling death, multiple birth, and maternal education level. The total score ranged from 0 to 22, and the cut-off value was eight. Sensitivity and specificity were 0.963 and 0.697 respectively. Conclusions: A screening tool was developed to predict children at high risk of malnutrition living in Zambia. Further longitudinal studies are needed to confirm the test’s validity in detecting future stunting and to investigate the effectiveness of malnutrition treatment. PMID:28730929
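
    The two modelling steps named above, least absolute shrinkage and selection operator (LASSO) selection of risk factors followed by a ROC-based cut-off, can be sketched as follows with an L1-penalised logistic regression and the Youden index; the data, penalty strength and cut-off rule are illustrative assumptions, not the Zambian cohort or the published scoring weights.

```python
# Hedged sketch of the two modelling steps named above: L1-penalised
# (LASSO-style) logistic regression to select risk factors for a binary
# stunting outcome, then a score cut-off chosen from the ROC curve with the
# Youden index. Data, penalty strength and cut-off rule are illustrative
# assumptions, not the Zambian cohort or the published weights.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
n, p = 264, 7
X = rng.normal(size=(n, p))                                  # candidate factors
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(0, 1, n)) > 0.5    # stunting yes/no

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("selected predictors:", np.flatnonzero(lasso.coef_[0]))

score = X @ lasso.coef_[0]                                   # linear risk score
fpr, tpr, thresholds = roc_curve(y, score)
best = np.argmax(tpr - fpr)                                  # Youden's J
print("cut-off:", round(float(thresholds[best]), 2),
      "sensitivity:", round(float(tpr[best]), 2),
      "specificity:", round(float(1 - fpr[best]), 2))
```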

  4. Predicting Drug Combination Index and Simulating the Network-Regulation Dynamics by Mathematical Modeling of Drug-Targeted EGFR-ERK Signaling Pathway

    NASA Astrophysics Data System (ADS)

    Huang, Lu; Jiang, Yuyang; Chen, Yuzong

    2017-01-01

    Synergistic drug combinations enable enhanced therapeutics. Their discovery typically involves the measurement and assessment of the drug combination index (CI), which can be facilitated by the development and application of in-silico CI predictive tools. In this work, we developed and tested the ability of a mathematical model of the drug-targeted EGFR-ERK pathway to predict CIs and to analyze multiple synergistic drug combinations against observations. Our mathematical model was validated against literature-reported signaling, drug response dynamics, and the EGFR-MEK drug combination effect. The predicted CIs and combination therapeutic effects of the EGFR-BRaf, BRaf-MEK, FTI-MEK, and FTI-BRaf inhibitor combinations showed consistent synergism. Our results suggest that existing pathway models may be potentially extended for developing drug-targeted pathway models to predict drug combination CI values, isobolograms, and drug-response surfaces as well as to analyze the dynamics of individual and combined drugs. With our model, the efficacy of potential drug combinations can be predicted. Our method complements existing in-silico methods (e.g. the chemogenomic profile and the statistically inferred network models) by predicting drug combination effects from the perspective of pathway dynamics using experimental or validated molecular kinetic constants, thereby facilitating the collective prediction of drug combination effects in diverse ranges of disease systems.
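
    For background on the combination index itself (not the authors' pathway ODE model), the widely used Chou-Talalay formulation computes CI from the doses of each drug needed alone to reach the same fractional effect, via the median-effect equation. The sketch below uses made-up median-effect parameters purely to show the arithmetic; CI < 1 is read as synergism.

```python
# Background sketch only (not the authors' EGFR-ERK pathway ODE model): the
# widely used Chou-Talalay combination index, computed from median-effect
# parameters. CI < 1 is read as synergism, CI = 1 as additivity, CI > 1 as
# antagonism. All parameter values below are invented for illustration.
def dose_for_effect(fa: float, dm: float, m: float) -> float:
    """Single-drug dose giving fractional effect fa (median-effect equation)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# doses d1, d2 that together produce 60% inhibition in a hypothetical assay
ci = combination_index(fa=0.6, d1=0.4, d2=1.0, dm1=1.2, m1=1.0, dm2=3.0, m2=1.1)
print(f"CI = {ci:.2f} ->", "synergistic" if ci < 1 else "not synergistic")
```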

  5. Identification of the feedforward component in manual control with predictable target signals.

    PubMed

    Drop, Frank M; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus M; Mulder, Max

    2013-12-01

    In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was done in which the relative strength of the target and disturbance signals were systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics. Model fits show that this behavior can be modeled best by the combined feedforward and feedback model.
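
    The ARX analysis mentioned above amounts to regressing the operator's control output on lagged error and lagged target samples. A minimal least-squares version of that identification step is sketched below on synthetic signals; the lag order and the "true" weights are assumptions, not the experimental tracking data or the parametric HC models fitted in the paper.

```python
# Minimal least-squares sketch of an ARX-style identification step like the
# one mentioned above: the control output u(t) is regressed on lagged error
# and lagged target samples. Signals, lag order and the "true" weights are
# synthetic assumptions, not the experimental tracking data.
import numpy as np

rng = np.random.default_rng(5)
N, lags = 2000, 3
target = np.sin(np.linspace(0, 40, N))                # predictable reference
error = rng.normal(0.0, 0.2, N)                       # tracking error (toy)
u = 0.8 * np.roll(error, 1) + 0.5 * np.roll(target, 1) + rng.normal(0, 0.05, N)

idx = np.arange(lags, N)
Phi = np.column_stack([sig[idx - k]                   # regressors: e(t-k), f(t-k)
                       for k in range(1, lags + 1)
                       for sig in (error, target)])
theta, *_ = np.linalg.lstsq(Phi, u[idx], rcond=None)
print("estimated weights (error/target per lag):", np.round(theta, 2))
```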

  6. Mathematical modeling of antibody drug conjugates with the target and tubulin dynamics to predict AUC.

    PubMed

    Byun, Jong Hyuk; Jung, Il Hyo

    2018-04-14

    Antibody drug conjugates (ADCs) are one of the most recently developed chemotherapeutics to treat some types of tumor cells. They consist of monoclonal antibodies (mAbs), linkers, and potent cytotoxic drugs. Unlike common chemotherapies, ADCs combine selectively with a target at the surface of the tumor cell, and a potent cytotoxic drug (payload) effectively prevents microtubule polymerization. In this work, we construct an ADC model that considers both the target of the antibodies and the receptor (tubulin) of the cytotoxic payloads. The model is simulated with brentuximab vedotin, one such ADC, and used to investigate the pharmacokinetic (PK) characteristics of ADCs in vivo. It also predicts the area under the curve (AUC) of ADCs and their payloads by identifying the half-life. The results show that the simulated dynamics coincide well with the observed data and half-life and capture the AUC. Thus, the model can be used for estimating some parameters, fitting experimental observations, predicting AUC, and exploring various dynamical behaviors of the target and the receptor. Copyright © 2018 Elsevier Ltd. All rights reserved.
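
    The AUC referred to above is, in its simplest form, the area under the concentration-time curve, which can be approximated from sampled data with the trapezoidal rule. The sketch below shows that calculation on placeholder concentrations; the sampling times, values and elimination rate are assumptions, not brentuximab vedotin data or the authors' mechanistic model.

```python
# Simple sketch of an area-under-the-curve (AUC) calculation from sampled
# concentration-time data using the trapezoidal rule, plus a half-life from
# an assumed elimination rate constant. All values are placeholders, not
# brentuximab vedotin data or the authors' mechanistic model output.
import numpy as np

t = np.array([0, 1, 2, 4, 8, 24, 48, 96, 168], dtype=float)          # hours
conc = np.array([0.0, 18.0, 25.0, 22.0, 17.0, 9.0, 4.5, 1.2, 0.3])    # ug/mL

auc_0_168 = np.trapz(conc, t)            # ug*h/mL over the sampling window
k_el = 0.03                              # assumed first-order elimination, 1/h
half_life = np.log(2) / k_el
print(f"AUC(0-168 h) = {auc_0_168:.1f} ug*h/mL, t1/2 = {half_life:.0f} h")
```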

  7. Differential Expression of MicroRNA and Predicted Targets in Pulmonary Sarcoidosis

    PubMed Central

    Crouser, Elliott D.; Julian, Mark W.; Crawford, Melissa; Shao, Guohong; Yu, Lianbo; Planck, Stephen R.; Rosenbaum, James T.; Nana-Sinkam, S. Patrick

    2014-01-01

    Background Recent studies show that various inflammatory diseases are regulated at the level of RNA translation by small non-coding RNAs, termed microRNAs (miRNAs). We sought to determine whether sarcoidosis tissues harbor a distinct pattern of miRNA expression and then considered their potential molecular targets. Methods and Results Genome-wide microarray analysis of miRNA expression in lung tissue and peripheral blood mononuclear cells (PBMCs) was performed and differentially expressed (DE)-miRNAs were then validated by real-time PCR. A distinct pattern of DE-miRNA expression was identified in both lung tissue and PBMCs of sarcoidosis patients. A subgroup of DE-miRNAs common to lung and lymph node tissues were predicted to target transforming growth factor (TGFβ)-regulated pathways. Likewise, the DE-miRNAs identified in PBMCs of sarcoidosis patients were predicted to target the TGFβ-regulated “wingless and integrase-1” (WNT) pathway. Conclusions This study is the first to profile miRNAs in sarcoidosis tissues and to consider their possible roles in disease pathogenesis. Our results suggest that miRNA regulate TGFβ and related WNT pathways in sarcoidosis tissues, pathways previously incriminated in the pathogenesis of sarcoidosis. PMID:22209793

  8. Validation and Use of a Predictive Modeling Tool: Employing Scientific Findings to Improve Responsible Conduct of Research Education.

    PubMed

    Mulhearn, Tyler J; Watts, Logan L; Todd, E Michelle; Medeiros, Kelsey E; Connelly, Shane; Mumford, Michael D

    2017-01-01

    Although recent evidence suggests ethics education can be effective, the nature of specific training programs, and their effectiveness, varies considerably. Building on a recent path modeling effort, the present study developed and validated a predictive modeling tool for responsible conduct of research education. The predictive modeling tool allows users to enter ratings in relation to a given ethics training program and receive instantaneous evaluative information for course refinement. Validation work suggests the tool's predicted outcomes correlate strongly (r = 0.46) with objective course outcomes. Implications for training program development and refinement are discussed.

  9. In silico site-directed mutagenesis informs species-specific predictions of chemical susceptibility derived from the Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool

    EPA Science Inventory

    The Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool was developed to address needs for rapid, cost effective methods of species extrapolation of chemical susceptibility. Specifically, the SeqAPASS tool compares the primary sequence (Level 1), functiona...

  10. A Probabilistic Approach for Reliability and Life Prediction of Electronics in Drilling and Evaluation Tools

    DTIC Science & Technology

    2014-12-23

    Kale, Amit A.; Carter, Katrina; ...

  11. Risk analysis for dengue suitability in Africa using the ArcGIS predictive analysis tools (PA tools).

    PubMed

    Attaway, David F; Jacobsen, Kathryn H; Falconer, Allan; Manca, Germana; Waters, Nigel M

    2016-06-01

    Risk maps identifying suitable locations for infection transmission are important for public health planning. Data on dengue infection rates are not readily available in most places where the disease is known to occur. A newly available add-in to Esri's ArcGIS software package, the ArcGIS Predictive Analysis Toolset (PA Tools), was used to identify locations within Africa with environmental characteristics likely to be suitable for transmission of dengue virus. A more accurate, robust, and localized (1 km × 1 km) dengue risk map for Africa was created based on bioclimatic layers, elevation data, high-resolution population data, and other environmental factors that a search of the peer-reviewed literature showed to be associated with dengue risk. Variables related to temperature, precipitation, elevation, and population density were identified as good predictors of dengue suitability. Areas of high dengue suitability occur primarily within West Africa and parts of Central Africa and East Africa, but even in these regions the suitability is not homogenous. This risk mapping technique for an infection transmitted by Aedes mosquitoes draws on entomological, epidemiological, and geographic data. The method could be applied to other infectious diseases (such as Zika) in order to provide new insights for public health officials and others making decisions about where to increase disease surveillance activities and implement infection prevention and control efforts. The ability to map threats to human and animal health is important for tracking vectorborne and other emerging infectious diseases and modeling the likely impacts of climate change. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Integrating Transcriptomics with Metabolic Modeling Predicts Biomarkers and Drug Targets for Alzheimer's Disease

    PubMed Central

    Stempler, Shiri; Yizhak, Keren; Ruppin, Eytan

    2014-01-01

    Accumulating evidence links numerous abnormalities in cerebral metabolism with the progression of Alzheimer's disease (AD), beginning in its early stages. Here, we integrate transcriptomic data from AD patients with a genome-scale computational human metabolic model to characterize the altered metabolism in AD, and employ state-of-the-art metabolic modelling methods to predict metabolic biomarkers and drug targets in AD. The metabolic descriptions derived are first tested and validated on a large scale versus existing AD proteomics and metabolomics data. Our analysis shows a significant decrease in the activity of several key metabolic pathways, including the carnitine shuttle, folate metabolism and mitochondrial transport. We predict several metabolic biomarkers of AD progression in the blood and the CSF, including succinate and prostaglandin D2. Vitamin D and steroid metabolism pathways are enriched with predicted drug targets that could mitigate the metabolic alterations observed. Taken together, this study provides the first network wide view of the metabolic alterations associated with AD progression. Most importantly, it offers a cohort of new metabolic leads for the diagnosis of AD and its treatment. PMID:25127241

  13. Investigation of computational aeroacoustic tools for noise predictions of wind turbine aerofoils

    NASA Astrophysics Data System (ADS)

    Humpf, A.; Ferrer, E.; Munduate, X.

    2007-07-01

    In this work, trailing edge noise levels of a research aerofoil have been computed and compared to aeroacoustic measurements using two methods of different computational cost, yielding quantitative and qualitative results. In the first approach, the semi-empirical noise prediction tool NAFNoise [Moriarty P 2005 NAFNoise User's Guide. Golden, Colorado, July. http://wind.nrel.gov/designcodes/simulators/NAFNoise] was used to directly predict trailing edge noise by taking into consideration the nature of the experiments. In the second approach, aerodynamic and aeroacoustic calculations were performed with the full Navier-Stokes CFD code Fluent [Fluent Inc 2005 Fluent 6.2 Users Guide, Lebanon, NH, USA] on the basis of a steady RANS simulation; aerodynamic characteristics were computed with the aid of various turbulence models, and the combined use of the implemented broadband noise source models was employed to isolate and estimate the trailing edge noise level.

  14. Nonlinear Prediction As A Tool For Determining Parameters For Phase Space Reconstruction In Meteorology

    NASA Astrophysics Data System (ADS)

    Miksovsky, J.; Raidl, A.

    Time-delay phase space reconstruction represents one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its utilization requires the value of the time delay to be known, as well as the value of the embedding dimension. There are several methods for estimating both of these parameters. Typically, the time delay is computed first, followed by the embedding dimension. Our approach is slightly different: we reconstructed the phase space for various combinations of these parameters and used it for prediction by means of the nearest neighbours in the phase space. A measure of prediction success was then computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate the suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows based program to implement this approach; its basic features will be presented as well.
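
    The procedure described above (reconstruct the phase space for many delay/dimension pairs, predict with nearest neighbours, and pick the pair with the best score) can be sketched as follows on a noisy sine series; the embedding grid, prediction horizon and scoring by correlation are illustrative choices, not the authors' climatological data or exact settings.

```python
# Sketch of the scan described above: time-delay embedding of a scalar series,
# nearest-neighbour prediction in the reconstructed phase space, and selection
# of the (dimension, delay) pair with the best prediction correlation. The
# noisy sine series and the small parameter grid are illustrative, not the
# authors' climatological data or settings.
import numpy as np

def embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def nn_prediction_corr(x, dim, tau, horizon=1):
    X = embed(x, dim, tau)
    states, futures = X[:-horizon], X[horizon:, -1]
    split = len(states) // 2                      # first half = "library"
    preds = [futures[:split][np.argmin(np.linalg.norm(states[:split] - s, axis=1))]
             for s in states[split:]]
    return np.corrcoef(preds, futures[split:])[0, 1]

rng = np.random.default_rng(6)
series = np.sin(np.linspace(0, 60, 1500)) + 0.1 * rng.normal(size=1500)
best = max((nn_prediction_corr(series, d, t), d, t)
           for d in (2, 3, 4) for t in (1, 5, 10))
print("best correlation %.3f at dimension=%d, delay=%d" % best)
```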

  15. The challenge of predicting problematic chemicals using a decision analysis tool: Triclosan as a case study.

    PubMed

    Perez, Angela L; Gauthier, Alison M; Ferracini, Tyler; Cowan, Dallas M; Kingsbury, Tony; Panko, Julie

    2017-01-01

    Manufacturers lack a reliable means for determining whether a chemical will be targeted for deselection from their supply chain. In this analysis, 3 methods for determining whether a specific chemical (triclosan) would meet the criteria necessary for being targeted for deselection are presented. The methods included a list-based approach, use of a commercially available chemical assessment software tool run in 2 modes, and a public interest evaluation. Our results indicated that triclosan was included on only 6 of the lists reviewed, none of which were particularly influential in chemical selection decisions. The chemical assessment tool evaluations indicated that the human and ecological toxicity of triclosan is low, and the chemical received scores indicating that it would be considered of low concern. However, triclosan's peak public interest tracked several years in advance of increased regulatory scrutiny of this chemical, suggesting that public pressure may have been influential in deselection decisions. Key data gaps and toxicity endpoints that are not yet regulated, such as endocrine disruption potential or phototoxicity, but that are important for estimating the trajectory toward deselection of a chemical, are discussed. Integr Environ Assess Manag 2017;13:198-207. © 2016 SETAC.

  16. Prediction of ttt curves of cold working tool steels using support vector machine model

    NASA Astrophysics Data System (ADS)

    Pillai, Nandakumar; Karthikeyan, R., Dr.

    2018-04-01

    Cold working tool steels are high-carbon steels with metallic alloying additions that impart higher hardenability, abrasion resistance and less distortion on quenching. The microstructural changes occurring in tool steel during heat treatment are of great importance, as the final properties of the steel depend on the changes that occur during the process. In order to obtain the desired performance, the alloy constituents and their proportions play a vital role, as the steel transformation itself is complex in nature and depends strongly on time and temperature. Proper treatment can deliver satisfactory results, while a process deviation can completely spoil them. Knowing the time-temperature transformation (TTT) behaviour of the phases is therefore critical; it varies for each steel type depending on its constituents and their proportion range. To obtain adequate post-heat-treatment properties, the percentage of retained austenite should be low and the metallic carbides obtained should be fine. A support vector machine is a computational model that can learn from observed data and use it to predict or solve problems through a mathematical model. A back-propagation feedback network is created and trained for further solutions. The points on the TTT curves for the known transformations are used to plot the curves for different materials. These data are then used to train a model to predict TTT curves for other steels having similar alloying constituents but with different proportion ranges. The proposed methodology can be used for the prediction of TTT curves for cold working steels and for the prediction of phases for different heat treatment methods.

  17. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS

  18. XBeach-G: a tool for predicting gravel barrier response to extreme storm conditions

    NASA Astrophysics Data System (ADS)

    Masselink, Gerd; Poate, Tim; McCall, Robert; Roelvink, Dano; Russell, Paul; Davidson, Mark

    2014-05-01

    Gravel beaches protect low-lying back-barrier regions from flooding during storm events and their importance to society is widely acknowledged. Unfortunately, breaching and extensive storm damage has occurred at many gravel sites and this is likely to increase as a result of sea-level rise and enhanced storminess due to climate change. Limited scientific guidance is currently available to provide beach managers with operational management tools to predict the response of gravel beaches to storms. The New Understanding and Prediction of Storm Impacts on Gravel beaches (NUPSIG) project aims to improve our understanding of storm impacts on gravel coastal environments and to develop a predictive capability by modelling these impacts. The NUPSIG project uses a 5-pronged approach to address its aim: (1) analyse hydrodynamic data collected during a prototype laboratory experiment on a gravel beach; (2) collect hydrodynamic field data on a gravel beach under a range of conditions, including storm waves with wave heights up to 3 m; (3) measure swash dynamics and beach response on 10 gravel beaches during extreme wave conditions with wave heights in excess of 3 m; (4) use the data collected under 1-3 to develop and validate a numerical model to model hydrodynamics and morphological response of gravel beaches under storm conditions; and (5) develop a tool for end-users, based on the model formulated under (4), for predicting storm response of gravel beaches and barriers. The aim of this presentation is to summarize the key results of the NUPSIG project and introduce the end-user tool for predicting storm response on gravel beaches. The model is based on the numerical model XBeach, and different forcing scenarios (wave and tides), barrier configurations (dimensions) and sediment characteristics are easily uploaded for model simulations using a graphical user interface (GUI). The model can be used to determine the vulnerability of gravel barriers to storm events, but can also be

  19. Measurements to predict the time of target replacement of a helical tomotherapy.

    PubMed

    Kampfer, Severin; Schell, Stefan; Duma, Marciana N; Wilkens, Jan J; Kneschaurek, Peter

    2011-11-15

    Intensity-modulated radiation therapy (IMRT) requires more beam-on time than normal open field treatment. Consequently, the machines wear out and need more spare parts. A helical tomotherapy treatment unit needs a periodical tungsten target replacement, which is a time consuming event. To be able to predict the next replacement would be quite valuable. We observed unexpected variations towards the end of the target lifetime in the performed pretreatment measurements for patient plan verification. Thus, we retrospectively analyze the measurements of our quality assurance program. The time dependence of the quotient of two simultaneous dose measurements at different depths within a phantom for a fixed open field irradiation is evaluated. We also assess the time-dependent changes of an IMRT plan measurement and of a relative depth dose curve measurement. Additionally, we performed a Monte Carlo simulation with Geant4 to understand the physical reasons for the measured values. Our measurements show that the dose at a specified depth compared to the dose in shallower regions of the phantom declines towards the end of the target lifetime. This reproducible effect can be due to the lowering of the mean energy of the X-ray spectrum. These results are supported by the measurements of the IMRT plan, as well as the study of the relative depth dose curve. Furthermore, the simulation is consistent with these findings since it provides a possible explanation for the reduction of the mean energy for thinner targets. It could be due to the lowering of low energy photon self-absorption in a worn out and therefore thinner target. We state a threshold value for our measurement at which a target replacement should be initiated. Measurements to observe a change in the energy are good predictors of the need for a target replacement. However, since all results support the softening of the spectrum hypothesis, all depth-dependent setups are viable for analyzing the deterioration of the

  20. Predictive optimal control of sewer networks using CORAL tool: application to Riera Blanca catchment in Barcelona.

    PubMed

    Puig, V; Cembrano, G; Romera, J; Quevedo, J; Aznar, B; Ramón, G; Cabot, J

    2009-01-01

    This paper deals with the global control of the Riera Blanca catchment in the Barcelona sewer network using a predictive optimal control approach. The catchment has been modelled using a conceptual modelling approach based on decomposing the catchment into subcatchments and representing them as virtual tanks. This conceptual modelling approach allows real-time model calibration and control of the sewer network. The global control problem of the Riera Blanca catchment is solved using an optimal/predictive control algorithm. To implement the predictive optimal control of the Riera Blanca catchment, a software tool named CORAL is used. The on-line control is simulated by interfacing CORAL with a high-fidelity simulator of sewer networks (MOUSE). CORAL exchanges readings from the limnimeters and gate commands with MOUSE as if it were connected to the real SCADA system. Finally, the global control results obtained using the predictive optimal control are presented and compared against the results obtained using the current local control system. The results obtained using the global control are very satisfactory compared to those obtained using the local control.

  1. Evaluation of the efficacy of six nutritional screening tools to predict malnutrition in the elderly.

    PubMed

    Poulia, Kalliopi-Anna; Yannakoulia, Mary; Karageorgou, Dimitra; Gamaletsou, Maria; Panagiotakos, Demosthenes B; Sipsas, Nikolaos V; Zampelas, Antonis

    2012-06-01

    Malnutrition in the elderly is a multifactorial problem, more prevalent in hospitals and care homes. The absence of a gold standard for evaluating nutritional risk led us to evaluate the efficacy of six nutritional screening tools used in the elderly. Two hundred and forty-eight elderly patients (129 men, 119 women, aged 75.2 ± 8.5 years) were examined. Nutritional screening was performed on admission using the following tools: Nutritional Risk Index (NRI), Geriatric Nutritional Risk Index (GNRI), Subjective Global Assessment (SGA), Mini Nutritional Assessment - Screening Form (MNA-SF), Malnutrition Universal Screening Tool (MUST) and Nutritional Risk Screening 2002 (NRS 2002). A combined index for malnutrition was also calculated. Nutritional risk and/or malnutrition varied greatly, ranging from 47.2 to 97.6%, depending on the nutritional screening tool used. MUST was the most valid screening tool (validity coefficient = 0.766, CI 95%: 0.690-0.841), while SGA was in better agreement with the combined index (κ = 0.707, p = 0.000). Although NRS 2002 had the highest sensitivity (99.4%), it had the lowest specificity (6.1%) and positive predictive value (68.2%). MUST seems to be the most valid tool for evaluating the risk of malnutrition in the elderly upon admission to the hospital. NRS 2002 was found to overestimate nutritional risk in the elderly. Copyright © 2011 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  2. Open-source chemogenomic data-driven algorithms for predicting drug-target interactions.

    PubMed

    Hao, Ming; Bryant, Stephen H; Wang, Yanli

    2018-02-06

    While novel technologies such as high-throughput screening have advanced, together with significant investment by pharmaceutical companies during the past decades, the success rate for drug development has not improved accordingly, prompting researchers to look for new drug discovery strategies. Drug repositioning is a potential approach to solve this dilemma. However, experimental identification and validation of potential drug targets encoded by the human genome is both costly and time-consuming. Therefore, effective computational approaches have been proposed to facilitate drug repositioning, and these have proved to be successful in drug discovery. Doubtlessly, the availability of open-accessible data from basic chemical biology research and the success of human genome sequencing are crucial to developing effective in silico drug repositioning methods allowing the identification of potential targets for existing drugs. In this work, we review several chemogenomic data-driven computational algorithms with publicly accessible source code for predicting drug-target interactions (DTIs). We organize these algorithms by model properties and model evolutionary relationships. We re-implemented five representative algorithms in the R programming language and compared them by means of mean percentile ranking, a new recall-based evaluation metric in the DTI prediction research field. We anticipate that this review will be objective and helpful to researchers who would like to further improve existing algorithms or need to choose appropriate algorithms to infer potential DTIs in their projects. The source codes for DTI predictions are available at: https://github.com/minghao2016/chemogenomicAlg4DTIpred. Published by Oxford University Press 2018. This work is written by US Government employees and is in the public domain in the US.
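
    As a rough illustration of the evaluation metric named above, the following sketch assumes that mean percentile ranking averages, over all known drug-target pairs, the percentile position of the true target in each drug's ranked candidate list (lower is better). The score matrix and pairs are toy data, not output of the reviewed algorithms.

    ```python
    import numpy as np

    def mean_percentile_ranking(score_matrix, true_pairs):
        """Average percentile rank of the true target among all candidate
        targets of each drug (1/n_targets = perfect, 1.0 = worst)."""
        n_drugs, n_targets = score_matrix.shape
        percentiles = []
        for drug_idx, target_idx in true_pairs:
            scores = score_matrix[drug_idx]
            rank = 1 + np.sum(scores > scores[target_idx])  # 1 = best
            percentiles.append(rank / n_targets)
        return float(np.mean(percentiles))

    # toy example: 2 drugs x 4 candidate targets
    scores = np.array([[0.9, 0.1, 0.4, 0.2],
                       [0.3, 0.8, 0.6, 0.1]])
    print(mean_percentile_ranking(scores, [(0, 0), (1, 2)]))  # 0.375
    ```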

  3. Lung adenocarcinoma in the era of targeted therapies: histological classification, sample prioritization, and predictive biomarkers.

    PubMed

    Conde, E; Angulo, B; Izquierdo, E; Paz-Ares, L; Belda-Iniesta, C; Hidalgo, M; López-Ríos, F

    2013-07-01

    The arrival of targeted therapies has presented both a conceptual and a practical challenge in the treatment of patients with advanced non-small cell lung carcinomas (NSCLCs). The relationship of these treatments with specific histologies and predictive biomarkers has made the handling of biopsies the key factor for success. In this study, we highlight the balance between precise histological diagnosis and the practice of conducting multiple predictive assays simultaneously. This can only be achieved where there is a commitment to multidisciplinary working by the tumor board to ensure that a sensible protocol is applied. This proposal for prioritizing samples includes both recent technological advances and some of the latest discoveries in the molecular classification of NSCLCs.

  4. The artificial membrane insert system as predictive tool for formulation performance evaluation.

    PubMed

    Berben, Philippe; Brouwers, Joachim; Augustijns, Patrick

    2018-02-15

    In view of the increasing interest of pharmaceutical companies in cell- and tissue-free models to implement permeation into formulation testing, this study explored the capability of an artificial membrane insert system (AMI-system) as a predictive tool to evaluate the performance of absorption-enabling formulations. Firstly, to explore the usefulness of the AMI-system in supersaturation assessment, permeation was monitored after induction of different degrees of loviride supersaturation. Secondly, to explore the usefulness of the AMI-system in formulation evaluation, a two-stage dissolution test was performed prior to permeation assessment. Different case examples were selected based on the availability of in vivo (intraluminal and systemic) data: (i) a suspension of posaconazole (Noxafil®), (ii) a cyclodextrin-based formulation of itraconazole (Sporanox®), and (iii) a micronized (Lipanthyl®) and nanosized (Lipanthylnano®) formulation of fenofibrate. The obtained results demonstrate that the AMI-system is able to capture the impact of loviride supersaturation on permeation. Furthermore, the AMI-system correctly predicted the effects of (i) formulation pH on posaconazole absorption, (ii) dilution on cyclodextrin-based itraconazole absorption, and (iii) food intake on fenofibrate absorption. Based on the applied in vivo/in vitro approach, the AMI-system combined with simple dissolution testing appears to be a time- and cost-effective tool for the early-stage evaluation of absorption-enabling formulations. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Yu, Hua

    2018-05-01

    Finding new candidate diseases for known drugs provides an effective method for fast-speed and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatics tool for identifying potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.

  6. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    PubMed Central

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

    AIM It is difficult for optometrists and general practitioners to know which patients are at risk. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator that has been developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator risk assessment. RESULTS Data from the patient records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
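
    The reported sensitivity, specificity and positive predictive value are standard confusion-matrix quantities; a minimal sketch of their computation is shown below. The counts are invented, chosen only to roughly reproduce the reported percentages, and are not the study's 400-case data.

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Basic diagnostic-test metrics from a 2x2 confusion matrix."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, specificity, ppv, accuracy

    # hypothetical counts (not the ELGPS data)
    print(screening_metrics(tp=88, fp=3, fn=12, tn=9))
    ```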

  7. Controller Strategies for Automation Tool Use under Varying Levels of Trajectory Prediction Uncertainty

    NASA Technical Reports Server (NTRS)

    Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua

    2013-01-01

    A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools) with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though strategies were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Using the results and observations, this paper proposes dynamically customizing the display of information, including delay time, based on observed error to better accommodate different strategies and objectives.

  8. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Yu, Hua

    2018-04-01

    Finding new candidate diseases for known drugs provides an effective method for fast-speed and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatics tool for identifying potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.

  9. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-generation codec, AV1, within a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.

  10. Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics

    NASA Astrophysics Data System (ADS)

    Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R.; Allen, Rosalind J.

    2017-12-01

    Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance.
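
    The abstract describes the model's qualitative behaviour rather than its equations; the sketch below is a generic, heavily simplified antibiotic-ribosome binding model with a time-dependent dose, intended only to illustrate the kind of simulation involved. The structure, rate constants and pulse shape are assumptions, not the authors' published system.

    ```python
    from scipy.integrate import solve_ivp

    def pulse(t, height=10.0, start=1.0, stop=3.0):
        """External antibiotic concentration: a single rectangular pulse."""
        return height if start <= t <= stop else 0.0

    def rhs(t, y, p_in=1.0, p_out=0.5, k_on=2.0, k_off=0.1, r_tot=1.0):
        """Intracellular antibiotic a binds free ribosomes reversibly.
        'High-affinity' behaviour corresponds to large k_on / small k_off."""
        a, r_bound = y
        r_free = r_tot - r_bound
        da = p_in * pulse(t) - p_out * a - k_on * a * r_free + k_off * r_bound
        drb = k_on * a * r_free - k_off * r_bound
        return [da, drb]

    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
    print("fraction of ribosomes bound at end of simulation:", sol.y[1, -1])
    ```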

  11. Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics

    PubMed Central

    Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R; Allen, Rosalind J

    2017-01-01

    Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance. PMID:28714461

  12. GPS-Lipid: a robust tool for the prediction of multiple lipid modification sites.

    PubMed

    Xie, Yubin; Zheng, Yueyuan; Li, Hongyu; Luo, Xiaotong; He, Zhihao; Cao, Shuo; Shi, Yi; Zhao, Qi; Xue, Yu; Zuo, Zhixiang; Ren, Jian

    2016-06-16

    As one of the most common post-translational modifications in eukaryotic cells, lipid modification is an important mechanism for the regulation of a variety of aspects of protein function. Over the last decades, three classes of lipid modifications have been increasingly studied. The co-regulation of these different lipid modifications is beginning to be noticed. However, due to the lack of integrated bioinformatics resources, studies of co-regulatory mechanisms are still very limited. In this work, we developed a tool called GPS-Lipid for the prediction of four classes of lipid modifications by integrating the Particle Swarm Optimization with an aging leader and challengers (ALC-PSO) algorithm. GPS-Lipid proved to be superior to other similar tools. To facilitate research on lipid modification, we host a publicly available web server at http://lipid.biocuckoo.org with not only the implementation of GPS-Lipid, but also an integrative database and visualization tool. We performed a systematic analysis of the co-regulatory mechanism between different lipid modifications with GPS-Lipid. The results demonstrated that proximal dual-lipid modifications among palmitoylation, myristoylation and prenylation are a key mechanism for regulating various protein functions. In conclusion, GPS-Lipid is expected to serve as a useful resource for research on lipid modifications, especially on their co-regulation.

  13. BFH-OST, a new predictive screening tool for identifying osteoporosis in postmenopausal Han Chinese women

    PubMed Central

    Ma, Zhao; Yang, Yong; Lin, JiSheng; Zhang, XiaoDong; Meng, Qian; Wang, BingQiang; Fei, Qi

    2016-01-01

    Purpose To develop a simple new clinical screening tool to identify primary osteoporosis by dual-energy X-ray absorptiometry (DXA) in postmenopausal women and to compare its validity with the Osteoporosis Self-Assessment Tool for Asians (OSTA) in a Han Chinese population. Methods A cross-sectional study was conducted, enrolling 1,721 community-dwelling postmenopausal Han Chinese women. All the subjects completed a structured questionnaire and had their bone mineral density measured using DXA. Using logistic regression analysis, we assessed the ability of numerous potential risk factors examined in the questionnaire to identify women with osteoporosis. Based on this analysis, we built a new predictive model, the Beijing Friendship Hospital Osteoporosis Self-Assessment Tool (BFH-OST). Receiver operating characteristic curves were generated to compare the validity of the new model and OSTA in identifying postmenopausal women at increased risk of primary osteoporosis as defined according to the World Health Organization criteria. Results Of the 1,721 subjects assessed with DXA, 22.66% had osteoporosis and a further 47.36% had osteopenia. Of the items screened in the questionnaire, age, weight, height, body mass index, personal history of fracture after the age of 45 years, history of fragility fracture in either parent, current smoking, and consumption of three or more alcoholic drinks per day were all predictive of osteoporosis. However, age at menarche and menopause, years since menopause, and number of pregnancies and live births were irrelevant in this study. The logistic regression analysis and item reduction yielded a final tool (BFH-OST) based on age, body weight, height, and history of fracture after the age of 45 years. The BFH-OST index (cutoff = 9.1), which performed better than OSTA, had a sensitivity of 73.6% and a specificity of 72.7% for identifying osteoporosis, with an area under the receiver operating characteristic curve.
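
    A minimal sketch of the general workflow described above: logistic regression on a few clinical predictors followed by a receiver operating characteristic evaluation. The data are synthetic, with invented risk directions and coefficients, not the study's 1,721 DXA-assessed subjects.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Hypothetical predictors: age (years), weight (kg), height (cm),
    # prior fracture after age 45 (0/1); label = DXA-defined osteoporosis.
    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.normal(65, 8, n),     # age
        rng.normal(60, 10, n),    # weight
        rng.normal(158, 6, n),    # height
        rng.integers(0, 2, n),    # prior fracture
    ])
    # synthetic label loosely following the reported risk directions
    logit = 0.08 * (X[:, 0] - 65) - 0.06 * (X[:, 1] - 60) + 0.8 * X[:, 3] - 0.5
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"training AUC: {auc:.2f}")
    ```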

  14. PBPK Modeling - A Predictive, Eco-Friendly, Bio-Waiver Tool for Drug Research.

    PubMed

    De, Baishakhi; Bhandari, Koushik; Mukherjee, Ranjan; Katakam, Prakash; Adiki, Shanta K; Gundamaraju, Rohit; Mitra, Analava

    2017-01-01

    The world has witnessed growing complexities in the disease scenario, influenced by drastic changes in the host-pathogen-environment triadic relationship. Pharmaceutical R&D is in constant search of novel therapeutic entities to hasten the transition of drug molecules from lab bench to patient bedside. Extensive animal studies and human pharmacokinetics are still the "gold standard" in investigational new drug research and bio-equivalency studies. Apart from the cost, time and ethical issues of animal experimentation, burning questions arise relating to ecological disturbances, environmental hazards and biodiversity. Grave concerns arise when the adverse environmental outcomes of continued studies on one particular disease give rise to several other pathogenic agents, further complicating the overall scenario. Thus, pharmaceutical R&D faces the challenge of developing bio-waiver protocols. Lead optimization, drug candidate selection with favorable pharmacokinetics and pharmacodynamics, and toxicity assessment are vital steps in drug development. Simulation tools like Gastro Plus™, PK Sim® and SimCyp find application for this purpose. Advanced technologies like organ-on-a-chip or human-on-a-chip, where a 3D representation of human organs and systems can mimic the related processes and activities, thereby linking them to major features of human biology, can be successfully incorporated into the drug development toolbox. PBPK provides a state-of-the-art alternative to animal experimentation. PBPK models can successfully bypass bio-equivalency studies, predict bioavailability and drug interactions, and, when combined with in vitro-in vivo correlation, can be extrapolated to humans, thus serving as a bio-waiver. PBPK can serve as an eco-friendly bio-waiver predictive tool in drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. Identification of New Tools to Predict Surgical Performance of Novices using a Plastic Surgery Simulator.

    PubMed

    Kazan, Roy; Viezel-Mathieu, Alex; Cyr, Shantale; Hemmerling, Thomas M; Lin, Samuel J; Gilardino, Mirko S

    2018-04-09

    To identify new tools capable of predicting the surgical performance of novices on an augmentation mammoplasty simulator. The pace of technical skills acquisition varies between residents and may necessitate more time than that allotted by residency training before reaching competence. Identifying applicants with superior innate technical abilities might shorten learning curves and the time to reach competence. The objective of this study is to identify new tools that could predict the surgical performance of novices on a mammoplasty simulator. We recruited 14 medical students and recorded their performance in 2 skill games, Mikado and Perplexus Epic, and in 2 video games, Star Wars Racer (Sony PlayStation 3) and Super Monkey Ball 2 (Nintendo Wii). Then, each participant performed an augmentation mammoplasty procedure on a Mammoplasty Part-task Trainer, which allows simulation of the essential steps of the procedure. The average age of participants was 25.4 years. Correlation studies showed a significant association between the Perplexus Epic, Star Wars Racer and Super Monkey Ball scores and the modified OSATS score, with rs = 0.8491 (p < 0.001), rs = -0.6941 (p = 0.005), and rs = 0.7309 (p < 0.003), but not with the Mikado score, rs = -0.0255 (p = 0.9). Linear regressions were strongest for the Perplexus Epic and Super Monkey Ball scores, with coefficients of determination of 0.59 and 0.55, respectively. A combined score (Perplexus/Super-Monkey-Ball) was computed and showed a significant correlation with the modified OSATS score, with rs = 0.8107 (p < 0.001) and R2 = 0.75, respectively. This study identified a combination of skill games that correlated with better performance of novices on a surgical simulator. With refinement, such tools could serve to help screen plastic surgery applicants and identify those with higher surgical performance predictors. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
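
    The reported rs values are Spearman rank correlations; a minimal sketch of computing one such correlation is shown below with hypothetical paired scores, not the study's data.

    ```python
    from scipy.stats import spearmanr

    # Hypothetical paired observations: a skill-game score vs. the modified
    # OSATS score for a handful of participants (invented numbers).
    skill_game_scores = [12, 18, 25, 30, 22, 15, 28]
    osats_scores      = [10, 14, 19, 22, 18, 12, 17]

    rho, p_value = spearmanr(skill_game_scores, osats_scores)
    print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
    ```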

  16. Solubility prediction, solvate and cocrystal screening as tools for rational crystal engineering.

    PubMed

    Loschen, Christoph; Klamt, Andreas

    2015-06-01

    The fact that novel drug candidates are becoming increasingly insoluble is a major problem in current drug development. Computational tools may address this issue by screening for suitable solvents or by identifying potential novel cocrystal formers that increase bioavailability. In contrast to other more specialized methods, the fluid phase thermodynamics approach COSMO-RS (conductor-like screening model for real solvents) allows for a comprehensive treatment of drug solubility, solvate and cocrystal formation and many other thermodynamic properties in liquids. This article gives an overview of recent COSMO-RS developments that are of interest for drug development and contains several new application examples for solubility prediction and solvate/cocrystal screening. COSMO-RS was used for all property predictions. The basic concept of COSMO-RS consists of using the screening charge density as computed from first-principles calculations in combination with fast statistical thermodynamics to compute the chemical potential of a compound in solution. The fast and accurate assessment of drug solubility and the identification of suitable solvents, solvate or cocrystal formers is nowadays possible and may be used to complement modern drug development. Efficiency is increased by avoiding costly quantum-chemical computations through a database of previously computed molecular fragments. COSMO-RS theory can be applied to a range of physico-chemical properties which are of interest in rational crystal engineering. Most notably, in combination with experimental reference data, accurate quantitative solubility predictions in any solvent or solvent mixture are possible. Additionally, COSMO-RS can be extended to the prediction of cocrystal formation, which results in considerable predictive accuracy concerning coformer screening. In a recent variant, costly quantum chemical calculations are avoided, resulting in a significant speed-up and greater ease of use.

  17. MP3: a software tool for the prediction of pathogenic proteins in genomic and metagenomic data.

    PubMed

    Gupta, Ankit; Kapil, Rohan; Dhakan, Darshan B; Sharma, Vineet K

    2014-01-01

    The identification of virulent proteins in any de-novo sequenced genome is useful in estimating its pathogenic ability and understanding the mechanism of pathogenesis. Similarly, the identification of such proteins could be valuable in comparing the metagenome of healthy and diseased individuals and estimating the proportion of pathogenic species. However, the common challenge in both the above tasks is the identification of virulent proteins since a significant proportion of genomic and metagenomic proteins are novel and yet unannotated. The currently available tools which carry out the identification of virulent proteins provide limited accuracy and cannot be used on large datasets. Therefore, we have developed an MP3 standalone tool and web server for the prediction of pathogenic proteins in both genomic and metagenomic datasets. MP3 is developed using an integrated Support Vector Machine (SVM) and Hidden Markov Model (HMM) approach to carry out highly fast, sensitive and accurate prediction of pathogenic proteins. It displayed Sensitivity, Specificity, MCC and accuracy values of 92%, 100%, 0.92 and 96%, respectively, on a blind dataset constructed using complete proteins. On the two metagenomic blind datasets (Blind A: 51-100 amino acids and Blind B: 30-50 amino acids), it displayed Sensitivity, Specificity, MCC and accuracy values of 82.39%, 97.86%, 0.80 and 89.32% for Blind A and 71.60%, 94.48%, 0.67 and 81.86% for Blind B, respectively. In addition, the performance of MP3 was validated on selected bacterial genomic and real metagenomic datasets. To our knowledge, MP3 is the only program that specializes in fast and accurate identification of partial pathogenic proteins predicted from short (100-150 bp) metagenomic reads and also performs exceptionally well on complete protein sequences. MP3 is publicly available at http://metagenomics.iiserb.ac.in/mp3/index.php.

  18. MP3: A Software Tool for the Prediction of Pathogenic Proteins in Genomic and Metagenomic Data

    PubMed Central

    Gupta, Ankit; Kapil, Rohan; Dhakan, Darshan B.; Sharma, Vineet K.

    2014-01-01

    The identification of virulent proteins in any de-novo sequenced genome is useful in estimating its pathogenic ability and understanding the mechanism of pathogenesis. Similarly, the identification of such proteins could be valuable in comparing the metagenome of healthy and diseased individuals and estimating the proportion of pathogenic species. However, the common challenge in both the above tasks is the identification of virulent proteins since a significant proportion of genomic and metagenomic proteins are novel and yet unannotated. The currently available tools which carry out the identification of virulent proteins provide limited accuracy and cannot be used on large datasets. Therefore, we have developed an MP3 standalone tool and web server for the prediction of pathogenic proteins in both genomic and metagenomic datasets. MP3 is developed using an integrated Support Vector Machine (SVM) and Hidden Markov Model (HMM) approach to carry out highly fast, sensitive and accurate prediction of pathogenic proteins. It displayed Sensitivity, Specificity, MCC and accuracy values of 92%, 100%, 0.92 and 96%, respectively, on a blind dataset constructed using complete proteins. On the two metagenomic blind datasets (Blind A: 51–100 amino acids and Blind B: 30–50 amino acids), it displayed Sensitivity, Specificity, MCC and accuracy values of 82.39%, 97.86%, 0.80 and 89.32% for Blind A and 71.60%, 94.48%, 0.67 and 81.86% for Blind B, respectively. In addition, the performance of MP3 was validated on selected bacterial genomic and real metagenomic datasets. To our knowledge, MP3 is the only program that specializes in fast and accurate identification of partial pathogenic proteins predicted from short (100–150 bp) metagenomic reads and also performs exceptionally well on complete protein sequences. MP3 is publicly available at http://metagenomics.iiserb.ac.in/mp3/index.php. PMID:24736651

  19. Screening Tool for Early Postnatal Prediction of Retinopathy of Prematurity in Preterm Newborns (STEP-ROP).

    PubMed

    Ricard, Caroline A; Dammann, Christiane E L; Dammann, Olaf

    2017-01-01

    Retinopathy of prematurity (ROP) is a disorder of the preterm newborn characterized by neurovascular disruption in the immature retina that may cause visual impairment and blindness. Our aim was to develop a clinical screening tool for early postnatal prediction of ROP in preterm newborns based on risk information available within the first 48 h of postnatal life. Using data submitted to the Vermont Oxford Network (VON) between 1995 and 2015, we created logistic regression models based on infants born at <28 completed weeks gestational age. We developed a model with 60% of the data and identified birth weight, gestational age, respiratory distress syndrome, non-Hispanic ethnicity, and multiple gestation as predictors of ROP. We tested the model in the remaining 40%, performed tenfold cross-validation, and tested the score in ELGAN study data. Of the 1,052 newborns in the VON database, 627 had a recorded ROP status. Forty percent had no ROP, 40% had mild ROP (stages 1 and 2), and 20% had severe ROP (stages 3-5). We created a weighted score to predict any ROP based on the multivariable regression model. A cutoff score of 5 had the best sensitivity (95%, 95% CI 93-97), while maintaining a strong positive predictive value (63%, 95% CI 57-68). When applied to the ELGAN data, sensitivity was lower (72%, 95% CI 69-75), but PPV was higher (80%, 95% CI 77-83). STEP-ROP is a promising screening tool: it is easy to calculate, does not rely on extensive postnatal data collection, and can be calculated early after birth. Early ROP screening may help physicians limit patient exposure to additional risk factors, and may be useful for risk stratification in clinical trials aimed at reducing ROP. © 2017 S. Karger AG, Basel.
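
    A sketch of an additive weighted risk score with a cutoff, in the spirit of the tool described above. The per-predictor weights, the thresholds on the predictors and the example patient are invented for illustration; only the idea of a cutoff of 5 is taken from the abstract.

    ```python
    def rop_risk_score(birth_weight_g, gestational_age_wk, rds,
                       non_hispanic, multiple_gestation):
        """Hypothetical weighted sum over the five predictors named in the
        abstract; the weights and bins below are NOT the published ones."""
        score = 0
        score += 3 if birth_weight_g < 750 else (1 if birth_weight_g < 1000 else 0)
        score += 3 if gestational_age_wk < 25 else (1 if gestational_age_wk < 27 else 0)
        score += 2 if rds else 0
        score += 1 if non_hispanic else 0
        score += 1 if multiple_gestation else 0
        return score

    CUTOFF = 5  # cutoff value reported in the abstract (for its own weights)
    patient = dict(birth_weight_g=700, gestational_age_wk=25, rds=True,
                   non_hispanic=True, multiple_gestation=False)
    score = rop_risk_score(**patient)
    print(score, "-> screen positive" if score >= CUTOFF else "-> screen negative")
    ```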

  20. Attitudes to mesalamine questionnaire: a novel tool to predict mesalamine nonadherence in patients with IBD.

    PubMed

    Moss, Alan C; Lillis, Yvonne; Edwards George, Jessica B; Choudhry, Niteesh K; Berg, Anders H; Cheifetz, Adam S; Horowitz, Gary; Leffler, Dan A

    2014-12-01

    Poor adherence to mesalamine is common and driven by a combination of lifestyle and behavioral factors, as well as health beliefs. We sought to develop a valid tool to identify barriers to patient adherence and predict those at risk for future nonadherence. A 10-item survey was developed from patient-reported barriers to adherence. The survey was administered to 106 patients with ulcerative colitis who were prescribed mesalamine, and correlated with prospectively collected 12-month pharmacy refills (medication possession ratio (MPR)), urine levels of salicylates, and self-reported adherence (Morisky Medication Adherence Scale (MMAS)-8). From the initial 10-item survey, 8 items correlated highly with the MMAS-8 score at enrollment. Computer-generated randomization produced a derivation cohort of 60 subjects and a validation cohort of 46 subjects to assess the survey items in their ability to predict future adherence. Two items from the patient survey correlated with objective measures of long-term adherence: their belief in the importance of maintenance mesalamine even when in remission and their concerns about side effects. The additive score based on these two items correlated with 12-month MPR in both the derivation and validation cohorts (P<0.05). Scores on these two items were associated with a higher risk of being nonadherent over the subsequent 12 months (relative risk (RR) =2.2, 95% confidence interval=1.5-3.5, P=0.04). The area under the curve for the performance of this 2-item tool was greater than that of the 10-item MMAS-8 score for predicting MPR scores over 12 months (area under the curve 0.7 vs. 0.5). Patients' beliefs about the need for maintenance mesalamine and their concerns about side effects influence their adherence to mesalamine over time. These concerns could easily be raised in practice to identify patients at risk of nonadherence (Clinical Trial number NCT01349504).
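
    The medication possession ratio (MPR) used above as the objective adherence measure is simply days of medication supplied divided by days in the observation window; a minimal sketch with hypothetical numbers follows.

    ```python
    from datetime import date

    def medication_possession_ratio(days_supplied, window_start, window_end):
        """Days of supply over the observation window, capped at 1.0."""
        window_days = (window_end - window_start).days
        return min(days_supplied / window_days, 1.0)

    # hypothetical 12-month window and refill total (not study data)
    mpr = medication_possession_ratio(
        days_supplied=270,
        window_start=date(2013, 1, 1),
        window_end=date(2014, 1, 1),
    )
    print(f"12-month MPR: {mpr:.2f}")  # 0.74; <0.8 is commonly classed as nonadherent
    ```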

  1. Experimental and Mathematical Modeling for Prediction of Tool Wear on the Machining of Aluminium 6061 Alloy by High Speed Steel Tools

    NASA Astrophysics Data System (ADS)

    Okokpujie, Imhade Princess; Ikumapayi, Omolayo M.; Okonkwo, Ugochukwu C.; Salawu, Enesi Y.; Afolalu, Sunday A.; Dirisu, Joseph O.; Nwoke, Obinna N.; Ajayi, Oluseyi O.

    2017-12-01

    In recent machining operations, tool life is one of the most demanding concerns in the production process, especially in the automotive industry. The aim of this paper is to study tool wear of HSS tools in end milling of aluminium 6061 alloy. The experiments were carried out to investigate tool wear as a function of the machining parameters and to develop a mathematical model using response surface methodology. The machining parameters selected for the experiment are spindle speed (N), feed rate (f), axial depth of cut (a) and radial depth of cut (r). The experiment was designed using central composite design (CCD), in which 31 samples were run on a SIEG 3/10/0010 CNC end milling machine. After each experiment the cutting tool was measured using a scanning electron microscope (SEM). The optimum machining parameter combination of a spindle speed of 2500 rpm, a feed rate of 200 mm/min, an axial depth of cut of 20 mm, and a radial depth of cut of 1.0 mm was found to achieve the minimum tool wear of 0.213 mm. The mathematical model developed predicted the tool wear with 99.7% accuracy, which is within the acceptable range for tool wear prediction.
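
    A sketch of a second-order response-surface fit of tool wear against the four machining parameters. The 31 data points below are randomly generated placeholders rather than the paper's central composite design runs, and the printed prediction is illustrative only.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X = np.column_stack([
        rng.uniform(1000, 3000, 31),  # N, spindle speed (rpm)
        rng.uniform(100, 300, 31),    # f, feed rate (mm/min)
        rng.uniform(10, 30, 31),      # a, axial depth of cut (mm)
        rng.uniform(0.5, 1.5, 31),    # r, radial depth of cut (mm)
    ])
    # invented wear response with a little noise (mm)
    wear = 0.5 - 1e-4 * X[:, 0] + 1e-3 * X[:, 1] + rng.normal(0, 0.01, 31)

    # second-order (quadratic) response surface, as in RSM
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, wear)
    print("predicted wear at the reported optimum settings:",
          model.predict([[2500, 200, 20, 1.0]]))
    ```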

  2. 3D flexible alignment using 2D maximum common substructure: dependence of prediction accuracy on target-reference chemical similarity.

    PubMed

    Kawabata, Takeshi; Nakamura, Haruki

    2014-07-28

    A protein-bound conformation of a target molecule can be predicted by aligning the target molecule on a reference molecule obtained from the 3D structure of a compound-protein complex. This strategy is called "similarity-based docking". For this purpose, we developed the flexible alignment program fkcombu, which aligns the target molecule based on atomic correspondences with the reference molecule. The correspondences are obtained from the maximum common substructure (MCS) of the 2D chemical structures, using our program kcombu. The prediction performance was evaluated using many target-reference pairs of superimposed ligand 3D structures on the same protein in the PDB, with different ranges of chemical similarity. The details of the atomic correspondence largely affected the prediction success. We found that the topologically constrained disconnected MCS (TD-MCS) with the simple element-based atomic classification provides the best prediction. Including a potential energy term for clashes with the receptor protein improved the performance. We also found that the RMSD between the predicted and correct target conformations significantly correlates with the chemical similarity between the target and reference molecules. Generally speaking, if the reference and target compounds have more than 70% chemical similarity, then the average RMSD of the 3D conformations is <2.0 Å. We compared the performance with a rigid-body molecular alignment program based on volume-overlap scores (ShaEP). Our MCS-based flexible alignment program performed better than the rigid-body alignment program, especially when the target and reference molecules were sufficiently similar.
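
    The RMSD reported between predicted and correct conformations is the usual root-mean-square deviation over matched atoms; a minimal sketch (without the alignment step itself) follows, using toy coordinates.

    ```python
    import numpy as np

    def rmsd(coords_a, coords_b):
        """Root-mean-square deviation between two matched sets of 3D coordinates."""
        diff = np.asarray(coords_a) - np.asarray(coords_b)
        return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

    # two toy 3-atom conformations (angstroms), already superimposed
    pred = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.1, 0.0]]
    ref  = [[0.1, 0.0, 0.0], [1.4, 0.1, 0.0], [2.0, 1.3, 0.0]]
    print(f"RMSD = {rmsd(pred, ref):.2f} A")
    ```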

  3. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique.

    PubMed

    Hao, Ming; Wang, Yanli; Bryant, Stephen H

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results, with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. Published by Elsevier B.V.
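
    A minimal sketch of a regularized least squares step on a fused similarity kernel, illustrating the general family of model RLS-KF belongs to. The kernels, the simple linear fusion and the toy interaction matrix are assumptions for illustration, not the paper's nonlinear kernel fusion scheme.

    ```python
    import numpy as np

    def rls_predict(K, Y, lam=1.0):
        """Closed-form regularized least squares: F = K (K + lam I)^-1 Y,
        scoring all drug-target pairs from a drug-drug kernel K."""
        n = K.shape[0]
        return K @ np.linalg.solve(K + lam * np.eye(n), Y)

    rng = np.random.default_rng(0)
    n_drugs, n_targets = 6, 4
    K_chem = np.corrcoef(rng.random((n_drugs, 10)))   # chemical-similarity kernel
    K_net  = np.corrcoef(rng.random((n_drugs, 10)))   # interaction-profile kernel
    K = 0.5 * K_chem + 0.5 * K_net                    # naive linear fusion
    Y = (rng.random((n_drugs, n_targets)) > 0.7).astype(float)  # known interactions
    print(rls_predict(K, Y).round(2))
    ```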

  4. Accurate and Reliable Prediction of the Binding Affinities of Macrocycles to Their Protein Targets.

    PubMed

    Yu, Haoyu S; Deng, Yuqing; Wu, Yujie; Sindhikara, Dan; Rask, Amy R; Kimura, Takayuki; Abel, Robert; Wang, Lingle

    2017-12-12

    Macrocycles have been emerging as a very important drug class in the past few decades, largely due to their expanded chemical diversity benefiting from advances in synthetic methods. Macrocyclization has been recognized as an effective way to restrict the conformational space of acyclic small molecule inhibitors with the hope of improving potency, selectivity, and metabolic stability. Because of their relatively larger size as compared to typical small molecule drugs and the complexity of their structures, efficient sampling of the accessible macrocycle conformational space and accurate prediction of their binding affinities to their target protein receptors pose a great challenge of central importance in computational macrocycle drug design. In this article, we present a novel method for relative binding free energy calculations between macrocycles with different ring sizes and between the macrocycles and their corresponding acyclic counterparts. We have applied the method to seven pharmaceutically interesting data sets taken from recent drug discovery projects, including 33 macrocyclic ligands covering a diverse chemical space. The predicted binding free energies are in good agreement with experimental data, with an overall root-mean-square error (RMSE) of 0.94 kcal/mol. This is, to our knowledge, the first time that the free energy of the macrocyclization of linear molecules has been directly calculated with rigorous physics-based free energy calculation methods, and we anticipate the outstanding accuracy demonstrated here across a broad range of target classes may have significant implications for macrocycle drug discovery.

  5. Circulating and disseminated tumor cells: diagnostic tools and therapeutic targets in motion

    PubMed Central

    Lin, Peter P.; Gires, Olivier

    2017-01-01

    Enumeration of circulating tumor cells (CTCs) in peripheral blood with the gold standard CellSearch™ has proven prognostic value for tumor recurrence and progression of metastatic disease. Therefore, the further molecular characterization of isolated CTCs might have clinical relevance as liquid biopsy for therapeutic decision-making and to monitor disease progression. The direct analysis of systemic cancer appears particularly important in view of the known disparity in expression of therapeutic targets as well as epithelial-to-mesenchymal transition (EMT)-based heterogeneity between primary and systemic tumor cells, which all substantially complicate monitoring and therapeutic targeting at present. Since CTCs are the potential precursor cells of metastasis, their in-depth molecular profiling should also provide a useful resource for target discovery. The present review will discuss the use of systemically spread cancer cells as liquid biopsy and focus on potential target antigens. PMID:27683128

  6. [Mathematical modeling: an essential tool for the study of therapeutic targeting in solid tumors].

    PubMed

    Saidak, Zuzana; Giacobbi, Anne-Sophie; Morisse, Mony Chenda; Mammeri, Youcef; Galmiche, Antoine

    2017-12-01

    Recent progress in biology has made the study of the medical treatment of cancer more effective, but it has also revealed the large complexity of carcinogenesis and cell signaling. For many types of cancer, several therapeutic targets are known and in some cases drugs against these targets exist. Unfortunately, the target proteins often work in networks, resulting in functional adaptation and the development of resilience/resistance to medical treatment. The use of mathematical modeling makes it possible to carry out system-level analyses for improved study of therapeutic targeting in solid tumours. We present the main types of mathematical models used in cancer research and we provide examples illustrating the relevance of these approaches in molecular oncobiology. © 2017 médecine/sciences – Inserm.

  7. Experimental new automatic tools for robotic stereotactic neurosurgery: towards "no hands" procedure of leads implantation into a brain target.

    PubMed

    Mazzone, P; Arena, P; Cantelli, L; Spampinato, G; Sposato, S; Cozzolino, S; Demarinis, P; Muscato, G

    2016-07-01

    The use of robotics in neurosurgery and, particularly, in stereotactic neurosurgery is becoming more and more widely adopted because of the great advantages that it offers. Robotic manipulators readily allow great precision, reliability, and rapidity to be achieved in the positioning of surgical instruments or devices in the brain. The aim of this work was to experimentally verify a fully automatic "no hands" surgical procedure. The integration of neuroimaging data for planning the surgery, followed by the application of new specific surgical tools, permitted the realization of a fully automated robotic implantation of leads in brain targets. An anthropomorphic commercial manipulator was utilized. In a preliminary phase, software to plan the surgery was developed, and the surgical tools were tested first during a simulation and then on a skull mock-up. In this way, several tools were developed and tested, and the basis for an innovative surgical procedure arose. The final experimentation was carried out on anesthetized "large white" pigs. The determination of stereotactic parameters for the correct planning to reach the intended target was performed with the same technique currently employed in human stereotactic neurosurgery, and the robotic system proved to be reliable and precise in reaching the target. The results of this work strengthen the possibility that a neurosurgeon may be substituted by a machine, and may represent the beginning of a new approach in current clinical practice. Moreover, this possibility may have a great impact not only on stereotactic functional procedures but also on the entire domain of neurosurgery.

  8. Human Splicing Finder: an online bioinformatics tool to predict splicing signals.

    PubMed

    Desmet, François-Olivier; Hamroun, Dalil; Lalande, Marine; Collod-Béroud, Gwenaëlle; Claustres, Mireille; Béroud, Christophe

    2009-05-01

    Thousands of mutations are identified yearly. Although many directly affect protein expression, an increasing proportion of mutations is now believed to influence mRNA splicing. They mostly affect existing splice sites, but synonymous, non-synonymous or nonsense mutations can also create or disrupt splice sites or auxiliary cis-splicing sequences. To facilitate the analysis of the different mutations, we designed Human Splicing Finder (HSF), a tool to predict the effects of mutations on splicing signals or to identify splicing motifs in any human sequence. It contains all available matrices for auxiliary sequence prediction as well as new ones for binding sites of the 9G8 and Tra2-beta Serine-Arginine proteins and the hnRNP A1 ribonucleoprotein. We also developed new Position Weight Matrices to assess the strength of 5' and 3' splice sites and branch points. We evaluated HSF efficiency using a set of 83 intronic and 35 exonic mutations known to result in splicing defects. We showed that the mutation effect was correctly predicted in almost all cases. HSF could thus represent a valuable resource for research, diagnostic and therapeutic (e.g. therapeutic exon skipping) purposes as well as for global studies, such as the GEN2PHEN European Project or the Human Variome Project.
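
    A minimal sketch of position-weight-matrix scoring of a candidate splice site, the kind of calculation underlying the matrices mentioned above. The 4x9 matrix, pseudocount and uniform background are invented for illustration and are not HSF's published matrices or normalisation.

    ```python
    import numpy as np

    BASES = "ACGT"
    pwm = np.array([  # rows A, C, G, T; columns = 9 positions around a 5' splice site
        [0.3, 0.6, 0.1, 0.0, 0.0, 0.5, 0.7, 0.1, 0.2],
        [0.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.2],
        [0.2, 0.2, 0.8, 1.0, 0.0, 0.4, 0.1, 0.7, 0.3],
        [0.2, 0.1, 0.1, 0.0, 1.0, 0.1, 0.1, 0.1, 0.3],
    ])

    def score_site(seq, pwm, pseudo=1e-3):
        """Log-likelihood score of a 9-mer against the PWM (uniform background)."""
        cols = [pwm[BASES.index(b), i] + pseudo for i, b in enumerate(seq)]
        return float(np.sum(np.log2(np.array(cols) / 0.25)))

    print(score_site("CAGGTAAGT", pwm))  # a canonical-looking 5' splice site
    ```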

  9. Human Splicing Finder: an online bioinformatics tool to predict splicing signals

    PubMed Central

    Desmet, François-Olivier; Hamroun, Dalil; Lalande, Marine; Collod-Béroud, Gwenaëlle; Claustres, Mireille; Béroud, Christophe

    2009-01-01

    Thousands of mutations are identified yearly. Although many directly affect protein expression, an increasing proportion of mutations is now believed to influence mRNA splicing. They mostly affect existing splice sites, but synonymous, non-synonymous or nonsense mutations can also create or disrupt splice sites or auxiliary cis-splicing sequences. To facilitate the analysis of the different mutations, we designed Human Splicing Finder (HSF), a tool to predict the effects of mutations on splicing signals or to identify splicing motifs in any human sequence. It contains all available matrices for auxiliary sequence prediction as well as new ones for binding sites of the 9G8 and Tra2-β Serine-Arginine proteins and the hnRNP A1 ribonucleoprotein. We also developed new Position Weight Matrices to assess the strength of 5′ and 3′ splice sites and branch points. We evaluated HSF efficiency using a set of 83 intronic and 35 exonic mutations known to result in splicing defects. We showed that the mutation effect was correctly predicted in almost all cases. HSF could thus represent a valuable resource for research, diagnostic and therapeutic (e.g. therapeutic exon skipping) purposes as well as for global studies, such as the GEN2PHEN European Project or the Human Variome Project. PMID:19339519

  10. CaFE: a tool for binding affinity prediction using end-point free energy methods.

    PubMed

    Liu, Hui; Hou, Tingjun

    2016-07-15

    Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among the methods for binding affinity prediction, end-point approaches such as MM/PBSA and LIE have been widely used because they can achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin. It is a VMD plugin written in Tcl and its usage is platform-independent. tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
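
    A minimal sketch of the end-point idea behind MM/PBSA: average per-frame energies of the complex, receptor and ligand and take the difference. The per-frame energies below are invented, and the snippet is not CaFE's actual Tcl interface.

    ```python
    import numpy as np

    # dG_bind ~ <G_complex> - <G_receptor> - <G_ligand>, where each G sums
    # molecular-mechanics and solvation terms per trajectory frame (kcal/mol).
    g_complex  = np.array([-1250.3, -1248.9, -1251.7, -1249.5])
    g_receptor = np.array([-1100.2, -1099.0, -1101.5, -1100.8])
    g_ligand   = np.array([-120.4, -119.8, -120.9, -120.1])

    dg_bind = g_complex.mean() - g_receptor.mean() - g_ligand.mean()
    print(f"estimated dG_bind = {dg_bind:.1f} kcal/mol")
    ```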

  11. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    PubMed

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.

  12. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science

    PubMed Central

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883

  13. A predictive software tool for optimal timing in contrast enhanced carotid MR angiography

    NASA Astrophysics Data System (ADS)

    Moghaddam, Abbas N.; Balawi, Tariq; Habibi, Reza; Panknin, Christoph; Laub, Gerhard; Ruehm, Stefan; Finn, J. Paul

    2008-03-01

    A clear understanding of the first-pass dynamics of contrast agents in the vascular system is crucial for synchronizing data acquisition in 3D MR angiography (MRA) with the arrival of the contrast bolus in the vessels of interest. We implemented a computational model to simulate contrast dynamics in the vessels using the theory of linear time-invariant systems. The algorithm calculates a patient-specific impulse response for the contrast concentration from time-resolved images following a small test bolus injection. This is performed for a specific region of interest and through deconvolution of the intensity curve using the long division method. Since high spatial resolution 3D MRA is not time-resolved, the method was validated on time-resolved arterial contrast enhancement in multi-slice CT angiography. For 20 patients, the timing of the contrast enhancement of the main bolus was predicted by our algorithm from the response to the test bolus, and for each case the predicted time of maximum intensity was compared to the corresponding time in the actual scan, which showed acceptable agreement. Furthermore, as a qualitative validation, the algorithm's predictions of the timing of carotid MRA in 20 patients with high quality MRA were correlated with the actual timing of those studies. We conclude that the above algorithm can be used as a practical clinical tool to eliminate guesswork and to replace empiric formulae with a priori computation of patient-specific timing of data acquisition for MR angiography.
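
    A minimal sketch of the linear time-invariant workflow described above: deconvolve the test-bolus enhancement curve to estimate a patient-specific impulse response, then convolve it with the main-bolus injection profile to predict the time of peak enhancement. The curves are synthetic, and scipy's deconvolve is used here in place of the authors' long-division implementation.

    ```python
    import numpy as np
    from scipy.signal import deconvolve

    dt = 1.0                                         # seconds per time-resolved frame
    test_input = np.array([1.0, 1.0])                # short (2 s) test-bolus injection
    impulse_true = np.array([0, 0.1, 0.4, 0.8, 0.5, 0.2, 0.1, 0.05])
    test_curve = np.convolve(test_input, impulse_true)   # "measured" ROI enhancement

    impulse_est, _ = deconvolve(test_curve, test_input)  # patient impulse response

    main_input = np.ones(8)                          # longer (8 s) main-bolus injection
    predicted = np.convolve(main_input, impulse_est)
    print("predicted peak enhancement at t =", np.argmax(predicted) * dt, "s")
    ```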

  14. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    PubMed

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to an increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the midwest corn belt of the United States, ranging from 20 to 386 km², and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations over 1-d and 60-d durations, predictions that are critical in threatened and endangered species risk assessments when evaluating potential acute and chronic pesticide exposure of aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization approach can still provide useful exposure estimates.

  15. [Anti-tumor target prediction and activity verification of Ganoderma lucidum triterpenoids].

    PubMed

    Du, Guo-Hua; Wang, Hong-Xu; Yan, Zheng; Liu, Li-Ying; Chen, Ruo-Yun

    2017-02-01

    It has been reported that Ganoderma lucidum triterpenoids have anti-tumor activity. However, the anti-tumor target is still unclear. The present study was designed to investigate the anti-tumor activity of G. lucidum triterpenoids on different tumor cells, and to predict their potential targets by virtual screening. In this experiment, molecular docking was used to simulate the interactions of 26 triterpenoids isolated from G. lucidum with 11 target proteins using the LibDock module of Discovery Studio 2016 software, and the anti-tumor targets of the triterpenoids were then predicted. In addition, the in vitro anti-tumor effects of the triterpenoids were evaluated by MTT assay, determining the inhibition of proliferation in 5 tumor cell lines. In the docking results, compounds with more than five poses and LibDock scores higher than 100 were considered potentially active. Eight triterpenoids showed good docking and thus possible anti-tumor activity, five of which had multiple targets. MTT experiments demonstrated that ganoderic acid Y had a certain inhibitory activity on the lung cancer cell line H460, with an IC₅₀ of 22.4 μmol·L⁻¹, followed by 7-oxo-ganoderic acid Z2, with an IC₅₀ of 43.1 μmol·L⁻¹. However, the other triterpenoids had no anti-tumor activity in the tumor cell lines tested. Taken together, the molecular docking approach established here can be used for preliminary screening of the anti-tumor activity of G. lucidum ingredients. Through this screening method, combined with the MTT assay, we can conclude that ganoderic acid Y had anti-tumor activity, especially against lung cancer, and that 7-oxo-ganoderic acid Z2, as well as ganoderon B, had anti-tumor activity to a certain extent. These findings can provide a basis for the development of anti-tumor drugs. However, the anti-tumor mechanisms need to be further studied. Copyright© by the Chinese Pharmaceutical Association.
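
    The activity criterion described above (more than five poses and a LibDock score above 100) amounts to a simple filter over the docking output. A minimal sketch of such a filter is shown below; the results table is invented for illustration and does not reproduce the study's docking data or targets.

      import pandas as pd

      # Hypothetical docking summary: one row per triterpenoid-target pair.
      docking = pd.DataFrame({
          "compound": ["ganoderic acid Y", "ganoderic acid Y",
                       "7-oxo-ganoderic acid Z2", "lucidenic acid A"],
          "target":   ["target_1", "target_2", "target_1", "target_1"],
          "poses":    [9, 3, 7, 12],
          "libdock_score": [118.2, 95.4, 104.7, 88.1],
      })

      # Keep pairs that satisfy the screening criteria used in the study:
      # more than five poses and a LibDock score higher than 100.
      hits = docking[(docking["poses"] > 5) & (docking["libdock_score"] > 100)]

      # Compounds with hits against several targets would be flagged as multi-target.
      multi_target = hits.groupby("compound")["target"].nunique()
      print(hits)
      print(multi_target[multi_target > 1])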

  16. An overview of bioinformatics tools for epitope prediction: implications on vaccine development.

    PubMed

    Soria-Guerra, Ruth E; Nieto-Gomez, Ricardo; Govea-Alonso, Dania O; Rosales-Mendoza, Sergio

    2015-02-01

    Exploitation of recombinant DNA and sequencing technologies has led to a new concept in vaccination in which isolated epitopes, capable of stimulating a specific immune response, have been identified and used to achieve advanced vaccine formulations, replacing those constituted by whole-pathogen formulations. In this context, bioinformatics approaches play a critical role in analyzing multiple genomes to select protective epitopes in silico. It is conceived that cocktails of defined epitopes or chimeric protein arrangements, including the target epitopes, may provide a rational design capable of eliciting convenient humoral or cellular immune responses. This review presents a comprehensive compilation of the most advantageous online immunological software and searchable databases, in order to facilitate the design and development of vaccines. An outlook on how these tools are supporting vaccine development is presented. HIV and influenza have been taken as examples of promising developments in vaccination against hypervariable viruses. Perspectives in this field are also envisioned. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Linking removal targets to the ecological effects of invaders: a predictive model and field test.

    PubMed

    Green, Stephanie J; Dulvy, Nicholas K; Brooks, Annabelle M L; Akins, John L; Cooper, Andrew B; Miller, Skylar; Côté, Isabelle M

    Species invasions have a range of negative effects on recipient ecosystems, and many occur at a scale and magnitude that preclude complete eradication. When complete extirpation is unlikely with available management resources, an effective strategy may be to suppress invasive populations below levels predicted to cause undesirable ecological change. We illustrated this approach by developing and testing targets for the control of invasive Indo-Pacific lionfish (Pterois volitans and P. miles) on Western Atlantic coral reefs. We first developed a size-structured simulation model of predation by lionfish on native fish communities, which we used to predict threshold densities of lionfish beyond which native fish biomass should decline. We then tested our predictions by experimentally manipulating lionfish densities above or below reef-specific thresholds, and monitoring the consequences for native fish populations on 24 Bahamian patch reefs over 18 months. We found that reducing lionfish below predicted threshold densities effectively protected native fish community biomass from predation-induced declines. Reductions in density of 25–92%, depending on the reef, were required to suppress lionfish below levels predicted to overconsume prey. On reefs where lionfish were kept below threshold densities, native prey fish biomass increased by 50–70%. Gains in small (<6 cm) size classes of native fishes translated into lagged increases in larger size classes over time. The biomass of larger individuals (>15 cm total length), including ecologically important grazers and economically important fisheries species, had increased by 10–65% by the end of the experiment. Crucially, similar gains in prey fish biomass were realized on reefs subjected to partial and full removal of lionfish, but partial removals took 30% less time to implement. By contrast, the biomass of small native fishes declined by >50% on all reefs with lionfish densities exceeding reef-specific thresholds

  18. Prediction of TF target sites based on atomistic models of protein-DNA complexes

    PubMed Central

    Angarica, Vladimir Espinosa; Pérez, Abel González; Vasconcelos, Ana T; Collado-Vides, Julio; Contreras-Moreira, Bruno

    2008-01-01

    Background The specific recognition of genomic cis-regulatory elements by transcription factors (TFs) plays an essential role in the regulation of coordinated gene expression. Studying the mechanisms determining binding specificity in protein-DNA interactions is thus an important goal. Most current approaches for modeling TF specific recognition rely on the knowledge of large sets of cognate target sites and consider only the information contained in their primary sequence. Results Here we describe a structure-based methodology for predicting sequence motifs starting from the coordinates of a TF-DNA complex. Our algorithm combines information regarding the direct and indirect readout of DNA into an atomistic statistical model, which is used to estimate the interaction potential. We first measure the ability of our method to correctly estimate the binding specificities of eight prokaryotic and eukaryotic TFs that belong to different structural superfamilies. Secondly, the method is applied to two homology models, finding that sampling of interface side-chain rotamers remarkably improves the results. Thirdly, the algorithm is compared with a reference structural method based on contact counts, obtaining comparable predictions for the experimental complexes and more accurate sequence motifs for the homology models. Conclusion Our results demonstrate that atomic-detail structural information can be feasibly used to predict TF binding sites. The computational method presented here is universal and might be applied to other systems involving protein-DNA recognition. PMID:18922190

  19. Target-Independent Prediction of Drug Synergies Using Only Drug Lipophilicity

    PubMed Central

    2015-01-01

    Physicochemical properties of compounds have been instrumental in selecting lead compounds with increased drug-likeness. However, the relationship between physicochemical properties of constituent drugs and the tendency to exhibit drug interaction has not been systematically studied. We assembled physicochemical descriptors for a set of antifungal compounds (“drugs”) previously examined for interaction. Analyzing the relationship between molecular weight, lipophilicity, H-bond donor, and H-bond acceptor values for drugs and their propensity to show pairwise antifungal drug synergy, we found that combinations of two lipophilic drugs had a greater tendency to show drug synergy. We developed a more refined decision tree model that successfully predicted drug synergy in stringent cross-validation tests based on only lipophilicity of drugs. Our predictions achieved a precision of 63% and allowed successful prediction for 58% of synergistic drug pairs, suggesting that this phenomenon can extend our understanding for a substantial fraction of synergistic drug interactions. We also generated and analyzed a large-scale synergistic human toxicity network, in which we observed that combinations of lipophilic compounds show a tendency for increased toxicity. Thus, lipophilicity, a simple and easily determined molecular descriptor, is a powerful predictor of drug synergy. It is well established that lipophilic compounds (i) are promiscuous, having many targets in the cell, and (ii) often penetrate into the cell via the cellular membrane by passive diffusion. We discuss the positive relationship between drug lipophilicity and drug synergy in the context of potential drug synergy mechanisms. PMID:25026390
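
    As a sketch of the kind of model described above, the snippet below trains a decision tree to predict pairwise synergy from the logP values of the two drugs in a combination. The features, labels and labeling rule are invented for illustration and do not reproduce the published model or data set.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(1)

      # Hypothetical training data: logP of each drug in a pair, plus a synergy label.
      n_pairs = 200
      logp_a = rng.uniform(-2, 6, n_pairs)
      logp_b = rng.uniform(-2, 6, n_pairs)
      # Toy labeling rule echoing the paper's observation: pairs of two lipophilic
      # drugs are more likely to be synergistic (with some label noise).
      synergy = ((logp_a > 2.5) & (logp_b > 2.5)) ^ (rng.random(n_pairs) < 0.1)

      X = np.column_stack([logp_a, logp_b])
      clf = DecisionTreeClassifier(max_depth=3, random_state=0)

      # Cross-validation gives a rough estimate of predictive precision.
      scores = cross_val_score(clf, X, synergy.astype(int), cv=5, scoring="precision")
      print(f"Cross-validated precision: {scores.mean():.2f}")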

  20. Optima Nutrition: an allocative efficiency tool to reduce childhood stunting by better targeting of nutrition-related interventions.

    PubMed

    Pearson, Ruth; Killedar, Madhura; Petravic, Janka; Kakietek, Jakub J; Scott, Nick; Grantham, Kelsey L; Stuart, Robyn M; Kedziora, David J; Kerr, Cliff C; Skordis-Worrall, Jolene; Shekar, Meera; Wilson, David P

    2018-03-20

    Child stunting due to chronic malnutrition is a major problem in low- and middle-income countries due, in part, to inadequate nutrition-related practices and insufficient access to services. Limited budgets for nutritional interventions mean that available resources must be targeted in the most cost-effective manner to have the greatest impact. Quantitative tools can help guide budget allocation decisions. The Optima approach is an established framework to conduct resource allocation optimization analyses. We applied this approach to develop a new tool, 'Optima Nutrition', for conducting allocative efficiency analyses that address childhood stunting. At the core of the Optima approach is an epidemiological model for assessing the burden of disease; we use an adapted version of the Lives Saved Tool (LiST). Six nutritional interventions have been included in the first release of the tool: antenatal micronutrient supplementation, balanced energy-protein supplementation, exclusive breastfeeding promotion, promotion of improved infant and young child feeding (IYCF) practices, public provision of complementary foods, and vitamin A supplementation. To demonstrate the use of this tool, we applied it to evaluate the optimal allocation of resources in 7 districts in Bangladesh, using both publicly available data (such as through DHS) and data from a complementary costing study. Optima Nutrition can be used to estimate how to target resources to improve nutrition outcomes. Specifically, for the Bangladesh example, despite only limited nutrition-related funding available (an estimated $0.75 per person in need per year), even without any extra resources, better targeting of investments in nutrition programming could increase the cumulative number of children living without stunting by 1.3 million (an extra 5%) by 2030 compared to the current resource allocation. To minimize stunting, priority interventions should include promotion of improved IYCF practices as well as vitamin A

  1. Tools for Early Prediction of Drug Loading in Lipid-Based Formulations

    PubMed Central

    2015-01-01

    Identification of the usefulness of lipid-based formulations (LBFs) for delivery of poorly water-soluble drugs is to date mainly experimentally based. In this work we used a diverse drug data set and more than 2,000 solubility measurements to develop experimental and computational tools to predict the loading capacity of LBFs. Computational models were developed to enable in silico prediction of solubility, and hence drug loading capacity, in the LBFs. Drug solubility in mixed mono-, di-, triglycerides (Maisine 35-1 and Capmul MCM EP) correlated (R² 0.89), as did the drug solubility in Carbitol and other ethoxylated excipients (PEG400, R² 0.85; Polysorbate 80, R² 0.90; Cremophor EL, R² 0.93). A melting point below 150 °C was observed to result in a reasonable solubility in the glycerides. The loading capacity in LBFs was accurately calculated from solubility data in single excipients (R² 0.91). In silico models, without the demand of experimentally determined solubility, also gave good predictions of the loading capacity in these complex formulations (R² 0.79). The framework established here gives a better understanding of drug solubility in single excipients and of LBF loading capacity. The large data set studied revealed that experimental screening efforts can be rationalized by solubility measurements in key excipients or from solid state information. For the first time it was shown that loading capacity in complex formulations can be accurately predicted using molecular information extracted from calculated descriptors and thermal properties of the crystalline drug. PMID:26568134

  2. Tools for Early Prediction of Drug Loading in Lipid-Based Formulations.

    PubMed

    Alskär, Linda C; Porter, Christopher J H; Bergström, Christel A S

    2016-01-04

    Identification of the usefulness of lipid-based formulations (LBFs) for delivery of poorly water-soluble drugs is to date mainly experimentally based. In this work we used a diverse drug data set and more than 2,000 solubility measurements to develop experimental and computational tools to predict the loading capacity of LBFs. Computational models were developed to enable in silico prediction of solubility, and hence drug loading capacity, in the LBFs. Drug solubility in mixed mono-, di-, triglycerides (Maisine 35-1 and Capmul MCM EP) correlated (R² 0.89), as did the drug solubility in Carbitol and other ethoxylated excipients (PEG400, R² 0.85; Polysorbate 80, R² 0.90; Cremophor EL, R² 0.93). A melting point below 150 °C was observed to result in a reasonable solubility in the glycerides. The loading capacity in LBFs was accurately calculated from solubility data in single excipients (R² 0.91). In silico models, without the demand of experimentally determined solubility, also gave good predictions of the loading capacity in these complex formulations (R² 0.79). The framework established here gives a better understanding of drug solubility in single excipients and of LBF loading capacity. The large data set studied revealed that experimental screening efforts can be rationalized by solubility measurements in key excipients or from solid state information. For the first time it was shown that loading capacity in complex formulations can be accurately predicted using molecular information extracted from calculated descriptors and thermal properties of the crystalline drug.
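
    The reported correlations can be viewed as simple linear models relating log-solubility in one excipient to log-solubility in another. The sketch below fits such a model with scikit-learn on invented data and reports R²; it does not attempt to reproduce the published coefficients.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(2)

      # Hypothetical log10 solubilities (mg/g) of 50 drugs in two glyceride excipients.
      log_s_maisine = rng.normal(loc=1.5, scale=0.8, size=50)
      log_s_capmul = 0.95 * log_s_maisine + 0.1 + rng.normal(scale=0.15, size=50)

      # Fit solubility in one excipient from solubility in the other.
      model = LinearRegression().fit(log_s_maisine.reshape(-1, 1), log_s_capmul)
      pred = model.predict(log_s_maisine.reshape(-1, 1))
      print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}, "
            f"R2={r2_score(log_s_capmul, pred):.2f}")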

  3. Antibody-drug conjugates: Promising and efficient tools for targeted cancer therapy.

    PubMed

    Nasiri, Hadi; Valedkarimi, Zahra; Aghebati-Maleki, Leili; Majidi, Jafar

    2018-09-01

    Over recent decades, the use of antibody-drug conjugates (ADCs) has led to a paradigm shift in cancer chemotherapy. Antibody-based treatment of various human tumors has demonstrated dramatic efficacy and is now one of the most promising strategies used for targeted therapy of patients with a variety of malignancies, including hematological cancers and solid tumors. Monoclonal antibodies (mAbs) are able to selectively deliver cytotoxic drugs to tumor cells, which express specific antigens on their surface, and have been suggested as a novel category of agents for use in the development of anticancer targeted therapies. In contrast to conventional treatments that cause damage to healthy tissues, ADCs use mAbs to specifically attach to antigens on the surface of target cells and deliver their cytotoxic payloads. The therapeutic success of future ADCs depends on closely choosing the target antigen, increasing the potency of the cytotoxic cargo, improving the properties of the linker, and reducing drug resistance. If appropriate solutions are found to address these issues, ADCs will play a more important role in the development of targeted therapeutics against cancer in the coming years. We review the design of ADCs, and focus on how ADCs can be exploited to overcome multiple drug resistance (MDR). © 2018 Wiley Periodicals, Inc.

  4. Pediatric Eating Assessment Tool-10 as an indicator to predict aspiration in children with esophageal atresia.

    PubMed

    Soyer, Tutku; Yalcin, Sule; Arslan, Selen Serel; Demir, Numan; Tanyel, Feridun Cahit

    2017-10-01

    Airway aspiration is a common problem in children with esophageal atresia (EA). The Pediatric Eating Assessment Tool-10 (pEAT-10) is a self-administered questionnaire to evaluate dysphagia symptoms in children. A prospective study was performed to evaluate the validity of pEAT-10 to predict aspiration in children with EA. Patients with EA were evaluated for age, sex, type of atresia, presence of associated anomalies, type of esophageal repair, time of definitive treatment, and the beginning of oral feeding. The penetration-aspiration score (PAS) was evaluated with videofluoroscopy (VFS) and parents were surveyed for pEAT-10, dysphagia score (DS) and the functional oral intake scale (FOIS). PAS scores greater than 7 were considered to indicate risk of aspiration. pEAT-10 values greater than 3 were considered abnormal. Higher DS scores indicate dysphagia, whereas higher FOIS scores indicate better feeding abilities. Forty patients were included. Children with PAS greater than 7 constituted the PAS+ group, and those with scores less than 7 constituted the PAS- group. Demographic features and results of surgical treatments showed no difference between groups (p>0.05). The median PAS, pEAT-10 and DS scores were significantly higher in the PAS+ group than in the PAS- group (p<0.05). The sensitivity and specificity of pEAT-10 to predict aspiration were 88% and 77%, and the positive and negative predictive values were 22% and 11%, respectively. Type-C cases had better pEAT-10 and FOIS scores than type-A cases, and both scores were statistically more reliable in primary repair than delayed repair (p<0.05). Among the postoperative complications, only leakage had an impact on DS, pEAT-10, PAS and FOIS scores (p<0.05). The pEAT-10 is a valid, simple and reliable tool to predict aspiration in children. Patients with higher pEAT-10 scores should undergo detailed evaluation of deglutitive functions and assessment of aspiration risk to enable safer feeding strategies. Level II (Development of
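
    The diagnostic accuracy figures quoted above come from a standard 2x2 comparison of the pEAT-10 cutoff against the videofluoroscopic reference. A generic sketch of those calculations, using placeholder counts rather than the study's data, is shown below.

      # Placeholder 2x2 table: pEAT-10 > 3 (test positive) vs PAS > 7 (aspiration on VFS).
      tp, fp = 15, 4   # test positive: with / without aspiration
      fn, tn = 2, 19   # test negative: with / without aspiration

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      ppv = tp / (tp + fp)
      npv = tn / (tn + fn)

      print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}, "
            f"PPV={ppv:.0%}, NPV={npv:.0%}")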

  5. Comprehensive predictions of target proteins based on protein-chemical interaction using virtual screening and experimental verifications.

    PubMed

    Kobayashi, Hiroki; Harada, Hiroko; Nakamura, Masaomi; Futamura, Yushi; Ito, Akihiro; Yoshida, Minoru; Iemura, Shun-Ichiro; Shin-Ya, Kazuo; Doi, Takayuki; Takahashi, Takashi; Natsume, Tohru; Imoto, Masaya; Sakakibara, Yasubumi

    2012-04-05

    Identification of the target proteins of bioactive compounds is critical for elucidating the mode of action; however, target identification has been difficult in general, mostly due to the low sensitivity of detection using affinity chromatography followed by CBB staining and MS/MS analysis. We applied our protocol of predicting target proteins, combining in silico screening and experimental verification, to incednine, which inhibits the anti-apoptotic function of Bcl-xL by an unknown mechanism. One hundred eighty-two target protein candidates were computationally predicted to bind to incednine by the statistical prediction method, and the predictions were verified by in vitro binding of incednine to seven proteins, whose expression can be confirmed in our cell system. As a result, 40% accuracy of the computational predictions was achieved, and we identified three new incednine-binding proteins. This study revealed that our proposed protocol of predicting target proteins, combining in silico screening and experimental verification, is useful, and provides new insight into a strategy for identifying target proteins of small molecules.

  6. Geodemographics as a tool for targeting neighbourhoods in public health campaigns

    NASA Astrophysics Data System (ADS)

    Petersen, Jakob; Gibin, Maurizio; Longley, Paul; Mateos, Pablo; Atkinson, Philip; Ashby, David

    2011-06-01

    Geodemographics offers the prospect of integrating, modelling and mapping health care needs and other health indicators that are useful for targeting neighbourhoods in public health campaigns. Yet reports about this application domain have to date been sporadic. The purpose of this paper is to examine the potential of a bespoke geodemographic system for neighbourhood targeting in an inner city public health authority, Southwark Primary Care Trust, London. This system, the London Output Area Classification (LOAC), is compared to six other geodemographic systems from both governmental and commercial sources. The paper proposes two new indicators for assessing the performance of geodemographic systems for neighbourhood targeting based on local hospital demand data. The paper also analyses and discusses the utility of age- and sex-standardisation of geodemographic profiles of health care demand.

  7. IMHOTEP—a composite score integrating popular tools for predicting the functional consequences of non-synonymous sequence variants

    PubMed Central

    Knecht, Carolin; Mort, Matthew; Junge, Olaf; Cooper, David N.; Krawczak, Michael

    2017-01-01

    The in silico prediction of the functional consequences of mutations is an important goal of human pathogenetics. However, bioinformatic tools that classify mutations according to their functionality employ different algorithms so that predictions may vary markedly between tools. We therefore integrated nine popular prediction tools (PolyPhen-2, SNPs&GO, MutPred, SIFT, MutationTaster2, Mutation Assessor and FATHMM as well as conservation-based Grantham Score and PhyloP) into a single predictor. The optimal combination of these tools was selected by means of a wide range of statistical modeling techniques, drawing upon 10 029 disease-causing single nucleotide variants (SNVs) from the Human Gene Mutation Database and 10 002 putatively ‘benign’ non-synonymous SNVs from UCSC. Predictive performance was found to be markedly improved by model-based integration, whilst maximum predictive capability was obtained with either random forest, decision tree or logistic regression analysis. A combination of PolyPhen-2, SNPs&GO, MutPred, MutationTaster2 and FATHMM was found to perform as well as all tools combined. Comparison of our approach with other integrative approaches such as Condel, CoVEC, CAROL, CADD, MetaSVM and MetaLR using an independent validation dataset revealed the superiority of our newly proposed integrative approach. An online implementation of this approach, IMHOTEP (‘Integrating Molecular Heuristics and Other Tools for Effect Prediction’), is provided at http://www.uni-kiel.de/medinfo/cgi-bin/predictor/. PMID:28180317
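
    A model-based integration of individual predictor outputs, of the kind IMHOTEP performs, can be sketched as a logistic regression over per-tool scores. The snippet below uses simulated scores and labels and is not the published model or its training data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)

      # Simulated per-variant scores from five component tools (columns) for
      # 1,000 variants, half disease-causing (label 1) and half benign (label 0).
      y = np.repeat([1, 0], 500)
      signal = y[:, None] * rng.normal(0.8, 0.3, size=(1000, 5))
      noise = rng.normal(0.0, 0.5, size=(1000, 5))
      X = signal + noise

      # Logistic regression as the integrating meta-classifier.
      meta = LogisticRegression(max_iter=1000)
      auc = cross_val_score(meta, X, y, cv=10, scoring="roc_auc")
      print(f"Cross-validated AUC of the combined predictor: {auc.mean():.3f}")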

  8. Using the Lives Saved Tool to aid country planning in meeting mortality targets: a case study from Mali.

    PubMed

    Keita, Youssouf; Sangho, Hamadoun; Roberton, Timothy; Vignola, Emilia; Traoré, Mariam; Munos, Melinda

    2017-11-07

    Mali is one of four countries implementing a National Evaluation Platform (NEP) to build local capacity to answer evaluation questions for maternal, newborn, child health and nutrition (MNCH&N). In 2014-15, NEP-Mali addressed questions about the potential impact of Mali's MNCH&N plans and strategies, and identified priority interventions to achieve targeted mortality reductions. The NEP-Mali team modeled the potential impact of three intervention packages in the Lives Saved Tool (LiST) from 2014 to 2023. One projection included the interventions and targets from Mali's ten-year health strategy (PDDSS) for 2014-2023, and two others modeled intervention packages that included scale up of antenatal, intrapartum, and curative interventions, as well as reductions in stunting and wasting. We modeled the change in maternal, newborn and under-five mortality rates under these three projections, as well as the number of lives saved, overall and by intervention. If Mali were to achieve the MNCH&N coverage targets from its health strategy, under-5 mortality would be reduced from 121 per 1000 live births to 93 per 1000, far from the target of 69 deaths per 1000. Projections 1 and 2 produced estimated mortality reductions from 121 deaths per 1000 to 70 and 68 deaths per 1000, respectively. With respect to neonatal mortality, the mortality rate would be reduced from 39 to 32 deaths per 1000 live births under the current health strategy, and to 25 per 1000 under projections 1 and 2. This study revealed that achieving the coverage targets for the MNCH&N interventions in the 2014-23 PDDSS would likely not allow Mali to achieve its mortality targets. The NEP-Mali team was able to identify two packages of MNCH&N interventions (and targets) that achieved under-5 and neonatal mortality rates at, or very near, the PDDSS targets. The Malian Ministry of Health and Public Hygiene is using these results to revise its plans and strategies.

  9. Research-Based Monitoring, Prediction, and Analysis Tools of the Spacecraft Charging Environment for Spacecraft Users

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Kuznetsova, Maria M.; Pulkkinen, Antti A.; Maddox, Marlo M.; Mays, Mona Leila

    2015-01-01

    The Space Weather Research Center (http://swrc. gsfc.nasa.gov) at NASA Goddard, part of the Community Coordinated Modeling Center (http://ccmc.gsfc.nasa.gov), is committed to providing research-based forecasts and notifications to address NASA's space weather needs, in addition to its critical role in space weather education. It provides a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, tailored space weather alerts and products, and weekly summaries and reports. In this paper, we focus on how (near) real-time data (both in space and on ground), in combination with modeling capabilities and an innovative dissemination system called the integrated Space Weather Analysis system (http://iswa.gsfc.nasa.gov), enable monitoring, analyzing, and predicting the spacecraft charging environment for spacecraft users. Relevant tools and resources are discussed.

  10. An enhanced MMW and SMMW/THz imaging system performance prediction and analysis tool for concealed weapon detection and pilotage obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Murrill, Steven R.; Jacobs, Eddie L.; Franck, Charmaine C.; Petkie, Douglas T.; De Lucia, Frank C.

    2015-10-01

    The U.S. Army Research Laboratory (ARL) has continued to develop and enhance a millimeter-wave (MMW) and submillimeter-wave (SMMW)/terahertz (THz)-band imaging system performance prediction and analysis tool for both the detection and identification of concealed weaponry, and for pilotage obstacle avoidance. The details of the MATLAB-based model, which accounts for the effects of all critical sensor and display components, atmospheric attenuation, concealment material attenuation, and active illumination, were reported on at the 2005 SPIE Europe Security and Defence Symposium (Brugge). An advanced version of the base model that accounts for both the dramatic impact that target and background orientation can have on target observability as related to specular and Lambertian reflections captured by an active-illumination-based imaging system, and for the impact of target and background thermal emission, was reported on at the 2007 SPIE Defense and Security Symposium (Orlando). Further development of this tool that includes a MODTRAN-based atmospheric attenuation calculator and advanced system architecture configuration inputs that allow for straightforward performance analysis of active or passive systems based on scanning (single- or line-array detector element(s)) or staring (focal-plane-array detector elements) imaging architectures was reported on at the 2011 SPIE Europe Security and Defence Symposium (Prague). This paper provides a comprehensive review of a newly enhanced MMW and SMMW/THz imaging system analysis and design tool that now includes an improved noise sub-model for more accurate and reliable performance predictions, the capability to account for postcapture image contrast enhancement, and the capability to account for concealment material backscatter with active-illumination-based systems. Present plans for additional expansion of the model's predictive capabilities are also outlined.

  11. Predicting risk and outcomes for frail older adults: an umbrella review of frailty screening tools

    PubMed Central

    Apóstolo, João; Cooke, Richard; Bobrowicz-Campos, Elzbieta; Santana, Silvina; Marcucci, Maura; Cano, Antonio; Vollenbroek-Hutten, Miriam; Germini, Federico; Holland, Carol

    2017-01-01

    EXECUTIVE SUMMARY. Background: A scoping search identified systematic reviews on diagnostic accuracy and predictive ability of frailty measures in older adults. In most cases, research was confined to specific assessment measures related to a specific clinical model. Objectives: To summarize the best available evidence from systematic reviews in relation to reliability, validity, diagnostic accuracy and predictive ability of frailty measures in older adults. Inclusion criteria: Population: Older adults aged 60 years or older recruited from community, primary care, long-term residential care and hospitals. Index test: Available frailty measures in older adults. Reference test: Cardiovascular Health Study phenotype model, the Canadian Study of Health and Aging cumulative deficit model, Comprehensive Geriatric Assessment or other reference tests. Diagnosis of interest: Frailty defined as an age-related state of decreased physiological reserves characterized by an increased risk of poor clinical outcomes. Types of studies: Quantitative systematic reviews. Search strategy: A three-step search strategy was utilized to find systematic reviews, available in English, published between January 2001 and October 2015. Methodological quality: Assessed by two independent reviewers using the Joanna Briggs Institute critical appraisal checklist for systematic reviews and research synthesis. Data extraction: Two independent reviewers extracted data using the standardized data extraction tool designed for umbrella reviews. Data synthesis: Data were only presented in a narrative form due to the heterogeneity of included reviews. Results: Five reviews with a total of 227,381 participants were included in this umbrella review. Two reviews focused on reliability, validity and diagnostic accuracy; two examined predictive ability for adverse health outcomes; and one investigated validity, diagnostic accuracy and predictive ability. In total, 26 questionnaires and brief assessments and eight frailty

  12. Acceptability of the Predicting Abusive Head Trauma (PredAHT) clinical prediction tool: A qualitative study with child protection professionals.

    PubMed

    Cowley, Laura E; Maguire, Sabine; Farewell, Daniel M; Quinn-Scoggins, Harriet D; Flynn, Matthew O; Kemp, Alison M

    2018-05-09

    The validated Predicting Abusive Head Trauma (PredAHT) tool estimates the probability of abusive head trauma (AHT) based on combinations of six clinical features: head/neck bruising; apnea; seizures; rib fractures; long-bone fractures; and retinal hemorrhages. We aimed to determine the acceptability of PredAHT to child protection professionals. We conducted qualitative semi-structured interviews with 56 participants: clinicians (25), child protection social workers (10), legal practitioners (9, including 4 judges), police officers (8), and pathologists (4), purposively sampled across southwest United Kingdom. Interviews were recorded, transcribed and imported into NVivo for thematic analysis (38% double-coded). We explored participants' evaluations of PredAHT, their opinions about the optimal way to present the calculated probabilities, and their interpretation of probabilities in the context of suspected AHT. Clinicians, child protection social workers and police thought PredAHT would be beneficial as an objective adjunct to their professional judgment, to give them greater confidence in their decisions. Lawyers and pathologists appreciated its value for prompting multidisciplinary investigations, but were uncertain of its usefulness in court. Perceived disadvantages included possible over-reliance and false reassurance from a low score. Interpretations regarding which percentages equate to 'low', 'medium' or 'high' likelihood of AHT varied; participants preferred a precise % probability over these general terms. Participants would use PredAHT with provisos: if they received multi-agency training to define accepted risk thresholds for consistent interpretation; with knowledge of its development; and if it was accepted by colleagues. PredAHT may therefore increase professionals' confidence in their decision-making when investigating suspected AHT, but may be of less value in court. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Deriving a clinical prediction rule to target sexual healthcare to women attending British General Practices.

    PubMed

    Edelman, N L; Cassell, J A; Mercer, C H; Bremner, S A; Jones, C I; Gersten, A; deVisser, R O

    2018-07-01

    Some women attending General Practices (GPs) are at higher risk of unintended pregnancy (RUIP) and sexually transmitted infections (STI) than others. A clinical prediction rule (CPR) may help target resources using psychosocial questions as an acceptable, effective means of assessment. The aim was to derive a CPR that discriminates women who would benefit from sexual health discussion and intervention. Participants were recruited to a cross-sectional survey from six GPs in a city in South-East England in 2016. On arrival, female patients aged 16-44 years were invited to complete a questionnaire that addressed psychosocial factors, and the following self-reported outcomes: 2+ sexual partners in the last year (2PP) and RUIP. For each sexual risk, psychosocial questions were retained from logistic regression modelling which best discriminated women at risk using the C-statistic. Sensitivity and specificity were established in consultation with GP staff. The final sample comprised N = 1238 women. 2PP was predicted by 11 questions including age, binge-drinking weekly, ever having a partner who insulted you often, current smoking, and not cohabiting (C-statistic = 0.83, sensitivity = 73% and specificity = 77%). RUIP was predicted by 5 questions including sexual debut <16 years, and emergency contraception use in the last 6 months (C-statistic = 0.70, sensitivity = 69% and specificity = 57%). 2PP was better discriminated than RUIP but neither to a clinically-useful degree. The finding that different psychosocial factors predicted each outcome has implications for prevention strategies. Further research should investigate causal links between psychosocial factors and sexual risk. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
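
    Deriving a CPR of this kind typically involves fitting a logistic regression to binary psychosocial answers, reading the C-statistic off the ROC curve, and choosing a probability cutoff that trades sensitivity against specificity. The sketch below illustrates those steps on simulated data and does not reproduce the study's questions, coefficients or cutoffs.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(4)

      # Simulated binary answers to 11 psychosocial questions for 1,200 women,
      # and a simulated outcome (2+ partners in the last year).
      X = rng.binomial(1, 0.3, size=(1200, 11))
      logit = -2.0 + X @ rng.uniform(0.2, 0.9, size=11)
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      model = LogisticRegression(max_iter=1000).fit(X, y)
      p = model.predict_proba(X)[:, 1]

      # The C-statistic is the area under the ROC curve.
      print(f"C-statistic: {roc_auc_score(y, p):.2f}")

      # Pick the cutoff that best balances sensitivity and specificity (Youden's J);
      # in practice the cutoff would be agreed with clinical staff.
      fpr, tpr, thresholds = roc_curve(y, p)
      best = np.argmax(tpr - fpr)
      print(f"cutoff={thresholds[best]:.2f}, sensitivity={tpr[best]:.0%}, "
            f"specificity={1 - fpr[best]:.0%}")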

  14. Predictive modeling of EEG time series for evaluating surgery targets in epilepsy patients.

    PubMed

    Steimer, Andreas; Müller, Michael; Schindler, Kaspar

    2017-05-01

    During the last 20 years, predictive modeling in epilepsy research has largely been concerned with the prediction of seizure events, whereas the inference of effective brain targets for resective surgery has received surprisingly little attention. In this exploratory pilot study, we describe a distributional clustering framework for the modeling of multivariate time series and use it to predict the effects of brain surgery in epilepsy patients. By analyzing the intracranial EEG, we demonstrate how patients who became seizure free after surgery are clearly distinguished from those who did not. More specifically, for 5 out of 7 patients who obtained seizure freedom (= Engel class I) our method predicts the specific collection of brain areas that got actually resected during surgery to yield a markedly lower posterior probability for the seizure related clusters, when compared to the resection of random or empty collections. Conversely, for 4 out of 5 Engel class III/IV patients who still suffer from postsurgical seizures, performance of the actually resected collection is not significantly better than performances displayed by random or empty collections. As the number of possible collections ranges into billions and more, this is a substantial contribution to a problem that today is still solved by visual EEG inspection. Apart from epilepsy research, our clustering methodology is also of general interest for the analysis of multivariate time series and as a generative model for temporally evolving functional networks in the neurosciences and beyond. Hum Brain Mapp 38:2509-2531, 2017. © 2017 Wiley Periodicals, Inc.

  15. Application of the Streamflow Prediction Tool to Estimate Sediment Dredging Volumes in Texas Coastal Waterways

    NASA Astrophysics Data System (ADS)

    Yeates, E.; Dreaper, G.; Afshari, S.; Tavakoly, A. A.

    2017-12-01

    Over the past six fiscal years, the United States Army Corps of Engineers (USACE) has contracted an average of about a billion dollars per year for navigation channel dredging. To execute these funds effectively, USACE Districts must determine which navigation channels need to be dredged in a given year. Improving this prioritization process results in more efficient waterway maintenance. This study uses the Streamflow Prediction Tool, a runoff routing model based on global weather forecast ensembles, to estimate dredged volumes. This study establishes regional linear relationships between cumulative flow and dredged volumes over a long-term simulation covering 30 years (1985-2015), using drainage area and shoaling parameters. The study framework integrates the National Hydrography Dataset (NHDPlus Dataset) with parameters from the Corps Shoaling Analysis Tool (CSAT) and dredging record data from USACE District records. Results in the test cases of the Houston Ship Channel and the Sabine and Port Arthur Harbor waterways in Texas indicate positive correlation between the simulated streamflows and actual dredging records.
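
    The regional relationships described above are linear fits of dredged volume against cumulative simulated flow. A minimal sketch of such a fit is given below, with invented numbers standing in for Streamflow Prediction Tool output and District dredging records.

      import numpy as np

      # Hypothetical annual cumulative flow (million m^3) at a channel reach and the
      # corresponding dredged volume (thousand m^3) from District records.
      cumulative_flow = np.array([310.0, 420.0, 275.0, 510.0, 390.0, 605.0, 480.0])
      dredged_volume = np.array([95.0, 140.0, 80.0, 180.0, 120.0, 215.0, 160.0])

      # Ordinary least-squares line relating flow to dredging demand.
      slope, intercept = np.polyfit(cumulative_flow, dredged_volume, deg=1)

      # Use the fit to anticipate next year's dredging volume from forecast flow.
      forecast_flow = 450.0
      predicted_volume = slope * forecast_flow + intercept
      print(f"Predicted dredging volume: {predicted_volume:.0f} thousand m^3")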

  16. A novel tool for the prediction of tablet sticking during high speed compaction.

    PubMed

    Abdel-Hamid, Sameh; Betz, Gabriele

    2012-01-01

    During tableting, capping is a problem of cohesion while sticking is a problem of adhesion. Sticking is a multi-composite problem; its causes are either material or machine related. Nowadays, detecting such a problem is a prerequisite in the early stages of development. The aim of our study was to investigate sticking by radial die-wall pressure monitoring guided by compaction simulation. This was done using the highly sticking drug mefenamic acid (MA) at different drug loadings with different fillers, compacted at different pressures and speeds. By increasing MA loading, we found that viscoelastic fillers showed high residual radial pressure after compaction while plastic/brittle fillers showed high radial pressure during compaction, p < 0.05. Visually, plastic/brittle fillers showed greater tendencies for adhesion to the punches than viscoelastic fillers, while the latter showed higher tendencies for adhesion to the die-wall. This was confirmed by higher values of axial stress transmission for plastic/brittle than viscoelastic fillers (higher punch surface/powder interaction), and higher residual die-wall and ejection forces for viscoelastic than plastic/brittle fillers, p < 0.05. Take-off force was not a useful tool to estimate sticking due to cohesive failure of the compacts. Radial die-wall pressure monitoring is suggested as a robust tool to predict sticking.

  17. ProBiS tools (algorithm, database, and web servers) for predicting and modeling of biologically interesting proteins.

    PubMed

    Konc, Janez; Janežič, Dušanka

    2017-09-01

    ProBiS (Protein Binding Sites) Tools consist of algorithm, database, and web servers for prediction of binding sites and protein ligands based on the detection of structurally similar binding sites in the Protein Data Bank. In this article, we review the operations that ProBiS Tools perform, provide comments on the evolution of the tools, and give some implementation details. We review some of its applications to biologically interesting proteins. ProBiS Tools are freely available at http://probis.cmm.ki.si and http://probis.nih.gov. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Systems biology approaches and tools for analysis of interactomes and multi-target drugs.

    PubMed

    Schrattenholz, André; Groebe, Karlfried; Soskic, Vukic

    2010-01-01

    Systems biology is essentially a proteomic and epigenetic exercise because the relatively condensed information of genomes unfolds on the level of proteins. The flexibility of cellular architectures is not only mediated by a dazzling number of proteinaceous species but moreover by the kinetics of their molecular changes: The time scales of posttranslational modifications range from milliseconds to years. The genetic framework of an organism only provides the blue print of protein embodiments which are constantly shaped by external input. Indeed, posttranslational modifications of proteins represent the scope and velocity of these inputs and fulfil the requirements of integration of external spatiotemporal signal transduction inside an organism. The optimization of biochemical networks for this type of information processing and storage results in chemically extremely fine tuned molecular entities. The huge dynamic range of concentrations, the chemical diversity and the necessity of synchronisation of complex protein expression patterns pose the major challenge of systemic analysis of biological models. One further message is that many of the key reactions in living systems are essentially based on interactions of moderate affinities and moderate selectivities. This principle is responsible for the enormous flexibility and redundancy of cellular circuitries. In complex disorders such as cancer or neurodegenerative diseases, which initially appear to be rooted in relatively subtle dysfunctions of multimodal physiologic pathways, drug discovery programs based on the concept of high affinity/high specificity compounds ("one-target, one-disease"), which has been dominating the pharmaceutical industry for a long time, increasingly turn out to be unsuccessful. Despite improvements in rational drug design and high throughput screening methods, the number of novel, single-target drugs fell much behind expectations during the past decade, and the treatment of "complex

  19. A point-based tool to predict conversion from mild cognitive impairment to probable Alzheimer's disease.

    PubMed

    Barnes, Deborah E; Cenzer, Irena S; Yaffe, Kristine; Ritchie, Christine S; Lee, Sei J

    2014-11-01

    Our objective in this study was to develop a point-based tool to predict conversion from amnestic mild cognitive impairment (MCI) to probable Alzheimer's disease (AD). Subjects were participants in the first part of the Alzheimer's Disease Neuroimaging Initiative. Cox proportional hazards models were used to identify factors associated with development of AD, and a point score was created from predictors in the final model. The final point score could range from 0 to 9 (mean 4.8) and included: the Functional Assessment Questionnaire (2‒3 points); magnetic resonance imaging (MRI) middle temporal cortical thinning (1 point); MRI hippocampal subcortical volume (1 point); Alzheimer's Disease Assessment Scale-cognitive subscale (2‒3 points); and the Clock Test (1 point). Prognostic accuracy was good (Harrell's c = 0.78; 95% CI 0.75, 0.81); 3-year conversion rates were 6% (0‒3 points), 53% (4‒6 points), and 91% (7‒9 points). A point-based risk score combining functional dependence, cerebral MRI measures, and neuropsychological test scores provided good accuracy for prediction of conversion from amnestic MCI to AD. Copyright © 2014 The Alzheimer's Association. All rights reserved.
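
    Applying a point-based rule of this kind reduces to summing the item points and reading off the band-specific conversion rate. The sketch below encodes the published bands; the per-item point assignments passed in are hypothetical examples, and the item-specific cutoffs that determine them are not restated here.

      def mci_to_ad_risk(faq_pts, mtl_thinning_pts, hippocampus_pts, adas_cog_pts, clock_pts):
          """Sum item points (0-9 total) and map to the published 3-year conversion band.

          How many points each item contributes for a given patient depends on
          item-specific cutoffs not restated in the abstract; this function only
          handles the summation and band lookup.
          """
          total = faq_pts + mtl_thinning_pts + hippocampus_pts + adas_cog_pts + clock_pts
          if not 0 <= total <= 9:
              raise ValueError("total point score must lie between 0 and 9")
          if total <= 3:
              return total, "6% 3-year conversion (0-3 points)"
          if total <= 6:
              return total, "53% 3-year conversion (4-6 points)"
          return total, "91% 3-year conversion (7-9 points)"

      # Example: a hypothetical patient scoring 3 points on the FAQ, 1 for
      # hippocampal volume, and 2 on the cognitive subscale.
      print(mci_to_ad_risk(faq_pts=3, mtl_thinning_pts=0, hippocampus_pts=1,
                           adas_cog_pts=2, clock_pts=0))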

  20. XenoSite server: a web-available site of metabolism prediction tool.

    PubMed

    Matlock, Matthew K; Hughes, Tyler B; Swamidass, S Joshua

    2015-04-01

    Cytochrome P450 enzymes (P450s) are metabolic enzymes that process the majority of FDA-approved, small-molecule drugs. Understanding how these enzymes modify molecule structure is key to the development of safe, effective drugs. XenoSite server is an online implementation of the XenoSite, a recently published computational model for P450 metabolism. XenoSite predicts which atomic sites of a molecule--sites of metabolism (SOMs)--are modified by P450s. XenoSite server accepts input in common chemical file formats including SDF and SMILES and provides tools for visualizing the likelihood that each atomic site is a site of metabolism for a variety of important P450s, as well as a flat file download of SOM predictions. XenoSite server is available at http://swami.wustl.edu/xenosite. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
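
    One-step global fitting means estimating the primary- and secondary-model parameters jointly by minimizing a single error term over all isothermal curves at once. The sketch below does this with scipy's curve_fit for a log-linear survival model whose D-value follows a Bigelow-type secondary model, using simulated data; it is a simplified stand-in, not the IPMP Global Fit code.

      import numpy as np
      from scipy.optimize import curve_fit

      T_REF = 60.0  # reference temperature (deg C), assumed for this sketch

      def global_survival_model(X, log_n0, log_d_ref, z):
          """Tertiary model: log10 counts as a function of (time, temperature).

          Primary model: log-linear inactivation, log10 N = log10 N0 - t / D(T).
          Secondary model (Bigelow): log10 D(T) = log10 Dref - (T - T_REF) / z.
          """
          t, temp = X
          log_d = log_d_ref - (temp - T_REF) / z
          return log_n0 - t / (10.0 ** log_d)

      # Simulated isothermal survival curves at three temperatures.
      rng = np.random.default_rng(5)
      times = np.tile(np.arange(0.0, 11.0), 3)
      temps = np.repeat([55.0, 60.0, 65.0], 11)
      true = global_survival_model((times, temps), 8.0, 1.0, 6.5)
      observed = true + rng.normal(scale=0.15, size=true.size)

      # One-step global regression: all curves fitted simultaneously.
      popt, pcov = curve_fit(global_survival_model, (times, temps), observed,
                             p0=[7.0, 0.8, 5.0])
      print(dict(zip(["log10_N0", "log10_Dref_min", "z_degC"], popt.round(2))))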

  2. Review-of-systems questionnaire as a predictive tool for psychogenic nonepileptic seizures.

    PubMed

    Robles, Liliana; Chiang, Sharon; Haneef, Zulfi

    2015-04-01

    Patients with refractory epilepsy undergo video-electroencephalography for seizure characterization, among whom approximately 10-30% will be discharged with the diagnosis of psychogenic nonepileptic seizures (PNESs). Clinical PNES predictors have been described but in general are not sensitive or specific. We evaluated whether multiple complaints in a routine review-of-system (ROS) questionnaire could serve as a sensitive and specific marker of PNESs. We performed a retrospective analysis of a standardized ROS questionnaire completed by patients with definite PNESs and epileptic seizures (ESs) diagnosed in our adult epilepsy monitoring unit. A multivariate analysis of covariance (MANCOVA) was used to determine whether groups with PNES and ES differed with respect to the percentage of complaints in the ROS questionnaire. Tenfold cross-validation was used to evaluate the predictive error of a logistic regression classifier for PNES status based on the percentage of positive complaints in the ROS questionnaire. A total of 44 patients were included for analysis. Patients with PNESs had a significantly higher number of complaints in the ROS questionnaire compared to patients with epilepsy. A threshold of 17% positive complaints achieved a 78% specificity and 85% sensitivity for discriminating between PNESs and ESs. We conclude that the routine ROS questionnaire may be a sensitive and specific predictive tool for discriminating between PNESs and ESs. Published by Elsevier Inc.
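
    The classifier described above is essentially a single-feature logistic regression on the percentage of positive ROS complaints, evaluated with 10-fold cross-validation. The sketch below mirrors that workflow on simulated data; the 17% cutoff and performance figures quoted above come from the study, not from this code.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(6)

      # Simulated percentage of positive ROS complaints; label 1 = PNES, 0 = ES.
      y = np.repeat([1, 0], [20, 24])
      pct_positive = np.where(y == 1,
                              rng.normal(25, 8, size=y.size),
                              rng.normal(12, 6, size=y.size)).clip(0, 100)

      X = pct_positive.reshape(-1, 1)
      clf = LogisticRegression()

      # 10-fold cross-validated predictions, as in the evaluation described above.
      pred = cross_val_predict(clf, X, y, cv=10)
      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      print(f"sensitivity={tp / (tp + fn):.0%}, specificity={tn / (tn + fp):.0%}")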

  3. A risk-based predictive tool to prevent accidental introductions of nonindigenous marine species.

    PubMed

    Floerl, Oliver; Inglis, Graeme J; Hayden, Barbara J

    2005-06-01

    Preventing the introduction of nonindigenous species (NIS) is the most efficient way to avoid the costs and impacts of biological invasions. The transport of fouling species on ship hulls is an important vector for the introduction of marine NIS. We use quantitative risk screening techniques to develop a predictive tool of the abundance and variety of organisms being transported by ocean-going yachts. We developed and calibrated an ordinal rank scale of the abundance of fouling assemblages on the hulls of international yachts arriving in New Zealand. Fouling ranks were allocated to 783 international yachts that arrived in New Zealand between 2002 and 2004. Classification tree analysis was used to identify relationships between the fouling ranks and predictor variables that described the maintenance and travel history of the yachts. The fouling ranks provided reliable indications of the actual abundance and variety of fouling assemblages on the yachts and identified most (60%) yachts that had fouling on their hulls. However, classification tree models explained comparatively little of the variation in the distribution of fouling ranks (22.1%), had high misclassification rates (approximately 43%), and low predictive power. In agreement with other studies, the best model selected the age of the toxic antifouling paint on yacht hulls as the principal risk factor for hull fouling. Our study shows that the transport probability of fouling organisms is the result of a complex suite of interacting factors and that large sample sizes will be needed for calibration of robust risk models.
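
    The classification-tree step can be sketched with scikit-learn: fit a tree from maintenance and travel-history predictors to the ordinal fouling rank and report a cross-validated misclassification rate. The predictors and data below are invented stand-ins for the surveyed variables.

      import numpy as np
      import pandas as pd
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(7)
      n = 300

      # Invented predictors loosely echoing the surveyed variables.
      yachts = pd.DataFrame({
          "antifouling_age_months": rng.integers(1, 48, n),
          "days_since_last_clean": rng.integers(5, 400, n),
          "voyage_duration_days": rng.integers(3, 60, n),
      })
      # Ordinal fouling rank 0-4, made to depend mostly on antifouling paint age.
      fouling_rank = np.clip((yachts["antifouling_age_months"] // 10)
                             + rng.integers(-1, 2, n), 0, 4)

      tree = DecisionTreeClassifier(max_depth=4, random_state=0)
      accuracy = cross_val_score(tree, yachts, fouling_rank, cv=5).mean()
      print(f"cross-validated misclassification rate: {1 - accuracy:.0%}")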

  4. Analysis tool and methodology design for electronic vibration stress understanding and prediction

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Jen; Crane, Robert L.; Sathish, Shamachary

    2005-03-01

    The objectives of this research were to (1) understand the impact of vibration on electronic components under ultrasound excitation; (2) model the thermal profile presented under vibration stress; and (3) predict stress level given a thermal profile of an electronic component. Research tasks included: (1) retrofit of the current ultrasonic/infrared nondestructive testing system with sensory devices for temperature readings; (2) design of a software tool to process images acquired from the ultrasonic/infrared system; (3) developing hypotheses and conducting experiments; and (4) modeling and evaluation of electronic vibration stress levels using a neural network model. Results suggest that (1) an ultrasonic/infrared system can be used to mimic short burst high vibration loads for electronics components; (2) temperature readings for electronic components under vibration stress are consistent and repeatable; (3) as stress load and excitation time increase, temperature differences also increase; (4) components that are subjected to a relatively high pre-stress load, followed by a normal operating load, have a higher heating rate and lower cooling rate. These findings are based on grayscale changes in images captured during experimentation. Discriminating variables and a neural network model were designed to predict stress levels given temperature and/or grayscale readings. Preliminary results suggest a 15.3% error when using grayscale change rate and 12.8% error when using average heating rate within the neural network model. Data were obtained from a high stress point (the corner) of the chip.
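
    The neural-network step can be sketched as a small regressor mapping image-derived features (average heating rate, grayscale change rate) to a stress level. The snippet below uses scikit-learn's MLPRegressor on invented data rather than the authors' model or measurements.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(8)
      n = 120

      # Invented image-derived features and stress levels (arbitrary units).
      avg_heating_rate = rng.uniform(0.1, 2.0, n)       # deg C per second
      grayscale_change_rate = rng.uniform(0.5, 10.0, n)
      stress_level = (30 * avg_heating_rate + 2 * grayscale_change_rate
                      + rng.normal(scale=3.0, size=n))

      X = np.column_stack([avg_heating_rate, grayscale_change_rate])
      mlp = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                       random_state=0))

      # Report a mean absolute percentage error, comparable in spirit to the error
      # figures quoted above (the number produced here is not the study's result).
      mape = -cross_val_score(mlp, X, stress_level, cv=5,
                              scoring="neg_mean_absolute_percentage_error").mean()
      print(f"cross-validated error: {mape:.1%}")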

  5. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.

    PubMed

    Montagnini, Anna; Spering, Miriam; Masson, Guillaume S

    2006-12-01

    Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction does not eliminate the transient tracking direction error nor change the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.

  6. The Adaptation for Conservation Targets (ACT) framework: a tool for incorporating climate change into natural resource management.

    PubMed

    Cross, Molly S; Zavaleta, Erika S; Bachelet, Dominique; Brooks, Marjorie L; Enquist, Carolyn A F; Fleishman, Erica; Graumlich, Lisa J; Groves, Craig R; Hannah, Lee; Hansen, Lara; Hayward, Greg; Koopman, Marni; Lawler, Joshua J; Malcolm, Jay; Nordgren, John; Petersen, Brian; Rowland, Erika L; Scott, Daniel; Shafer, Sarah L; Shaw, M Rebecca; Tabor, Gary M

    2012-09-01

    As natural resource management agencies and conservation organizations seek guidance on responding to climate change, myriad potential actions and strategies have been proposed for increasing the long-term viability of some attributes of natural systems. Managers need practical tools for selecting among these actions and strategies to develop a tailored management approach for specific targets at a given location. We developed and present one such tool, the participatory Adaptation for Conservation Targets (ACT) framework, which considers the effects of climate change in the development of management actions for particular species, ecosystems and ecological functions. Our framework is based on the premise that effective adaptation of management to climate change can rely on local knowledge of an ecosystem and does not necessarily require detailed projections of climate change or its effects. We illustrate the ACT framework by applying it to an ecological function in the Greater Yellowstone Ecosystem (Montana, Wyoming, and Idaho, USA)--water flows in the upper Yellowstone River. We suggest that the ACT framework is a practical tool for initiating adaptation planning, and for generating and communicating specific management interventions given an increasingly altered, yet uncertain, climate.

  7. The Adaptation for Conservation Targets (ACT) Framework: A tool for incorporating climate change into natural resource management

    Cross, Molly S.; Zavaleta, Erika S.; Bachelet, Dominique; Brooks, Marjorie L.; Enquist, Carolyn A.F.; Fleishman, Erica; Graumlich, Lisa J.; Groves, Craig R.; Hannah, Lee; Hansen, Lara J.; Hayward, Gregory D.; Koopman, Marni; Lawler, Joshua J.; Malcolm, Jay; Nordgren, John R.; Petersen, Brian; Rowland, Erika; Scott, Daniel; Shafer, Sarah L.; Shaw, M. Rebecca; Tabor, Gary

    2012-01-01

    As natural resource management agencies and conservation organizations seek guidance on responding to climate change, myriad potential actions and strategies have been proposed for increasing the long-term viability of some attributes of natural systems. Managers need practical tools for selecting among these actions and strategies to develop a tailored management approach for specific targets at a given location. We developed and present one such tool, the participatory Adaptation for Conservation Targets (ACT) framework, which considers the effects of climate change in the development of management actions for particular species, ecosystems and ecological functions. Our framework is based on the premise that effective adaptation of management to climate change can rely on local knowledge of an ecosystem and does not necessarily require detailed projections of climate change or its effects. We illustrate the ACT framework by applying it to an ecological function in the Greater Yellowstone Ecosystem (Montana, Wyoming, and Idaho, USA)—water flows in the upper Yellowstone River. We suggest that the ACT framework is a practical tool for initiating adaptation planning, and for generating and communicating specific management interventions given an increasingly altered, yet uncertain, climate.

  8. Outcome Prediction of Consciousness Disorders in the Acute Stage Based on a Complementary Motor Behavioural Tool.

    PubMed

    Pignat, Jean-Michel; Mauron, Etienne; Jöhr, Jane; Gilart de Keranflec'h, Charlotte; Van De Ville, Dimitri; Preti, Maria Giulia; Meskaldji, Djalel E; Hömberg, Volker; Laureys, Steven; Draganski, Bogdan; Frackowiak, Richard; Diserens, Karin

    2016-01-01

    Attaining an accurate diagnosis in the acute phase for severely brain-damaged patients presenting with Disorders of Consciousness (DOC) is crucial for prognostic validity; such a diagnosis determines further medical management, in terms of therapeutic choices and end-of-life decisions. However, DOC evaluation based on validated scales, such as the Coma Recovery Scale-Revised (CRS-R), can lead to an underestimation of consciousness and to frequent misdiagnoses, particularly in cases of cognitive motor dissociation due to other aetiologies. The purpose of this study is to determine the clinical signs that lead to a more accurate consciousness assessment, allowing more reliable outcome prediction. From the Unit of Acute Neurorehabilitation (University Hospital, Lausanne, Switzerland) between 2011 and 2014, we enrolled 33 DOC patients whose DOC diagnosis according to the CRS-R had been established within 28 days of brain damage. The first CRS-R assessment established the initial diagnosis of Unresponsive Wakefulness Syndrome (UWS) in 20 patients and a Minimally Conscious State (MCS) in the remaining 13 patients. We clinically evaluated the patients over time using the CRS-R scale and concurrently, from the beginning, with complementary clinical items of a new observational Motor Behaviour Tool (MBT). The primary endpoint was outcome at unit discharge, distinguishing two main classes of patients (DOC patients having emerged from DOC and those remaining in DOC) and 6 subclasses detailing the outcome of UWS and MCS patients, respectively. Based on CRS-R and MBT scores assessed separately and jointly, statistical testing was performed in the acute phase using a non-parametric Mann-Whitney U test; longitudinal CRS-R data were modelled with a Generalized Linear Model. Fifty-five per cent of the UWS patients and 77% of the MCS patients had emerged from DOC. First, statistical prediction of the first CRS-R scores did not permit outcome differentiation between classes; longitudinal
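
    The acute-phase analysis above combines a non-parametric Mann-Whitney U test on the first CRS-R/MBT scores with a Generalized Linear Model for the longitudinal CRS-R data. A minimal sketch of that workflow is shown below; it is not the authors' code, and the data file and column names (doc_patients.csv, crsr_total, mbt_score, emerged, week) are assumptions.

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from scipy.stats import mannwhitneyu

      # Hypothetical long-format table: one row per patient per assessment week.
      df = pd.read_csv("doc_patients.csv")

      # Acute-phase comparison of MBT scores between patients who later emerged
      # from DOC and those who did not (two-sided Mann-Whitney U test).
      acute = df[df["week"] == 0]
      emerged = acute.loc[acute["emerged"] == 1, "mbt_score"]
      not_emerged = acute.loc[acute["emerged"] == 0, "mbt_score"]
      u_stat, p_value = mannwhitneyu(emerged, not_emerged, alternative="two-sided")
      print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

      # Longitudinal CRS-R totals modelled with a GLM; the published model may use a
      # different family, link, or covariate structure.
      glm_fit = smf.glm("crsr_total ~ week * emerged", data=df,
                        family=sm.families.Gaussian()).fit()
      print(glm_fit.summary())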

  9. Evaluation of infrared thermography as a diagnostic tool to predict heat stress events in feedlot cattle.

    PubMed

    Unruh, Ellen M; Theurer, Miles E; White, Brad J; Larson, Robert L; Drouillard, James S; Schrag, Nora

    2017-07-01

    OBJECTIVE: To determine whether infrared thermographic images obtained the morning after overnight heat abatement could be used as the basis for diagnostic algorithms to predict subsequent heat stress events in feedlot cattle exposed to high ambient temperatures. ANIMALS: 60 crossbred beef heifers (mean ± SD body weight, 385.8 ± 20.3 kg). PROCEDURES: Calves were housed in groups of 20 in 3 pens without any shade. During the 6 am and 3 pm hours on each of 10 days during a 14-day period when the daily ambient temperature was forecasted to be > 29.4°C, an investigator walked outside each pen and obtained profile digital thermal images of and assigned panting scores to calves near the periphery of the pen. Relationships between infrared thermographic data and panting scores were evaluated with artificial learning models. RESULTS: Afternoon panting score was positively associated with morning but not afternoon thermographic data (body surface temperature). Evaluation of multiple artificial learning models indicated that morning body surface temperature was not an accurate predictor of an afternoon heat stress event, and thermographic data were of little predictive benefit compared with morning and forecasted weather conditions. CONCLUSIONS AND CLINICAL RELEVANCE: Results indicated that infrared thermography was an objective method to monitor beef calves for heat stress in research settings. However, thermographic data obtained in the morning did not accurately predict which calves would develop heat stress later in the day. The use of infrared thermography as a diagnostic tool for monitoring heat stress in feedlot cattle requires further investigation.
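
    The key result above (morning thermography adds little beyond weather data for predicting an afternoon heat stress event) can be illustrated by comparing cross-validated classifiers with and without the thermographic feature. The sketch below is an illustration only, not the study's analysis; the file, column names, and the panting-score threshold used to define an event are assumptions.

      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Hypothetical table: one row per animal per day.
      df = pd.read_csv("feedlot_observations.csv")
      y = (df["afternoon_panting_score"] >= 2).astype(int)   # assumed heat stress event definition

      weather_only = df[["morning_ambient_temp", "forecast_max_temp", "humidity"]]
      with_thermography = weather_only.assign(surface_temp=df["morning_surface_temp"])

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      for label, X in [("weather only", weather_only),
                       ("weather + morning thermography", with_thermography)]:
          auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
          print(f"{label}: mean cross-validated AUC = {auc:.2f}")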

  10. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in the validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), Blackburnian warbler (Setophaga fusca), and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remotely sensed data become available, LC layers are updated, which will improve predictions.
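
    As a rough illustration of the validation idea above (does the LC value at a survey point explain variation in counts?), the sketch below fits a per-species Poisson GLM of raw point counts on the LC prediction. This is a simplification: the published validation corrected for imperfect detection with joint distance and time removal sampling, which is not reproduced here, and the file and column names (point_counts_with_lc.csv, count, lc_value, species) are hypothetical.

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical table: one row per point count, with the LC value extracted at that point.
      counts = pd.read_csv("point_counts_with_lc.csv")

      for species, sub in counts.groupby("species"):
          fit = smf.glm("count ~ lc_value", data=sub, family=sm.families.Poisson()).fit()
          print(f"{species}: LC coefficient = {fit.params['lc_value']:.3f}, "
                f"p = {fit.pvalues['lc_value']:.3f}")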

  11. Iranian risk model as a predictive tool for retinopathy in patients with type 2 diabetes.

    PubMed

    Azizi-Soleiman, Fatemeh; Heidari-Beni, Motahar; Ambler, Gareth; Omar, Rumana; Amini, Masoud; Hosseini, Sayed-Mohsen

    2015-10-01

    Diabetic retinopathy (DR) is the leading cause of blindness in patients with type 1 or type 2 diabetes. The gold standard for the detection of DR requires expensive equipment. This study was undertaken to develop a simple and practical scoring system to predict the probability of DR. A total of 1782 patients who had first-degree relatives with type 2 diabetes were selected. Eye examinations were performed by an expert ophthalmologist. Biochemical and anthropometric predictors of DR were measured. Logistic regression was used to develop a statistical model that can be used to predict DR. Goodness of fit was examined using the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. The risk model demonstrated good calibration and discrimination (ROC area=0.76) in the validation sample. Factors associated with DR in our model were duration of diabetes (odds ratio [OR]=2.14, confidence interval [CI] 95%=1.87 to 2.45); glycated hemoglobin (A1C) (OR=1.21, CI 95%=1.13 to 1.30); fasting plasma glucose (OR=1.83, CI 95%=1.28 to 2.62); systolic blood pressure (OR=1.01, CI 95%=1.00 to 1.02); and proteinuria (OR=1.37, CI 95%=1.01 to 1.85). The only factors that had a protective effect against DR were body mass index and education level (OR=0.95, CI 95%=0.92 to 0.98). The good performance of our risk model suggests that it may be a useful risk-prediction tool for DR. It comprises positive predictors (A1C, diabetes duration, sex (male), fasting plasma glucose, systolic blood pressure, and proteinuria) as well as negative risk factors (body mass index and education level).
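
    A minimal sketch of the modelling approach described above: a logistic regression of retinopathy status on the candidate predictors, with discrimination summarized by the area under the ROC curve. The data file and variable names are assumptions, and the fitted coefficients will not reproduce the published odds ratios.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from sklearn.metrics import roc_auc_score

      df = pd.read_csv("dr_cohort.csv")   # hypothetical cohort file
      formula = ("retinopathy ~ duration_years + a1c + fasting_glucose + "
                 "systolic_bp + proteinuria + bmi + education_years")
      fit = smf.logit(formula, data=df).fit()

      print(np.exp(fit.params))                              # odds ratios for each predictor
      auc = roc_auc_score(df["retinopathy"], fit.predict(df))
      print(f"Area under the ROC curve: {auc:.2f}")           # the paper reports 0.76 on validation data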

  12. The role of attitudes towards the targets of behaviour in predicting and informing prenatal testing choices.

    PubMed

    Bryant, Louise D; Green, Josephine M; Hewison, Jenny

    2010-12-01

    Research considering the role of attitudes in prenatal testing choices has commonly focused on the relationship between the attitude towards undergoing testing and actual testing behaviour. In contrast, this study focused on the relationship between testing behaviour and attitudes towards the targets of the behaviour (in this case, people with Down syndrome (DS) and having a baby with DS). A cross-sectional, prospective survey of 197 pregnant women measured attitudes towards the targets of prenatal testing, along with intentions to use screening and diagnostic testing and to terminate an affected pregnancy. Screening uptake was established via patient records. Although attitudes towards DS and having a baby with DS were significantly associated with screening uptake and with testing and termination intentions, unfavourable attitudes were better than favourable ones at predicting these outcomes. For example, in the quartile of women with the 'most favourable' attitude towards people with DS, 67% used screening, although only 8% said they would terminate an affected pregnancy. Qualitative data suggested that not all women considered personal attitudes towards DS to be relevant to their screening decisions. This finding has implications for the way in which informed choice is currently understood and measured in the prenatal testing context.
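
    The quartile comparison described above can be reproduced in outline with a cross-tabulation of screening uptake by attitude quartile and a logistic regression of uptake on the attitude score. The sketch below uses hypothetical variable names (attitude_to_ds, screening_uptake) and is not the authors' analysis.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("prenatal_survey.csv")   # hypothetical survey data linked to patient records

      # Screening uptake by quartile of attitude towards people with Down syndrome.
      df["attitude_quartile"] = pd.qcut(df["attitude_to_ds"], 4,
                                        labels=["Q1 (least favourable)", "Q2", "Q3",
                                                "Q4 (most favourable)"])
      print(df.groupby("attitude_quartile", observed=True)["screening_uptake"].mean())

      # Logistic regression of actual uptake on the continuous attitude score.
      fit = smf.logit("screening_uptake ~ attitude_to_ds", data=df).fit()
      print(fit.summary())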

  13. Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline

    PubMed Central

    Zhang, Jie; Li, Qingyang; Caselli, Richard J.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin

    2017-01-01

    Alzheimer’s Disease (AD) is the most common type of dementia. Identifying the correct biomarkers may make it possible to detect pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many problems in computer vision and biomedical informatics. It aims to improve generalization performance by exploiting the features shared among different tasks. However, most existing algorithms are formulated as supervised learning schemes, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms. PMID:28943731
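
    The two-stage structure described above (unsupervised sparse feature learning followed by supervised prediction for each target) can be illustrated with a generic sparse-coding pipeline. The sketch below uses scikit-learn's DictionaryLearning and ridge regression as stand-ins; it is not the MMDL algorithm itself, and the data file, feature prefix, and target names are hypothetical.

      import pandas as pd
      from sklearn.decomposition import DictionaryLearning
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      data = pd.read_csv("longitudinal_imaging_features.csv")
      X = data.filter(like="feature_").to_numpy()             # unlabeled imaging features

      # Stage 1 (unsupervised): learn a sparse dictionary over the pooled feature matrix.
      dico = DictionaryLearning(n_components=64, alpha=1.0, random_state=0)
      codes = dico.fit_transform(X)

      # Stage 2 (supervised): fit a separate model on the sparse codes for each target.
      for target in ["mmse_followup", "adas_followup"]:       # hypothetical cognitive targets
          y = data[target].to_numpy()
          r2 = cross_val_score(Ridge(alpha=1.0), codes, y, cv=5, scoring="r2").mean()
          print(f"{target}: mean cross-validated R^2 = {r2:.2f}")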

  14. Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.

    PubMed

    Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2017-06-01

    Alzheimer's Disease (AD) is the most common type of dementia. Identifying the correct biomarkers may make it possible to detect pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many problems in computer vision and biomedical informatics. It aims to improve generalization performance by exploiting the features shared among different tasks. However, most existing algorithms are formulated as supervised learning schemes, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets